suppose that one wishes to implement two channels of communication ( e.g. , electrical wires that carry encoded bit - streams ) , one between points and , the other between and .as the reader may have guessed , is positioned to the north , to the east , and so on ; consequently , given the desire that neither channel skirt around the other ( rather , we should like shorter , more direct lines of communication ) , the channels must _ cross_. in three spatial dimensions , this is unproblematic : one has available not only ` north / south ' and ` east / west ' but also ` up / down ' , which confers headroom enough for one channel to pass over the other , carried by a bridge for example . in two dimensions ,however , the channels not merely cross but moreover _ intersect _ ; and it is not at all clear that the channels respective signals can survive this intersection ( if the channels are electrical wires , for example , then their intersection implies electrical _ connection _ , whence it is no longer possible to determine on which channel a current originated ) .thus , one naturally asks : _ is it possible , in a suitable , two - dimensional model , to cross two channels in such a way that each successfully conveys its data , in particular without problematic interference at the intersection ?_ it is upon this question , which we call the _ cross question _ ( _ cq _ ) , that we focus in the present paper .our answer is affirmative , and , moreover , our proof constructive : we exhibit cellular automata ( for it is these that we adopt as our formalism ) that successfully cross channels without impairing capacity . we consider also the efficiency ( according to several measures ) of our automata . as a historical note ,we point out that this work was originally inspired by the low - level description of neural activity given in , in which a figure ( reproduced here as fig .[ fig : geb ] ) appears with the caption , `` n this schematic diagram , neurons are imagined as laid out as dots in one plane .two overlapping pathways are shown in different shades of gray .it may happen that two independent `` neural flashes '' simultaneously race down these two pathways , passing through one another like two ripples on a pond s surface ''no further consideration is given in to the details of how these signals pass through each other .neither should further explanation be expected : the restriction to two dimensions is made purely to allow the depiction on paper of a ( three - dimensional ) phenomenon in which neurons are in fact free to bend around each other and so to cross without intersecting .nonetheless , fig .[ fig : geb ] inspires the present work by prompting the ( intrinsically two - dimensional ) cq , which we answer below .one may wonder , given that real - life neurons have available to them a third dimension , whether there is any need to consider the cq ; we claim that there is : aside from the question s academic interest , we note in sect .[ sec : conapp ] practical contexts in which restriction to two dimensions is natural and beneficial . so as to formalize the ideas above , andin particular so as to be able to state more rigorously the cq , we model communication channels as _ cellular automata _ ; more precisely , we take as our model cellular automata augmented with the ability to accept _ input _ ( specifically , the messages to be carried by the channels so modelled)cf .we give now the relevant definitions .a _ cellular automaton _ is a tuple satisfying the following . 
is a regular lattice of _cells_. is a finite set of _states_. is a tuple , , is arbitrary , but is nonetheless fixed for use in definition [ def : caconf ] . ] of finitely many _ neighbourhood offsets _ ( which are distinct ) , where , for each offset and each cell , is a _neighbour _ of . contains amongst its coordinates the additive identity of , whence each cell has itself as a neighbour . is a _ transition function_. [ def : caconf ] suppose that , , and are as above .configuration _ is a function assigning to each cell a state ; the subscript indexes ( discretely modelled ) time .the initial configuration must be specified as part of the automaton s description ; thereafter , each configuration is determined by the last , , by way of the transition function : for each cell and each time , ( recall the fixed order , , ) . in modelling communication channels as cellular automata , it is convenient for us to modify the above definitions as follows .[ pt : modsubset ] we allow the cells to form a _ subset _ of a lattice , rather than necessarily the whole lattice . [ pt : modvary ] we allow the transition function to _ vary _ from cell to cell and from time - step to time - step . [ pt : modinput ] crucially , we allow an automaton to accept _ input _ ( the messages to be carried by the channels ) ; to this end , we equip the automaton with _sources _ : cells not governed by transition functions , but rather to which are supplied messages .consequently , we have the following .a _ cellular automaton with sources _ ( _ cas _ ) is a tuple satisfying the following . is a subset of some lattice ( , say ) ; elements of are _ cells _modification ( [ pt : modsubset ] ) above).it is often desirable to impose the condition that be finite , though this stipulation is not necessary in order that the claims of the present paper hold ( of course , the claims still hold when such restriction is made ) . ] is the set of _ sources _ of ( cf .modification ( [ pt : modinput ] ) above ) . is a finite set of _ states _ , and contains a distinguished element _ blank _ , denoted ` ' .messages to be carried by our channels are composed of states in ; let us suppose for convenience when transcribing messages , then , that . is a tuple of finitely many_ neighbourhood offsets _( which are distinct ) , where , for each offset and each cell , is a _ lattice - neighbour _ of ; if , furthermore , , then is a _neighbour _ of . for each non - source cell , is the _ transition function _ for . ` ' represents time , thus allowing the transition function to vary not only from cell to cell ( hence the subscript ` ' ) but also from time - step to time - step ( cf .modification ( [ pt : modvary ] ) above).note that , for present purposes , we require only ( finitely representable ) transition functions that depend upon the _ parity _ ( an element of ) , rather than the value ( an element of ) , of time ; thus we avoid prohibitively ( or even infinitely ) complicated and memory - hungry descriptions of transition functions .further , recall from footnote [ fn : lfin ] that one may suppose to be finite , whence he need consider only finitely many distinct transition functions . ]let be a cas .a _ message _ ( in ) is a function ; is the message s _ length _ ( denoted ) .we often describe messages in sequence form , , .let be the set of messages ( of any length ) in .an _ input _ for is a map assigning to each source a message . 
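To fix ideas, here is a minimal sketch (in Python; every identifier is ours, chosen purely for illustration, not taken from the paper) of a cellular automaton with sources, anticipating the update rule defined next: sources simply replay the message supplied to them, padded with blanks, while every other cell applies its own (possibly time-dependent) transition function to the states of its neighbours at the previous step.

```python
# A minimal sketch of a cellular automaton with sources (CAS); illustrative names only.

BLANK = ' '  # the distinguished blank state

class Cas:
    def __init__(self, cells, sources, offsets, rules, init=None):
        self.cells = set(cells)        # subset of the enveloping lattice (here Z^2)
        self.sources = set(sources)    # sources are fed messages rather than governed by rules
        self.offsets = offsets         # neighbourhood offsets, including (0, 0)
        self.rules = rules             # rules[cell](neighbour_states, t) -> next state
        # blank-initialization of the non-source cells unless stated otherwise
        self.init = init or {c: BLANK for c in self.cells if c not in self.sources}

    def run(self, inputs, t_max):
        """Simulate t_max steps; inputs[source] is that source's message (a sequence of states)."""
        conf = dict(self.init)
        for s in self.sources:
            conf[s] = inputs[s][0] if len(inputs[s]) > 0 else BLANK
        history = [dict(conf)]
        for t in range(1, t_max + 1):
            new = {}
            for c in self.cells:
                if c in self.sources:
                    msg = inputs[c]
                    new[c] = msg[t] if t < len(msg) else BLANK
                else:
                    # states of the cell's lattice-neighbours at the previous step;
                    # positions outside the cell set are reported as blank, and the
                    # transition function is required not to depend on them
                    nbrs = tuple(conf.get((c[0] + dx, c[1] + dy), BLANK)
                                 for (dx, dy) in self.offsets)
                    new[c] = self.rules[c](nbrs, t)
            conf = new
            history.append(dict(conf))
        return history

# Example: the simple delay-n 'line' channel of the example discussed below; each
# non-source cell copies the state of the cell to its west at the previous step.
cells = [(x, 0) for x in range(4)]
copy_west = lambda nbrs, t: nbrs[1]            # offsets[1] is the western neighbour
line = Cas(cells, sources={(0, 0)}, offsets=((0, 0), (-1, 0)),
           rules={c: copy_west for c in cells if c != (0, 0)})
print(line.run({(0, 0): "abc"}, t_max=5))      # 'a', 'b', 'c' reach (3, 0) after 3 steps
```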
a _ configuration _ is a function assigning to each cell a state ; the subscript discretely indexes time , whilst is an input . the initial configuration is defined such that , for all sources , , and , for all non - sources , maps to an arbitrary element of ( so , as with standard cellular automata , the initial configuration in particular of the non - source cells is a ` free variable ' that forms part of the system s description ) .below , we frequently encounter the initial configuration mapping each non - source cell to the blank state ; we call this _-initialization_. subsequent configurations are given by hence , each source assumes in order the states , , , , , , ( i.e. , the message supplied by to source , followed by blank states ) , and each non - source cell is initialized according to the choice of initial configuration and subsequently governed by its transition function . ( the formulation of transition functions given here allows , for example , the ` spontaneous appearance of information ' , whereby non- states may appear in neighbourhoods entirely populated by states .whilst this is undesirable in many contexts , it is unproblematic for present purposes , and so our definitions do not preclude such appearance . )so as to honour modification ( [ pt : modsubset ] ) above , we intend transition function to take as its arguments the states only of the _ neighbours_not generally all _ lattice_-neighbours of ( followed by a time index ) .we stipulate , then , that not depend on its argument ( ) whenever is not a neighbour ( but merely a lattice - neighbour ) of ; i.e. , for such , we stipulate that , for each and each , . as an aside , we note that , of the three modifications introduced above to the standard definition of cellular automata , only the third ( namely , provision for accepting input ) admits systems that are not attainable without such modification ; the other two ( namely , ( [ pt : modsubset ] ) partial lattices of cells , and ( [ pt : modvary ] ) cell - heterogeneity of transition functions ) are mere notational conveniences .this is because ( [ pt : modsubset ] ) can be augmented by a special ` non - cell ' state that invites ignorance by transition functions at which each member of is held ; and ( [ pt : modvary ] ) a family of time - heterogeneous transition functions can be simulated by a single time - homogeneous function provided that each cell ` labels itself ' ( and maintains a clock)by extending to whence ascertains which to simulate . thus restoring state sets finiteness . ][ def : chan ] for , to be negative is arguably unintuitive why write ` ' when ` ' would do ?but has the desirable effect that the relation ` between and there is a channel ' is an_ equivalence relation_. ] a cas ( with cells and ) is said to be a _ channel from to with delay _ , written , if , for all and all inputs , thus , is a channel from to with delay if and only if the states of are exactly reproduced at time - steps later ( here , we adopt the conventions ( 1 ) that all configurations for negative time indices are the constant blank function , and ( 2 ) that we -initialize ) . note that we often take in definition [ def : chan ] to be a _ source _ of a cas .[ eg : line ] for , let be the cas , where is an arbitrary set of states and is the projection onto the first coordinate. 
then each non - source cell acquires as its state that of the cell at the previous time - step , and so any message supplied to the source propagates to a cell after time - steps .hence , for any , .example [ eg : line ] captures the essence of the way in which communication channels may be expressed as cellular automata with sources : if the transition functions are such that messages states are passed from cell to cell , then a channel may be established from a source to another cell .below , we consider in response to the cq less trivial examples of cas channels .we focus hereafter on a specific subclass of cass , which we now define . a _ square - celled , four - neighbour cas _ ( or _ 4-cas _ ) is a cas with cells taken from and with neighbourhood offset tuple .hence , when is viewed as a square grid , the neighbours of a cell are the cell itself and those four cells orthogonally adjacent . given a regular lattice of cells and a binary relation of neighbourhood between cells , a subset of the lattice , i.e. , a set of cells , is said to be _ connected _ if , for every pair of cells in , there exist and such that , and is a neighbour of for each .a cas is said to be _ connected _ if its set of cells is connected in the enveloping lattice under the neighbourhood relation given by .[ rem : bound ] suppose that is a finite , connected set of cells ( suppose also an enveloping lattice endowed with a neighbourhood relation , which , for the purposes of this remark , is viewed _ geometrically _ ; so as to ensure that the concepts used here are well defined , we assume lattice and neighbourhoods as for 4-cass ) .then has an _ external boundary _ ,i.e. , a tuple ( ) of edges of square cells such that , for each , and meet at a vertex ( say ) , so that the tuple forms a _ closed curve _ in the plane of the geometrically - considered enveloping lattice ; the vertices are distinct , so that the closed curve is _ simple _ ; each edge is the boundary between a member of ( let denote this unique cell ) , and a cell in the enveloping lattice but not in ; and lies entirely within the bounded region described by the curve .it is intuitively clear that this boundary is unique up to reversal and cyclic permutation .lies entirely within the curve , whence termination , and hence existence of a boundary , follows .uniqueness follows from consideration of the _ area _ enclosed by the curve . ][ def : cross ] a cas that is a channel both from to and from to for four distinct cells and is said to _ cross _ these channels if is connected ; [ pt:1touch ] the external boundary of is such that , if ( where the bar notation is as in remark [ rem : bound ] , and where , without loss of generality , ) , then either or ; and [ pt : alltouch ] there exist four edges , , and ( with ) on the external boundary of such that is a cyclic permutation of either or .intuitively , point ( [ pt:1touch ] ) of definition [ def : cross ] stipulates that each of , , and be touched at most once by the boundary of ( possibly by several consecutive edges , for example when the cell is on a ` corner ' of ) ; point ( [ pt : alltouch ] ) stipulates that the boundary touch all of , , and , and that it do so in such an order that is opposite , opposite .and is more restrictive than is necessary ; if , however , there exists a method of crossing channels subject to this restriction ( and we demonstrate below that such does exist ) , then a fortiori there still exists such a method when the restriction is removed . 
]that each is opposite the corresponding gives that , if a cas crosses two channels as per definition [ def : cross ] , then the channels do indeed cross in the intuitive sense .thus , in order affirmatively to answer the cq , it is sufficient to exhibit a solution to the following problem ( hereafter the _ 4-cas problem _ ) . _ find a 4-cas that crosses two channels ( regardless of the choice of state set )_. we exhibit just such a 4-cas ( in fact , two such ) in sect .[ sec : sol ] it has long been known that three xor ( ) gates can , using only two dimensions , cross two paths carrying _ binary _ signals ; see fig .[ fig : xor](a ) .this scheme is correct since , given upper and lower input bits and respectively , the circuit produces upper and lower output bits and respectively . geometrically speaking ,as bit follows its path from ( upper ) input to ( lower ) output , it is twice combined via xor with the same value ( namely , ) , thus leaving it unchanged ; similarly , is ultimately unchanged by its twice being combined with .furthermore , such logic circuits can be implemented in two - dimensional cellular automata such as conway s _ life _ ( see , for example , ) ; thus , the scheme of fig .[ fig : xor](a ) solves the cq in the case of a binary alphabet .the scheme generalizes naturally to alphabets of size ( w.l.o.g . taken to be ) via replacement of the xor operation with addition / subtraction modulo ; see fig . [fig : xor](b ) . and are exchanged .this solves the cq in the special case of messages taken from the alphabet .( b ) generalization of the scheme to the alphabet ; addition and subtraction are performed modulo , and ` m ' and ` s ' label each subtraction operation s minuend and subtrahend respectively.,height=94 ] since addition / subtraction modulo for fixed can clearly be implemented in cellular automata ( e.g. , via transition functions that perform a look - up from entries ) , the generalized scheme of fig . [fig : xor](b ) offers a solution to the cq .however , the novel solutions advocated in sect . [ sec : sol ] of the present paper have three chief advantages over this scheme . the solution of fig .[ fig : xor](b ) depends upon the size of the alphabet : addition and subtraction are performed _modulo _ , whence one must have a priori knowledge of in order to be able to implement these operations .the solutions of sect .[ sec : sol ] are independent of the choice of alphabet ; they can be implemented before is known , can accommodate the alphabet s changing during transmission of the messages to be crossed , and require of the alphabet no additive structure . the solution of fig .[ fig : xor](b ) necessitates computational processing e.g . , calculation or look - up from entries at the addition / subtraction nodes .the solutions of sect .[ sec : sol ] require of these nodes mere transfer ( between cells ) of states , which incurs no computational cost . 
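Before turning to the remaining point of comparison, the modular scheme of fig. [fig:xor](b) can be checked directly; the short sketch below (illustrative names; the binary xor case is simply n = 2) verifies that two symbol streams do cross correctly, each input re-appearing unchanged on the opposite output.

```python
# Sketch of the crossing scheme of fig. [fig:xor](b): two symbol streams are crossed
# using addition and subtraction modulo n at three nodes.  Names and data are invented.

def cross_mod_n(upper_in, lower_in, n):
    """Cross two equal-length streams of symbols drawn from {0, ..., n-1}.

    The middle node forms s = (a + b) mod n; each output node then subtracts the
    input that travelled straight past it, so the upper input re-appears on the
    lower output and vice versa (assuming, as in the figure, that the straight-
    through copies of a and b are available at the output nodes).
    """
    upper_out, lower_out = [], []
    for a, b in zip(upper_in, lower_in):
        s = (a + b) % n                 # middle node: addition modulo n
        lower_out.append((s - b) % n)   # recovers a on the opposite side
        upper_out.append((s - a) % n)   # recovers b on the opposite side
    return upper_out, lower_out

# Example with n = 5: each stream emerges unchanged on the other side.
a_stream, b_stream = [0, 3, 4, 1], [2, 2, 0, 4]
up, low = cross_mod_n(a_stream, b_stream, n=5)
assert low == a_stream and up == b_stream
```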
the solution of fig .[ fig : xor](b ) preserves only the _ information content _ of the messages being crossed , whereas the solutions of sect .[ sec : sol ] physically move the messages states from cell to cell .this latter , more general approach allows the crossing not only of streams of messages symbols , but also of streams of _ physical objects _ ( whereas , of course , physical objects can not be duplicated and combined modulo so as to be crossed by the scheme of fig .[ fig : xor](b))see the discussion of road junctions in sect .[ sec : conapp ] we recall also previous cellular - automatic solutions to the channel - crossing problem ( see for example and the references therein ) , but note that these do not allow the crossing of messages consisting of _ arbitrary _ symbols , but rather reserve states that encode wires boundaries , or utilize wires of width strictly greater than one , or similar ; this is in contrast with the solutions presented in the following section .in this section , we describe two solutions to the 4-cas problem .the solutions both adhere to the same general scheme , which we now discuss .consider first the 4-cas given in fig .[ fig : x ] .( the lattice structure relating neighbours of and so on is left implicit in the geometry of fig . [fig : x ] ; in particular , we give the cells symbolic names , rather than labelling cells with elements of the enveloping lattice . we treat subsequent 4-cass similarly . ).,height=90 ] we claim that there is no choice of transition functions for the three non - source cells , and such that acts simultaneously as a channel from to for both ( so certainly there is no choice such that crosses these channels ) .intuitively , this is because , were these channels established , then ( by virtue of the geometry of ) would have to encode in its state the previous states of both and ( whence and could subsequently attain their respective states ) ; however , assuming the non - trivial case where , there is no injective map from to , and so the ( single ) state of of possibilities cannot encode the _ pair _ of previous states of and which there are possibilities . nonetheless , it is clear that either one of the channels _ in isolation _ can be implemented via suitable choice of transition functions : if takes as its state that of at the previous time - step , and that of ( more formally , if and are the projection onto the first coordinate , where , recall , the first coordinate of is ) , then ; if _ instead_ , where the second coordinate of is , then .further , there is a compromise between these two single channels : if takes its state _ alternately _ from and .g ., if we have the transition function ( and if ) then _ half _ of each message ( specifically every other state ) is passed with delay from to .this behaviour is encapsulated in table [ tab : x](a ) , which shows the state of each cell in each time - step ( arbitrarily assuming -initialization , and supposing supply of messages a , b , c , to and 0 , 1 , 2 , to ) .rcrc ( a ) & .the behaviour of , ( a ) as originally introduced and ( b ) as modified so as to emphasize the roles of .[ cols=">,^,^,^,^,^",options="header " , ] note in particular that and , and that , by virtue of the geometric layout of , these channels are crossed .further , consideration of the efficiency measures of definition [ def : meas ] gives that and ; hence , with respect to these measures , offers a better solution to the 4-cas problem than . 
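The 'alternating' compromise encapsulated in table [tab:x](a) can be reproduced with a few lines of simulation. The following sketch is illustrative only (the cell names and the exact behaviour of the output cells on their 'off' steps are our own choices): the middle cell copies from one source's side on odd steps and from the other's on even steps, and every other symbol of each message reaches the far side with a fixed delay.

```python
# Sketch of the 'compromise' behaviour of the automaton X described above; names invented.

BLANK = ' '

def simulate_alternating(msg_a, msg_b, t_max):
    """Synchronous simulation of two sources (src_a, src_b) and three non-source
    cells (mid, out_a, out_b).  mid copies src_a on odd steps and src_b on even
    steps, so only every other symbol of each message crosses, matching table
    [tab:x](a); which half gets through depends on the chosen parity."""
    mid = out_a = out_b = BLANK                    # blank-initialization at t = 0
    src_a = msg_a[0] if msg_a else BLANK
    src_b = msg_b[0] if msg_b else BLANK
    rows = [(0, src_a, src_b, mid, out_a, out_b)]
    for t in range(1, t_max + 1):
        new_mid = src_a if t % 2 == 1 else src_b   # alternate between the two sides
        new_out_a = mid if t % 2 == 0 else out_a   # accept only the a-symbols
        new_out_b = mid if t % 2 == 1 else out_b   # accept only the b-symbols
        src_a = msg_a[t] if t < len(msg_a) else BLANK
        src_b = msg_b[t] if t < len(msg_b) else BLANK
        mid, out_a, out_b = new_mid, new_out_a, new_out_b
        rows.append((t, src_a, src_b, mid, out_a, out_b))
    return rows

for row in simulate_alternating("abcdef", "012345", 8):
    print(row)   # out_a receives a, c, e and out_b receives 1, 3, 5: half of each message
```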
is _ optimal _ with respect to these three measures ( at least when messages states may have _ physical presence _ as described in the second bullet point of sect .[ sec : intrel ] ) .we defer justification to .] intuitively , the reason for the improvement ( over ) offered by is that ( a ) the sets of cells of that act as copies of ( these sets are the respective neighbourhoods of , , and ) have greater pairwise overlaps than is the case with the corresponding sets of cells in ; and ( b ) the sets of cells of acting as copies of ( these sets are and ) or of ( and ) are better shaped specifically , they are ` l'-shaped rather than straight to pass data to / receive data from the rest of the cells .we consider in the present paper the problem of crossing two channels ; in particular , we stipulate exactly two spatial dimensions , whence the crossed channels necessarily intersect .we formalize this situation in sect .[ sec : intapp ] using _ cellular automata _ , modified so as to endow the systems with the ability to accept inputs ( i.e. , the messages to be carried by the channels ) . in sect .[ sec : intrel ] , we recall previous approaches to this question , but note that the solutions presented here have over these approaches the advantage ( amongst others ) of being able to cross streams not merely of _ information _ , but also of _ physical objects_. in sect . [ sec : solgen ] , we exhibit a scheme whereby channels may be crossed ; the key is a simple sub - automaton that , whilst unable to cross channels carrying arbitrary messages , is at least able successfully to cross _ couplet _ messages ; then arbitrary messages can be crossed by splitting them into couplets , crossing with , and recombining the couplets into the original messages . in sect .[ sec : solimp ] , we implement this scheme as a 4-cas , thus answering the motivating question of the present work : _ it is , in two dimensions , indeed possible to cross channels without impairing their capacity_. in sect .[ sec : solopt ] , we consider the efficiency of , introducing measures that capture the time / space costs incurred in using the system to cross channels ; we go on to exhibit a system that improves upon with respect to these efficiency measures .we finish by noting some potential applications of the present paper s theoretical contribution ( namely , that channels can be crossed in two dimensions without disrupting their messages ) .we consider , then , practical settings in which is inherent a restriction to two dimensions . a potential application , suggested by peter covey - crump , is in _ chip design_. components on a chip are connected by conductive tracks printed onto the chip s layers .if two tracks cross on a single layer , then they are necessarily electrically connected ; if such connection is not desired ( e.g. , if one track is to form a connection between components and , and the other , _ independently _ , between and ) , then one track must sidestep to another layer and bridge over the other track using ` vias ' ( connections between corresponding points on different layers ) , which incurs expense ( not least because of the necessity of the chip s consisting of several layers ) .we suggest that the present paper s scheme i.e . 
, the approach of splitting each signal into two streams ( which are then crossed without loss ) , though not necessarily implemented via cellular - automatic means may offer scope for a preferable alternative to this costly , multi - layer bridging ( though we concede that it is as yet far from clear how , on a single layer , sub - automaton can be implemented efficiently ) .another potential application , similar to the above in that ` bridging ' into a third dimension is costly , is in _civil engineering_. specifically , _ road junctions _ playing the roles of , and can be used to implement the scheme of the present paper , so as to cross two roads without the need for ( and expense of ) a bridge carrying one over the other .the form of these three junction types may for example be ( ) a division of alternate cars ( or alternate blocks of consecutive cars ) into two parallel streams ( consisting , therefore , alternately of cars / blocks and gaps ) ; ( ) a re - merging of the streams ; and ( ) a level - crossing - style junction whereby two ` car - gap - car - gap ' streams are crossed , the cars of each stream synchronizing with and passing through the gaps of the other .we defer the development of these applications and others to future work .this paper is based upon ( and extended from ) unpublished research presented in .accordingly , we thank richard brent , the supervisor of that project , for his detailed comments ; and samson abramsky and peter covey - crump , for kindly taking the time to examine the project and for their insightful discussion .we thank the three anonymous _ncma 2012 _ referees for their helpful suggestions and comments .we acknowledge the generous financial support of the leverhulme trust , which funds the author s current position .blakey , e. , a cellular - automatic implementation of communication channels , with particular consideration of several intersecting channels , msc dissertation , oxford university ( unpublished ; available at ` http://www.maths.bris.ac.uk/~maewb/cachan.pdf ` ) .
In three spatial dimensions, communication channels are free to pass over or under each other so as to cross without intersecting; in two dimensions, assuming channels of strictly positive thickness, this is not the case. It is natural, then, to ask whether one can, in a suitable, two-dimensional model, cross two channels in such a way that each successfully conveys its data, in particular without the channels interfering at the intersection. We formalize this question by modelling channels as cellular automata, and answer it affirmatively by exhibiting systems whereby channels are crossed without compromising capacity. We consider the efficiency (in various senses) of these systems, and mention potential applications.
in recent years , advances in neural network design and optimization , combined with dedicated computing architectures through the use of gpus , led to a dramatic increase in the performance of computer vision systems . the ilsvrc-2012 challenge winners krizhevsky et al demonstrated how given a large dataset of labeled images , one could design a neural network and train it to achieve state - of - the - art classification results .availability of increasingly powerful hardware has enabled training of deeper and deeper neural networks with corresponding improvements in performance .research efforts have resulted in multiple advances including novel designs and their combination with existing computer vision techniques like region proposals , optical flow or long short term memories .all this has accelerated the pace at which computer vision systems approach human performance .while large systems try to learn models for an exhaustive list of visual concepts , we take the approach of incrementally learning new concepts as they appear in new videos submitted for analysis .one of the reason for taking that path is that we need to ensure that enough training material is available to learn new models .while data is in some cases available , it is in the general case difficult to find good video training material .also , the computing costs related to training a model for a large number of concepts at once would not be sustainable .finally , the quality of the learned model can be improved if the learning is guided : starting learning higher level visual concepts , ie , before refining the model to learn more specific ones like , or . in the following section we present a system designed by following these requirements .we then focus on the role of the ontology , motivating it as a critical component of the system . at the time of writing , the system is implemented andincludes a visual concepts ontology under construction , inspired by existing visual ontologies .this paper is thus more positioned towards motivating the described approach rather than giving experimental results on its benefits .this section describes the dextro system which can be considered as two parts which come together to form an active learning loop . 
the first part is the dextro api which allows anyone to analyze and understand their video by providing a link .the computer vision models combined with a human in the loop system analyze the video and promptly return results .the second part is the dextro training system which continues to collect training videos and uses human annotators to collect data for those videos which are eventually used as training data for the computer vision models to expand their visual concept classification .the system is illustrated in figure 1 .[ fig : dextro ] the initial computer vision system starts with a limited set of visual concepts that it is trained on .as videos are processed via the dextro api the model is unable to confidently classify the video contents , given its limited training .thus the video is sent to a human in the loop system where human annotators look at the video and classify it .instead of giving human annotators free reign on how to classify the video , we limit them to use terms from our visual ontology .this process gives us information on what type of concepts are present in the videos being sent to our api and we share that information with the dextro training system .the training system then surfs the web for publicly available video data likely to contain the visual concepts that are in our ontology and have been most frequently used by human annotators to classify the videos .the video data collected by the training system is verified and annotated by humans via a service like amazon mechanical turk .this data is then used to retrain our computer vision system , introducing new concepts to it .future videos with those concepts thus do not require a human for classification , completing the active learning loop .we compare this to how humans learn about new concepts .first there is the phase of noticing something new that we do not know about .then we involve an expert or someone more knowledgeable about the concept to help us identify and understand the same .once we have learned the new concept we can use it in future situations .as section [ sect : virtuous ] illustrated , there are multiple areas where the use of a comprehensive ontology can improve the process of incrementally learning a visual concept detection model .* guiding the concept selection process for human annotators : * human workers select visual concepts to annotate videos .better than presenting a list of concepts , the use of an ontology will allow us to present the concept hierarchy , improving the time taken to reach the correct concept .the use of the ontology will also allow us to finely tune the level of granularity at which we want the annotations to be .if we are training on a video set in a specific domain , like car racing , then we will allow a deeper granularity in the types of e.g. , while we may want to stop at for generalist video sets . * bootstrapping model learning through the concept hierarchy : * when a model is learned for a concept , the concept hierarchy can be used to propagate this model to the concept s parents , children and siblings . 
for example , the model for can be used to bootstrap the model for .the bootstrapping will enable faster learning of the model and thus a lower usage of computing resources .* automated timeline construction : * we can use the ontology to bootstrap from coarse human annotations of what concepts appear in a video to detailed timelines of when and where those concepts occur .for example , if we are told a video contains a and no other animals , we can automatically draw on trained models for other animals such as a to locate the ( just as a human might ) . in general , the ontology can tell us when the models we have are sufficient to make the visual distinctions needed in a particular video by machine , and when we need more help from humans . * using relationships and categories to improve concepts detection : * when a concept is detected in a video , relationships can be followed in the ontology to others concepts , indicating the potential presence of those , either in the same frame , or previously , or later in the video . for example ,if a is detected , and the ontology specifies that , this relation can be followed to trigger the activation of .categorical information can also be used .for example , if a is detected , other visual concepts and activities in the category will be expected .an attention model can then be used to focus on features in this category .* semantic search on the annotated videos : * a classic application of ontology based system is to provide semantic search .for example , querying for will return video sequences containing and , even if there is no explicit model learned for the concept .similarly , browsing a hierarchy of concepts is a better experience than a flat list of tags .* bootstrapping model learning using linked data : * thanks to the rich interconnection of public domain datasets provided in projects like dbpedia and wikidata , we are able to link concepts to a large repository of associated media : wikimedia commons .this enables bootstrapping and enriching the training set for a large number of concepts .mappings between concepts in the ontology and linked - data resources is made possible by the ecosystem of standards and tools associated with modern ontology engineering . * more accurate and efficient contextual model : * contextual models exploit inter - relationships between different visual concepts , and can significantly improve classification results .for example , simple co - occurrence statistics can be constructed with training data and used at test time . 
in a real - world scenariowhere the number of concepts can be large , the co - occurrence matrix is not only expensive to construct , but also is very sparse , hence , not representative of real - world data .ontology - aware hierarchical design of a co - occurrence matrix can remedy this .while the combination of numerical and symbolic techniques may be a longer term research agenda , ontologies can bring significant improvements to visual concept detection .we have presented a system featuring an active learning loop for visual concept detection , in which the ontology plays a central role for guiding the learning process .the systems presented in this paper is a work in progress .the active learning neural network visual concept learning loop is in place .we are currently focusing on building and integrating a comprehensive ontology structuring the hundreds of concepts our system is currently able to detect .our next steps will be using the ontology to perform the improvements discussed in this paper : bootstrapping model learning , and guiding the categorization process using the concept hierarchy ._ christian szegedy , vincent vanhoucke , sergey ioffe , jonathon shlens , zbigniew wojna . rethinking the inception architecture for computer vision_. proceedings of ieee conference on computer vision and pattern recognition , 2016 . _ ross girshick , jeff donahue , trevor darrell , jitendra malik .rich feature hierarchies for accurate object detection and semantic segmentation_. conference on computer vision and pattern recognition , 2014 ._ yu - gang jiang , zuxuan wu , jun wang , xiangyang xue , shih - fu chang , exploiting feature and class relationships in video categorization with regularized deep neural networks_. arxiv preprint arxiv:1502.07209 , 2015 _ m. vacura , v. svatek , c. saathoff , t. franz and r. troncy , `` describing low - level image features using the comm ontology , '' 2008 15th ieee international conference on image processing , san diego , ca , 2008 , pp .doi : 10.1109/icip.2008.4711688 _
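Returning to the ontology-based improvements listed above, the following is a minimal sketch (the toy hierarchy and all names are invented for illustration, and this is not the Dextro implementation) of two of them: propagating annotations up a concept hierarchy so that semantic search works without a dedicated model per concept, and pooling co-occurrence counts at a coarser level of the hierarchy so that the co-occurrence statistics are less sparse.

```python
# Toy illustration of ontology-guided semantic search and hierarchical co-occurrence pooling.

from collections import defaultdict

# child -> parent edges of a tiny invented "is-a" hierarchy
PARENT = {
    "sports_car": "car", "pickup_truck": "truck",
    "car": "vehicle", "truck": "vehicle", "vehicle": "entity",
    "cat": "animal", "dog": "animal", "animal": "entity",
}

def ancestors(concept):
    """All concepts implied by a detection of `concept` (including itself)."""
    out = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        out.append(concept)
    return out

def coarse(concept):
    """Aggregate a concept at the level just below the root of the hierarchy."""
    chain = ancestors(concept)
    return chain[-2] if len(chain) >= 2 else chain[-1]

def semantic_index(video_tags):
    """Index each video under every implied concept, so a query for 'vehicle'
    also returns videos annotated only with 'sports_car'."""
    index = defaultdict(set)
    for video, tags in video_tags.items():
        for tag in tags:
            for c in ancestors(tag):
                index[c].add(video)
    return index

def pooled_cooccurrence(video_tags, level_of):
    """Co-occurrence counts pooled at a coarser ontology level to fight sparsity."""
    counts = defaultdict(int)
    for tags in video_tags.values():
        pooled = {level_of(t) for t in tags}
        for a in pooled:
            for b in pooled:
                if a != b:
                    counts[(a, b)] += 1
    return counts

videos = {"v1": {"sports_car", "dog"}, "v2": {"pickup_truck", "cat"}}
idx = semantic_index(videos)
assert idx["vehicle"] == {"v1", "v2"}            # semantic search via the hierarchy
cooc = pooled_cooccurrence(videos, coarse)
assert cooc[("vehicle", "animal")] == 2          # denser statistics at the coarse level
```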
The maturity of deep learning techniques has led in recent years to a breakthrough in object recognition in visual media. While for some specific benchmarks neural techniques seem to match, if not outperform, human judgement, challenges are still open for detecting arbitrary concepts in arbitrary videos. In this paper, we propose a system that combines neural techniques, a large-scale visual concepts ontology, and an active learning loop to provide on-the-fly model learning of arbitrary concepts. We give an overview of the system as a whole, and focus on the central role of the ontology for guiding and bootstrapping the learning of new concepts, improving the recall of concept detection, and, on the user end, providing semantic search on a library of annotated videos.

Keywords: computer vision, ontology, deep learning, neural networks, active learning
born for a particular need in a cohort study , is now a framework of knowledge discovery in databases on its own , used in several application domains , e.g. . intended to an educational and scientific usage, the system is articulated into several modules for preparing and mining binary data , and filtering and interpreting the extracted units .thus , from binary data ( possibly obtained from a discretization procedure ) , allows one to extract itemsets ( frequent , closed , generators , etc . ) and then to generate association rules ( non - redundant , informative , etc . ) .building concept lattices is also possible .the system includes many classical algorithms of the literature , but also others that are specific to .the software is freely available at http://coron.loria.fr . mainly written in java ,is compatible with the unix , mac and windows operating systems and is of command - line usage .the methodology was initially designed for mining biological cohorts , but it is generalizable to any kind of database .it is important to notice that the whole process is guided by an expert , who is a specialist of the domain related to the database .his role may be crucial , especially for selecting the data and for interpreting the extracted units , in order to fully turn them into knowledge units . in our case ,the extracted knowledge units are mainly association rules . at the present time , finding association rules is one of the most important tasks in data mining .association rules allow one to reveal `` hidden '' relationships in a dataset .finding association rules requires first the extraction of frequent itemsets .the methodology consists of the following steps : definition of the study framework ; iterative step : data preparation and cleaning , pre - processing step , processing step , post - processing step ; validation of the results and generation of new research hypotheses ; feedback on the experiment .the life - cycle of the methodology is shown in figure [ loop ] .coron is designed to satisfy the present methodology and offers all the tools that are necessary for its application in a single platform .[ [ pre - processing . ] ] pre - processing .+ + + + + + + + + + + + + + + these modules propose several tools for manipulating and formatting large data .the data are described by binary tables in a simple text - file format : some individuals in lines possess or not some properties in column .the main possible operations are : ( i ) discretization of numerical data , ( ii ) conversion of different file formats , ( iii ) creation of the complement of the binary table , and ( iv ) other projection operations such as transposition of the table .[ [ data - mining . ] ] data mining .+ + + + + + + + + + + + extracting itemsets and association rules is a very popular task in data mining .concept lattices are mathematical structures supported by a rich and well established formalism , namely , formal concept analysis .a concept lattice is represented by a diagram giving nice visualization of classes of objects of a domain .thus , the data mining modules of the system offer the following possibilities : * itemset extraction : frequent , closed , rare , generators , etc .this task is performed by a large collection of algorithms based on different search strategies ( depth - first , level - wise , etc . ) . 
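To give a flavour of these two steps, itemset extraction and the rule generation described next, here is a deliberately naive, level-wise sketch on an invented binary table. It is not Coron's implementation, and it ignores the closed/rare/generator refinements listed in this section; it only illustrates what 'frequent itemsets' and 'association rules with support and confidence' mean operationally.

```python
# Naive illustration of frequent-itemset extraction from a binary table and of
# association-rule generation with support and confidence.  Data and thresholds invented.

from itertools import combinations

# individuals (rows) x properties (columns), given as the set of properties per individual
table = {
    "i1": {"a", "b", "c"},
    "i2": {"a", "c"},
    "i3": {"a", "b"},
    "i4": {"b", "c"},
}

def support(itemset):
    return sum(itemset <= row for row in table.values()) / len(table)

def frequent_itemsets(min_support):
    """Level-wise enumeration of frequent itemsets (Apriori-style)."""
    items = sorted({x for row in table.values() for x in row})
    frequent, level = {}, [frozenset([x]) for x in items]
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        # candidates for the next level: unions of frequent sets, one item larger
        level = list({a | b for a in level for b in level if len(a | b) == len(a) + 1})
    return frequent

def rules(frequent, min_confidence):
    """Association rules X => S \\ X generated from each frequent itemset S."""
    out = []
    for s, supp in frequent.items():
        for k in range(1, len(s)):
            for lhs in map(frozenset, combinations(s, k)):
                conf = supp / support(lhs)
                if conf >= min_confidence:
                    out.append((set(lhs), set(s - lhs), supp, conf))
    return out

freq = frequent_itemsets(min_support=0.5)
for lhs, rhs, supp, conf in rules(freq, min_confidence=0.6):
    print(f"{lhs} => {rhs}  (support={supp:.2f}, confidence={conf:.2f})")
```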
* association rules generation : frequent , rare , closed , informative , minimal non - redundant , duquenne - guigues basis , etc .these rules are given with a set of measures such as support , confidence , lift , conviction , etc . * concept lattice construction .[ [ post - processing . ] ] post - processing .+ + + + + + + + + + + + + + + + extracted units from the data mining step may be very numerous , and hide some units of higher interest .thus , proposes some filtering operations that should be done in interaction with a domain expert .the analyst may filter rules w.r.t .the length of its components , and/or the presence of a given property .he may also retain the best extracted units w.r.t .a measure of interest .it is also possible to color some properties of a list of association rules .[ [ toolbox . ] ] toolbox .+ + + + + + + + finally , auxiliary modules allow one to visualize equivalence classes of itemsets , randomly generate binary data , etc .has been used for the following tasks : extraction of knowledge of adaptation in case - based reasoning , gene expression data analysis , information retrieval , recommendations for internet advertisement , biological data integration , and finally , cohort studies .currently , we are studying how to integrate in platforms using graphical data - flows , such as knime , whose popularity is increasing ( http://www.knime.org ) .this would allow to interact with many other useful tools , most importantly avoiding a command - line usage . also , other tools will be integrated in to consider complex data , mainly numerical , see e.g. .finally , we have recently set up a forum to gather questions , comments and suggestions from users ( http://coron.loria.fr/forum/ ) . in this paper , we have given a brief overview of the system .for more details , please refer to the project s website at http://coron.loria.fr .the authors would like to thank the following persons for their participation in the development of : f. collignon , b. ducatel , s. maumus , p. petronin , t. bouton , a. knobloch , n. sonntag , y. toussaint .9 l. szathmary , s. maumus , p. petronin , y. toussaint and a. napoli , vers lextraction de motifs rares .actes de _ extraction et gestion de connaissances ( egc ) , rnti - e-6 , cpadus - ditions toulouse _ , pages 499510 , 2006 m. r. berthold , n. cebron , f. dill , t. r. gabriel , t. koetter , t. meinl , p. ohl , c. sieb , and b. wiswedel , knime : the konstanz information miner .demonstration at _ knowledge discovery in databases ( kdd ) _, 2006 l. szathmary , a. napoli and p. valtchev , towards rare itemset mining , _ ieee international conference on tools with artificial intelligence ( ictai ) _ , pages 305312 , 2007 m. daquin , f. badra , s. lafrogne , j. lieber , a. napoli and l. szathmary , case base mining for adaptation knowledge acquisition. proc . of the _ international joint conference on artificial intelligence ( ijcai ) _ , pages 750755 , 2007 m. kaytoue , s. duplessis and a. napoli , using formal concept analysis for the extraction of groups of co - expressed genes .proc . of the _ international conference on modelling , computation and optimization in information systems and management sciences ( mco ) , ccis , springer _ , 439449 , 2008 d. i. ignatov and s. o. kuznetsov , concept - based recommendations for internet advertisement. proc . of the _ concept lattices and their applications ( cla ) _ , pages 157166 , 2008 e. nauer and y. 
toussaint , classification dynamique par treillis de concepts pour la recherche dinformation sur le web . actes de _ 5me confrence de recherche en information et applications ( coria ) _ ,pages 7186 , 2008 a. coulet , m. smal - tabbone , p. benlian , a. napoli and m .- d .devignes , ontology - guided data preparation for discovering genotype - phenotype relationships . _ bmc bioinformatics _ , vol . 9 , 2008 l. szathmary , a. napoli , s.o .kuznetsov , zart : a multifunctional itemset mining algorithm , proc . of the _conf . on concept lattices and their applications ( cla ) _ ,pages 2637 , 2007 l. szathmary , p. valtchev , a. napoli and r. godin , constructing iceberg lattices from frequent closures using generators , proc . of the _ international conference on discovery science ( ds ) , lncs 5255 , springer _ , pages 136147 , 2008 l. szathmary , p. valtchev , a. napoli and r. godin , efficient vertical mining of frequent closures and generators , proc . of the _ international symposium on intelligent data analysis ( ida ) , lncs , springer _ , pages 393404 , 2009 m. kaytoue , s. duplessis , s. o. kuznetsov and a. napoli , two fca - based methods for mining gene expression data , proc . of the _ international conference on formal concept analysis ( icfca ) ,lncs 5548 , springer _, pages 251266 , 2009 b. ganter and r. wille , formal concept analysis , mathematical foundations , springer , 1999
Coron is a domain- and platform-independent, multi-purpose data mining toolkit which not only incorporates a rich collection of data mining algorithms, but also allows a number of auxiliary operations. To the best of our knowledge, a data mining toolkit designed specifically for itemset extraction and association rule generation like Coron does not exist elsewhere. Coron also provides support for preparing and filtering data, and for interpreting the extracted units of knowledge.

Keywords: knowledge discovery, data mining, itemset extraction, association rules generation, rare item problem
evolutionary processes on the bacterial genome are dynamic and complex , with a tremendous range of mutation events occurring at a number of different physical scales . aside from point mutations at the level of nucleotides ,a wide variety of evolutionary mechanisms involve cutting and rejoining of genetic material . in this paperwe step back from the relatively frequent rearrangements that occur at the single nucleotide level to look at these larger scale changes .we will reprise some of the mathematical approaches to their study , and show how it may be possible to view them in a single algebraic modelling framework .examples of larger scale changes include _ deletion _ , _ translocation _ , _ duplication _ and _ inversion_. these processes respectively delete a segment of dna from the genome , relocate a segment to another region on the genome , make a copy of a segment and insert the duplicate into the genome , or invert a segment excise it and reinsert it with the opposite orientation .these mutations are facilitated by the actions of enzymes that reside within the bacterial cell and are encoded by genes on the chromosome .of all the changes that may occur on the genome of a single celled bacterium , many may be fatal to the organism , either by disrupting some function essential to life , or by disrupting the replication process , and more generally may not be observed because the change has conferred a significant cost to their fitness . in that contextit is remarkable that we know as much as we do . we know , for instance , that of the mutation events listed above , inversion is most common , at least in bacteria . stepping back from the focus on changes at the sequence level , biological processes giving rise to _ knotting _ in dnahave been observed for some time , at least as far back as 1981 .however their importance in chemistry had been recognised decades earlier by frisch and wassermann , who defined _ topological isomerism _ in which chemically identical polymers differ only by their topology ( here we use the word topology " in the mathematical sense , so that differing by topology means differing _ as knots _ ) .for instance , polymer chains were able to be generated in the laboratory that formed links , or _catenanes _ , as they are known in the biochemical literature . from a topological viewpoint ,the structures observed at this stage were simple hopf links or trefoil knots , however a wider variety of knots were later found to arise in dna ( e.g. ) .we will refer to the study of changes at a sequence level ( such as inversion ) as _ local _ evolution , and those at a topological level as _topological _ evolution .while the local view of the bacterial genome focuses on the sequence and ignores the topology ( or knotting ) , the topological view does exactly the opposite .it is important to acknowledge that both viewpoints have sound justifications .the sequence ( local ) view is natural because genes , or functional segments of dna , are sequences , and they are transcribed in a linear way that does not take into account any twisting or topological characteristic of the location of the gene .other features encouraging a sequence view include the observation that genes that form part of the same metabolic pathway are often clustered in the same region of the genome . 
on the other hand the topological view allows us to address a range of questions to do with the origins and maintenance of knotting , linking and supercoiling .for instance , the distribution of knots in the wild is not random , suggesting either that selection favours certain knot forms , or that the mechanism that gives rise to knotting leads to some knots more often than others . studyingthe distribution of knots has helped uncover information about action of the site - specific recombinase that is responsible for knotting in some circumstances ( for example .ultimately , what we are studying in local and topological processes are two alternative projections from the actual configuration of a bacterial genome , including both nucleotide sequence data ( a one - dimensional projection ) and topological data ( three dimensions ) , as in figure [ fig : config ] .motivating both these approaches to bacterial evolution are questions about the construction of phylogenies : understanding the processes that drive the changes in the structure at a local or topological level gives information about the relationships among taxonomic units . in this paperwe will describe some algebraic structures that may provide a link between these processes that draws out the biological commonalities .this is fertile ground for future developments .the application of algebraic methods to biology extends beyond the use of knot theory in dna described in this paper .for instance , a significant body of work now applies ideas from algebraic geometry to phylogeny through the use of algebraic varieties .the use of varieties in biology is often termed algebraic biology " and is closely related to the field of algebraic statistics " .the geometric viewpoint also has applications in viral capsid assembly and rna folding , areas in which combinatorics and graph theory play a significant role , in addition to the study of radiation - induced chromosomal aberrations .finally , a strain of research applies group theory to problems in evolutionary biology , for instance and as well as the recent study of inversion distance closely related to this survey .the biological mechanisms that give rise to changes such as inversion and knotting are actions of enzymes that involve cutting and rejoining dna double - helices .the two main families of enzymes involved in these processes are the topisomerases and the site - specific recombinases ( see , for instance , for a review of these ) .topoisomerases are essential in cell replication because they cut the phosphodiester bonds between dna base pairs , enabling untangling of the coiled or knotted dna ( see e.g ) .in general these do not require a specific site for cleavage , and cut at most two dna strands ( one double - helix ) .cutting one strand can allow the relaxing of super - coiling , while cutting two strands can allow the passage of one double - helix through another and aid in reducing knotting .in contrast , site - specific recombinases do require a specific sequence for cleavage , and act by cutting and rejoining two double - helices .these enzymes act in a two step process , first forming a synaptic complex that involves a certain configuration of the substrate dna , and then causing strand exchange .there is a rich variety of such recombinases , broadly falling into two families , the resolvase ( examples include tn3 ) or invertase ( gin ) family , and the integrase family ( phage , cre , flp , and xer system ) . 
while the resolvase / invertase family generally requires topological alignment of sites that are directly repeated or inverted ( respectively ) , the integrase family are more versatile and can act on a wider range of substrate arrangements ( see , or the introduction to for surveys of these recombinases ) .these types of action can take place as a result of cuts to a single double - helical strand at the site of recombination , or to two double - helical strands , as shown in figure [ fig : dna - cutting - rejoining ] .[ fig : dna - cutting - rejoining ] actions of site - specific recombinases can produce a wide variety of possible knots under laboratory conditions , such as that shown in figure [ fig : real.knot ] .interestingly , in bacteriophage capsids the distribution of knot types in the wild is not uniform across knot types , or even across knots of the same crossing number .for instance , the achiral figure - of - eight knot is surprisingly scarce .the observed distribution of knots suggests that the knot type may influence fitness , and so may carry some information about metabolism , or it may give information about the mechanism that gives rise to knotting .as noted , topoisomerase plays an indispensable role in cell replication . when cells replicate , the two strands in the double helical dna split and complementary nucleotides are synthesized along each of the arms of the replication , in a process akin to unzipping a zip .when the dna is circular , replication begins at an _ origin _ at which the two strands are pulled apart and replication proceeds along the forks on each side of the origin , ending at a point called the _terminus_. the problem is that this process can not resolve the twisting that occurs without cutting the dna at some point .enter topoisomerase .various types of topoisomerase exist , one cutting a single strand to allow untwisting and reconnecting to occur ( type i ) , another cutting both strands of the double helix , as described in figure [ fig : dna - cutting - rejoining]a and [ fig : dna - cutting - rejoining]b ( type ii ) . in other words , the dna cutting and rejoining that is effected by topoisomerases is essential to the reproductive processes of bacteria .consequently , understanding topoisomerase action is a goal for the development of anti - bacterial and anti - cancer drug treatments ( for example , ) .knotted structures are regularly being observed in dna , and their origins are a topic of active research ( e.g. ) . examples also arise in proteins , where a stevedore knot ( with six crossings ) has been observed .such a knot can be generated with a single change of crossing from the unknot , provided the unknot is suitably arranged before the crossing is changed ( the mechanisms in this case are unclear ) .inversions in bacteria are often studied with a view to phylogeny reconstruction because focussing on the inversion process avoids the obfuscating effects of horizontal transfer .that is , one may attempt to reconstruct the evolutionary history behind a set of organisms under the assumption that the only evolutionary process is inversion , yielding , at least , an approximation of the true phylogeny . in order to do this completely, one must be able to decide , for any set of related genomes , a nearest common ancestor .this is called the reversal median problem " , because of the link to computer science problems known as reversals ( e.g. 
the pancake flipping problem ) .it is typically expressed in terms of attempting to find a common ancestor that minimizes the total number of evolutionary steps ( or inversions required ) to each genome .equivalently , one wants to minimize the average distance to each genome .of course one may use distance based methods such as neighbour - joining to construct an inversion - based phylogeny ( a good survey of these approaches can be found in the book ) . in any case, one defines a metric on the set of bacterial genomes , given by setting to be the minimal number of inversions required to transform genome to genome .the genome itself can be considered to be a word in the alphabet by fixing a starting point on the circle , but it is more natural in this context to consider instead the genome as a sequence of _ genes _ ( ignoring intervening dna ) , or even better as a sequence of preserved _ regions _ among a given set of genomes .the first modelling of inversions in this way was as a permutation of the set defined by if and otherwise .in other words , the sequence is reversed .the initial statement of the inversion distance problem , made in , numbered a set of gene loci common to both genomes , and considered inversions defined as above .being on a circle , they ask for the minimal number of inversions between two genomes , without regard to either mirror images or rotations around the circle .that is , an arrangement of regions in the order around the circular genome is the same arrangement in three dimensions as , and even , because the difference is merely a rotation or reflection of the whole genome . in group - theoretic terms , one might say that these arrangement are equivalent _ up to the action of the dihedral group _ . subsequent work , mainly by bioinformaticians and computer scientists , treated a form of the problem in which the chromosome is linear , rather than circular .this is based on the modelling assumption that inversion events are equally likely , irrespective of the length of dna inverted .this model led to several interesting algorithms , usually involving a translation into a graph theory problem .research into the problem then shifted in two directions : to treating _ signed _ inversions , and to finding an actual sequence of inversions that realizes the inversion distance , sometimes called _ sorting by reversals _ .a signed inversion takes a sequence and not only reverses the order but changes the sign : , as in figure [ fig : signed ] .].,width=317 ] effectively , signed inversions track not only the position of the region on the genome , but also the orientation . 
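The signed inversion just described is easy to state operationally. The sketch below (illustrative only) applies a signed inversion to a genome represented as a signed permutation, and includes a brute-force breadth-first search for the inversion distance on a toy example; this exhaustive search is only a check of the definition, whereas the practical algorithms discussed below are polynomial.

```python
# Sketch of signed inversions on a genome modelled as a signed permutation of regions.

from collections import deque

def invert(genome, i, j):
    """Invert the segment genome[i..j] (inclusive): reverse its order and flip the signs."""
    segment = [-g for g in reversed(genome[i:j + 1])]
    return tuple(genome[:i]) + tuple(segment) + tuple(genome[j + 1:])

def inversion_distance(start, target):
    """Minimum number of signed inversions from start to target, by brute-force BFS
    (feasible only for very small genomes; shown purely to illustrate the quantity)."""
    start, target = tuple(start), tuple(target)
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        genome, d = frontier.popleft()
        if genome == target:
            return d
        n = len(genome)
        for i in range(n):
            for j in range(i, n):
                nxt = invert(genome, i, j)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))

print(invert((1, 2, 3, 4, 5), 1, 3))             # (1, -4, -3, -2, 5)
print(inversion_distance((3, 1, -2), (1, 2, 3)))  # 3 inversions for this toy example
```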
giving each gene an orientation as well asa relative position is more realistic , and surprisingly turns out to be more tractable ( the unsigned inversion sorting problem has been shown to be np hard ) .using signed inversions is more realistic because it takes account of the fact that each preserved region has an implied orientation .there is an inherent polarity in dna built into the biochemistry , with the two strands of the double - helix having mutually opposite orientations : inverting a segment of double - stranded dna results in each strand of the inverted segment joining to the remaining parts of the complementary strand .a polynomial time algorithm for finding the minimum number of signed inversions between two genomes was presented in , based on the breakpoint graph " , and a _ linear _ time algorithm was given in 2001 by .the breakpoint graph approaches to this problem have been translated to an algebraic formalism at a similar time .the book by provides a survey of combinatorial methods such as these , and their links to phylogeny .several extensions to this family of problems have been pursued .broadly these proceed in two directions : considering additional mutation processes such as transpositions or block interchanges ; or adding a cost function to the length of an inversion within the distance calculation on the basis that inversion lengths are not uniformly distributed .this field in general is now reaching a mature stage of development , and has become a branch of computational algorithmics , studied in many cases without reference to biological motivation .the use of tangle algebras to model the processes giving rise to knotting in dna provides an excellent and unfortunately uncommon example of the application of algebra to biology . the tangle algebra approach to knotting in dna began with the study of tn3 resolvase acting on unknotted dna to produce a range of different knots in proportions that could be placed in an order that decreased exponentially . because the enzyme binds to the dna at a specific site ,any topological action of the enzyme on the dna can be considered in a small three dimensional region of the cell containing the site .this motivated the use of _ tangles _ that had been introduced by . in order to understand how this enzyme was acting, it was assumed that the enzyme was acting in a consistent processive " way at the site it was bound before releasing the dna .the distribution of knots was then inferred to reflect the different times of release of the enzyme .the model arising from this assumption had already produced testable , and verified , predictions of knot products , but the tangle algebra approach made it possible to write down tangle equations that reflected the progressive repeat action of the resolvase .a tangle is a box , or circle , with two strings passing through it , whose endpoints are at opposite pairs of corners of the box ( consider the circle to be on the circumference of a 3-ball with strings coming from the sw , se , nw and ne directions ) .tangles are multiplied " by concatenating the boxes side by side and joining the strings up , as in figure [ fig : tangle ] .the tn3 resolvase latches on to the synaptosome ( a specific region of the dna where strands are crossing in the right way ) and through cutting and rejoining has the effect of multiplying the synaptosome tangle by another fixed tangle . 
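the breakpoint graph itself takes some setting up, but the simplest quantity in that circle of ideas, the number of breakpoints of a signed arrangement, already gives a lower bound on the inversion distance and is easy to compute. the sketch below is a textbook-style illustration rather than a step of the algorithms cited above: the arrangement is framed by 0 and n+1, a breakpoint is a consecutive pair (a, b) with b != a + 1, and each signed inversion can remove at most two breakpoints.

```python
from math import ceil

def signed_breakpoints(perm):
    """Breakpoints of a signed arrangement of 1..n, framed by 0 and n+1:
    consecutive pairs (a, b) with b != a + 1."""
    n = len(perm)
    framed = (0,) + tuple(perm) + (n + 1,)
    return sum(1 for a, b in zip(framed, framed[1:]) if b != a + 1)

def reversal_lower_bound(perm):
    """A single signed reversal removes at most two breakpoints, so the
    breakpoint count gives a simple lower bound on the inversion distance
    to the identity arrangement."""
    return ceil(signed_breakpoints(perm) / 2)

if __name__ == "__main__":
    print(signed_breakpoints((3, 1, 2)), reversal_lower_bound((3, 1, 2)))      # 3 2
    print(signed_breakpoints((-2, -1, 3)), reversal_lower_bound((-2, -1, 3)))  # 2 1
```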
under the hypothesis that the substrate was arranged in a specific way, they were able to show that these equations had a single solution , supporting the conjecture made in that the knotting was the result of the tn3 staying latched over - long and acting by addition of the tangle more than once , using the fact that tangles arising in this way are rational " .rational tangles are those that are obtained from a trivial tangle by successive twists swapping either the ne / se strings or the sw / se strings .a similar approach was taken to study the effect of gin recombinase .surveys of this and related approaches are widespread , but some good sources are and .tangles continue to be used to describe the synaptic structure giving rise to recombination , for example extending the model to include 3-string tangles , and have also been used to make predictions of the possible knots that may arise under different hypotheses about the substrate arrangement .other algebraically related approaches to dna configurations have been attempted , including the use of the action of affine lie groups through a translation of knot surgery into tilings of the plane on which the lie group acts , as well as the study of self - assembly of dna polyhedra .within the _ local _ view , the genome is fundamentally a circular sequence of nucleotides , represented by the alphabet , and so can be thought of as an element of a formal language for some integer in a realistic range for the size of a genome of between and about .of course , this should be considered modulo the action of the group of symmetries that acts on the genome , thought of as a circle with evenly - spaced points representing the nucleotides . in the case of a genomeconfigured as an unknot ( a simple circle ) , the group is the dihedral group on letters , . in the context of studying relationships among a set of genomes , and as noted in the introduction , it is not always helpful to make comparisons at the nucleotide level because the resolution is too fine .instead , one can look at the set of genes on the genomes , and more recently , use genome sequence data to identify regions of dna ( effectively words in the alphabet ) that are preserved ( up to orientation ) in each of the genomes under investigation .this may seem counter - intuitive , since the goal would seem to be to identify points of difference among genomes rather than similarity .however , for investigating the action of mutations such as inversion , one seeks to ignore base - pair changes ( at nucleotide level ) to focus on the movements of larger segments .recent studies of sets of eight genomes of _ yersinia pestis _, the cause of bubonic plague , have found between 60 and 80 preserved regions . in one of these studies , for example ,each genome can be considered to be a permutation of the 78 regions modulo again the dihedral group , in this case . 
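computationally, working "modulo the dihedral group" just means identifying circular arrangements that differ by a rotation or a reflection of the circle. one convenient, and entirely conventional (hence assumed here), way to do this is to take the lexicographically least representative of the orbit as a canonical form:

```python
def dihedral_orbit(arrangement):
    """All arrangements obtained from a circular arrangement by rotating it
    or by reading the circle in the opposite direction."""
    n = len(arrangement)
    orbit = set()
    for seq in (tuple(arrangement), tuple(reversed(arrangement))):
        for k in range(n):
            orbit.add(seq[k:] + seq[:k])
    return orbit

def canonical_form(arrangement):
    """Lexicographically least representative of the dihedral orbit."""
    return min(dihedral_orbit(arrangement))

def same_circular_genome(a, b):
    """True if the arrangements differ only by a rotation and/or reflection."""
    return canonical_form(a) == canonical_form(b)

if __name__ == "__main__":
    print(same_circular_genome((1, 2, 3, 4, 5), (3, 4, 5, 1, 2)))   # True: rotation
    print(same_circular_genome((1, 2, 3, 4, 5), (1, 5, 4, 3, 2)))   # True: reflection
    print(same_circular_genome((1, 2, 3, 4, 5), (1, 3, 2, 4, 5)))   # False
```

the same canonicalization applies to signed arrangements, except that a reflection should then also flip the orientation of each region.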
given a set of genomes related by inversions , and a set of regions of dna that are common to each genome, inversions can be thought of as generators of a group acting on the set of possible genomes that permutes these regions .the set of all possible inversions relating these genomes is then the set of signed permutations on these regions , which is isomorphic to the finite coxeter group of type , also called the hyperoctahedral group .one may think of the elements of this group as signed permutation matrices , in which each row and each column contains exactly one non - zero entry , which is either or .the finite coxeter groups are groups generated by reflections in real euclidean space , and are well - studied , having connections to many parts of algebra . in particular , they arise in the representation theory of finite groups of lie type , where they appear as weyl groups ( see for example ) . in that contextthey are studied with respect to a particular presentation ( generating set and relations ) that corresponds with the dynkin diagrams that arise in lie theory .the standard generators for the type coxeter group are the transpositions and the map sending and fixing all . in this framework , we regard the set of inversions as a subgroup of generated by the biologically plausible signed permutations that the model allows . since such inversions generate the whole group , any pair of genomes are connected by a unique group element , which may potentially be represented by a number of different sequences of inversions . the minimal number of inversions required to write this group element we may call the _ inversion length _ of the group element , and is precisely the inversion distance between the genomes . in other words ,the inversion distance problem is translated into the question of the behaviour of a length function with respect to a set of non - standard generators representing the inversions .it has been known for a long time that in general , given a set of generators for a permutation group , finding the length of a given permutation in terms of those generators is np hard .clearly this is not the case for all _ particular _ groups or sets of generators : for instance , using standard generators for a finite coxeter group . 
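a small computation makes the length-function phrasing concrete. below, elements of the hyperoctahedral group are stored as signed arrangements, the "biologically plausible" generating set is taken, purely for the example, to be all signed reversals of contiguous segments (the text above leaves the exact set open), and word lengths with respect to both this set and the standard coxeter generators are found by breadth-first search; placing the sign-change generator in the last position is just one common convention.

```python
from collections import deque
from math import factorial

def signed_reversal(g, i, j):
    """Reverse the segment i..j of a signed arrangement and flip its signs."""
    return g[:i] + tuple(-x for x in reversed(g[i:j + 1])) + g[j + 1:]

def word_lengths(identity, moves_from):
    """Breadth-first search from the identity: the length of every element
    reachable with the given moves, i.e. the length function of the group
    with respect to that generating set."""
    lengths, frontier = {identity: 0}, deque([identity])
    while frontier:
        g = frontier.popleft()
        for h in moves_from(g):
            if h not in lengths:
                lengths[h] = lengths[g] + 1
                frontier.append(h)
    return lengths

def all_signed_reversals(g):
    n = len(g)
    return [signed_reversal(g, i, j) for i in range(n) for j in range(i, n)]

def coxeter_generators(g):
    """Standard type-B generators (one common convention): the adjacent
    transpositions together with a sign change in the last position."""
    out = [g[:i] + (g[i + 1], g[i]) + g[i + 2:] for i in range(len(g) - 1)]
    out.append(g[:-1] + (-g[-1],))
    return out

if __name__ == "__main__":
    n = 3
    identity = tuple(range(1, n + 1))
    by_reversals = word_lengths(identity, all_signed_reversals)
    by_coxeter = word_lengths(identity, coxeter_generators)
    # both generating sets reach the whole hyperoctahedral group: 2^n * n! elements
    print(len(by_reversals), len(by_coxeter), (2 ** n) * factorial(n))   # 48 48 48
    g = (-2, 1, -3)
    print(by_reversals[g], by_coxeter[g])   # reversal length vs. coxeter length of g
```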
at issue in the context of inversions is whether it is possible to find the length of a group element in terms of the generators mandated by the biological model .the algebraic approach provides an alternative framework that can be generalized to more sophisticated and realistic scenarios .for example , it is known that inversions fix the position of the terminus of replication relative to that of the origin , breaking the genome into two evenly balanced replichores .while this fact has been incorporated in a limited way into recent approaches to the inversion distance problem , a group - theoretic framework makes this restriction simple to represent as the stabilizing subgroup of the terminus ( existing approaches involving fixing the terminus have assumed the inversions are symmetric about the origin ) .group theorists have studied alternative length functions on these groups , and it is possible progress will be made along similar lines .taking a group - theoretic approach allows the translation of related questions , such as that of reconstructing phylogeny , into algebraic questions , for instance about the cayley graph of a group .recent work by the author and collaborators has applied this group theoretic approach to the inversion distance problem for a model in which inversions act on only two regions at a time , and in which orientation is ignored . when the number of regions that inversions may act on is restricted, many standard approaches fail because they assume any inversion of a region is as likely as the inversion of the complementary region , allowing the problem to be considered as if the genome were linear .that is , any model in which inversions act on only a restricted number of regions must account for the circular structure of the genome .this kind of model is not unrealistic because it is also known that inversions of shorter segments occur more frequently than those of long segments . in ,the need to work with circular permutations is handled by lifting the permutation to the group of periodic permutations of the integers , known as the _ affine symmetric group_. results about the length function in this group are then able to be used ( in particular a length formula given by ) .while this lifting from circular to affine permutations is not trivial and provides some theoretical challenges , it nevertheless produces a polynomial time algorithm and indicates that the affine symmetric group is the right " place to study circular permutations .this is especially the case when the assumption of uniformly likely inversions is removed from the model .the application of group theory and other algebraic ideas to local evolutionary mechanisms will also allow generalizations such as the incorporation of other known types of mutation into this model .for instance , translocation is another invertible operation that can be studied .deletion requires more care as it is not invertible : while horizontal transfer does involve insertion , and so could be considered an inverse operation , it generally does not reinsert a piece recently deleted from the same chromosome . 
incorporating non - invertible actionswill require modelling the action as that of a semigroup rather than a group .for all of these extensions and variations , extensive theoretical and computational tools that have been developed within the world of algebraic research can be brought to bear .for instance , the power of computational systems such as gap and magma ( and their many packages ) are barely used in biology ( the aforementioned work of the author implements the algorithm using gap ) . finally , it is worth noting that many assumptions made about inversions , such as the fixing of the terminus of replication , are actually not quite so rigid .it is more correct , for instance , to say that the terminus stays within a small distance of the antipode of the origin .some statistical approaches have already been taken to the inversion distance problem ( e.g. , ) , and the logical development of this theme is to use models involving group actions in a probabilistic setting a genuinely multidisciplinary endeavour .integrated computational systems such as sage , that can call on gap or magma as well as statistical packages such as r , are likely to play an important role .while the questions about inversions and about knotting that we have described above tend to be addressed separately in the modelling literature , the biological mechanism giving rise to inversions and knotting is widely acknowledged as being the same : cutting and rejoining of dna double - helices . in the case of the action of site - specific recombinasesthe difference between knotting and inversion is in the arrangement of the dna ( the substrate ) when the recombination takes place .for instance , the action of the resolvase enzymes tn3 or transposons at the _ res _ site give rise to knotting and linking when the sites are aligned in the same orientation ( as direct repeats ) and the dna is twisted in a certain way ( see figure [ fig : knot.process ] ) .( recall from section [ s : tangles ] that tn3 is an enzyme whose action has been analyzed using tangle algebras ) .on the other hand , the action of the gin recombinase gives rise to inversion when the two _ gix _ sites are aligned with the same orientation and the dna twisted slightly differently ( see figures [ fig : inversion.process.lit ] and [ fig : inversion.process ] ) .on the _ res _ site on a substrate ( a ) that is twisted gives rise to a series of products through processive recombination .the products are ( b ) a hopf link ( catenane ) , ( c ) a figure-8 knot in which the sequence is as in the original substrate , ( d ) another link ( the whitehead link ) , and ( e ) a six - crossing knot . figure adapted from .,width=317 ] .note that rearrangements a , c and e all have identical sequence data , whereas b and d are inversions .this illustrates the interplay between inversion and knotting , because while a , c and e are isomers , their knotting is distinct , giving the trivial knot , the trefoil , and a six - crossing knot respectively .the inversions b and d themselves are not topologically identical , with b being the trivial knot and d a five - crossing knot.,width=317 ] the recombination events represented in figures [ fig : knot.process ] and [ fig : inversion.process ] can be represented as braid closures in a remarkably elegant way . in each casethere is a substrate braid ( `` plat - closed '' ) , and the action of the recombinase is to add a generator ( see figure [ fig : sigma_i ] ) to the substrate braid before it is plat - closed . 
a braid on strings is a set of strings joining two parallel lines of points , such that the strings pass continuously downwards .the set of braids on strings forms a group whose multiplication is performed by placing one diagram below the other and joining corresponding strings ( an example is in figure [ fig : tn3.plat ] ) .the _ braid group _ is generated by braids in which an adjacent pair of strings is interchanged .we denote these generators by , being the braid that interchanges strings and , with the string passing behind ( figure [ fig : sigma_i ] ) .the inverse is the same but with the crossing reversed so that the string comes in front ; in general the inverse of any braid can be obtained by taking its reflection in a mirror placed below the lower points .this is easy to see in the context of the operation in the group being stacking diagrams on top of each other .good references for an introduction to braid groups are the books by and ..,width=317 ] braids are widely used in the mathematical study of knots .there are two standard ways in which a braid can be transformed into a knot ( or link ) .the first , more common way , is to join the string in position at the bottom with the string in the same position at the top . the second way is possible with an even number of strings , and involves joining adjacent pairs of strings at the top and the bottom .the latter is called _plat closure_. an example of the plat closure of a braid is given by the dashed lines at top and bottom of figure [ fig : tn3.plat ] . in the case of the action of tn3 ( shown in figure [fig : knot.process ] ) , giving rise to knotting , the base braid is , and the recombinase acts by adding a power of as a prefix .the first step of this process is shown in figure [ fig : tn3.plat ] .the family of products of processive recombination of the action are plat closures of the braids where is the number of twists added due to the recombinase staying bound to the substrate . , and figure [ fig : tangle ] for the tangle version ) ., width=317 ] on the other hand , in the case of the gin inversion synapse ( figure [ fig : inversion.process ] ) , we see the substrate has one fewer twist and the base braid is .the sequence of products of processive recombination is then given by the plat closure of for .note , given a certain substrate , the form of the braid whose closure gives the substrate is not unique .for instance , in the substrate in figure [ fig : knot.process ] we could push all crossings except the last to the left pair of strands , so that the substrate could instead be produced from the plat closure of , instead of .the equivalence of braids under the standard closure is well - studied ( braids produce the same knot under standard closure if they can be reached from each other via a sequence of markov moves ) , and similar information is known for plat closure .this raises the question of whether more actions of recombinase are performed that are not detected by changes in topology or sequence , and hence go undetected by experiment . 
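braid words can be prototyped directly as lists of signed integers, with +i standing for the generator that crosses strands i and i+1 and -i for its inverse. the sketch below composes words by concatenation, performs free reduction of adjacent cancelling letters, and computes the permutation of strand endpoints a braid induces; it does not apply the braid relations, so it is not a full equivalence test, and the particular words in the demonstration are placeholders rather than the substrate braids referred to above, which do not survive legibly in the text.

```python
def compose(word1, word2):
    """Multiply two braid words: stack one diagram below the other."""
    return word1 + word2

def free_reduce(word):
    """Cancel adjacent pairs sigma_i sigma_i^{-1}; this is only a partial
    simplification, since the braid relations themselves are not applied."""
    out = []
    for letter in word:
        if out and out[-1] == -letter:
            out.pop()
        else:
            out.append(letter)
    return out

def induced_permutation(word, strands):
    """Returns at, where at[p] is the top position of the strand that ends
    at bottom position p (0-indexed); crossings are read top to bottom.
    The sign of a letter does not affect the underlying permutation."""
    at = list(range(strands))
    for letter in word:
        i = abs(letter) - 1
        at[i], at[i + 1] = at[i + 1], at[i]
    return at

if __name__ == "__main__":
    print(free_reduce([1, -1, 2, -2, 3]))                     # [3]
    substrate = [-1, 2, 2]              # a placeholder 4-strand substrate word
    print(induced_permutation(substrate, 4))                  # [1, 0, 2, 3]
    print(induced_permutation(compose(substrate, [2]), 4))    # [1, 2, 0, 3]
```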
despite this, we do know that every knot can be represented as the plat closure of a braid , and hence that _ every _ substrate for a recombination reaction can indeed be represented in this way .( to see that any knot can be represented as the plat closure of a braid , imagine placing the knot between two parallel lines , and pulling loops that turn up down to the bottom line , and loops that turn down up to the top line ) .in addition , we are able to derive some standard results on the path of the processive recombination when the substrate can be arranged as in figure [ fig : inversion.process](a ) . for a start , it is always possible to draw such a substrate as the plat closure of a braid on _ four _ strands , known as a _4-plat_. second , the braid word can be used to express the simplest case of an unknotted substrate with crossings .the study of recombination events on substrates that can be arranged as 4-plats was part of the motivation for the tangle model of site - specific recombination due to .consider an arbitrary orientation on the unknotted chromosome , arranged as in figure [ fig : inversion.process](a ) .if there are an even number of crossings , then when arranged so that all crossings are below the recombination sites , the middle two strands ( where recombination occurs ) will have opposite orientations ( the recombination sites are in inverted repeat ) . then recombination has the effect of joining strands of opposite orientation , resulting in an inversion .subsequent recombinations alternate between re - orienting and inverting . in terms of braids , if we represent the substrate with crossings as , we have that the products are inversions when is odd and preserve the orientation when is even . on the other handif the substrate is arranged with an odd number of crossings , we may write the braid in form for some . in this case if we follow the strand in the second position from the left down through the braid and up the other side after plat closure , it emerges on the outside ( fourth ) strand .the effect of the additional twist given by recombination is to rejoin this strand with the second strand , and the result is a link .subsequent events alternately restore the strand to a single loop or create links , so that the product is a link precisely when is odd .the recombinations described in the previous paragraph involve transformations of form given in figure [ fig : transf](a ) or [ fig : transf](b ) .however , not all recombination events produce rejoining in this form .some , such as the action of cre on the site _ loxp _ , or the action of xercd to resolve plasmid dimers , have the effect shown in figure [ fig : transf](c ) .these recombinations are shown here in figure [ fig : xercd ] .( c ) can be represented as the plat closure of a bmw tangle diagram ., width=661 ] while these reactions initially appear not amenable to the braid analysis because the strings turn upwards , it is possible for the substrate , and even the action , to be represented as a braid , albeit with a bit more effort .this is because even if the diagram is drawn with upturned strands , these up - turnings ( respectively down ) can be pulled down ( resp .up ) into the plat closure , as shown in figure [ fig : xercd.braids ] .( it should also be noted that the three - dimensional synaptic arrangement for the same reaction can often be projected onto the plane with apparently different alignments of recombination sites ) . 
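whether the plat closure of a braid is a knot (one component) or a link (several components) depends only on the permutation the braid induces, so parity statements of the kind made above can be checked mechanically. the sketch below counts components by building the obvious graph on strand endpoints: each strand joins a top point to a bottom point according to the induced permutation, and plat arcs join points 1-2, 3-4, and so on, at the top and at the bottom. it counts components only and says nothing about which knot or link arises; the braid words used are again illustrative.

```python
def plat_components(word, strands):
    """Number of components of the plat closure of a braid word on an even
    number of strands.  Points are ('t', p) and ('b', p), p = 0..strands-1."""
    assert strands % 2 == 0
    at = list(range(strands))                 # at[p] = top label of strand at position p
    for letter in word:
        i = abs(letter) - 1
        at[i], at[i + 1] = at[i + 1], at[i]

    parent = {('t', p): ('t', p) for p in range(strands)}
    parent.update({('b', p): ('b', p) for p in range(strands)})

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for p in range(strands):
        union(('t', at[p]), ('b', p))         # strand from top at[p] to bottom p
    for p in range(0, strands, 2):
        union(('t', p), ('t', p + 1))         # plat arc at the top
        union(('b', p), ('b', p + 1))         # plat arc at the bottom
    return len({find(x) for x in parent})

if __name__ == "__main__":
    print(plat_components([2, 2, 2], 4))   # 1: the trefoil as a 4-plat
    print(plat_components([2, 2], 4))      # 2: a two-component link (the hopf link)
    # adding one crossing at a time flips the product between one and two
    # components, echoing the alternation between knots and links above
    print([plat_components([2] * k, 4) for k in range(1, 7)])   # [1, 2, 1, 2, 1, 2]
```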
while braid groups can be used to model such actions , there are other diagram algebras that may provide an alternative model ( an _ algebra _ is a set with two operations consisting of linear combinations of elements that can be multiplied like a vector space in which we can multiply ) .the birman - murakami - wenzl algebra ( or bmw algebra ) is similar to braids in that its basis elements consist of strands that connect two lines of parallel points .however in this algebra the strings are allowed to return up ( or down ) to the level of their origin .the set of such diagrams no longer forms a group because not all diagrams have inverse " diagrams .however a multiplication can be defined in the same way as braids , and linear combinations of such diagrams form an algebra over laurent polynomials in two variables $ ] ( see for more details ) .the generators of the bmw algebra include the braid group generators as shown in figure [ fig : sigma_i ] as well as the generators given by the diagrams shown in figure [ fig : e_i ] ..,width=317 ] as a consequence of its origins arising from knot invariants , the multiplicative structure of the bmw algebra is not given simply by concatenation of diagrams , but includes some additional relations such as and , for some parameters , to account for closed loops and twists arising in products ( see figure [ fig : bmwrel ] for an illustration of the first of these ) .there is also a skein relation " that arises out of the requirement for the algebra to have a trace that relates to jones knot invariant ..,width=566 ] the substrates of cre and xercd shown in figure [ fig : xercd ] can be expressed in braid notation as plat closures .for instance the substrate of xercd can be written as a product of the braid group generators as ( which may be simplified to ) .the recombination action may then be written as an element of the bmw algebra as the plat closure of .this product could also be represented in terms of braids alone by the plat closure of the six - strand braid , as shown in the right hand picture in figure [ fig : xercd.braids ] . in both cre andxercd the representation of the recombination as a plat closure of braids instead of in the bmw algebra incurs a minor penalty , namely replacing multiplication by a single bmw algebra generator ( or ) with multiplication by a product of two braid group generators .this is a general property : multiplication by ( for even ) and multiplication by ( or ) are equivalent as plat closures .but it is also somewhat special because it depends on the action of the recombinase being expressible with the action of at the top .while one of the features of the tangle algebra approach is that it is able to cover a wider variety of circumstances , it is also worth considering whether the connections provided by other algebraic structures may be sufficient compensation for a possibly restricted sphere of operation .the moral of the story is that there may be a range of possible formalisms using diagram algebras that preserve the synaptic structure observed through experiment .in addition to the insights afforded by the tangle algebra approach , the above discussion demonstrates that these recombination processes may also be viewed as transformation in the braid group or in the birman - murakami - wenzl algebra .these alternative algebraic approaches may lead to connections with algebraic models of local evolutionary processes such as inversion , as described in the next section . 
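the bookkeeping behind "concatenate diagrams and record closed loops" can be prototyped without any knot theory by restricting to crossingless diagrams (the brauer / temperley-lieb shadow of the bmw algebra, discussed further below): a diagram on n top and n bottom points is a perfect matching of the 2n points, and composing two diagrams glues the bottom of the first to the top of the second, with every closed loop that appears contributing a scalar factor. the sketch below is a toy in that spirit and an assumption of this presentation only; signs, twists, the braid generators and the skein relation are deliberately left out.

```python
def e(i, n):
    """Crossingless generator e_i on n strands (0-indexed): positions i and
    i+1 are joined to each other at the top and at the bottom, and every
    other strand runs straight down."""
    pairs = {frozenset({('t', i), ('t', i + 1)}),
             frozenset({('b', i), ('b', i + 1)})}
    for p in range(n):
        if p not in (i, i + 1):
            pairs.add(frozenset({('t', p), ('b', p)}))
    return pairs

def compose(d1, d2, n):
    """Stack diagram d1 on top of diagram d2.  Returns the resulting diagram
    and the number of closed loops removed; in the diagram algebra each loop
    contributes the scalar factor of the loop relation."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # d1 keeps its top, its bottom becomes the middle; d2's top becomes the
    # middle, its bottom is kept
    relabel = {'d1': {'t': 't', 'b': 'm'}, 'd2': {'t': 'm', 'b': 'b'}}
    for name, diagram in (('d1', d1), ('d2', d2)):
        for pair in diagram:
            a, b = tuple(pair)
            union((relabel[name][a[0]], a[1]), (relabel[name][b[0]], b[1]))

    groups = {}
    for p in range(n):
        for point in (('t', p), ('b', p)):
            groups.setdefault(find(point), []).append(point)
    result = {frozenset(g) for g in groups.values()}
    loops = len({find(('m', p)) for p in range(n)} - set(groups))
    return result, loops

if __name__ == "__main__":
    n = 3
    d, loops = compose(e(1, n), e(1, n), n)
    print(d == e(1, n), loops)     # True 1: e_i composed with e_i gives e_i plus one loop
    d, loops = compose(e(0, n), e(1, n), n)
    print(loops)                   # 0: no closed loop appears
```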
,in which the same process is represented using bmw algebra generators .note , there are many alternative ways one might represent these processes as plat closures ., width=317 ]we have described a modelling framework using plat closures of braids that can incorporate the biological mechanisms behind numerous recombinase actions giving rise to inversion and knotting .however there is an important piece of the inversion story that is omitted by this analysis , namely the role of selection , at least in the case of bacterial evolution .selection is presumably behind a number of features of bacterial genomes that have been observed , and that we have described above ( section [ sec : alg.inversions ] ) .for instance , the observation that the terminus of replication is always close to the antipode of the origin is a consequence of a fitness cost due to the effect on the replication process of unbalanced replichores .the models of inversion using braids or tangles do not ( and probably can not ) take into account the relative positions of the origin and terminus . similarly , there is also evidence , described earlier , that shorter inversions occur more frequently than longer inversions , and this is not a feature that is immediately accessible through a braid or tangle analysis , because braids and tangles are preserved under isotopy while specific locations on strands are all equivalent under isotopy .both of these selection features can , however , be studied using finite reflection groups such as the coxeter group of type , as described in section [ sec : alg.inversions ] .limitations are inevitable in any model : models can not address every question that may arise . in the evolutionary problems we have describedthere are two approaches that have links with algebra that either have already , or have the potential to provide biological insight : the tangle and plat closure models of the topological effects of dna recombination ; and the coxeter group models of inversions .the interesting point is that these algebraic structures have links that may yet bring forth a family of models that unifies both approaches .we will now describe some of the algebraic connections that exist between the hyperoctahedral group , or coxeter group of type , and the tangle algebras .these algebraic connections are at present not used in any biological context ; the purpose of outlining these connections is in the hope that they might provide the key to a unified picture .the first algebraic connection to detail is that between the coxeter group of type and the braid groups .the coxeter group of type can be looked at in many useful ways , one of which is as a quotient of the _ affine _ braid group . as we have seen , a braid on strings is a set of intertwined strings falling from one set of points on a line to another set on a lower line .affine braids have the sets of points at the top and at the bottom of the rim of a hollow _ cylinder _ ( with slightly thickened walls ) , or equivalently can be thought of as _periodic_. 
( an alternative view has them as regular braids with a rigid pole at one end that strings can loop around see for example ) .the affine braid group is generated by the usual generators of the braid group for , ( swaps and 1 around the back of the cylinder ) , as well as the braid that begins at position 1 , passes behind the other vertical strings and returns to position 1 at the bottom .these generators are shown in figure [ fig : affine.braids ] .the regular braid group can be obtained from the affine braid group by a projection that amounts to squashing the cylinder flat from the front . in this projection , and redundant as a generator , swapping strands and behind the other strands . by lifting our view to the affine braid group we can see both the symmetric group ( the coxeter group of type ) , and the coxeter group of type as quotients of the same object .a common view of the symmetric group is as a quotient of the regular braid group by the relations , but beginning from the affine braid group the quotient is by these relations together with . taking the quotient from instead by the relations and gives the type coxeter group that arises in the study of inversions : the image of acts as the permutation .this standard approach of obtaining the coxeter group of type from the affine braid group makes plain the contrasting viewpoints of local and topological evolution .coming from the braid group , the coxeter group of type is a ( signed ) permutation group on the endpoints of the strands of the braids .in contrast , through studying inversions the coxeter group of type is a permutation group of linear regions along the whole chromosome , and there is no necessary correspondence between these regions and the endpoints of a braid visualization of the recombination process .hence we have two genuinely distinct ways in which this coxeter group may arise through models of site - specific recombination .other important algebras also arise as quotients of the braid groups , including the iwahori - hecke algebras .these are deformations of group algebras of finite coxeter groups such as the hyperoctahedral group and the symmetric group , and are obtained from the affine braid group algebra by taking the quotient by the relations and , where are indeterminants .this forces the relation in the iwahori - hecke algebra .it should be noted that these quotients of the affine braid group are linked to a specific presentation with generators , whereas the generators for the inversion group that are required are not the images of this quotient .the fact that coxeter groups and iwahori - hecke algebras arise in closely related contexts is no surprise .most coxeter groups arise as weyl groups through the representation theory of finite groups of lie type ( these are the _ crystallographic _ finite reflection groups ) .they appear as double coset representatives of borel subgroups in a group of lie type . 
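the quotient relations are easy to verify in a concrete model. representing elements of the type-b coxeter group as signed permutations, with the sign-change generator acting on the first position as one common convention (the presentation used in the text above is not legible), the sketch below checks the involution relations, the commuting and braid relations among the transpositions, and the order-four relation between the sign change and the first transposition.

```python
def multiply(w, v):
    """(w * v)(i) = w(v(i)), extending w to negative values by w(-k) = -w(k)."""
    return tuple((1 if x > 0 else -1) * w[abs(x) - 1] for x in v)

def power(w, k):
    result = tuple(range(1, len(w) + 1))
    for _ in range(k):
        result = multiply(result, w)
    return result

def type_b_generators(n):
    """One common choice: t flips the sign in position 1, s_i swaps i and i+1."""
    identity = tuple(range(1, n + 1))
    t = (-1,) + identity[1:]
    s = []
    for i in range(1, n):
        g = list(identity)
        g[i - 1], g[i] = g[i], g[i - 1]
        s.append(tuple(g))
    return t, s

if __name__ == "__main__":
    n = 4
    e = tuple(range(1, n + 1))
    t, s = type_b_generators(n)
    print(power(t, 2) == e)                                    # t^2 = 1
    print(all(power(si, 2) == e for si in s))                  # s_i^2 = 1
    print(power(multiply(t, s[0]), 4) == e)                    # (t s_1)^4 = 1
    print(all(multiply(t, s[i]) == multiply(s[i], t)           # t commutes with s_i, i >= 2
              for i in range(1, n - 1)))
    print(all(multiply(multiply(s[i], s[i + 1]), s[i]) ==      # braid relations
              multiply(multiply(s[i + 1], s[i]), s[i + 1]) for i in range(n - 2)))
```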
in this context ,iwahori - hecke algebras arise by taking these double cosets as a basis for an algebra , and are important in studying the way induced representations of the lie group decompose into irreducible representations .the parameters and have a group - theoretic meaning in this original motivating way of looking at these algebras .tangle algebras themselves , aside from being seen through the prism of plat closures of braids , arise out a wider family of diagram algebras .the kauffman tangle algebra has a basis consisting of tangles diagrams of strings connecting a row of points with a row of points , in which strings may return to the level from which they originated .an example is shown in figure [ fig : mn - tangle - eg ] .tangles as used in dna topology are generally ( 2,2)-tangles ( there are recent exceptions , such as and .the plat closures of kauffman tangles may be a natural way to represent recombination , because of reactions such as that of xercd that involve a change of direction of the strand ( see figure [ fig : xercd ] ) , and because the algebra allows flexibility in the number of nodes at each end . in this casethe action of the recombinase can be represented by multiplication by the non - braid generator . even here , it is possible to manoeuvre the tangle to make it into a braid plat closure , as shown in figure [ fig : xercd.braids ] .there is another algebraic connection through knot invariants .tangle algebras played a key role in the development of knot invariants such as the jones polynomial and its homfly generalization .interestingly , the use of tangle algebras in the homfly paper was within a few years of their appearance in the work of ernst and sumners ( they were first defined by conway much earlier ) .this role of tangle algebras in knot invariants provides another link to the iwahori - hecke algebras , since the hecke algebras also appear in jones work on his invariant .there is one final algebraic thread that draws these stories together .the standard braid group and its quotient the symmetric group are part of a lie theory story , in that the symmetric group is a type weyl group , arising from the representation theory of the lie group , the general linear group .the iwahori - hecke algebra arises in the decomposition of induced cuspidal representations in this case .the group arising from inversions however is also part of the lie theory story , because it is the weyl group of type , corresponding to the representation theory of the orthogonal group .the algebra that plays an analogous role to that of the iwahori - hecke algebra for the orthogonal group is the birman - murakami - wenzl algebra we have seen above .this algebra can also be viewed as a diagram algebra of strands running between two sets of points , whose generators are , as discussed above in section [ sec : common.framework ] . as diagram algebras , the bmw algebra and kauffman algebra are isomorphic .diagram algebras such as these we see here appear in widely different contexts .while the symmetric group algebra can be thought of as the standard braids with crossings ignored , if one `` ignores crossings '' in the bmw algebra one obtains the diagrams that form the basis of the brauer centralizer algebra , which was first defined in the 1930s and was motivated by invariant theory . 
if one considers either the bmw algebra or the brauer algebra and requires that strands may not cross , so that the algebra is generated by the without the , one obtains the temperley - lieb algebra which plays an important role in statistical mechanics .there is a further generalization of these diagram algebras arising from braided monoidal categories that potentially may also be applied to evolutionary mechanisms in dna , and that is _ ribbon categories _ .these categories can be thought of as algebras whose elements consist of ribbon diagrams like bmw algebra diagrams in which the strings are slightly thickened and on which one may have coupons " , represented by boxes on the ribbons . in the category theory context , the ribbon bands are objects and the coupons morphisms . in the evolutionary context the use of ribbons may enable the representation of twists on strands , while coupons might be used to represent recombination events .this potential model remains to be explored .the algebraic models of tangles and inversions that we have described above appear to be distinct algebraic stories .these stories are part of the same big picture sitting inside various instantiations of lie theory , and yet the way the algebraic structures arise is very different .the algebraic models of knotting focus on the topology of strands of dna that are equivalent up to isotopy , whereas the algebraic models of inversion focus on patterns along the strands that ignore isotopy . andwhile the algebraic stories have many differences , there are several clear connections .for instance , both the coxeter group of type and the tangle algebras are closely connected to the affine braid group . andwhile tangles arise in a type context , many site - specific recombination events could be looked at as plat closures of elements of the birman - murakami - wenzl algebra an algebra that plays an important role in type representation theory .another way of viewing the key challenge in unifying these pictures is that one is a way to study inversions using braids , the other with ( signed ) permutation groups .it so happens that there is an intimate link between braids and permutation groups , described above , and this gives some hope that a unified picture may be possible .the presence of structures such as tangle algebras and coxeter group actions in the evolutionary processes of bacterial dna strongly suggests that there should also be a role for these related algebras . 
determining this role, represented at the bottom of figure [ fig : phylo ], is a key open problem requiring the expertise of algebraists working together with evolutionary biologists.

[ figure [ fig : phylo ] : schematic showing local evolution (studied via coxeter group actions, the inversions side) and topological evolution (studied via tangles and plat-closed braids, the knots side) both feeding into phylogenetics, with bmw algebras, iwahori-hecke algebras and affine braid groups underlying and connecting the two routes. ]

if a unified picture of bacterial evolution is constructed using the algebraic ideas contained here, then one consequence will be that the tremendously rich theory behind finite reflection groups and their q-analogues, the iwahori-hecke algebras, as well as computational algebra systems such as gap and magma, will be made available to biologists as another powerful resource for their models. algebraic models can be expected to suggest new ways in which biological mechanisms may behave, and this can lead to new hypotheses for biologists to test. exactly this occurred in the case of a knot predicted by tangle algebra applied to the tn3 recombinase. a central problem of modern biology is the challenge of dealing with very large volumes of data. for this reason, statistical methods have been the main plank of biologists' mathematical scaffolding. an algebraic approach looks at the underlying structures and, while not replacing statistical approaches, provides a new angle for tackling large volumes of information. by modelling with group theory and other algebraic structures, the extensive results and the sophisticated, efficient algorithms of computational algebra can be brought to bear. new computational and bioinformatic tools can then be developed to aid biologists. from the algebraic side, the connections developed between algebraic structures and real biological questions will help to motivate further research into the algebraic structures themselves, and will raise new questions for algebraists. algebraists are familiar with structures such as reflection groups and iwahori-hecke algebras having a wide range of applications in certain parts of science. for instance, reflection groups are natural ways to study symmetries arising in nature (e.g. crystallography), and it will surprise no algebraist to learn that they have already been used on occasion to count genetic arrangements using burnside's lemma ( e.g. ). similarly, iwahori-hecke algebras are important in the study of quantum groups, which first arose through quantum physics. these algebraic structures arise widely because of their fundamental links to symmetries and patterns found in many places. perhaps they may find a greater role in biology as interdisciplinary work evolves.

i would like to thank mark m. tanaka, leonard l. scott jr, john j. graham and attila egri-nagy, who read and commented on the manuscript. particular thanks to mmt, who introduced me to the field of bacterial genomics in the first place.

nr cozzarelli , ma krasnow , sp gerrard , and jh white . a topological treatment of recombination and topoisomerases .

in _ cold spring harbor symposia on quantitative biology _ ,volume 49 , pages 383400 .cold spring harbor laboratory press , 1984 .nancy j crisona , robert l weinberg , brian j peter , de witt sumners , and nicholas r cozzarelli .the topological mechanism of phage integrase ._ journal of molecular biology _ , 2890 ( 4):0 747775 , 1999 .s. hannenhalli and p.a .. _ journal of the acm ( jacm ) _ , 460 ( 1):0 127 , 1999 .( preliminary version in _ proceedings of the 27th annual acm symposium on the theory of computing _ , acm , new york , 1995 , 178189 . ) .m. jayaram and r. harshey ._ mathematics of dna structure , function and interactions _ , chapter difference topology : analysis of high - order dna - protein assemblies , pages 139158 .the i m a volumes in mathematics and its applications .springer , 2009 .vaughan f. r. jones . a polynomial invariant for knots via von neumann algebras . _ bull .( n.s . ) _ , 120 ( 1):0 103111 , 1985 .issn 0273 - 0979 .doi : 10.1090/s0273 - 0979 - 1985 - 15304 - 2 .url http://dx.doi.org/10.1090/s0273-0979-1985-15304-2 .r. kanaar , a. klippel , e. shekhtman , j.m .dungan , r. kahmann , and n.r .processive recombination by the phage mu gin system : implications for the mechanisms of dna strand exchange , dna site alignment , and enhancer actions ._ cell _ , 620 ( 2):0 353366 , 1990 . l. h. kauffman and s. lambropoulou . _ lectures on topological fluid mechanics _ , volume 1973 of _ lecture notes in mathematics _, chapter tangles , rational knots , and dna , pages 99138 .springer - verlag , 2009 .w. li , s. kamtekar , y. xiong , g.j .sarkis , n.d.f .grindley , and t.a .structure of a synaptic resolvase tetramer covalently linked to two cleaved dnas ._ science _ , 3090 ( 5738):0 12101215 , 2005 .y. liang , x. hou , y. wang , z. cui , z. zhang , x. zhu , l. xia , x. shen , h. cai , j. wang , et al .genome rearrangements of completely sequenced strains of _ yersinia pestis_. _ journal of clinical microbiology _ , 480 ( 5):0 16191623 , 2010 .v. lpez , m.l .martnez - robles , p. hernndez , d.b .krimer , and j.b .topo iv is the topoisomerase that knots and unknots sister duplexes during dna replication ._ nucleic acids research _ , in press , 2011 .davide marenduzzo , enzo orlandini , andrzej stasiak , de witt sumners , luca tubiana , and cristian micheletti .interactions in bacteriophage capsids are responsible for the observed dna knotting ._ proceedings of the national academy of sciences , usa _ , 1060 ( 52):0 2226922274 , 2009 . rosa orellana and arun ram .affine braids , markov traces and the category . in _ proceedings of the international colloquium on algebraic groups and homogeneous spaces , tifr , mumbai ._ , pages 151 , 2004 .giovanni pistone , eva riccomagno , and henry p. wynn ._ algebraic statistics : computational commutative algebra in statistics _ , volume 89 of _ monographs on statistics and applied probability_. chapman & hall / crc , boca raton , fl , 2001 .isbn 1 - 58488 - 204 - 2 .temperley and e.h .relations between the ` percolation ' and ` colouring ' problem and other graph - theoretical problems associated with regular planar lattices : some exact results for the ` percolation ' problems ._ proceedings of the royal society of london .a. 
mathematical and physical sciences _ ,3220 ( 1549):0 251280 , 1971 .mariel vazquez , sean d colloms , and de witt sumners .tangle analysis of xer recombination reveals only three solutions , all consistent with a single three - dimensional topological pathway ._ journal of molecular biology _ ,3460 ( 2):0 493504 , 2005 . a.a .vetcher , a.y .lushnikov , j. navarra - madsen , r.g .scharein , y.l .lyubchenko , i.k .darcy , and s.d .topology and geometry in flp and cre recombination . _ journal of molecular biology _ ,3570 ( 4):0 10891104 , 2006 .g. a. watterson , w. j. ewens , t. e. hall , and a. morgan .the chromosome inversion problem ._ journal of theoretical biology _ , 990 ( 1):0 1 7 , 1982 .issn 0022 - 5193 .doi : doi : 10.1016/0022 - 5193(82)90384 - 8 .url http://www.sciencedirect.com/science/article/b6wmd-4f1j81c-hy/2/6d9edcfcfaee64386aa660680e8fa0a5 .
rearrangements of bacterial chromosomes can be studied mathematically at several levels, most prominently at a local, or sequence, level and at a topological level. the biological changes involved locally are inversions, deletions and transpositions, while topologically they are knotting and catenation. these two modelling approaches share some surprising algebraic features related to braid groups and coxeter groups. the structural approach at the core of algebra has long found applications in sciences such as physics and analytical chemistry, but so far only in a small number of ways in biology. yet there are examples where an algebraic viewpoint may capture a deeper structure behind biological phenomena. this article discusses a family of biological problems in bacterial genome evolution for which this may be the case, and raises the prospect that the tools developed by algebraists over the last century might provide insight into this area of evolutionary biology.
one of the classical problems in queueing theory is to schedule the customers / jobs in a network in an optimal way .these problems are known as the scheduling problems which arise in a wide variety of applications , in particular , whenever there are different customer classes present in the network and competing for the same resources .the optimal scheduling problem has a long history in the literature .one of the appealing scheduling rules is the well - known rule .this is a static priority policy in which it is assumed that each class- customer has a marginal delay cost and an average service time , and the classes are prioritized in the decreasing order of .this static priority rule has proven asymptotically optimal in many settings . in , a single - server markov modulated queueing network is considered and an _ averaged _-rule is shown asymptotically optimal for the discounted control problem .an important aspect of queueing networks is abandonment / reneging , that is , customers / jobs may choose to leave the system while being in the queue before their service .therefore , it is important to include customer abandonment in modeling queueing systems . in , atar et al . considered a multi - class queueing network with customer abandonment and proved that a modified priority policy , referred to as rule , is asymptotically optimal for the long run average cost in the fluid scale .dai and tezcan showed the asymptotic optimality of a static priority policy on a finite time interval for a parallel server model under the assumed conditions on the ordering of the abandonment rates and running costs .although static priority policies are easy to implement , it may not be optimal for control problems of many multi - server queueing systems .for the same multi - class queueing network , discounted cost control problems are studied in , and asymptotically optimal controls for these problems are constructed from the minimizer of a hamilton jacobi bellman ( hjb ) equation associated with the controlled diffusions in the halfin whitt regime . 
in this article, we are interested in an ergodic control problem for a multi - class queueing network in the halfin whitt regime .the network consists of a single pool of statistically identical servers and a buffer of infinite capacity .there are customer classes and arrivals of jobs / customers are independent poisson processes with parameters , .the service rate for customers is , .customers may renege from the queue if they have not started to receive service before their patience times .class- customers renege from the queue at rates , .the scheduling policies are _ work - conserving _, that is , no server stays idle if any of the queues is nonempty .we assume the system operates in the halfin whitt regime , where the arrival rates and the number of servers are scaled appropriately in a manner that the traffic intensity of the system satisfies in this regime , the system operations achieve both high quality ( high server levels ) and high efficiency ( high servers utilization ) , and hence it is also referred to as the quality - and - efficiency - driven ( qed ) regime ; see , for example , on the many - server regimes .we consider an ergodic cost function given by ,\ ] ] where the _ running cost _ is a nonnegative , convex function with polynomial growth and is the diffusion - scaled queue length process .it is worth mentioning that in addition to the running cost above which is based on the queue - length , we can add an idle - server cost provided that it has at most polynomial growth .for such , a running cost structure the same analysis goes through .the control is the allocation of servers to different classes of customers at the service completion times .the value function is defined to be the infimum of the above cost over all _ admissible _ controls ( among all work - conserving scheduling policies ) . in this article , we are interested in the existence and uniqueness of asymptotically optimal stable stationary markov controls for the ergodic control problem , and the asymptotic behavior of the value functions as tends to infinity . in ,section 5.2 , it is stated that analysis of this type of problems is important for modeling call centers .the usual methodology for studying these problems is to consider the associated continuum model , which is the controlled diffusion limit in a heavy - traffic regime , and to study the ergodic control problem for the controlled diffusion .ergodic control problems governed by controlled diffusions have been well studied in literature for models that fall in these two categories : ( a ) the running cost is _ near - monotone _ , which is defined by the requirement that its value outside a compact set exceeds the optimal average cost , thus penalizing unstable behavior ( see assumption 3.4.2 in for details ) , or ( b ) the controlled diffusion is uniformly stable , that is , every stationary markov control is stable and the collection of invariant probability measures corresponding to the stationary markov controls is tight .however , the ergodic control problem at hand does not fall under any of these frameworks .first , the running cost we consider here is not near - monotone because the total queue length can be when the total number of customers in the system are . 
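the ergodic criterion above is a long-run average: the displayed formula does not survive in the text, but it is of the form limsup over T of (1/T) times the expected integral over [0, T] of the running cost of the scaled queue lengths. a small event-driven simulation of the markovian model under a preemptive static priority rule gives a feel for such costs; every numerical value below is invented for the illustration, the priority rule is just one admissible work-conserving policy rather than an optimal one, and the linear holding cost is only one example of the convex running costs allowed above.

```python
import random

def simulate(lam, mu, gamma, n_servers, hold_cost, priority, horizon, seed=0):
    """Continuous-time Markov chain for the multi-class M/M/n+M model under a
    preemptive static priority rule.  Returns the time-average holding cost
    (1/T) * integral over [0, T] of sum_i hold_cost[i] * Q_i(t) dt."""
    rng = random.Random(seed)
    d = len(lam)
    x = [0] * d                               # class-i customers in the system
    t, cost_area = 0.0, 0.0
    while t < horizon:
        # work-conserving preemptive allocation of servers, by priority
        free, z = n_servers, [0] * d
        for i in priority:
            z[i] = min(x[i], free)
            free -= z[i]
        q = [x[i] - z[i] for i in range(d)]   # queue lengths
        rates = ([lam[i] for i in range(d)] +             # arrivals
                 [mu[i] * z[i] for i in range(d)] +       # service completions
                 [gamma[i] * q[i] for i in range(d)])     # abandonments
        total = sum(rates)
        dt = min(rng.expovariate(total), horizon - t)
        cost_area += dt * sum(hold_cost[i] * q[i] for i in range(d))
        t += dt
        if t >= horizon:
            break
        u, k = rng.random() * total, 0
        while k < len(rates) - 1 and u >= rates[k]:
            u -= rates[k]
            k += 1
        if k < d:
            x[k] += 1                 # arrival of class k
        elif k < 2 * d:
            x[k - d] -= 1             # service completion
        else:
            x[k - 2 * d] -= 1         # abandonment from the queue
    return cost_area / horizon

if __name__ == "__main__":
    # two classes, 50 servers, offered load close to the number of servers
    lam, mu, gamma, h = [25.0, 19.2], [1.0, 0.8], [0.3, 0.2], [3.0, 1.0]
    for prio in ([0, 1], [1, 0]):
        print(prio, simulate(lam, mu, gamma, 50, h, prio, horizon=2000.0))
```

comparing the two priority orderings already shows how strongly the scheduling rule affects the long-run average cost.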
on the other hand , it is not at all clear that the controlled diffusion is uniformly stable ( unless one imposes nontrivial hypotheses on the parameters ) , and this remains an open problem .one of our main contributions in this article is that we solve the ergodic control problem for a broad class of nondegenerate controlled diffusions , that in a certain way can be viewed as a mixture of the two categories mentioned above .as we show in section [ s - ergodic ] , stability of the diffusion under any optimal stationary markov control occurs due to certain interplay between the drift and the running cost .the model studied in section [ s - ergodic ] is far more general than the queueing problem described , and thus it is of separate interest for ergodic control .we present a comprehensive study of this broad class of ergodic control problems that includes existence of a solution to the ergodic hjb equation , its stochastic representation and verification of optimality ( theorem [ t - hjb2 ] ) , uniqueness of the solution in a certain class ( theorem [ t - unique ] ) , and convergence of the vanishing discount method ( theorem [ t - hjb3 ] ) .these results extend the well - known results for near - monotone running costs . the assumptions in these theorems are verified for the multi - class queueing model and the corresponding characterization of optimality is obtained ( corollary [ c - unique ] ) , which includes growth estimates for the solution of the hjb .we also introduce a new approximation technique , _ spatial truncation _, for the controlled diffusion processes ; see section [ s - truncation ] . it is shown that if we freeze the markov controls to a fixed stable markov control outside a compact set , then we can still obtain nearly optimal controls in this class of markov controls for large compact sets .we should keep in mind that this property is not true in general .this method can also be thought of as an approximation by a class of controlled diffusions that are uniformly stable .we remark that for a fixed control , the controlled diffusions for the queueing model can be regarded as a special case of the piecewise linear diffusions considered in .it is shown in that these diffusions are stable under _ constant _ markov controls .the proof is via a suitable lyapunov function .we conjecture that uniform stability holds for the controlled diffusions associated with the queueing model . for the same multi - class markovian model , gamarnik and stolyarshow that the stationary distributions of the queue lengths are tight under any work - conserving policy , theorem 2 .we also wish to remark here that we allow to be negative , assuming abandonment rates are strictly positive , while in , and abandonment rates can be zero .another important contribution of this work is the convergence of the value functions associated with the sequence of multi - class queueing models to the value of the ergodic control problem , say , corresponding to the controlled diffusion model .it is not obvious that one can have asymptotic optimality from the existence of optimal stable controls for the hjb equations of controlled diffusions .this fact is relatively straightforward when the cost under consideration is discounted .in that situation , the tightness of paths on a finite time horizon is sufficient to prove asymptotic optimality .but we are in a situation where any finite time behavior of the stochastic process plays no role in the cost . 
in particular, we need to establish the convergence of the controlled steady states .although uniform stability of stationary distributions for this multi - class queueing model in the case where and abandonment rates can be zero is established in , it is not obvious that the stochastic model considered here has the property of uniform stability. therefore , we use a different method to establish the asymptotic optimality .first , we show that the value functions are asymptotically bounded below by . to study the upper bound , we construct a sequence of markov scheduling policies that are uniformly stable ( see lemma [ lem - uni ] ) .the key idea used in establishing such stability results is a spatial truncation technique , under which the markov policies follow a fixed priority policy outside a given compact set .we believe these techniques can also be used to study ergodic control problems for other many - server queueing models .the scheduling policies we consider in this paper allow preemption , that is , a customer in service can be interrupted for the server to serve a customer of a different class and her service will be resumed later .in fact , the asymptotic optimality is shown within the class of the work - conserving preemptive policies . in ,both preemptive and nonpreemptive policies are studied , where a nonpreemptive scheduling control policy is constructed from the hjb equation associated with preemptive policies and thus is shown to be asymptotically optimal . however , as far as we know , the optimal nonpreemptive scheduling problem under the ergodic cost remains open . for a similar line of work in uncontrolled settings ,we refer the reader to .admission control of the single class model with an ergodic cost criterion in the halfin whitt regime is studied in .for controlled problems and for finite server models , asymptotic optimality is obtained in in the conventional heavy - traffic regime .the main advantage in is the uniform exponential stability of the stochastic processes , which is obtained by using properties of the skorohod reflection map .a recent work studying ergodic control of a multi - class single - server queueing network is . to summarize our main contributions in this paper : * we introduce a new class of ergodic control problems and a framework to solve them .* we establish an approximation technique by spatial truncation . *we provide , to the best of our knowledge , the first treatment of ergodic control problems at the diffusion scale for many server models . *we establish asymptotic optimality results . in section [ s - notation ] ,we summarize the notation used in the paper . in section[ s - main ] , we introduce the multi - class many server queueing model and describe the halfin whitt regime .the ergodic control problem under the heavy - traffic setting is introduced in section [ s - diffcontrol ] , and the main results on asymptotic convergence are stated as theorems [ t - lowerbound ] and [ t - upperbound ] . 
section [ s - ergodic ] introduces a class of controlled diffusions and associated ergodic control problems , which contains the queueing models in the diffusion scale .the key structural assumptions are in section [ s - assumptions ] and these are verified for a generic class of queueing models in section [ s3.3 ] , which are characterized by piecewise linear controlled diffusions .section [ s3.4 ] concerns the existence of optimal controls under the general hypotheses , while section [ s3.5 ] contains a comprehensive study of the hjb equation .section [ proofs ] is devoted to the proofs of the results in section [ s3.5 ] .the spatial truncation technique is introduced and studied in section [ s - truncation ] .finally , in section [ s - optimality ] we prove the results of asymptotic optimality . the standard euclidean norm in is denoted by .the set of nonnegative real numbers is denoted by , stands for the set of natural numbers , and denotes the indicator function . by denote the set of -vectors of nonnegative integers .the closure , the boundary and the complement of a set are denoted by , and , respectively .the open ball of radius around is denoted by . given two real numbers and , the minimum ( maximum ) is denoted by ( ) , respectively .define and .the integer part of a real number is denoted by .we use the notation , , to denote the vector with entry equal to and all other entries equal to .we also let .given any two vectors the inner product is denoted by . by denote the dirac mass at . for any function and domain define the oscillation of on as follows : for a nonnegative function , we let denote the space of functions satisfying .this is a banach space under the norm we also let denote the subspace of consisting of those functions satisfying by a slight abuse of notation , we also denote by and a generic member of these spaces .for two nonnegative functions and , we use the notation to indicate that and .we denote by , , the set of real - valued functions that are locally -integrable and by the set of functions in whose weak derivatives , , are in .the set of all bounded continuous functions is denoted by . by denote the set of functions that are -times continuously differentiable and whose derivatives are locally hlder continuous with exponent .we define , , as the set of functions whose derivatives , , are continuous and bounded in and denote by the subset of with compact support . for any path , we use the notation to denote the jump at time . given any polish space ,we denote by the set of probability measures on and we endow with the prokhorov metric . for and a borel measurable map , we often use the abbreviated notation the quadratic variation of a square integrable martingale is denoted by and the optional quadratic variation by ] be a given matrix whose diagonal elements are positive , for , and the remaining elements are in .( note that for the queueing model , is a positive diagonal matrix .our results below hold for the more general . 
)let and be a nonsingular -matrix .define with .assume that we consider the following controlled diffusion in : where is a constant matrix such that is invertible .it is easy to see that ( [ eg - sde1 ] ) satisfies conditions ( a1)(a3 ) .analysis of these types of diffusion approximations is an established tradition in queueing systems .it is often easy to deal with the limiting object and it also helps to obtain information on the behavior of the actual queueing model .we next introduce the running cost function .let be locally lipschitz with polynomial growth and ^{m } \le r(x , u ) \le c_{2 } \bigl(1 + \bigl[(e\cdot x)^{+ } \bigr]^{m } \bigr),\ ] ] for some and positive constants and that do not depend on .some typical examples of such running costs are ^{m}\sum _ { i=1}^{d } h_{i } u_{i}^{m}\qquad \mbox{with } m\ge1,\ ] ] for some positive vector . the controlled dynamics in ( [ q - drift ] ) and running cost in ( [ eg - cost ] ) are clearly more general than the model described in section [ s - diffcontrol ] . in ( [ eg - sde1 ] ), denotes the diffusion approximation for the number customers in the system in the halfin whitt regime and its component denotes the diffusion approximation of the number of class customers .therefore , denotes the total number of customers in the queue . for and diagonal as in ( [ dc5 ] ) , the diagonal entries of and denote the service and abandonment rates , respectively , of the customer classes .the coordinate of denotes the fraction of class- customers waiting in the queue .therefore , the vector - valued process denotes the diffusion approximation of the numbers of customers in service from different customer classes .[ eg - prop ] let and be given by ( [ q - drift ] ) and ( [ eg - cost ] ) , respectively . then ( [ eg - sde1 ] ) satisfies assumptions [ ass-1 ] and [ ass-2 ] , with and for appropriate positive constants and . we recall that if is a nonsingular -matrix , then there exists a positive definite matrix such that is strictly positive definite .therefore , for some positive constant it holds that \le\kappa_{0}^{-1}{\vert}y { \vert}^{2 } \qquad\forall y \in\mathbb{r}^{d}.\ ] ] the set in ( [ eg - ck ] ) , where is chosen later , is an open convex cone , and the running cost function is inf - compact on . let be a nonnegative function in such that ^{{m}/{2}} ] , , for all large enough we see that satisfies ( [ e - lyap ] ) with control .hence , assumption [ ass-2 ] holds by lemma [ l - basic ] .recall the definition of in ( [ e - ru ] ) . for , we define we also let , and where and is given by ( [ e3.3 ] ) .it is well known that is the set of ergodic occupation measures of the controlled process in ( [ e - sde ] ) , and that is a closed and convex subset of , lemmas 3.2.2 and 3.2.3 .we use the notation when we want to indicate the ergodic occupation measure associated with the control . in other words , [ l3.2 ] if ( [ e - lyap ] ) holds for some and , then we have . therefore , .let be the solution of ( [ e - sde ] ) . recall that is the first exit time from for . 
then by it s formula -\mathcal{v}_{0}(x ) \le \eta t -\mathbb{e}^{u_{0}}_{x }\biggl[\int_{0}^{t\wedge\tau_{r } } r \bigl(x_{s},u_{0}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr].\ ] ] therefore , letting and using fatou s lemma , we obtain the bound \le\eta t+\mathcal{v}_{0}(x)-\min_{\mathbb{r}^{d } } \mathcal{v}_{0},\ ] ] and thus \le\eta.\ ] ] in the analysis , we use a function which , roughly speaking , is of the same order as in and lies between and a multiple of on , with as in assumption [ ass-1 ] .the existence of such a function is guaranteed by assumption [ ass-1 ] as the following lemma shows .[ l - key ] define where is the open set in assumption [ ass-1 ] .then there exists an inf - compact function which is locally lipschitz in its first argument uniformly w.r.t .its second argument , and satisfies for all , and for some positive constant .moreover , for all , where is the function in assumption [ ass-1 ] .if denotes the right - hand side of ( [ e - ka3 ] ) , with , then since on . therefore , by assumption [ ass-1 ] , the set is bounded in for every .hence , there exists an increasing sequence of open balls , centered at in such that for all .let be any nonnegative smooth function such that for , and on .clearly , is continuous , inf - compact , locally lipschitz in its first argument , and satisfies ( [ e - ka3 ] ) .that ( [ e - key ] ) holds is clear from ( [ e - ka1 ] ) and the fact that .it is clear from the proof of lemma [ l - key ] that we could fix the value of the constant in ( [ e - ka3 ] ) , say .however , we keep the variable because this provides some flexibility in the choice of , and also in order to be able to trace it along the different calculations .[ r3.4 ] note that if and is inf - compact , then and satisfies ( [ e - ka3 ] ) .note also , that in view of ( [ eg - cost ] ) and proposition [ eg - prop ] , for the multi - class queueing model we have ^{m } \bigr ) \\ & \le & \frac{c_{2 } d^{m-1}}{1\wedge c_{0 } } \bigl(1 + ( 1\wedge c_{0 } ) { \vert}x { \vert}^{m } \bigr ) \\ & \le & \frac{c_{2 } d^{m-1}}{1\wedge c_{0 } } \biggl(1+c_{0}{\vert}x{\vert}^{m } \mathbb{i}_{\mathcal{k}^{c}}(x ) + \frac{1}{\delta^{m } } \bigl[{(e\cdot x)^{+ } } \bigr]^{m } \mathbb{i}_{\mathcal { k}}(x ) \biggr ) \\ &\le & \frac{c_{2 } d^{m-1}}{1\wedge c_{0 } } \biggl(1+h(x ) \mathbb{i}_{\mathcal{k}^{c}}(x ) + \frac{1}{c_{1}\delta^{m}}r(x , u ) \mathbb{i}_{\mathcal{k}}(x ) \biggr ) \\ & \le & \frac{c_{2 } d^{m-1}}{1\wedge c_{0}\wedge c_{1}\delta^{m } } \bigl(1+h(x ) \mathbb{i}_{\mathcal{k}^{c}}(x)+r(x , u ) \mathbb { i}_{\mathcal{k}}(x ) \bigr ) \\ & \le & \frac{c_{2 } d^{m-1}}{1\wedge c_{0}\wedge c_{1}\delta^{m } } \bigl(1+h(x ) \mathbb{i}_{\mathcal{h}^{c}}(x , u)+ r(x , u ) \mathbb { i}_{\mathcal{h}}(x , u ) \bigr).\end{aligned}\ ] ] therefore , satisfies ( [ e - ka3 ] ) .[ r - lsc ] we often use the fact that if is bounded below , then the map is lower semi - continuous .this easily follows from two facts : ( a ) can be expressed as an increasing limit of bounded continuous functions , and ( b ) if is bounded and continuous , then is continuous .[ t - bst ] let .then : for all and such that , then \le k_{0}(1+\beta).\ ] ] . for any , we have .the set of invariant probability measures corresponding to controls in satisfies in particular , is tight in . 
there exists , with inf - compact , such that = \tilde{\varrho}.\ ] ] using it s formula , it follows by ( [ e - key ] ) that -\mathcal{v}(x ) \bigr ) \nonumber \\ \label{e - tbst1 } & & \qquad \le 1-\frac{1}{t } \mathbb{e}^{u}_{x } \biggl[\int_{0}^{t\wedge\tau_{r } } h(x_{s},u_{s } ) \mathbb{i}_{\mathcal{h}^{c}}(x_{s},u_{s } ) \,\mathrm { d } { s } \biggr ] \\ & & \qquad \quad { } + \frac{1}{t } \mathbb{e}^{u}_{x } \biggl[\int_{0}^{t\wedge\tau_{r } } r(x_{s},u_{s } ) \mathbb{i}_{\mathcal{h}}(x_{s},u_{s } ) \,\mathrm { d } { s } \biggr ] .\nonumber\end{aligned}\ ] ] since is inf - compact , ( [ e - tbst1 ] ) together with ( [ e - ka3 ] ) implies that \nonumber \\[-8pt ] \label{e - tbst2 } \\[-8pt ] \nonumber & & \qquad{}\le 2 + 2 \limsup_{t\to\infty } \frac{1}{t } \mathbb{e}^{u}_{x } \biggl[\int_{0}^{t } r(x_{s},u_{s } ) \,\mathrm{d } { s } \biggr].\end{aligned}\ ] ] part ( a ) then follows from ( [ e - tbst2 ] ) .now fix and such that . the inequality in ( [ e - tbst01 ] ) implies that the set of _ mean empirical measures _ , defined by \ ] ] for any borel sets and , is tight .it is the case that any limit point of the mean empirical measures in is an ergodic occupation measure , lemma 3.4.6 .then in view of remark [ r - lsc ] we obtain for some ergodic occupation measure .therefore , .disintegrating the measure as , we obtain the associated control . from ergodic theory , we also know that = \pi_{v}(r)\qquad \mbox{for almost every } x.\ ] ] it follows that , and since it is clear that , equality must hold among the three quantities .if , then ( [ e - tbst2 ] ) implies that ( [ etbst - a ] ) holds with and .therefore , parts ( c ) and ( d ) follow .existence of , satisfying ( [ erg - h ] ) , follows from assumption [ ass-2 ] and , theorem 3.6.6 .the inf - compactness of follows from the stochastic representation of in , lemma 3.6.9 .this proves ( e ) .existence of a stationary markov control that is optimal is asserted by the following theorem .[ t - exist ] let denote the set of ergodic occupation measures corresponding to controls in , and those corresponding to controls in , for . then : the set is compact in for any . there exists such that . by theorem [ t - bst](d ) , the set is tight for any . let , for some , be any convergent sequence in such that as and denote its limit by .since is closed , , and since the map is lower semi - continuous , it follows that .therefore , is closed , and hence compact .since for all , the equality follows .also is obtained by disintegrating .the reader might have noticed at this point that assumption [ ass-1 ] may be weakened significantly .what is really required is the existence of an open set and inf - compact functions and , satisfying . . in ( h1 ), we use the convention that the ` ' of the empty set is . also note that ( h1 ) is equivalent to the statement that is bounded in for all .if ( h1)(h2 ) are met , we define , and following the proof of lemma [ l - key ] , we assert the existence of an inf - compact satisfying ( [ e - ka3 ] ) .in fact , throughout the rest of the paper , assumption [ ass-1 ] is not really invoked .we only use ( [ e - key ] ) , the inf - compact function satisfying ( [ e - ka3 ] ) , and , naturally , assumption [ ass-2 ] . 
for , let by theorem [ t - bst](d ) , for any , , we have the bound therefore , since is near - monotone , that is , there exists .let be as in the proof of theorem [ t - exist ] .the sub - optimality of relative to the running cost and ( [ et - exist1 ] ) imply that it follows from ( [ et - exist2 ] ) and theorem [ t - bst](d ) that is tight .since is lower semi - continuous , if is any limit point of as , then taking limits in ( [ et - exist2 ] ) , we obtain since is closed , , which implies that .therefore , equality must hold in ( [ et - exist3 ] ) , or in other words , is an optimal ergodic occupation measure .[ t - hjb1 ] there exists a unique function with , which is bounded below in , and solves the hjb = \varrho_{\varepsilon},\ ] ] where , or in other words , is the optimal value of the ergodic control problem with running cost .also a stationary markov control is optimal for the ergodic control problem relative to if and only if it satisfies where .\ ] ] moreover : for every , there exists such that if is a measurable a.e .selector from the minimizer of the hamiltonian in ( [ e - he ] ) , that is , if it satisfies ( [ e - hjbe ] ) , then for any , + \inf_{b_{\delta } } v^{\varepsilon};\ ] ] for any stationary control and for any , ,\ ] ] where is hitting time to the ball .[ t - hjb2 ] let , , and , for , be as in theorem [ t - hjb1 ] .the following hold : the function converges to some , uniformly on compact sets , and , as , and satisfies = \varrho_{*}.\ ] ] also , any limit point ( in the topology of markov controls ) as of the set satisfies a stationary markov control is optimal for the ergodic control problem relative to if and only if it satisfies where .\ ] ] moreover , for an optimal , we have = \varrho_{*}\qquad \forall x\in \mathbb{r}^{d}.\ ] ] the function has the stochastic representation \nonumber \\[-8pt ] \label{e - strep } \\[-8pt ] \nonumber & = & \lim_{\delta\searrow0 } \mathbb{e}^{\bar{v}}_{x } \biggl[\int_{0}^{{\breve\tau}_{\delta } } \bigl(r \bigl(x_{s},v_{*}(x_{s } ) \bigr)-\varrho _ { * } \bigr ) \,\mathrm{d } { s } \biggr]\end{aligned}\ ] ] for any that satisfies ( [ e - v ] ) . if is a convex set , is strictly convex whenever it is not constant , and is strictly convex for all , then any measurable minimizer of ( [ e - hjbe ] ) converges pointwise , and thus in , to the minimizer of ( [ e - hjb ] ) .theorem [ t - hjb2 ] guarantees the existence of an optimal stable control , which is made precise by ( [ e - v ] ) , for the ergodic diffusion control problem with the running cost function .moreover , under the convexity property in part ( d ) , the optimal stable control can be obtained as a pointwise limit from the minimizing selector of ( [ e - hjbe ] ) .for instance , if we let then by choosing and as in proposition [ eg - prop ] , we see that the approximate value function and approximate control converge to the desired optimal value function and optimal control , respectively . concerning the uniqueness of the solution to the hjb equation in ( [ e - hjb ] ) , recall that in the near - monotone case the existing uniqueness results are as follows : there exists a unique solution pair of ( [ e - hjb ] ) with in the class of functions which are bounded below in .moreover , it satisfies and .if the restriction is removed , then in general , there are multiple solutions . 
since in our model is not near - monotone in , the function is not , in general , bounded below .however , as we show later in lemma [ luniq1 ] the negative part of grows slower than , that is , it holds that , with as defined in section [ s - notation ] .therefore , the second part of the theorem that follows may be viewed as an extension of the well - known uniqueness results that apply to ergodic control problems with near - monotone running cost .the third part of the theorem resembles the hypotheses of uniqueness that apply to problems under a blanket stability hypothesis .[ t - unique ] let be a solution of = \hat \varrho,\ ] ] such that and . then the following hold : any measurable selector from the minimizer of the associated hamiltonian in ( [ e - v ] ) is in and . if then necessarily and . if , then and . applying these results to the multi - class queueing diffusion model , we have the following corollary .[ c - unique ] for the queueing diffusion model with controlled dynamics given by ( [ eg - sde1 ] ) , drift given by ( [ q - drift ] ) , and running cost as in ( [ eg - cost ] ) , there exists a unique solution , satisfying , to the associated hjb in the class of functions , whose negative part is in .this solution agrees with in theorem [ t - hjb2 ] .existence of a solution follows by theorem [ t - hjb2 ] .select as in the proof of proposition [ eg - prop ] . that the solution is in the stated class then follows by lemma [ luniq1 ] and corollary [ c - exist ] that appear later in sections [ proofs ] and [ s - truncation ] , respectively . with as in the proof of proposition [ eg - prop ], it follows that .therefore , uniqueness follows by theorem [ t - unique ] . we can also obtain the hjb equation in ( [ e - hjb ] ) via the traditional vanishing discount approach as the following theorem asserts .similar results are shown for a one - dimensional degenerate ergodic diffusion control problem in and certain multi - dimensional ergodic diffusion control problems ( allowing degeneracy and spatial periodicity ) in .[ t - hjb3 ] let and be as in theorem [ t - hjb2 ] . for , we define .\ ] ] the function converges , as , to , uniformly on compact subsets of . moreover , , as .the proofs of the theorems [ t - hjb1][t - hjb3 ] are given in section [ proofs ] .the following result , which follows directly from ( [ et - exist2 ] ) , provides a way to find -optimal controls .let be the minimizing selector from theorem [ t - hjb1 ] and be the corresponding invariant probability measures .then almost surely for all , & = & \int_{\mathbb{r}^{d } } r \bigl(x , v_{\varepsilon}(x ) \bigr ) \mu_{v_{\varepsilon } } ( \mathrm{d } { x } ) \\ & \le & \varrho _ { * } + \varepsilon k_{0}(1 + \varrho_{*}).\end{aligned}\ ] ] recall that , with as in lemma [ l - key ] .we need the following lemma . for and , we define ,\ ] ] where we set . 
clearly , when , we have .we quote the following result from , theorem 3.5.6 , remark 3.5.8 .[ l3.3 ] provided , then defined above is in and is the minimal nonnegative solution of = \alpha v^{\varepsilon}_{\alpha}(x).\ ] ] the hjb in lemma [ l3.3 ] is similar to the equation in , theorem 3 , which concerns the characterization of the discounted control problem .[ l3.4 ] let be any precise markov control and be the corresponding generator .let be a nonnegative solution of where .let be any nondecreasing function such that for all .then for any there exists a constant which depends on , but not on , or , such that define and .then in and solves also hence by , theorem a.2.13 , there exists a positive constant such that implying that we next consider the solution of then if , then applying the maximum principle ( , theorem a.2.1 , ) it follows from ( [ p-009 ] ) that again attains its minimum at the boundary ( , theorem a.2.3 , ) . therefore , is a nonnegative function , and hence by the harnack inequality , there exists a constant such that thus , combining the above display with ( [ p-010 ] ) we obtain this completes the proof .[ l3.5 ] let be as in lemma [ l3.3 ] .then for any , there exists a constant such that \mbox { and } \varepsilon\in[0,1].\ ] ] recall that is the stationary probability distribution for the process under the control in lemma [ l - basic ] . since is sub - optimal for the -discounted criterion in ( [ e - dcost ] ) , and is nonnegative , then for any ball , using fubini s theorem , we obtain \mu_{u_{0}}(\mathrm{d } { x } ) \\ & = & \frac{1}{\alpha } \mu_{u_{0}}(r_{\varepsilon } ) \\ & \le & \frac{1}{\alpha } \bigl(\eta+ \varepsilon k_{0}(1+\eta ) \bigr),\end{aligned}\ ] ] where for the last inequality we used lemma [ l3.2 ] and theorem [ t - bst](a ) .therefore , we have the estimate the result then follows by lemma [ l3.4 ] .we continue with the proof of theorem [ t - hjb1 ] .proof of theorem [ t - hjb1 ] consider the function . in view of lemma [ l3.4 ] and lemma [ l3.5 ], we see that is locally bounded uniformly in ] .therefore , by standard elliptic theory , and its first- and second - order partial derivatives are uniformly bounded in , for any , in any bounded ball , that is , for some constant depending on and , ( , theorem 9.11 , page 117 ) .therefore , we can extract a subsequence along which converges .then the result follows from theorems 3.6.6 , lemma 3.6.9 and theorem 3.6.10 in .the proof of ( [ s-014 ] ) follows from lemma [ l3.4 ] and lemma [ l3.5 ] .[ r - tight ] in the proof of the following lemma , and elsewhere in the paper , we use the fact that if is a any set of controls such that the corresponding set of invariant probability measures is tight then the map from the closure of to is continuous , and so is the map .in fact , the latter is continuous under the total variation norm topology , lemma 3.2.6 .we also recall that and are closed and convex subsets of and .therefore, is compact in .note also that since is compact , tightness of a set of invariant probability measures is equivalent to tightness of the corresponding set of ergodic occupation measures . [l3.6 ] if \} ] are tight . moreover ,if along some subsequence , then the following hold : as , is a stable markov control , . 
by ( [ e - tbst01 ] ) and ( [ et - exist2 ] ) ,the set of ergodic occupation measures corresponding to \} ] , and also part ( a ) holds .part ( b ) follows from the equivalence of the existence of an invariant probability measure for a controlled diffusion and the stability of the associated stationary markov control ( see , theorem 2.6.10 ) .part ( c ) then follows since equality holds in ( [ et - exist3 ] ) .we continue with the following lemma that asserts the continuity of the mean hitting time of a ball with respect to the stable markov controls .[ l3.7 ] let , for some , be a collection of markov controls such that in the topology of markov controls as .let , be the invariant probability measures corresponding to the controls , , respectively .then for any , it holds that \mathop{\longrightarrow}_{n\to\infty } \mathbb{e}^{\hat{v}}_{x}[{\breve \tau}_{\delta } ] \qquad\forall x\in b^{c}_{\delta}.\ ] ] define .it is easy to see that is inf - compact and locally lipschitz .therefore , by theorem [ t - bst](d ) we have and since , we also have .then by , lemma 3.3.4 , we obtain + \mathbb{e}^{\hat{v}}_{x } \biggl[\int _ { 0}^{{\breve\tau}_{\delta } } h(x_{s } ) \,\mathrm{d } { s } \biggr ] < \infty.\ ] ] let be a positive number greater than . then by ( [ p-020 ] ) , there exists a positive such that \le \frac{1}{r } \mathbb{e}^{v}_{x } \biggl[\int _ { 0}^{{\breve\tau } _ { \delta}}h(x_{s } ) \mathbb{i}_{\{h > r\}}(x_{s } ) \,\mathrm{d } { s } \biggr ] \le\frac{k}{r}\ ] ] for . from this assertion and ( [ p-020 ] ) , we see that \mathop{\longrightarrow}_{r\to\infty } 0.\ ] ] therefore , in order to prove the lemma it is enough to show that , for any , we have \mathop { \longrightarrow}_{n\to\infty } \mathbb{e}^{\hat{v}}_{x } \biggl[\int _ { 0}^{{\breve\tau}_{\delta } } \mathbb{i}_{\{h\le r\}}(x_{s } ) \,\mathrm{d } { s } \biggr].\ ] ] but this follows from , lemma 2.6.13(iii ) .[ l3.8 ] let be as in theorem [ t - hjb1 ] , and satisfy ( [ e - he ] ) .there exists a subsequence , such that converges to some , uniformly on compact sets , and satisfies = \varrho_{*}.\ ] ] also , any limit point in the topology of markov controls of the set , as , satisfies moreover , admits the stochastic representation \nonumber \\[-8pt ] \label{el3.7-strep } \\[-8pt ] \nonumber & = & \mathbb{e}^{v_{*}}_{x } \biggl[\int _ { 0}^{{\breve\tau}_{\delta } } \bigl(r \bigl(x_{s},v_{*}(x_{s } ) \bigr)- \varrho _ { * } \bigr ) \,\mathrm{d } { s } + v_{*}(x_{{\breve\tau}_{\delta } } ) \biggr].\end{aligned}\ ] ] it follows that is the unique limit point of as . from ( [ s-014 ] ) , we see that the family \} ] is uniformly bounded in for .consequently , \} ] . for any , we have \ge \bigl(\inf_{b_{r}^{c}\times\mathbb{u } } \tilde{h } \bigr ) \mathbb { e}_{x}[{\breve\tau}_{r } ] \qquad\forall x\in b_{r}^{c}.\ ] ] it is also straightforward to show that }{\mathbb{e}_{x}[{\breve\tau}_{\delta}]}=1 ] is in , which implies that .on the other hand , by ( [ euniq1b ] ) we obtain for all such that , which implies that the restriction of to the support of is in .it follows that .we next prove theorem [ t - hjb2 ] .proof of theorem [ t - hjb2 ] part ( a ) is contained in lemma [ l3.8 ] . to prove part ( b ) ,let be any control satisfying ( [ e - v ] ) .by lemma [ luniq1 ] the map is inf - compact and by theorem [ t - hjb2 ] and ( [ e - key ] ) it satisfies this implies that .applying it s formula , we obtain \le k_{0}(1+\varrho_{*}).\ ] ] therefore , . 
by ( [ e - key ] ), we have \le \mathcal{v}(x ) + t + \mathbb{e}^{\bar{v}}_{x } \biggl [ \int_{0}^{t } r \bigl(x_{s } , \bar{v}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr],\ ] ] and since , this implies by ( [ 416a ] ) that \le 1+k_{0}(1+\varrho_{*}).\ ] ] since , it follows by ( [ 416b ] ) that = 0.\ ] ] therefore , by it s formula , we deduce from ( [ e - hjb ] ) that \le\varrho_{*}.\ ] ] on the other hand , since the only limit point of the mean empirical measures , as , is , and , then in view of remark [ r - lsc ] , we obtain .this proves that equality holds in ( [ e - t3.4new ] ) and that the `` '' may be replaced with `` . ''conversely , suppose is optimal but does not satisfy ( [ e - v ] ) .then there exists and a nontrivial nonnegative such that converges to , weakly in , along some subsequence . by applying it s formula to ( [ e - hjbe ] ) ,we obtain -v^{\varepsilon}(x ) \bigr ) + \frac{1}{t } \mathbb{e}^{v}_{x } \biggl[\int _ { 0}^{t\wedge\tau_{r } } r_{\varepsilon } \bigl(x_{s},v(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr ] \nonumber \\[-8pt ] \label{eeee3.54 } \\[-8pt ] \nonumber & & \qquad \ge\varrho_{\varepsilon}+\frac{1}{t } \mathbb{e}^{v}_{x } \biggl[\int_{0}^{t\wedge\tau_{r } } f_{\varepsilon } \bigl(x_{s},v(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr].\end{aligned}\ ] ] define , for some , .\ ] ] since is bounded from below , by theorem [ t - hjb1](c ) we have . invoking , corollary 3.7.3 , we obtain = 0,\ ] ] and = \mathbb{e}^{v}_{x } \bigl[v^{\varepsilon}(x_{t } ) \bigr].\ ] ] therefore , taking limits in ( [ eeee3.54 ] ) , first as , and then as , we obtain taking limits as in ( [ e - discrepancy ] ) , since has a strictly positive density in , we obtain which is a contradiction .this completes the proof of part ( b ) .the first equality ( [ e - strep ] ) follows by lemma [ l3.8 ] , taking limits as . to show that the second equality holds for any optimal control , suppose satisfies ( [ e - v ] ) . by ( [ e - key ] )we have , for and , \le \mathcal{v}(x ) + \sup_{b_{\delta } } \mathcal{v}^{- } + \mathbb { e}^{\bar{v}}_{x } \biggl [ \int_{0}^{\tau_{r}\wedge{\breve\tau}_{\delta } } \bigl(1+r \bigl(x_{s } , \bar{v}(x_{s } ) \bigr ) \bigr ) \,\mathrm{d } { s } \biggr].\ ] ] it follows that ( see , lemma 3.3.4 ) < \infty,\ ] ] and since we must have = 0.\ ] ] by the first equality in ( [ el3.7-strep ] ) , we obtain , with as defined in ( [ el3.7 - 3001 ] ) with replaced by . thus , in analogy to ( [ eee3.50 ] ) , we obtain = \mathbb{e}^{\bar{v}}_{x } \bigl[v_{*}(x_{{\breve\tau}_{\delta } } ) \bigr].\ ] ] the rest follows as in the proof of lemma [ l3.8 ] via ( [ eee3.51 ] ) .we next prove part ( d ) .we assume that is a convex set and that is strictly convex in if it is not identically a constant for fixed and .we fix some point .define it is easy to see that on both and do not depend on .it is also easy to check that is a closed set .let be the limit of , where is the solution to ( [ e - hjb ] ) and is the corresponding limit of .we have already shown that is a stable markov control .we next show that it is , in fact , a precise markov control . by our assumption, is the unique minimizing selector in ( [ e - ve ] ) and , moreover , is continuous in . by the definition of is clear that the restriction of to does not depend on .let on . 
using the strict convexity property of it is easy to verify that converges to the unique minimizer of ( [ e - hjb ] ) on .in fact , since is open , then for any sequence it holds that .this follows from the definition of the minimizer and the uniform convergence of to .therefore , we see that is a precise markov control , on , and pointwise as .it is also easy to check that pointwise convergence implies convergence in the topology of markov controls .we now embark on the proof of theorem [ t - unique ] .proof of theorem [ t - unique ] the hypothesis that implies that the map is inf - compact . also by ( [ e - key ] ) and ( [ e - hjb - hat2 ] ), it satisfies therefore , from which it follows that .this proves part ( a ) . by ( [ e - key ] ), we have \le \mathcal{v}(x ) + t + \mathbb{e}^{\hat{v}}_{x } \biggl [ \int_{0}^{t } r \bigl(x_{s } , \hat{v}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr],\ ] ] and since , this implies that \le 1+\varrho_{\hat{v}}.\ ] ] since , it follows by ( [ luniq2a ] ) that = 0.\ ] ] therefore , by it s formula , we deduce from ( [ e - hjb - hat2 ] ) that + \frac{1}{t } \mathbb{e}^{\hat{v}}_{x } \biggl[\int_{0}^{t } r \bigl(x_{s},\hat{v}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr ] \biggr ) = \hat{\varrho}.\ ] ] this implies that and since by hypothesis we must have .again by ( [ e - key ] ) , we have \le \mathcal{v}(x ) + \sup_{b_{\delta } } \mathcal{v}^{- } + \mathbb { e}^{\hat{v}}_{x } \biggl [ \int_{0}^{\tau_{r}\wedge{\breve\tau}_{\delta } } \bigl(1+r \bigl(x_{s } , \hat{v}(x_{s } ) \bigr ) \bigr ) \,\mathrm{d } { s } \biggr].\ ] ] it follows by , lemma 3.3.4 , that < \infty,\ ] ] and since we must have = 0.\ ] ] using ( [ luniq2b ] ) and following the steps in the proof of the second equality in ( [ el3.7-strep ] ) , we obtain + \inf_{b_{\delta } } \hat{v } \\ & \ge & v_{*}(x ) - \sup_{b_{\delta } } v _ { * } + \inf_{b_{\delta } } \hat { v}.\end{aligned}\ ] ] taking limits as , we have . since and , we must have on , and the proof of part ( b ) is complete . to prove part ( c ) note that by part ( a ) we have . therefore , by theorem [ t - bst](a ) , which implies that by the hypothesis .therefore , converges as by , proposition 2.6 , which of course implies that tends to as .similarly , we deduce that as . applying it s formula to ( [ e - hjb - hat2 ] ) , with , we obtain . 
another application with results in .therefore , .the result then follows by part ( b ) .we finish this section with the proof of theorem [ t - hjb3 ] .proof of theorem [ t - hjb3 ] we first show that .let , and .applying it s formula to ( [ e - key ] ) , we obtain & \le & \mathcal{v}(x ) - \mathbb{e}^{u}_{x } \biggl[\int_{0}^{\tau_{n}(t ) } \alpha\tilde\mathcal{v}(s , x_{s } ) \,\mathrm{d } { s } \biggr ] \\ & & { } + \mathbb{e}^{u}_{x } \biggl[\int_{0}^{\tau_{n}(t ) } \mathrm { e}^{-\alpha s } \bigl(1-h(x_{s},u_{s } ) \bigr ) \mathbb{i}_{\mathcal { h}^{c}}(x_{s},u_{s } ) \,\mathrm{d } { s } \biggr ] \\ & & { } + \mathbb{e}^{u}_{x } \biggl[\int_{0}^{\tau_{n}(t ) } \mathrm { e}^{-\alpha s } \bigl(1+r(x_{s},u_{s } ) \bigr ) \mathbb{i}_{\mathcal{h}}(x_{s},u_{s } ) \,\mathrm{d } { s } \biggr].\end{aligned}\ ] ] it follows that \nonumber \\[-8pt ] \label{e - disc02 } \\[-8pt ] \nonumber & & \qquad\le \frac{1}{\alpha}+\mathcal{v}(x)+ \mathbb{e}^{u}_{x } \biggl[\int _ { 0}^{\tau_{n}(t ) } \mathrm { e}^{-\alpha s } r(x_{s},u_{s } ) \mathbb{i}_{\mathcal{h}}(x_{s},u_{s } ) \,\mathrm{d } { s } \biggr].\end{aligned}\ ] ] taking limits first as and then as in ( [ e - disc02 ] ) , and evaluating at an optimal -discounted control , relative to we obtain the estimate , using also ( [ e - ka3 ] ) , \le \frac{2}{\alpha } + \mathcal{v}(x)+2v_{\alpha}(x).\ ] ] by ( [ e - ka3 ] ) and ( [ e - disc03 ] ) , it follows that \\ & \le & v_{\alpha}(x ) + \varepsilon k_{0 } \bigl ( \alpha^{-1 } + \mathcal{v}(x ) + v_{\alpha } ( x ) \bigr).\end{aligned}\ ] ] multiplying by and taking limits as we obtain the same inequalities hold for the `` . ''therefore , .let ( note that a similar result as lemma [ l3.4 ] holds . )then satisfies \qquad \forall v\in\bigcup _ { \beta>0}\mathfrak{u}_{\mathrm{sm}}^{\beta}.\ ] ] this can be obtained without the near - monotone assumption on the running cost ; see , for example , , lemma 3.6.9 or lemma 3.7.8 .it follows from ( [ e - strep ] ) that . on the other hand , since , and , we must have by the strong maximum principle .we introduce an approximation technique which is in turn used to prove the asymptotic convergence results in section [ s - optimality ] .let be any control such that .we fix the control on the complement of the ball and leave the parameter free inside . in other words , for each we define we consider the family of controlled diffusions , parameterized by , given by with associated running costs .we denote by the subset of consisting of those controls which agree with on .let .it is well known that there exists a nonnegative solution , for any , to the poisson equation ( see , lemma 3.7.8(ii ) ) which is inf - compact , and satisfies , for all , \qquad \forall x\in\mathbb{r}^{d}.\ ] ] we recall the lyapunov function from assumption [ ass-1 ] .we have the following theorem . [ t - trunc ]let assumptions [ ass-1 ] and [ ass-2 ] hold .then for each there exists a solution in , for any , with , of the hjb equation = \varrho_{l},\ ] ] where is the elliptic differential operator corresponding to the diffusion in ( [ e - sder ] ) .moreover , the following hold : is nonincreasing in ; there exists a constant , independent of , such that for all ; uniformly over ; the restriction of on is in . 
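To make the spatial truncation introduced above concrete, the following is a minimal sketch of the construction: inside the ball B_l the Markov control is left free, while on its complement it is frozen to the fixed stable control v_0. Representing controls as plain Python callables on the state space, and the particular controls used in the example, are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

def truncate_control(u, v0, l):
    """Return the spatially truncated Markov control v_l.

    Inside the ball B_l (|x| < l) the control u is left free; on the
    complement of B_l it is frozen to the fixed control v0, which is
    assumed to be stable.  Both u and v0 map a state x to an action.
    """
    def v_l(x):
        x = np.asarray(x, dtype=float)
        return u(x) if np.linalg.norm(x) < l else v0(x)
    return v_l

# Illustrative use with hypothetical controls on a 3-dimensional state space.
if __name__ == "__main__":
    d = 3
    v0 = lambda x: np.eye(d)[-1]            # fixed control, assumed stable (hypothetical)
    u = lambda x: np.full(d, 1.0 / d)       # candidate control left free inside B_l
    v_l = truncate_control(u, v0, l=10.0)
    print(v_l(np.ones(d)))                  # inside B_l  -> [1/3, 1/3, 1/3]
    print(v_l(20.0 * np.ones(d)))           # outside B_l -> [0, 0, 1]
```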
as earlier, we can show that \ ] ] is the minimal nonnegative solution to = \alpha v^{l}_{\alpha}(x),\ ] ] and , .moreover , any measurable selector from the minimizer in ( [ e - dishjb - n ] ) is an optimal control .a similar estimate as in lemma [ l3.4 ] holds and , therefore , there exists a subsequence , along which converges to in , , and as , and satisfies ( [ e - hjb - n ] ) ( see also , lemma 3.7.8 ) . to show that , is a minimizing selector in ( [ e - hjb - n ] ) , we use the following argument . since , we claim that there exists a nonnegative , inf - compact function such that .indeed , this is true since integrability and uniform integrability of a function under any given measure are equivalent ( see also the proof of , lemma 3.7.2 ) .since every control in agrees with on , then for any the map \ ] ] is constant on . by the equivalence of ( i ) and ( iii ) in lemma 3.3.4 of , this implies that and thus is uniformly integrable with respect to the family for any .it then follows by , theorem 3.7.11 , that this yields part ( i ) .moreover , in view of lemmas [ l3.4 ] and [ l3.5 ] , we deduce that for any it holds that , where is a constant independent of .it is also evident by ( [ t - trunc01 ] ) that is decreasing in and for all .fix such that on .since is nonnegative , we obtain \le \varphi_{0}(x)\qquad \forall x\in\mathbb{r}^{d}.\ ] ] using an analogous argument as the one used in the proof of , lemma 3.7.8 , we have + \kappa_{\delta } \qquad\forall v\in\mathfrak{u}_{\mathrm{sm}}(l , v_{0}).\ ] ] thus , by ( [ t - trunc02a ] ) and ( [ t - trunc02 ] ) , and since by the choice of , it holds that on , we obtain + \kappa_{\delta} \nonumber \\[-8pt ] \label{t - trunc03 } \\[-8pt ] \nonumber & \le & \kappa_{\delta}+2\varphi_{0}(x ) \qquad\forall x\in \mathbb{r}^{d}.\end{aligned}\ ] ] this proves part ( ii ). now fix .let be a minimizing selector of ( [ e - dishjb - n ] ) .note then that .therefore , is a stable markov control .let in the topology of markov controls along the same subsequence as above .then it is evident that . 
also from lemma [ l3.7 ], we have \ , \mathop { \longrightarrow}_{\alpha_{n}\searrow0}\ , \mathbb{e}^{v^{l}}_{x } [ { \breve \tau}_{\delta}]\qquad \forall x\in b_{\delta}^{c } , \forall \delta>0.\ ] ] using , lemma 3.7.8 , we obtain the lower bound - \kappa_{\delta}.\ ] ] by , theorem 3.7.12(i ) ( see also ( 3.7.50 ) in ) , it holds that \nonumber \\[-8pt ] \label{t - trunc05 } \\[-8pt ] \nonumber & \ge & \mathbb{e}^{v^{l}}_{x } \biggl[\int_{0}^{{\breve\tau } _ { \delta } } r_{l } \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr ] - \varrho_{l}\mathbb{e}^{v^{l}}_{x } [ { \breve\tau}_{\delta } ] - \kappa_{\delta } \qquad\forall x\in b_{l}^{c}.\end{aligned}\ ] ] by ( [ e - ka3 ] ) , we have therefore , using the preceding inequality and ( [ t - trunc05 ] ) , we obtain + \kappa_{\delta } \nonumber \\[-8pt ] \label{t - trunc06 } \\[-8pt ] \nonumber & & \qquad\ge \frac{2}{k_{0 } } \mathbb{e}^{v^{l}}_{x } \biggl[\int _ { 0}^{{\breve \tau}_{\delta } } \tilde{h } \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \mathbb{i}_{\mathcal { h } } \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr].\end{aligned}\ ] ] by ( [ e - key ] ) , ( [ t - trunc05 ] ) and the fact that is nonnegative , we have -\mathcal{v}(x)-\mathbb{e}^{v^{l}}_{x } [ { \breve \tau}_{\delta } ] \nonumber \\ \label{t - trunc07 } & & \qquad \le \mathbb{e}^{v^{l}}_{x } \biggl[\int_{0}^{{\breve\tau}_{\delta } } r \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \mathbb{i}_{\mathcal { h } } \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr ] \\ & & \qquad \le v^{l}(x)+\varrho_{l}\mathbb{e}^{v^{l}}_{x } [ { \breve\tau}_{\delta } ] + \kappa_{\delta}. \nonumber\end{aligned}\ ] ] combining ( [ t - trunc03 ] ) , ( [ t - trunc06 ] ) and ( [ t - trunc07 ] ) , we obtain & \le & k_{0 } ( 1+\varrho_{l } ) \mathbb{e}^{v^{l}}_{x}[{\breve\tau } _ { \delta } ] \\ & & { } + \frac{k_{0}}{2}\mathcal{v}(x)+2k_{0 } \bigl(\varphi_{0}(x)+ \kappa_{\delta } \bigr)\end{aligned}\ ] ] for all . as earlier , using the inf - compact property of and the fact that is bounded , we can choose large enough such that \le \mathbb{e}^{v^{l}}_{x } \biggl [ \int_{0}^{{\breve\tau}_{\delta } } \tilde{h } \bigl(x_{s},v^{l}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr ] \le k_{0}\mathcal{v}(x ) + 4k_{0 } \bigl(\varphi_{0}(x)+\kappa_{\delta } \bigr)\ ] ] for all . since is inf - compact , part ( iii ) follows by ( [ t - trunc04 ] ) and ( [ t - trunc08 ] ) . part ( iv )is clear from regularity theory of elliptic pde , theorem 9.19 , page 243 .similar to theorem [ t - hjb1 ] , we can show that oscillations of are uniformly bounded on compacts .therefore , if we let we obtain a hjb equation = \hat\varrho,\ ] ] with and . by theorem [ t - trunc ], we have the bound for some positive constant .this of course , implies that .moreover , it is straightforward to show that for any with , we have < \infty.\ ] ] therefore , if in addition , we have < \infty,\ ] ] then it follows by theorem [ t - trunc](iii ) that [ t - trunc2 ] suppose that . then , under the assumptions of theorem [ t - trunc ] , we have , and . 
moreover , .let be any sequence of measurable selectors from the minimizer of ( [ e - hjb - n ] ) and the corresponding sequence of ergodic occupation measures .since by theorem [ t - bst ] is tight , then by remark [ r - tight ] if is a limit point of a subsequence , which we also denote by , then is the corresponding limit point of .therefore , by the lower semi - continuity of we have it also holds that by ( [ t - trunc000 ] ) , we have = 0,\ ] ] and hence applying it s rule on ( [ et4.2a ] ) we obtain . on the other hand , if is an optimal stationary markov control , then by the hypothesis , the fact that , ( [ estim5 ] ) and , proposition 2.6 , we deduce that ] tends to as . therefore , evaluating ( [ e - hjb - hat ] ) at and applying it s rule we obtain . combining the two estimates , we have , and thus equality must hold . here , we have used the fact that there exists an optimal markov control for by theorem [ t - hjb2 ] .next , we use the stochastic representation in ( [ t - trunc05 ] ) ,\qquad x\in b_{\delta}^{c}.\ ] ] fix any .since is compact , it follows that for each and with , the map defined by \ ] ] is continuous .therefore , the map is lower semi - continuous .it follows that \le \lim _ { l\to\infty } \mathbb{e}^{\hat{v}_{l}}_{x } \biggl[\int _ { 0}^{{\breve\tau}_{\delta } } r \bigl(x_{s } , \hat{v}_{l}(x_{s } ) \bigr ) \,\mathrm{d } { s } \biggr].\ ] ] on the other hand , since is inf - compact , it follows by ( [ t - trunc08 ] ) that is uniformly integrable with respect to the measures . therefore ,as also shown in lemma [ l3.7 ] , we have = \mathbb{e}^{\hat{v}}_{x } [ { \breve\tau}_{\delta } ] .\ ] ] since , uniformly on compact sets , and , as , it follows by ( [ et4.2d])([et4.2f ] ) that ,\qquad x\in b_{\delta } ^{c}.\ ] ] therefore , by theorem [ t - hjb2](b ) , for any and we obtain \\ & \le & \hat{v}(x ) + \mathbb{e}^{\hat{v}}_{x } \bigl[v^{*}(x_{{\breve \tau}_{\delta } } ) \bigr ] - \mathbb{e}^{\hat{v}}_{x } \bigl[\hat{v}(x_{{\breve\tau}_{\delta } } ) \bigr],\end{aligned}\ ] ] and taking limits as , using the fact that , we obtain on . since , we must have . by theorem [ t - trunc](ii ) , we have . it can be seen from the proof of theorem [ t - trunc2 ] that the assumption can be replaced by the weaker hypothesis that \to0 ] is the sum of the squares of the jumps , and that -\langle\hat{x}^{n}_{1}\rangle ] for , lemmas 9.2 and 9.3 .therefore , using it s formula and the definition of , we obtain \nonumber\\ & & \qquad= \frac{1}{t } \mathbb{e } \bigl[f \bigl(\hat{x}^{n}(0 ) \bigr ) \bigr ] \nonumber\\ \label{asym-2 } & & \qquad\quad{}+ \int_{\mathbb{r}^{d}\times\mathbb{u } } \biggl(\sum_{i=1}^{d } \mathcal{a}^{n}_{i}(x , u)\cdot f_{x_{i}}(x ) + \mathcal{b}^{n}_{i}(x , u)f_{x_{i}x_{i}}(x ) \biggr ) \phi ^{n}_{t}(\mathrm{d } { x},\mathrm{d } { u } ) \\ \nonumber & & \qquad\quad{}+\frac{1}{t } \mathbb{e}\sum_{s\le t } \biggl [ \delta f \bigl(\hat{x}^{n}(s ) \bigr)-\sum_{i=1}^{d } f_{x_{i } } \bigl(\hat{x}^{n}(s- ) \bigr)\cdot\delta\hat { x}^{n}_{i}(s ) \\ & & \qquad\quad\hspace*{54pt}{}- \frac{1}{2}\sum_{i , j=1}^{d } f_{x_{i}x_{j } } \bigl(\hat { x}^{n}(s- ) \bigr ) \delta \hat{x}^{n}_{i}(s)\delta\hat{x}^{n}_{j}(s ) \biggr],\nonumber\end{aligned}\ ] ] where we first bound the last term in ( [ asym-2 ] ) . using taylor s formula , we see that for some positive constant , where we use the fact that the jump size is . 
hence , using the fact that independent poisson processes do not have simultaneous jumps w.p.1 , using the identity , we obtain \\ & & \qquad\le\frac{k \vert f\vert_{\mathcal{c}^{3}}}{t\sqrt{n } } \mathbb{e } \biggl[\int_{0}^{t } \sum_{i=1}^{d } \biggl(\frac{\lambda ^{n}_{i}}{n } + \frac{\mu_{i}^{n } z^{n}_{i}(s)}{n } + \frac{\gamma_{i}^{n}q^{n}_{i}(s)}{n } \biggr ) \,\mathrm{d } { s } \biggr].\nonumber\end{aligned}\ ] ] therefore , first letting and using ( [ e5.2 ] ) and ( [ e5.4 ] ) we see that the expectation on the right - hand side of ( [ e5.6 ] ) is bounded above . therefore , as , the left - hand side of ( [ e5.6 ] ) tends to .thus , by ( [ asym-2 ] ) and the fact that is compactly supported , we obtain where therefore , . the proof of the upper bound in theorem [ t - upperbound ] is a little more involved than that of the lower bound .generally , it is very helpful if one has uniform stability across ( see , e.g. , ) . in , uniform stability is obtained from the reflected dynamics with the skorohod mapping .however , here we establish the asymptotic upper bound by using the technique of spatial truncation that we have introduced in section [ s - truncation ] .let be any precise continuous control in satisfying for .first , we construct a work - conserving admissible policy for each ( see ) .define a measurable map as follows : for , let note that .define we define a state - dependent , work - conserving policy as follows : : = \cases{{\displaystyle}x_{i}^{n}-u_{h } \bigl(x^{n } \bigr ) , & \quad,\vspace*{3pt } \cr { \displaystyle}x_{i}^{n}\wedge \biggl(n-\sum_{j=1}^{i-1}x_j^{n } \biggr)^{+},&\quad.}\ ] ] therefore , whenever the state of the system is in , the system works under the fixed priority policy with the least priority given to class- jobs .first , we show that this is a well - defined policy for all large .it is enough to show that for all when . if not , then for some , , we must have and so . since , we obtain but this can not hold for large . hence , this policy is well defined for all large . 
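For illustration, the following minimal sketch computes the fixed-priority, work-conserving server allocation that the policy above reduces to, with the last class given the least priority. Only the static-priority branch of (e-zpolicy) is reproduced; the state-dependent branch driven by the map u_h is omitted, and the function and variable names are illustrative.

```python
def priority_allocation(x, n):
    """Work-conserving static priority allocation of n servers.

    x[i] is the number of class-(i+1) customers in the system; classes are
    served in index order, so the last class has the least priority.
    Returns (z, q), where z[i] customers of class i are in service and
    q[i] = x[i] - z[i] wait in queue.
    """
    z, free = [], n
    for xi in x:
        zi = min(xi, max(free, 0))   # equivalently z_i = x_i ^ (n - sum_{j<i} x_j)^+
        z.append(zi)
        free -= zi
    q = [xi - zi for xi, zi in zip(x, z)]
    return z, q

# Illustrative use: 3 classes, 10 servers, 14 customers in total.
if __name__ == "__main__":
    z, q = priority_allocation([5, 4, 5], n=10)
    print(z, q)   # -> [5, 4, 1] [0, 0, 4]: only the lowest-priority class queues
```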
under the policy defined in ( [ e - zpolicy ] ) , is a markov process and its generator given by \bigl(f(x - e_{i})-f(x ) \bigr ) \\ & & { } + \sum_{i=1}^{d } \gamma^{n}_{i } q^{n}_{i}[x ] \bigl(f(x - e_{i})-f(x ) \bigr ) , \qquad x\in \mathbb{z}^{d}_{+},\end{aligned}\ ] ] where ] .it is easy to see that , for , = \biggl[x_{i}- \biggl(n-\sum _ { j=1}^{i-1 } x_j \biggr)^{+ } \biggr]^{+}.\ ] ] [ lem - uni ] let be the markov process corresponding to the above control .let be an even positive integer .then there exists such that < \infty,\ ] ] where is the diffusion - scaled process corresponding to the process , as defined in ( [ dc1 ] ) .the proof technique is inspired by , lemma 3.1 .define where , , are positive constants to be determined later .we first show that for a suitable choice of , , there exist constants , , independent of , such that choose large enough so that the policy is well defined .we define .note that also , = \mu^{n}_{i } x_{i}-\mu^{n}_{i } q_{i}^{n}[x] ] for .let the last estimate is due to the assumptions in ( [ hwpara ] ) concerning the parameters in the halfin whitt regime .then \bigr ) \nonumber \\[-8pt ] \label{uni-07 } \\[-8pt ] \nonumber & & \qquad\qquad= - q \sum_{i=1}^{d } \beta_{i } \mu_{i}^{n } \bigl{\vert}y^{n}_{i } \bigr { \vert}^{q } + \sum_{i=1}^{d } \beta_{i } qy^{n}_{i } \bigl{\vert}y^{n}_{i } \bigr{\vert}^{q-2 } \bigl ( \delta_{i}^{n}- \bigl(\gamma^{n}_{i}- \mu_{i}^{n } \bigr)q^{n}_{i}[x ] \bigr).\end{aligned}\ ] ] if and is large , then & = & u_{h}(x ) = \varpi \bigl((e \cdot x - n)^{+}v_{\delta}(\hat{x}_{n } ) \bigr ) \\ & \le & ( e\cdot x - n)^{+ } + d \le2dk\sqrt{n}.\end{aligned}\ ] ] let .we use the fact that for any it holds that ] .also , ^{+ } = 0,\qquad i=1 , \ldots , d.\ ] ] thus , we obtain maps ^{d} ] is finite for any as this quantity is dominated by the poisson arrival process .therefore , from ( [ uni-06 ] ) we see that -f_{n } \bigl(x^{n}(0 ) \bigr ) & = & \mathbb{e } \biggl[\int _ { 0}^{t}\mathcal{l}_{n}f_{n } \bigl(x^{n}(s ) \bigr)\ , \mathrm{d } { s } \biggr ] \\& \le & c_{1 } n^{{q}/{2}}t -c_{2 } \mathbb{e } \biggl [ \int_{0}^{t}f_{n } \bigl(x^{n}(s ) \bigr ) \,\mathrm{d } { s } \biggr],\end{aligned}\ ] ] which implies that \le c_{1 } t+\sum_{i=1}^{d } \beta_{i } \bigl(\hat{x}_{i}^{n}(0 ) \bigr)^{q}.\ ] ] hence , the proof follows by dividing both sides by and letting .proof of theorem [ t - upperbound ] let be the given running cost with polynomial growth with exponent in ( [ cost1 ] ) .let . recall that for .then is convex in and satisfies ( [ eg - cost ] ) with the same exponent . for any ,we choose such that is a continuous precise control with invariant probability measure and we also want the control to have the property that outside a large ball . to obtain such , we see that by theorems [ t - trunc ] , [ t - trunc2 ] and remark [ rem - trunc ] we can find and a ball for large , such that , for , is continuous in , and where is the invariant probability measure corresponding to .we note that might not be continuous on .let be a sequence of cut - off functions such that $ ] , it vanishes on , and it takes the value on .define the sequence .then , as , and the convergence is uniform on the complement of any neighborhood of . also by proposition [ eg - prop ] the corresponding invariant probability measures are exponentially tight .thus , combining the above two expressions , we can easily find which satisfies ( [ 300 ] ) .we construct a scheduling policy as in lemma [ lem - uni ] . 
by lemma [ lem - uni ] , we see that for some constant it holds that < k_{1},\qquad q=2(m+1).\ ] ] define since when , it follows that = x^{n}-z^{n } \bigl[x^{n } \bigr ] = v_{h } \bigl(x^{n } \bigr)\ ] ] for large , provided that .define then define , for each , the mean empirical measure by .\ ] ] by ( [ uni-04 ] ) , the family is tight . we next show that = \int _ { \mathbb{r}^{d}}r \bigl((e\cdot x)^{+}v_{\delta}(x ) \bigr ) \mu_{\delta}(\mathrm{d } { x}).\ ] ] for each , select a sequence along which the `` '' in ( [ uni-05 ] ) is attained . by tightness, there exists a limit point of . since has support on a discrete lattice , we have therefore , \lessgtr\int_{\mathbb{r}^{d } } r \biggl ( \frac{1}{\sqrt{n}}\hat{v}_{h}(x ) \biggr ) \psi^{n}(\mathrm{d } { x})\pm\mathcal{e}^{n},\ ] ] where .\ ] ] by ( [ uni-04 ] ) , the family is tight .hence , it has a limit . by definition, we have thus , using the continuity property of and ( [ cost1 ] ) it follows that along some subsequence .therefore , in order to complete the proof of ( [ uni-05 ] ) we need to show that since the policies are work - conserving , we observe that , and therefore for some positive constants and , we have ^{m}.\ ] ] given we can choose so that for all , ^{m } \mathbb{i } _ { \{{\vert}\hat{x}^{n}(s){\vert}>({\rho_{d}}/{\sqrt{d}})\sqrt{n } \ } } \,\mathrm{d } { s } \biggr ] \le\varepsilon,\ ] ] where we use ( [ uni-04 ] ) .we observe that .thus , ( [ uni-05 ] ) holds . in order to complete the proof ,we only need to show that is the invariant probability measure corresponding to .this can be shown using the convergence of generators as in the proof of theorem [ t - lowerbound ] .we have answered some of the most interesting questions for the ergodic control problem of the markovian multi - class many - server queueing model .this current study has raised some more questions for future research .one of the interesting questions is to consider nonpreemptive policies and try to establish asymptotic optimality in the class of nonpreemptive admissible polices .it will also be interesting to study a similar control problem when the system has multiple heterogeneous agent pools with skill - based routing .it has been observed that customers service requirements and patience times are nonexponential in some situations .it is therefore important and interesting to address similar control problems under general assumptions on the service and patience time distributions .we thank the anonymous referee for many helpful comments that have led to significant improvements in our paper .ari arapostathis acknowledges the hospitality the department of industrial and manufacturing engineering in penn state while he was visiting at the early stages of this work .guodong pang acknowledges the hospitality of the department of electrical and computer engineering at university of texas at austin while he was visiting for this work .part of this work was done while anup biswas was visiting the department of industrial and manufacturing engineering in penn state .hospitality of the department is acknowledged .
We study a dynamic scheduling problem for a multi-class queueing network with a large pool of statistically identical servers. The arrival processes are Poisson, and service times and patience times are assumed to be exponentially distributed and class dependent. The optimization criterion is the expected long-time average (ergodic) of a general (nonlinear) running cost function of the queue lengths. We consider this control problem in the Halfin-Whitt (QED) regime, that is, the number of servers and the total offered load scale like for some constant . This problem was proposed in [_Ann. Appl. Probab._ *14* (2004) 1084-1134, section 5.2]. The optimal solution of this control problem can be approximated by that of the corresponding ergodic diffusion control problem in the limit. We introduce a broad class of ergodic control problems for controlled diffusions, which includes a large class of queueing models in the diffusion approximation, and establish a complete characterization of optimality via the study of the associated HJB equation. We also prove the asymptotic convergence of the values for the multi-class queueing control problem to the value of the associated ergodic diffusion control problem. The proof relies on an approximation method by _spatial truncation_ for the ergodic control of diffusion processes, where the Markov policies follow a fixed priority policy outside a fixed compact set.
The field of multiobjective design optimization has evolved very fast during the last years, reflecting the need to solve tasks with several conflicting criteria, which is common in practical problems. From the mathematical point of view, this corresponds to the minimization/maximization of a vector-valued function, which rarely leads to a single solution. Consequently, a whole hyperplane of trade-off solutions, called the Pareto-optimal set, is expected as the result instead of a single optimum. A number of algorithms have been presented that generate a set of solutions approximating this hyperplane. The quality of the approximation is usually considered from two points of view: (i) the closeness to the exact trade-off surface and (ii) its distribution. The former is related to the convergence properties of an algorithm, while the latter describes its ability to maintain diversity. An ideal algorithm should produce well-converged solutions perfectly distributed along the Pareto front. However, these requirements are conflicting, and many current approaches concentrate on one of them while finding a reasonable compromise in the other. In this study, our attention is focused on the second aspect, diversity of the Pareto-optimal set; namely, we present a new strategy for maintaining variety among the members of a Pareto archive. The problem of maintaining a uniform distribution at an affordable cost has been addressed by many algorithms. It is known that the notion of crowding distance proposed by Deb et al. for the algorithm NSGA-II is not sufficient to maintain diversity of the evolution for more than two objectives (e.g. ). On the other hand, SPEA2 by Zitzler et al. is usually able to produce well-spread solutions even for three or more objectives. The concept of archiving promising design vectors was first introduced for SPEA by Zitzler and Thiele. Knowles and Corne presented the Pareto archived evolutionary strategy (PAES) and proposed the adaptive grid algorithm ( ) to maintain diversity. However, it is difficult to keep the efficiency of this approach in cases with more than three objectives. The new mechanism presented in this paper was implemented in the micro-genetic algorithm proposed by Szőllős et al., and results for three standard three-objective benchmark problems are presented. Our second aim is to investigate the effect of population size for small (sometimes called micro) populations on the performance of . It was reported by Krishnakumar for single-objective optimization and by Coello and Pulido for multiple objectives that very small populations can lead to fast convergence to the Pareto front. In this context, most experiments were performed using populations of 4, 10 and 20 individuals. Results obtained by equipped with the new archiving mechanism are compared with those obtained by two leading methods in the field, namely NSGA-II by Deb et al. and SPEA2 by Zitzler et al., and a recent interesting algorithm, IBEA by Zitzler and Künzli. All these are implemented in the platform and programming language independent interface for search algorithms (PISA). PISA is an interesting open-source package developed by the team of Prof. E.
Zitzler at ETH Zürich. The software implements various selection, crossover, and mutation operators and objective function evaluations. An important idea of the project is to separate the selection of promising candidates from objective function evaluation, crossover and mutation, and to implement these in two separate programs, interchanging information via formatted files. These programs are called _selectors_ and _variators_ in the PISA context. There is an increasing number of ready-to-use variators and selectors that can be downloaded from the web page of the PISA project. Therefore, the system offers a simple way to produce fair comparisons of various selection schemes with the same variator. While the described scheme of splitting an evolutionary algorithm into two separate programs is very useful for some techniques, in our opinion it does not fit algorithms with strong coupling between both stages via the use of an archiving procedure. That is the reason why our implementation of was used, instead of integrating the proposed archiving technique into the PISA framework. Three metrics, measuring both convergence to the exact front and diversity of the approximate set, are used for the comparison. It is observed that the new algorithm produces a very good distribution of individuals, outperforming the other algorithms in this respect in many cases. The archiving strategy does not seem to affect its convergence. Moreover, diversity is maintained in an affordable way, as suggested by the presented numerical experiments. The rest of the paper is organised as follows. In section [ sec : armoga ], is recalled with an emphasis on its main aspects. Section [ sec : archive ] contains the main contribution, which is the proposition of a new archiving mechanism. Tests and comparisons with the other evolutionary techniques can be found in section [ sec : comparison ], where we describe the test problems ( [ sec : problems ] ), the metrics used for evaluating the performance ( [ sec : metrics ] ), the detailed setting of particular algorithms ( [ sec : setting ] ), and the organization of the experiments ( [ sec : experiments ] ), respectively. Our findings are discussed in detail in section [ sec : discussion ], while section [ sec : conclusion ] contains a summary of the work and concluding remarks. To minimize the costly evaluation of individuals, it is straightforward to see that one way to go is to minimize their number. It is well known to evolutionary practitioners that using smaller populations and applying the evolutionary operators many times is often more favourable than vice versa (e.g. ). This idea can be brought to an extreme by using a micro-population (e.g. 4, 5, 10 individuals), which is what we did when we utilized some ideas of Krishnakumar and of Coello and Pulido. Krishnakumar first came up with the concept of a micro-genetic approach and used it for single-objective optimization. His algorithm contained only selection and crossover operators, and no mutation operator. Instead, the author introduced a reinitialization technique, which was invoked once in a few generations to ensure diversity for the evolution. The latter two researchers proposed a micro-genetic algorithm capable of tackling multi-objective problems. Their concept was similar to that of Krishnakumar, i.e. it contained selection, crossover, and reinitialization operators supplemented by a mutation operator. Both algorithms were verified on various test problems.
in both cases, the micro-genetic variants converged to the optimum (pareto front) much faster than the macro-genetic counterparts used for comparison. in the approach by szöllős, the micro-genetic algorithm is supplemented by range adaptation and a "knowledge-based" reinitialization procedure exploiting the pareto archive to generate better individuals. the concept of range adaptation was originally introduced by arakawa and hagiwara, who used it with binary coding of the design variables. its essence lies in its ability to promote the evolution towards promising regions of the design space via sophisticated manipulation of the population statistics. due to the coding, it contained some artificial parameters which were hard to guess in general. oyama used range adaptation in the real domain and was successful in avoiding this drawback by encoding a design variable to a real number defined by integration of the gaussian distribution, where the encoded value is linked to the original design variable through the population mean and standard deviation. we are using this encoding scheme too, with one important difference: oyama originally calculated the average and the standard deviation by sampling the upper half of the population, which is justified as long as macro-populations are used (e.g. with more than fifty individuals). but such an approach would be too restrictive in the case of micro-evolution, since the upper half of the micro-population contains too little information to keep the diversity. consequently, the evolution quickly ends up in premature convergence. thus, we calculate both statistics by taking into account the whole population. "knowledge-based" reinitialization resulted from an attempt to use the members of the pareto archive to get new members, superseding the old ones by putting several of them into the reinitialized population. moreover, only a subset of the archive is considered. for instance, two archive members with extreme values of two different objectives chosen randomly are usually exploited. in this way, it is possible to further improve the whole archive by improving its subsets. the functioning of the algorithm can be seen in figure [fig:miarmoga_scheme]. after initialization of the population by latin hypercube sampling (lhs) and evaluation, depicted as an archive update, the evolution goes through selection, mating and mutation to the evaluation of the new population. every prescribed number of generations the population statistics are updated and range adaptation takes place, followed by knowledge-based (elitist-random) reinitialization. a thorough description of the algorithm can be found in the cited reference. our approach contains two new system parameters: the adaptation factor and the minimal standard deviation.
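to make this step more tangible, the following sketch is one possible reading of the description above, not the implementation itself: it recomputes the per-variable statistics over the whole population, clamps the standard deviation from below by the minimal standard deviation, flags a variable for range adaptation when its standard deviation has changed by more than the adaptation factor since the last reinitialization, and encodes a design variable through the gaussian cumulative distribution function. the default values mirror those reported in the settings section; all names are illustrative.

```python
import math
import random

def normal_cdf(x, mu, sigma):
    # cumulative distribution function of N(mu, sigma^2)
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def adapt_ranges(population, sigma_min=0.8, adapt_factor=1.4, last_sigmas=None):
    """Recompute per-variable statistics over the WHOLE population (not its
    upper half) and decide which design variables need range adaptation."""
    n_vars = len(population[0])
    mus, sigmas, adapt_flags = [], [], []
    for j in range(n_vars):
        column = [ind[j] for ind in population]
        mu = sum(column) / len(column)
        var = sum((v - mu) ** 2 for v in column) / len(column)
        sigma = max(math.sqrt(var), sigma_min)   # keep the evolution "excited"
        mus.append(mu)
        sigmas.append(sigma)
        if last_sigmas is None:
            adapt_flags.append(True)
        else:
            ratio = sigma / last_sigmas[j]
            adapt_flags.append(ratio > adapt_factor or ratio < 1.0 / adapt_factor)
    return mus, sigmas, adapt_flags

def encode(x, mu, sigma):
    # encoded variable in (0, 1): the normal CDF evaluated at the design variable
    return normal_cdf(x, mu, sigma)

# toy usage: population of 4 individuals with 2 design variables each
pop = [[random.uniform(0, 1) for _ in range(2)] for _ in range(4)]
mus, sigmas, flags = adapt_ranges(pop)
print(encode(pop[0][0], mus[0], sigmas[0]), flags)
```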
in short, the algorithm strives to keep the evolution in a permanently "excited" state via forced modification of the population statistics. in practice this means that the standard deviation is not allowed to fall below a certain minimal value for any design variable. this helps to prevent the micro-genetic algorithm from getting stuck in premature convergence. the role of the adaptation factor lies in controlling the frequency of range adaptation: if reinitialization is necessary, and the new standard deviation of a design variable has changed by more than the adaptation factor relative to the standard deviation at the time of the last reinitialization, then the range of that design variable is adapted. the _pareto archive_ is a key component of many evolutionary algorithms. it acts as a collector of good individuals during the evolution, and is often used to give the resulting pareto front approximation at the end of the evolution. after new individuals are evaluated, the archive is updated if these individuals dominate, or are non-dominated with respect to, the existing individuals of the archive. during reinitialization, the micro-genetic algorithm retrieves information from the archive, using it to explore the promising regions of the search space. obviously, in any real setup we must limit the number of individuals stored in the archive. this is necessary not just to keep the amount of information processed feasible, but also to get good diversity of the resulting approximation. in our strategy, we use a fixed upper limit on the number of individuals stored in the archive. ideally, we want to end up with a full archive of pareto-optimal solutions that is "well spread" over the true pareto front of the problem. our approach is an archive dealing with a single new individual at a time. this is particularly suitable for micro-evolutionary approaches, where we only have a few new individuals in each generation. when a new individual arrives, it is first checked for pareto dominance against all existing members of the archive. now, we distinguish among three cases: * the new individual is dominated by one or more members of the archive. in this case, the new individual is discarded. * the new individual dominates one or more members of the archive. the dominated ones are removed, the new individual is added to the archive, and the internal information of the archive is updated (see below). * the new individual is non-dominated and non-dominating. if the number of members of the archive has not yet reached the upper limit, the new individual is added as in the previous case. in the opposite case, we need to discard at least one individual (either the newcomer or one from the archive), but we cannot decide this by pareto dominance. in this case, we proceed to the secondary decision procedure described below. if we arrive at the case that cannot be resolved by pareto dominance, our secondary goal is to maximize the distance between neighbouring individuals, based on some distance measure in the objective space. in this paper, we use the standard euclidean distance, which is meaningful for any dimension of the objective space. first, we consider the minimum pairwise distance over all pairs of archived individuals, measured between their vectors of objective values. we take the pair of individuals that achieves this minimum. if there are multiple such pairs, we take any of them.
without loss of generality, we assume that the minimum is attained by a particular pair of archived individuals, and we consider the vector of objective values of the new individual. if replacing the first member of the minimal pair by the newcomer keeps all remaining distances above the current minimum, we can perform that replacement; alternatively, the same check is made for the second member of the pair. if either of the above conditions is satisfied, the overall minimum pairwise distance will be improved by the substitution or, if there were multiple minimal pairs, it will stay the same but the number of minimal pairs will be reduced. we call this the _global improvement_ check. if neither of these conditions is satisfied, we consider the closest archived individual to the newcomer instead, and check whether replacing it by the newcomer is beneficial. if this condition holds, there is a certain subset of the archived individuals whose pairwise minimum will improve. this is the _local improvement_ check. if neither check is successful, we discard the new individual. searching for the minimum-distance pair of the archive afresh each time an individual is considered would be too costly. to make the procedure efficient, we maintain for each archived individual a pointer to its closest neighbour (or any one of them). therefore, searching for the pairwise minimum requires only one pass through the archive. similarly, the right-hand side of the local check is simply the distance of the considered archive member to its closest neighbour. hence, these two checks only require computing the distances of the new individual to all archived individuals, and computing the minima on the left-hand sides of the corresponding inequalities. thus, _deciding_ whether to add a new individual has linear complexity in terms of the number of archived individuals (evaluating mutual pairwise dominance also has linear complexity). if the new individual is to be added, the existing closest-neighbour links need to be updated. each resulting archive member is considered in turn. if its link is valid (i.e. the closest neighbour in the archive was not discarded), we simply check whether the newcomer is closer, and possibly update the link. this takes only constant time. however, if the link became invalid (the former closest neighbour was discarded), we need to compute the closest neighbour afresh by computing the objective distances of the updated individual to all others. it can be proven by a simple argument based on ball volumes that the maximum number of points in a space of fixed dimension having a single common closest neighbour is bounded from above by a constant depending on that dimension. since the pareto archive consists of mutually non-dominating vectors, which cannot be arranged arbitrarily, in our case the constant is even smaller. for instance, for two-objective optimization, a single archive member can be the closest neighbour of at most two other members at the same time. using this argument, it can easily be seen that a single archive update has complexity linear in the size of the pareto archive, up to a factor depending on the number of archive members dominated by the new individual. as was already said, merely _deciding_ whether the newcomer is to be added has linear cost. if the decision is positive, there are two cases: either the newcomer dominates some existing archive members, or it was added based on the secondary decision procedure. in the former case, the dominated members will be discarded, so at most a constant multiple of that number of nearest-neighbour links will need to be updated, the constant being the upper bound discussed in the previous paragraph. in the latter case, one existing member is discarded, so at most a constant number of existing links must be updated.
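the decision procedure can be summarized by the following illustrative sketch. the exact inequalities of the global and local improvement checks are not reproduced above, so the conditions below (the newcomer's distance to the remaining members must exceed the current minimum pairwise distance, respectively the distance of its closest member to that member's own nearest neighbour) are one plausible reading, and the brute-force distance computations stand in for the closest-neighbour links maintained by the real implementation; all names are illustrative.

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def dominates(p, q):
    # minimization: p dominates q if it is no worse everywhere and better somewhere
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def archive_add(archive, y, max_size):
    """archive: list of objective vectors; y: objective vector of the newcomer."""
    if any(dominates(a, y) for a in archive):
        return archive                                  # case 1: discard newcomer
    survivors = [a for a in archive if not dominates(y, a)]
    if len(survivors) < len(archive) or len(survivors) < max_size:
        return survivors + [y]                          # case 2, or archive not full
    # case 3: full archive of mutually non-dominated points -> distance criterion
    pairs = [(dist(a, b), i, j) for i, a in enumerate(survivors)
             for j, b in enumerate(survivors) if i < j]
    d_min, i, j = min(pairs)
    # "global improvement": try replacing one endpoint of the closest pair
    for drop in (i, j):
        rest = [a for k, a in enumerate(survivors) if k != drop]
        if min(dist(y, a) for a in rest) > d_min:
            return rest + [y]
    # "local improvement": try replacing the member closest to the newcomer
    c = min(range(len(survivors)), key=lambda k: dist(y, survivors[k]))
    rest = [a for k, a in enumerate(survivors) if k != c]
    d_c = min(dist(survivors[c], a) for a in rest)      # c's own nearest neighbour
    if min(dist(y, a) for a in rest) > d_c:
        return rest + [y]
    return survivors                                    # neither check succeeded

# toy usage with a 3-point archive on a two-objective problem
arch = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(archive_add(arch, [0.2, 0.75], max_size=3))
```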
given that updating a single link costs one pass through the archive, together we obtain a cost proportional to the archive size times the number of discarded members, which simplifies further because the bounding constant is independent of the archive size. while in principle the number of dominated members can be as high as the archive size, in practice it drops to zero very quickly as the convergence proceeds and new dominating individuals become increasingly rare. it should also be noted that if this number is high at one step, the evolution continues with an archive that is significantly smaller. numerical experiments confirm that in real evolutionary runs the average number of invalid links per archive update is very small, even much smaller than the theoretical bounds suggested above. this can be observed from tables [tab:updates] and [tab:asize_updates]. therefore, we can conclude that the procedure of adding a new individual to our archive is essentially of linear complexity. the abilities of the new archiving mechanism are first demonstrated on the test functions dtlz1, dtlz2 and dtlz4, suggested by deb et al. to examine the influence of population size, our algorithm was run separately with 4, 10 and 20 individuals. obviously, it is preferable to keep the number of function evaluations as low as possible, therefore we study the behaviour of the aforementioned approaches for three fixed numbers of evaluations: 4000, 20000 and 40000. for test problem dtlz1, the number of function evaluations is extended to 100000 and 200000, since the algorithms were unable to converge to the global pareto front with just 40000 computations. to further investigate the behaviour of the proposed method, we performed an experiment with the test problem wfg1 suggested by huband et al. for this difficult problem, it was necessary to run the evolution for as many as 2000000 evaluations to obtain reasonable convergence to the pareto front. the algorithms are compared on three benchmark problems introduced by deb et al. the following forms are considered: * dtlz1 + a three-objective problem whose objectives $f_1$, $f_2$, $f_3$ are minimized over box-constrained design variables (see the original reference for the full definition). * dtlz2 + a three-objective problem whose objectives are minimized over box-constrained design variables (full definition in the original reference). * dtlz4 + similar to dtlz2 but with an exponential transformation applied to the design variables (full definition in the original reference). problem wfg1 is a benchmark problem introduced by huband et al. the following form is considered: * wfg1 + minimize $f_1(\mathbf{x}) = 2\left[(1-\cos(z_1\pi/2))(1-\cos(z_2\pi/2))\right]$, $f_2(\mathbf{x}) = 4\left[(1-\cos(z_1\pi/2))(1-\sin(z_2\pi/2))\right]$, $f_3(\mathbf{x}) = 6\left[1 - z_1 - \frac{\cos(10\pi z_1 + \pi/2)}{10\pi}\right]$, where $z_1$, $z_2$ and $z_3$ are auxiliary variables obtained from the design variables by a series of nonlinear transformations (see the original reference for their definition). however, there is a slight difference in our application of the transformation `b_poly`, which we use with exponent 0.2 instead of the 0.02 suggested there, due to numerical issues in floating point arithmetic. the design variables have their standard wfg ranges. + the exact front of this problem is visualized in figure [fig:wfg1_front]. the results are evaluated according to three measures. the distance of the members of the pareto archive to the true pareto front is measured using the *generational distance (gd)*, which is defined in terms of the number of nondominated solutions found by an algorithm and the euclidean distances of these solutions to the exact front.
in order to evaluate the distance accurately, we implemented an approach that is able to iteratively find the closest point of the exact front for each approximate solution, provided the analytic expression of the exact front is known. this point is then used for measuring the distance. a zero value of gd corresponds to all members of the archive lying on the exact front. we also evaluate another measure of convergence, denoted as *tol5*. it is defined as the lowest distance value that is exceeded by at most 5% of the individuals; in statistics, it is the 95th percentile with respect to distance. again, the lower the value of tol5, the better the convergence. a zero value indicates that at least 95% of archive members are on the exact front. this metric is less sensitive to remote individuals than the gd value. the uniformity of the distribution of archive members is measured by *spacing*. it is based on the distance to the nearest neighbour of each member of the archive; spacing is then the ratio of the standard deviation of the squared nearest-neighbour distances to their average. consequently, zero spacing corresponds to a uniform distribution of distances to the nearest neighbour. although this does not assure global uniformity of the distribution (e.g. for pairs of individuals), our experience with this metric is satisfactory. the coverage of the pareto front is not evaluated by means of a metric, but rather compared qualitatively in the presented plots. a small computational sketch of these three measures is given after the variator settings below. all the algorithms from the pisa package (in the pisa context called selectors), i.e. nsga-ii, spea2, and ibea, are used with the following settings of the variator dtlz: * individual mutation probability 1, * individual recombination probability 1, * variable mutation probability 0.01, * variable swap probability 0.5, * variable recombination probability 1, * mutation 20, * recombination 15, * use symmetric recombination 1. for the variator wfg, these values are the same except the variable mutation probability, which is preset to 1 (default).
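before listing the remaining algorithm settings, we give for reference one possible computation of the three measures introduced above. the exact aggregation used for gd is not reproduced in the text, so the mean-distance form is assumed here; tol5 is taken as the 95th percentile of the distances, and spacing follows the verbal definition as the ratio of the standard deviation to the mean of the squared nearest-neighbour distances. distances are measured against a sampled reference front rather than the analytic projection used in our experiments, and all names are illustrative.

```python
import math

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def distances_to_front(archive, reference_front):
    # distance of every archive member to its closest reference point
    return [min(euclid(a, r) for r in reference_front) for a in archive]

def generational_distance(archive, reference_front):
    d = distances_to_front(archive, reference_front)
    return sum(d) / len(d)                      # assumed mean-distance form of gd

def tol5(archive, reference_front):
    d = sorted(distances_to_front(archive, reference_front))
    k = max(0, math.ceil(0.95 * len(d)) - 1)    # 95th percentile of distances
    return d[k]

def spacing(archive):
    sq = []
    for i, a in enumerate(archive):
        nn = min(euclid(a, b) for j, b in enumerate(archive) if j != i)
        sq.append(nn ** 2)                      # squared nearest-neighbour distance
    mean = sum(sq) / len(sq)
    std = math.sqrt(sum((v - mean) ** 2 for v in sq) / len(sq))
    return std / mean                           # zero means perfectly even spacing

# toy usage with a small sampled reference front and a three-member archive
ref = [[0.5 * i / 10, 0.5 * (10 - i) / 10, 0.0] for i in range(11)]
arch = [[0.05, 0.45, 0.01], [0.25, 0.25, 0.02], [0.4, 0.1, 0.0]]
print(generational_distance(arch, ref), tol5(arch, ref), spacing(arch))
```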
the simulations with pisa are performed with a population of 100 individuals. all of them are selected for mating, producing 100 new individuals in each generation. a tournament of 2 individuals is used in these selectors. experiments with ibea are performed using the additive epsilon-indicator with a scaling factor equal to 0.05. the micro-genetic algorithm is run with 4, 10 and 20 individuals in the population, marked as miarmoga(4), miarmoga(10) and miarmoga(20), respectively (for wfg1, only 4 members of the population are considered). the archive size is always set to 100 to produce results comparable with those of the pisa algorithms. a simple one-point crossover scheme without mutation is used. it was reported by oyama that this scheme, derived for binary coded algorithms, works reasonably well also in the real domain. the version with 4 members uses reinitialization in each generation (dtlz1, dtlz2, dtlz4) or once in four generations (wfg1), while for larger populations the reinitialization is performed once per 3 generations for all problems. after reinitialization, several existing archive members are put into the new population. their number is 2, 4 and 6 for the populations of 4, 10 and 20 members, respectively. random selection of individuals for mating is then performed with this modified population. the other important parameters of the algorithm are set to the following values: * adaptation factor 1.4, * minimal standard deviation 0.8 (dtlz1, wfg1), 0.005 (dtlz2, dtlz4), * recombination probability 1. a larger value of the minimal standard deviation helps to attain the global pareto-optimal front of multimodal problems such as dtlz1 and also leads to faster convergence in the case of wfg1. in general, large values emphasize global exploration of the design space while small values lead to a refined search. the results for problems dtlz1, dtlz2 and dtlz4 are summarized in tables [tab:dtlz1_4000][tab:dtlz4_40000], and visualized in figures [fig:dtlz1_4000][fig:dtlz4_40000]. for problem wfg1, results are summarized in tables [tab:wfg1_4000][tab:wfg1_2000000] and figures [fig:wfg1_200000][fig:wfg1_2000000]. the values in the tables are obtained as averages over 20 different seeds and, where it makes sense, the best value is emphasized in bold font. the approximation with the best distribution is selected out of the twenty runs of each algorithm for visualization. the exact pareto front is marked by a grid of small dots in the presented figures. however, not all twenty seeds lead to a successful approximation of the pareto set in some instances, especially for problem dtlz4. most of the algorithms suffer from problems with robustness with respect to the initial population and produce _degenerated_ fronts for some seeds. we consider a front as degenerated if all individuals have an almost identical value of an objective, and thus cover just a line on the three-dimensional surface of the exact front. the numbers of degenerated fronts for all problems and methods are summarized in table [tab:degenerated]. the efficiency of the proposed archiving technique is further demonstrated in comparison with the same approach, i.e.
, using crowding distance .that algorithm was described in .outputs of these experiments are summarized for dtlz1 in tables [ tab : dtlz1_macd ] and [ tab : dtlz1_mana ] , for dtlz2 in tables [ tab : dtlz2_macd ] and [ tab : dtlz2_mana ] , for dtlz4 in tables [ tab : dtlz4_macd ] and [ tab : dtlz4_mana ] , and for wfg1 in tables [ tab : wfg1_macd ] and [ tab : wfg1_mana ] .obtained pareto fronts are plotted in figures [ fig : dtlz1_macd_mana_4000][fig : wfg1_macd_mana_2000000 ] .this problem with three objectives has a linear pareto optimal front that intersects the axes of the objective space at value 0.5 .apart of the exact front , there exist a number of other parallel planes corresponding to local pareto fronts . as these also attract an evolution , problem dtlz1 tests the ability of a genetic algorithm to cope with multi - modality .as can be seen in figure [ fig : dtlz1_4000 ] , none of the algorithms is able to reach the global pareto front in 4000 evaluations for any seed , and metrics in table [ tab : dtlz1_4000 ] do not provide much valuable information. however , we can remark that ibea and (4 ) provide one order better convergence than the other algorithms and for all sizes of population provides reasonable spacing .however , all algorithms except nsga - ii are able to reach the global front for some seeds in 20000 evaluations ( figure [ fig : dtlz1_20000 ] ) . comparing visually the results in figure [ fig : dtlz1_20000 ], we can conclude that (4 ) performs best , which is supported by the best values of all three metrics in table [ tab : dtlz1_20000 ] . as individuals for many of the seedsare still away from the global front for all algorithms , metrics in table [ tab : dtlz1_20000 ] do not provide a detailed insight either . the situation is further improved with 40000 evaluations , for which all algorithms except nsga - ii are able to reach the global front for most of the seeds ( figure [ fig : dtlz1_40000 ] ) . however , figure [ fig : dtlz1_40000 ] shows that ( for all sizes of population ) produces the best distribution , which is confirmed by values of spacing in table [ tab : dtlz1_40000 ] . since for some seedsthe individuals still are not in vicinity of the true pareto front , the averaged metrics in table [ tab : dtlz1_40000 ] are still rather bad . according to table [ tab : dtlz1_40000 ] , the best convergence is in average attained by (4 ) for this case. 
for 100000 and 200000 evaluations , (4 ) achieves the global pareto - optimal front for all seeds .all the other algorithms fail to find the global front for some seeds , which considerably spoils the metrics in tables [ tab : dtlz1_100000 ] and [ tab : dtlz1_200000 ] .since ibea produces only degenerated fronts in these cases , metrics are not evaluated and are omitted in tables [ tab : dtlz1_100000 ] and [ tab : dtlz1_200000 ] .although the distribution of fronts obtained by for all sizes of population is comparable to spea according to figures [ fig : dtlz1_100000 ] and [ fig : dtlz1_200000 ] , the metrics in tables [ tab : dtlz1_100000 ] and [ tab : dtlz1_200000 ] reveal that spacing is , in average , one order better by than by spea .the best average convergence metrics are obtained by (4 ) ( tables [ tab : dtlz1_100000 ] and [ tab : dtlz1_200000 ] ) .problem dtlz2 has three objectives , and the exact front corresponds to the part of a unit sphere when restricted to the octant given by non - negative values of all three objectives .this is the easiest problem for all compared algorithms and tests mainly the speed at which an algorithm is converging to the exact pareto front .already for 4000 evaluations , the fronts obtained by all the compared methods are reasonably converged and distributed along the exact pareto front .figure [ fig : dtlz2_4000 ] shows that ( regardless of the size of population ) and spea produce the best distribution of individuals along the exact front , whereas the distribution obtained by ibea is rather poor .this observation is confirmed by the spacing metric in table [ tab : dtlz2_4000 ] .the best convergence is achieved by ibea according to gd and tol5 metrics in table [ tab : dtlz2_4000 ] followed by .similar observations can be made from the results for 20000 evaluations ( table [ tab : dtlz2_20000 ] and figure [ fig : dtlz2_20000 ] ) and 40000 evaluations ( table [ tab : dtlz2_40000 ] and figure [ fig : dtlz2_40000 ] ) the best spacing is obtained for all sizes of population by and the best convergence is attained by ibea , although the distribution of individuals along the pareto front is worse .although the definition of problem dtlz4 is similar to dtlz2 ( cf .section [ sec : problems ] ) , the evolution is greatly influenced by the exponential transformation of design variables , which maps most of the space towards the axes in design space .this in turn pushes the evolution to the limits of the objective space .thus , problem dtlz4 tests best of the three dtlz problems the ability of a genetic algorithm to obtain uniform distribution of individuals along the pareto optimal surface . for 4000 evaluations ,the best distribution is produced by spea2 ( figure [ fig : dtlz4_4000 ] ) .this is confirmed by results of spacing in table [ tab : dtlz4_4000 ] .however , (4 ) produces the best converged results . for 20000 evaluations ,the distribution obtained by is already visually comparable with spea2 in figure [ fig : dtlz4_20000 ] . also spacing obtained by comparable to that of spea2 according to table [ tab : dtlz4_20000 ] for 4 and 10 members of population .algorithms (20 ) , ibea , and nsga - ii produce in average only slightly worse converged results .the best gd and tol5 values are attained by (20 ) . 
in the case of 40000 evaluations, the best distribution of individuals is attained by the micro-genetic algorithm followed by spea2 according to figure [fig:dtlz4_40000], which is also confirmed by the values of spacing in table [tab:dtlz4_40000]. the best convergence is again obtained by miarmoga(20), followed by miarmoga(10), miarmoga(4) and ibea, respectively (table [tab:dtlz4_40000]). wfg1 is a difficult problem and all tested algorithms had problems with convergence to the pareto front. for this reason, the number of evaluations of the objective function was increased to 2000000, after which some algorithms were able to attain the exact front. after 4000 evaluations, all algorithms produce results rather far from the pareto-optimal set (table [tab:wfg1_4000]). nevertheless, spea2 produces the most uniform distribution according to the spacing metric. after 20000 evaluations, miarmoga(4) slightly leads in convergence, followed by ibea (table [tab:wfg1_20000]), producing a distribution with uniformity between spea2 (best spacing) and the rest of the algorithms. the same observations remain valid for 40000 evaluations (table [tab:wfg1_40000]). after 100000 as well as 200000 evaluations, miarmoga(4) dominates in convergence to the exact front (gd and tol5 metrics in tables [tab:wfg1_100000] and [tab:wfg1_200000]), producing a distribution comparable with spea2 (best spacing). however, figure [fig:wfg1_200000] suggests that miarmoga(4) covers the whole pareto front, unlike spea2. letting the evolution run to 1000000 and 2000000 evaluations, miarmoga(4) dominates both in convergence (by one order of magnitude compared to the second-best ibea in the corresponding tables) and in distribution along the exact pareto front. in the spacing metric, miarmoga(4) is followed by spea2. figures [fig:wfg1_1000000] and [fig:wfg1_2000000] show that the distribution of individuals obtained by miarmoga(4) uniformly covers the whole pareto front, while the other algorithms approach some regions of the front only slowly. a set of experiments was run to compare the algorithm with crowding distance and with the new archiving mechanism. a population of four individuals was selected for the comparison. results for problems dtlz1, dtlz2, dtlz4, and wfg1 are summarized in tables [tab:dtlz1_macd][tab:wfg1_mana] and figures [fig:dtlz1_macd_mana_4000][fig:wfg1_macd_mana_2000000]. according to these experiments, the new archiving approach outperforms crowding distance in diversity, as is clear from figs. [fig:dtlz1_macd_mana_4000][fig:wfg1_macd_mana_2000000] and the spacing metric in tabs. [tab:dtlz1_macd][tab:wfg1_mana]. while it also has a very positive effect on the convergence to the exact pareto front for problems dtlz1, dtlz2, and dtlz4 (tabs. [tab:dtlz1_macd][tab:dtlz4_mana]), both algorithms exhibit similar convergence for problem wfg1 (tabs. [tab:wfg1_macd] and [tab:wfg1_mana]).
as can be seen from above, outperforms the other methods in distribution of individuals along the pareto front , and in many cases achieves the best convergence as well .however , it should be noted that the default settings of the algorithms from pisa package is used , which may not be optimal for the test problems considered .while ibea offers exceptional convergence in some cases , the distribution of individuals along the exact pareto front is usually rather poor , with many individuals attached to limits of the objective space .our study confirms that the mechanism of crowding distance does not lead to uniform distribution of individuals along the pareto front for more than two objectives .the same result might be observed from the comparison of using the two archiving mechanisms crowding distance and the new proposed technique ( tables [ tab : dtlz1_macd][tab : wfg1_mana ] , and figures [ fig : dtlz1_macd_mana_4000][fig : wfg1_macd_mana_2000000 ] ) .on the other hand , spea2 produces very uniform distribution of individuals comparable with in some instances . concerning the number of evaluations, dtlz2 is the only problem , for which only 4000 evaluations are sufficient to achieve reasonable convergence and distribution of individuals on the pareto front by all algorithms . on the other hand , for dtlz1, even 40000 evaluations do not suffice to reach the true pareto front for all seeds by any approach , and results for 100000 and 200000 evaluations are added for a reasonable comparison .even this large number of evaluations was not sufficient to reach the proximity of exact front in the case of problem wfg1 , and results for 1000000 and 2000000 evaluations are added . for test functions dtlz4 and wfg1 , the new archiving mechanism is able to drive the evolution to regions , where the coverage of the pareto front by individuals is sparse , and recover nice distribution of individuals along the pareto set even for poorly chosen initial population . to investigate the optimal distribution of the number of function evaluations between population size and number of generations for micro - evolution, is run with 4 , 10 and 20 individuals for dtlz1 , dtlz2 , and dtlz4 . according to our experiments ,the performance of the algorithm is similar for all configurations with respect to spacing and convergence history and no strong dependence is revealed .however , for problem dtlz4 , the method tends to produce more degenerated fronts with larger population ( see table [ tab : degenerated ] ) .additionally , population of 4 individuals leads to the best convergence metrics for problem dtlz1 , and population of 20 individuals to the best converged front for problem dtlz4 .thus , using small populations and larger number of generations seems as the preferable approach .the goal of our study is twofold : ( a ) to develop a new approach for selecting individuals to the pareto archive ; ( b ) to explore the potential of using small population in evolutionary algorithms .the main contribution of the paper is the presentation of a new archiving mechanism .although its basic idea is rather simple and straightforward , the technique produces very promising results on all tested problems .we are aware of the fact that the theoretical time complexity of the mechanism might be rather large ( quadratic in the worst case ) . however , our tests justify its usage , since the experimentally found complexity is much more favourable ( approximately linear ) . 
moreover, it is intended to be used in combination with a small population, for which such a more elaborate selection mechanism is usually affordable. the proposed selection mechanism was combined with the micro-genetic algorithm and is compared to three other state-of-the-art algorithms (nsga-ii, spea2, and ibea) on four test problems. we can conclude that the algorithm presents pareto sets with the same or better distribution than spea2, but usually with much better convergence to the exact front, comparable with ibea, thus best combining the requirements on both convergence and distribution of individuals. a considerable improvement is attained, using the new mechanism, in comparison with the version of the algorithm that uses crowding distance. clearly, the micro-genetic algorithm equipped with the new diversity mechanism is very promising and may be competitive with other recent approaches. our experiments further support using small populations (up to 10 individuals), since runs with four individuals usually produce the best results. it is well known that such a small population can lead to rapid convergence. however, in combination with the proposed archiving mechanism, it also seems to be more robust with respect to the initial population. regarding the history of convergence to the pareto front, as few as 4000 evaluations of the objective function can be sufficient for some problems (dtlz2), while for other problems (the multi-modal problem dtlz1 or the difficult wfg1), even 40000 evaluations may not be sufficient to approximate the true pareto front, and as many as 1000000 evaluations are needed for a reasonable outcome. this research has been supported by the development of applied external aerodynamics program (ministry of education, youth and sports of the czech republic grant msm0001066901), by research project av0z10190503 (academy of sciences of the czech republic), and by grant iaa100760702 (grant agency of the academy of sciences of the czech republic). a platform and programming language independent interface for search algorithms. in _evolutionary multi-criterion optimization (emo 2003)_ (berlin, 2003), c. m. fonseca, p. j. fleming, e. zitzler, k. deb, and l. thiele, eds., lecture notes in computer science, springer, pp. 494-508. a micro-genetic algorithm for multiobjective optimization. in _evolutionary multi-criterion optimization (emo 2001)_ (berlin, 2001), e. zitzler, k. deb, l. thiele, c. coello coello, and d. corne, eds., lecture notes in computer science, springer, pp. 126-140. a fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: nsga-ii. in _parallel problem solving from nature_ (berlin, 2000), m. schoenauer, k. deb, g. rudolph, x. yao, e. lutton, j. j. merelo, and h.-p. schwefel, eds., springer, pp. 849-858. the pareto archived evolution strategy: a new baseline algorithm for pareto multiobjective optimisation. in _proceedings of the 1999 congress on evolutionary computation (cec99)_ (1999), vol. 1, pp. 98-105. micro-genetic algorithms for stationary and non-stationary function optimization. in _spie's intelligent control and adaptive systems conference_ (1989), g. rodriguez, ed., society of photo-optical instrumentation engineers (spie), pp. 289-296. spea2: improving the strength pareto evolutionary algorithm for multiobjective optimization.
in _evolutionary methods for design, optimisation and control with application to industrial problems. proceedings of the eurogen2001 conference_ (barcelona, 2002), k. giannakoglou, d. tsahalis, j. periaux, k. papailiou, and t. fogarty, eds., international center for numerical methods in engineering (cimne), pp. 95-100. figures: pareto fronts after 4000, 20000 and 200000 function evaluations for problem dtlz1; after 4000, 20000 and 200000 function evaluations for problem dtlz2; after 4000, 20000 and 200000 function evaluations for problem dtlz4; and after 200000, 1000000 and 2000000 function evaluations for problem wfg1. each figure compares, for population size 4, the result obtained with crowding distance (left) with the result obtained with the new proposed algorithm (right).
the article introduces a new mechanism for selecting individuals into a pareto archive. it was combined with a micro-genetic algorithm and tested on several problems. the ability of this approach to produce individuals uniformly distributed along the pareto set without a negative impact on convergence is demonstrated by the presented results. the new concept was compared with the nsga-ii, spea2, and ibea algorithms from the pisa package. another studied effect is the size of the population versus the number of generations for small populations. *keywords:* multi-objective optimization; micro-genetic algorithms; diversity preserving; pareto archive; selection to archive
online social networks provide a convenient and ready to use model of relationships between individuals .relationships representing a wide range of different social interactions in online communities are useful for understanding and analyzing individual attitude and behaviour as a part of a larger society . while the bulk of research in the structure on social networks tries to analyze a network using the topology of links ( relationships ) in the network , relationships between members of a network are much richer , and this additional information can be used in many areas of social networks analysis . in this paper we consider signed social networks , which consist of a mixture of both positive and negative relationships . this type of networks has attracted attention of researchers in different fields . understanding the interplay of relationships of different signs in the online social network setting is crucial for the design and function of many social computing applications , where we concern about the attitude of their members , trust or distrust they feel , known similarities and dissimilarities .for example , recommending new connections to their users is a common task in many online social networks . yet , without the understanding of the type of relationships , an enemy can be introduced to a user as a friend .this framework is also quite natural in recommender systems where we can exploit similarities as well as dissimilarities between users and products . over the last several yearsthere has been a substantial amount of work done studying signed networks , see , e.g. . some of the studies focused on a specific online network , such as epinions , where users can express trust or distrust to others , a technology news site slashdot , whose users can declare others ` friends ' or ` foes ' , and voting results for adminship of wikipedia .others develop a general model that fits several different networks .we build upon these works and attempt to combine the best in the two approaches by designing a general model that nevertheless can be tuned up for specific networks .[ [ edge - sign - prediction ] ] edge sign prediction + + + + + + + + + + + + + + + + + + + + following guha et al . and kleinberg et al . , we consider a signed network as a directed ( or undirected ) graph , every edge of which has a sign , either positive to indicate friendship , support , approval , or negative to indicate enmity , opposition , disagreement .the edge sign prediction problem , in which given a snapshot of the signed network , the goal is to predict the sign of a given link using the information provided in the snapshot .thus , the edge sign problem is similar to the much studied link prediction problem , only we need to predict the sign of a link rather than the link itself .several different approaches have been taken to tackle this problem .kunegis et al . studies the friends and foes on slashdot using network characteristics such as clustering coefficient , centrality and pagerank ; guha et al . used propagation algorithms based on exponentiating the adjacency matrix to study how trust and distrust propagate in epinion .later kleinberg et al . took a machine learning approach to identify features , such as local relationship patterns and degree of nodes , and their relative weight and build a general model to predict the sign of a given link .they would train their predictor on some dataset , to learn the weights of these features by logistic regression . 
once trained, the model can be used on different networks .clearly , one of the most important measures of an approach is the accuracy of prediction it provides .remarkably , in many cases ( where comparable ) the network independent approach from provides more accurate predictions than that of previous network specific studies .this shows certain potential of machine learning techniques .interestingly , this study is also related to the status and balance theories from social psychology , as they rely on configurations similar to the features exploited in . in this paperwe also take the machine learning approach , only instead of focusing on a particular network or building a general model across different networks , we build a model that is unique to each individual network , yet can be trained automatically on different networks .such an approach intuitively should be capable of more accurate predictions than network independent methods , and it remain practically feasible .[ [ trusted - peers - and - influence ] ] trusted peers and influence + + + + + + + + + + + + + + + + + + + + + + + + + + + the basic assumption of our model is that users attitude can be determined by the opinions of their peers in the network ( compare to the balance and status theories from social psychology discussed in ) . intuitively speaking ,peer opinions are guesses from peers on the sign of the link from a source node to a target node .also , we assume that peer opinions are only partially known , some of them are hidden .we introduce three new components into the model : set of trusted peers , influence , and quadratic correlation technique .when we try to count on peer opinions , not all such opinions are equally reliable , and we therefore choose a set of trusted peers whose opinions are important in determining the user s action .the set of trusted peers is one of the features our algorithm learns during the training phase .ideally , it would be good to have a set of trusted peers for each link in the network .however , considering the sparsity and the enormous size of the network , we can not always afford to determine a set of trusted peers for every possible relationships .instead , we find a set of trusted peers for each individual node .the optimal composition of such a set is not quite trivial , because even trusted peers may disagree , and sometimes it is beneficial to have trusted peers who disagree .thus , to make reliable estimations on all relationships starting at the individual nodes , its set of trusted peers has to form a wide knowledge base on other nodes in the network . while peer opinions provide very important information , this knowledge is sometimes incomplete .relying solely on peer opinions implies that the attitude of a user would always agree with the attitude of a peer .however , in reality , there are often exceptions . what also matters is how this opinion correlates with the opinion of the user we are evaluating . to take this correlation into account we introduce another feature into the model , influence .suppose the goal is to learn the sign of the link between user and user , and is a peer of .then if tends to disagree with , then positive attitude of towards should be taken as indication that s attitude towards is less likely to be positive .the opinion of is then considered to be the product of his attitude towards and his influence on . 
usually, influence is not given in the snapshot of the network. for example, in the wikipedia adminship dataset the explicit information is a collection of voting results, while the correlation between the ways members vote is hidden and has to be learned together with the other unknown parameters. we experimented with different ways of defining peer opinion, and found that using relationships and influences together to approximate peer opinions is more effective than using relationships alone. to learn the weights of features providing the best accuracy we have chosen to use the standard quadratic correlation technique from machine learning. this method involves finding the optimum of a quadratic polynomial and, while being relatively computationally costly, tends to provide very good accuracy. therefore, to solve the quadratic programs we resort to three approaches. firstly, we used an available max-sat solver, metslib, based on the tabu search heuristic. secondly, we also attempted to find the exact optimal solution using the brute force approach. thirdly, we use the off-the-shelf solver cplex. clearly, in the latter two approaches it is not feasible to solve the quadratic program arising from a large network, therefore we also used a number of heuristics to split such a program, as described later. an interesting use of our approach is to apply the quantum annealing device developed by d-wave to run our algorithm. this device solves large instances of the quadratic unconstrained binary optimization problem (qubo) with (supposedly) high accuracy and high speed. however, such experimentation is yet to be done because the device is currently unavailable for experimenting. [[comparison-to-other-work]] comparison to other work + + + + + + + + + + + + + + + + + + + + + + + + similarly to previous work, we also use a machine learning approach to build a prediction model based on local features. however, unlike their generalized features, such as the degree of nodes and local relationship patterns, we use peer opinions from trusted peers, which are personalized features. there are two main advantages of using personalized features. first of all, our model tolerates differences in individual personalities. unlike the existing approach, two nodes with the same local features can behave differently by selecting different sets of trusted peers, whereas the model of kleinberg et al. treats nodes with the same feature values as the same. secondly, our model accommodates the dynamic nature of online social networks. personalized features allow us to train a predictor separately for each individual node. as the network evolves over time, we only need to update the individual predictors separately instead of rebuilding the whole model. although our model does not generalize across different datasets, a new model can be easily trained for different datasets without changing the algorithm. we build and test our model on three different datasets studied before: epinions, slashdot and wikipedia. it is difficult, however, to compare our results against the results in other works. for example, different works use certain (different!) normalization techniques to eliminate the bias of the datasets above toward positive links. we therefore tried to test our model in all regimes used in the previous papers. the results show similar or better prediction accuracy in almost all cases. when tested on the unchanged (biased) datasets, our model shows nearly perfect prediction.
in spite of this, a fair comparison is still problematic because of the lack of data about the other approaches. for example, even though the experimental results show that our model has a better prediction accuracy than the model in the work cited above, statistically we cannot simply conclude that our model is better, because they used a normalization technique on the dataset and, also, it was not specified which edge embeddedness threshold (widely used in that work) was used for the experiment. also, we test the model on a different dataset, movielens, used to recommend movies to users. experiments show that we achieve good prediction accuracy on this dataset as well. we now describe our method. we start with the underlying model of a network, then proceed to the machine learning formulation of the edge sign prediction problem, and finally describe the method to solve the resulting quadratic optimization problem. we are given a snapshot of the current state of a network. a snapshot of a network is represented by a directed graph, where nodes represent the members of the network and edges the known or explicit links (relationships). some of the links are signed to indicate positive or negative relationships. the sign of a relationship from a source node to a target node may take two different values, -1 and +1, indicating negative and positive relationships respectively. note that the nodes may represent entities of different kinds. for example, a signed relationship can also indicate the like or dislike of a product by a user, or the vote from a voter for a candidate. to estimate the sign of a relationship from a source to a target, we collect peer opinions. by a peer we understand a node in the network that we use for the estimation. in different versions of the model a peer can be any node of the network, or any node linked to the source. peer opinion is an important unknown parameter of the model. it is an estimation of the type of the relationship made by a peer based on its own knowledge. a peer opinion of +1 or -1 indicates that the peer believes the relationship is positive or negative, respectively, while an opinion of 0 means the peer does not have enough knowledge to make a valid estimation. another assumption made in our model is that not every peer can make a reliable estimation. therefore we divide all peers of a node into two categories, and count the opinions only of the peers from the first category, the trusted peers. the problem of how to select a set of trusted peers and use their opinions for the estimation will be addressed later. we estimate the sign of a relationship from the source to the target by collecting the opinions of the source's trusted peers. if the sum of the opinions is nonnegative, then we say the sign should be +1; otherwise it should be -1. this can be expressed as a simple thresholding rule. notice that the set of trusted peers of each node is also an unknown parameter. determining the set of nodes in the set of trusted peers is a nontrivial task. observe, for example, that the prediction accuracy does not necessarily increase as we add nodes, even nodes of higher trust, into the set. since the estimations are made by collecting opinions from all peers in the set, a correct estimation from one peer can be cancelled by the wrong estimation of another. also, it is beneficial to select a set of peers with more diversity without compromising accuracy. as mentioned earlier, the set of trusted peers of a node is crucial for the estimation of all relationships starting from that node.
hence, having a set of peers that make good individual estimations of relationships to different sets of target nodes, rather than nodes that make good individual estimations for the same set of target nodes, will likely improve the accuracy of prediction. our approach to selecting an optimal set of trusted peers is to consider the quadratic correlations between each pair of peers. the overall performance of a set of peers is determined by the sum of the individual performances of each of them together with the sum of their performances in pairs. the individual performance measures the accuracy of individual estimations, while the pairwise performance measures the degree of difference between the estimations of the pair of peers. we want to maximize the accuracy of each individual and the diversity of each pair at the same time. [[the-loss-function]] the loss function + + + + + + + + + + + + + + + + + our goal is to use the information in the snapshot to build a predictor that predicts the sign of an unknown relationship with high accuracy. at the same time, we also determine the unknown parameters used in the model. our goal can be expressed by the objective function below. the least squares loss function is the standard loss function used in measuring the prediction error. another important reason for us to pick the square loss function is that it helps to capture quadratic correlations between all pairs of nodes. the quadratic correlation becomes clearer when the squared term is expanded later in the next section. the predictor is defined as the sign of the sum of peer opinions, as follows: we take the sum of the individual peer opinions and set the prediction to +1 if it is nonnegative and to -1 otherwise. since the set of trusted peers is unknown, we introduce a new binary variable indicating whether a node should be included in this set, and we rewrite equation ([eq:0]) using this characteristic function. [[quadratic-optimization-problem]] quadratic optimization problem + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we are now ready to set up the machine learning problem. a training dataset is a subset of the snapshot; every entry of the training dataset is a known edge along with its sign. the goal is to minimize the objective function by finding the optimal vector of inclusion variables. we use machine learning methods to train the predictor and learn an optimal weight vector such that the objective function ([eq:00]) is minimized. substituting the peer opinion terms and the characteristic function into the objective function ([eq:00]), we obtain the quadratic unconstrained binary optimization (qubo) problem described by equation ([eq:2]). we want to minimize the amount of error made by the predictor, yet at the same time we also want to avoid overfitting. so we introduce a second term, which is a regularization function based on the norm of the weight vector. it ensures that the size of the trusted set is not too large. a regularization parameter controls the trade-off between the accuracy of the prediction and the complexity of the set. thus, the final form of the objective function combines the squared prediction error with this regularization term. note that there will be more details on the peer opinion terms. as mentioned earlier, we are going to test our model using different peer opinion formulations. first, we extend the known signs to edges with unknown sign and also to pairs of nodes that are not edges. [[simple-adjacent]] simple-adjacent + + + + + + + + + + + + + + + the simplest option, later referred to as _simple-adjacent_, is, based on the given information, to formulate peer opinions using the existing relationships from peers to the target node.
in other words, in this case we set the peer opinion to be the known relationship from the peer to the target. however, we also understand that the relationship from a peer to the target node does not always agree with the relationship from the source node to the target node. yet, peers whose attitudes always disagree with the source node are as important as those whose attitudes agree with the source node. in order to take advantage of these disagreements, we introduced a second parameter, influence, which can be either positive, negative, or neutral. [[standard-pq]] standard-pq + + + + + + + + + + + the second and third options differ in who is considered a peer. in the _standard-pq_ option the influence associated with each pair of vertices is an unknown parameter in the model. in a sense, every pair of nodes is assumed to be linked, thus turning the network into a complete graph. a positive influence indicates that the attitude of the peer affects the source positively, while a negative influence indicates that it affects the source negatively. we then obtain the standard formulation, in which the peer opinion is the product of the influence and the peer's relationship to the target. since the standard formulation gives us the best results in experiments, we use it throughout our discussion. using the standard formulation, we rewrite equation ([eq:01]) accordingly. [[standard-adjacent]] standard-adjacent + + + + + + + + + + + + + + + + + finally, in the _standard-adjacent_ option the peers of a node are restricted to its neighbours; the rest is defined in the same way as for the _standard-pq_ option. in our model, we are thus given a directed complete graph. in equation ([eq:02]), both the influence and the set of trusted peers are unknown parameters. we can reduce the number of unknown parameters by considering all possible values of the influence and rewriting it in terms of two binary variables per peer: one indicating positive influence and one indicating negative influence; when both are zero, the influence is neutral. although the influence can take three possible values, there are only two terms in equation ([eq:02]), since a neutral influence contributes nothing regardless of the relationship. now, to minimize the objective function, we need to determine the optimal binary weight vector. to find the optimal solution directly we would need to solve a qubo over all of these variables, which is very difficult since there are usually millions of nodes in a social network. luckily, from the definition, the variables associated with different source nodes are independent. instead of solving for the whole weight vector directly, we can solve for each source node separately, and then combine the values. now, instead of solving one huge qubo, we solve one moderately sized qubo per node. each of these can be solved approximately by a max-sat solver; if we use a different approach (similar to the one cited), the problem should be further simplified, as it is still challenging to solve each of these qubos exactly. [[breaking-down-the-problem]] breaking down the problem + + + + + + + + + + + + + + + + + + + + + + + + + in order to find a good approximation of the optimal solution to the qubo defined by equation ([eq:21]), we could break it down into much smaller qubos. given a subset of the candidate peers, we define a restricted optimization problem in which only the variables of that subset are free. by definition, when the subset is the whole set of peers, equation ([eq:22]) is exactly the same as ([eq:21]); however, if we allow the subset to be much smaller, then the qubo we need to solve is much smaller as well. by decreasing the size of the subset, we are trading solution accuracy for computational efficiency. our intuition is to arrange the nodes according to some order and then break them into several smaller subsets.
by solving equation ( [ eq:22 ] ) on each of the subsets , and combining their solutions, we can get a good approximation to the optimal solution of equation ( [ eq:21 ] ) . in the next section, we are going to explain the method used to approximate the optimal value of in detail .the optimization problem defined by equation ( [ eq:21 ] ) is a quadratic unconstrained binary optimization problem which is np - hard in general . to solve the optimization problem , we use two approaches .first , we solve problems of the form ( [ eq:21 ] ) separately using an open source max - sat solver metslib based on the tabu search heuristics .second , we apply a similar method as described in to reduce the size of the problem dramatically . under the second approach we break the original problem into even smaller subproblems and we obtain an approximate solution by combining the solutions of each subproblems .algorithm [ alg:1 ] describes the method we used to solve each subproblems .algorithm [ alg:2 ] uses algorithm [ alg:1 ] as a subroutine and explains how the problem is broken down into subproblems and also how to combine the solutions of subproblems to obtain an approximate solution . during the training process , we need to use graph and dataset .moreover , is randomly split into two parts , training dataset and validation dataset .we use to train the predictor and we use to validate the predictors obtained at each step to select the optimal one . according to algorithm [ alg:2 ] , we split into smaller subsets in two steps .first of all , for each and for each , we compute the individual prediction error of for on the dataset as follows : for each data point , we count , the number of instances when , and , the number of instances when separately .note that since , we can compute this number ; and that if is not a neighbour of then it contributes to both and .then , we replace by and with individual prediction error , and respectively .second , using their individual prediction errors , we can sort the nodes in in increasing order of the error .the subset is iteratively selected by picking the first nodes in the list that are not yet considered .the value of is an important parameter of the algorithm and is selected manually at the beginning of the algorithm . if , then we are solving the problem defined by equation ( [ eq:21 ] ) .the sorting and selecting processes not only reduce the amount of computation , but also allow us to consider the relevant nodes first .once the subset is selected , we use algorithm ( [ alg:1 ] ) to solve the subproblem defined by equation [ eq:22 ] .the algorithm determines the optimal value of and the set which minimize the amount of prediction errors made by .it repeatedly solves the qubo for different $ ] ( in our experiments we use and ) . and bound the possible range for , and the best value of is selected using cross - validation on dataset .the set together with the weight which produces the lowest prediction errors on is selected as the optimal solution .notice , when the size of is small , say , we can solve it exactly using brute - force method .when its size is larger , we use some heuristic methods such as the quadratic optimization solver cplex to approximate the solution . the optimal solution determined by algorithm [ alg:1 ] on subset is used to extend the optimal solution for by algorithm [ alg:2 ] . 
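the overall flow of the procedure described above, together with the greedy extension discussed next, can be summarized as in the following sketch. the function names (`individual_error`, `solve_subproblem`, `validation_error`) are placeholders for the steps described in the text, not the authors' actual routines, and the stopping rule is one plausible reading of the greedy extension.

```python
def greedy_peer_selection(candidates, block_size,
                          individual_error, solve_subproblem, validation_error):
    """rough analogue of the outer loop:
    - rank candidate peers by their individual prediction error (ascending),
    - take them block by block,
    - solve the restricted qubo on each block,
    - extend the running solution only if the validation error improves."""
    ranked = sorted(candidates, key=individual_error)
    trusted, best_err = [], float("inf")
    for start in range(0, len(ranked), block_size):
        block = ranked[start:start + block_size]
        chosen = solve_subproblem(block, trusted)   # restricted optimization on the block
        err = validation_error(trusted + chosen)
        if err < best_err:                          # greedy, adaboost-like extension
            trusted += chosen
            best_err = err
        else:
            break                                   # stop once extension stops helping
    return trusted
```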
to extend the solution , we use a greedy approach similar to adaboost .we would extend the partial solution , as long as extending it by the optimal solution on lowers the prediction error on the validation set .training dataset : , validation dataset : , a subset of nodes : values of and = solve the optimization = measure the validation error on using = = training dataset : , validation dataset : , the size of the subset : values of for , and the set of trusted peers = = - 1 sort nodes of by their individual prediction errors in increasing order = the first nodes in = , = algorithm [ alg:1]( , , ) = measure the validation error on using update with the next nodes in use three datasets borrowed from and a movie rental dataset movielens that we consider separately . in order to make comparison possible the datasets are unchanged rather than updated to their current status .the dataset statistics is therefore also from ( see table [ tab : stat ] ) .[ [ epinions ] ] epinions + + + + + + + + this is a web site dedicated to reviews on a variety of topics including product reviews , celebrities , etc .the feature of epinion interesting to us is that users can express trust or distrust to each other , making it a signed social network .the dataset contains 119,217 nodes , 841,200 edges , 85% of which are positive and15% are negative .[ [ slashdot ] ] slashdot + + + + + + + + slashdot is another web site for technology news where users are allowed to leave comments .it also has an additional zoo feature that allows users tag each other as ` friends ' and ` foes ' .this dataset contains 82,144 nodes , 549,202 edges of which 77.4% are positive and 22.6% are negative .[ [ wikipedia ] ] wikipedia + + + + + + + + + this dataset contains wikipedia users and the results of voting among them for adminship .every link represents a vote of one user for or against another .a link is positive if the user voted for another and negative otherwise .it contains 7,118 nodes representing users who casted a vote or been voted for , 103,747 edges , of which 78.7% are positive and 21.2% are negative .[ tb1 ] .basic statistics on the datasets [ cols="<,^,^,^",options="header " , ] observe that since our approach is to train the predictor for a particular dataset rather than finding and tuning up general features as it is done in and , and the test datasets are biased toward positive edges , it is natural to expect that predictions are biased toward positive edges as well .this is clearly seen from table [ tb5 ] ( see also fig . [fig : balanced ] ) .we therefore think that average error rate does not properly reflect the performance of our algorithm . in the case ofbalanced datasets our predictor does not produce biased results , again as expected .this , however , is the only case when its performance is worse than some of the previous results .one way to explain this is to note that density of the dataset is crucial for accurate predictions made by the quadratic correlation approach .therefore we had to lower the embeddedness threshold used in this part of the experiment to , , while still tests only edges of embeddedness at least 25 . 
to test the versatility of the model, we also test it on a completely different dataset. movielens is a dataset used primarily in the study of recommender systems. it contains ratings of movies given by users who rented movies from a shop or online. every user rates some of the movies by assigning a score from 1 to 5, where a higher score corresponds to a higher evaluation of the movie. it can therefore be viewed as a bipartite graph with users in one part of the bipartition and movies in the other. the version of the dataset we used, movielens-100k, contains approximately 100,000 ratings from 1000 users on 1700 movies. there is also a certain density restriction: every user included in the dataset must rate at least 20 movies. it is natural to treat users' ratings as attitudes of users towards movies. our model, however, cannot work directly with the movielens dataset, because it requires binary attitudes rather than ratings between 1 and 5. thus, we convert user ratings into positive and negative attitudes by introducing a negative link every time a user's rating is 3 or less, and a positive link if the user's rating is 4 or 5 (a short code sketch of this conversion is given after this passage). under such an interpretation of scores the dataset is almost balanced: 44.625% of its edges are negative. predictions are, of course, also made in terms of positive and negative links. with the standard values of the parameters: using the _ standard - pq _ option with cplex, and with , , , the model makes about 75% correct predictions, providing about a 20% increase over a random guess. although there is a very substantial amount of research on recommender systems using movielens as a test dataset (see e.g. ), it is not possible to compare our result against the existing ones, because the evaluation measures normally used for recommender systems are quite different; they measure either the success rate in recommending a group of products (movies) or are given in terms of estimating a user's rating rather than attitude. nevertheless, we can conclude that the method gives a similar advantage over a random choice as for the other datasets. one interesting feature of the (internal) workings of our method is that it finds influences and sets of trusted peers between users, although there is no explicit information about such connections. we have investigated the link sign prediction problem in online social networks with a mixture of both positive and negative relationships. we have shown that better prediction accuracy can be achieved using personalized features such as peer opinions. moreover, the proposed model accommodates the dynamic nature of online social networks by building a predictor for each individual node independently. it enables fast updates as the underlying network evolves over time. in the future, we consider possible improvements of the model in two directions. first of all, we need to find a better formulation for peer opinions. the current formulation is so simple that it either gives an estimate or it does not estimate at all. ideally, we want a formulation that gives an estimate along with the confidence level of that estimate. the current choice of the binary representation of the problem was determined by several factors. firstly, many more existing algorithms, heuristics, and off-the-shelf solvers are available for binary problems. this includes many readily available max-sat solvers.
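the conversion of movielens ratings into signed links mentioned above is straightforward; a minimal sketch, assuming ratings come as (user, movie, score) triples, follows.

```python
def ratings_to_signed_edges(ratings):
    """map 1-5 star ratings to signed user->movie links:
    scores of 3 or less become negative links, 4 or 5 become positive links."""
    return [(user, movie, -1 if score <= 3 else +1)
            for user, movie, score in ratings]

print(ratings_to_signed_edges([("u1", "m1", 5), ("u1", "m2", 2), ("u2", "m1", 3)]))
# [('u1', 'm1', 1), ('u1', 'm2', -1), ('u2', 'm1', -1)]
```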
some solvers, for example cplex, while they can be used for non-binary problems, produce good results only if the problem satisfies certain conditions. we experimented with weights that can take more than just two values, but this often leads to instances that are not positive semidefinite, and cplex does not produce any meaningful results. in spite of this we experimented with allowing various variables of the problem to take more values; however, it did not lead to any noticeable improvement of the results. secondly, we want to build a more sophisticated model that incorporates more information. the basic assumption of our model is that users' actions can be determined by the opinions of their peers in the network. yet, as independent individuals, users also have their own knowledge and beliefs. there are also external factors that affect the decision-making process, such as mood, weather, location, and so on. all this information can be used as features for our model in the future. jérôme kunegis, andreas lommatzsch, and christian bauckhage, _ the slashdot zoo : mining a social network with negative edges _, proceedings of the 18th international conference on world wide web (new york, ny, usa), www '09, acm, 2009, pp. 741-750. david liben-nowell and jon kleinberg, _ the link prediction problem for social networks _, proceedings of the twelfth international conference on information and knowledge management (new york, ny, usa), cikm '03, acm, 2003, pp.
the structure of an online social network in most cases cannot be described just by the links between its members. we study online social networks in which members may have a certain attitude, positive or negative, toward each other, so that the network consists of a mixture of both positive and negative relationships. our goal is to predict the sign of a given relationship based on the evidence provided in the current snapshot of the network. more precisely, using machine learning techniques we develop a model that, after being trained on a particular network, predicts the sign of an unknown or hidden link. the model uses relationships and influences from peers as evidence for the guess; however, the set of peers used is not predefined but rather learned during the training process. we use quadratic correlations between peer members to train the predictor. the model is tested on popular online datasets such as epinions, slashdot, and wikipedia. in many cases it shows almost perfect prediction accuracy. moreover, our model can be efficiently updated as the underlying social network evolves. _ keywords : signed networks , positive link , negative link , machine learning , quadratic optimization _
meta - materials with simultaneously negative electric permittivity and magnetic permeability have recently received much attention .these materials can support a backwards - traveling wave where the phase propagation is antiparallel to the direction of energy flow .these materials have been identified by several names including left - handed or double - negative material but we shall refer to them here as backwards - wave ( bw ) materials .their properties were first considered theoretically by veselago during the 1960 s but they have only been fabricated recently . they are predicted to exhibit many unusual properties such as refraction at a negative angle , an inverse doppler shift , and a backwards oriented erenkov radiation cone .veselago also predicted that , under the conditions a parallel slab of this material could focus waves emitted from a point source and essentially act as a lens , albeit without magnification .these conditions are considered `` ideal '' because they lead to the same impedance and speed of light as free space .pendry pointed out that under these conditions incident decaying evanescent waves become growing inside the slab .the recovery of the evanescent waves allows the image resolution to be better than the diffraction limit and this material could theoretically produce a perfect image of the source , which lead pendry to call the system a `` perfect lens . '' a bw material must necessarily be dispersive and therefore all this would only work properly at certain frequencies , where the conditions of ( [ eq : ideal - condition ] ) are fulfilled . at these frequencies the meta - material slab not only compensates for the phase propagation of the waves , as a conventional lens does , but also compensates for the amplitude decay experienced by evanescent waves .this claim of growing evanescent waves and the possibility of a perfect lens has aroused much interest and given rise to much discussion of its correctness and feasibility , e.g. , .besides that , there is great interest and potential for applications in these materials beyond the issue of a perfect lens . in this paperwe study the behavior of an evanescent wave interacting with a slab of bw material via simulations .we consider the performance and applicability of two numerical techniques being used to simulate this interaction . in sec .[ sec : model - system ] we introduce the model system and in sec .[ sec : numer - prop - schem ] the numerical schemes .section [ sec : results ] discusses results and sec .[ sec : conclusion ] presents conclusions .to be able to study the properties of evanescent waves interacting with a bw slab , we isolate a single evanescent wave by inserting the slab into a parallel plate waveguide and use excitation below the cutoff frequency of the waveguide . this way the wave in the waveguide is evanescent and its wave vector can be chosen through the width of the waveguide and the spatial profile of the sourceis translationally invariant in the direction .a sheet current source at and a slab of bw material are located inside the waveguide ., width=336 ] figure [ fig : schematic - waveguide ] shows a schematic of the system . 
the top and bottom of the waveguide are perfectly electrically conducting ( pec ) plates .pec walls also exist at the sides of the computational domain but these are behind perfectly - matched layers ( pml ) which essentially allow the structure to mimic an open waveguide ( but the evanescent fields considered here decay rapidly enough that the performance of the pml is not a critical component in the simulation ) . the system is translationally invariant in the direction and can be treated as two - dimensional . a sheet current density at parallel to the - plane acts as the source for the fields . as our interest is in the behavior of evanescent fields ,a single evanescent mode is excited using the spatial profile of a sheet current matched to that of the lowest order evanescent mode , where is the dirac delta function and is the width of the waveguide . the source location and the waveguide widthare shown in fig .[ fig : schematic - waveguide ] , and is the time dependence of the current source .( in the steady - state situation we deal with a single or few wave vectors . for this reasonthe concern of ref . does not apply here . )composite materials in which the effective electric permittivity and the effective magnetic permeability are both negative in some frequency interval have experimentally been shown to exist .based on this we assume that both the electric permittivity and the magnetic permeability are described by lorentz - type frequency dependences as here and are the plasma frequency , and are the resonance frequency , and and are the absorption parameter of the permittivity and permeability , respectively . to simplify the model we choose and to coincide ,i.e. , , , and . in the special case of , the conditions ( [ eq : ideal - condition ] ) are fulfilled at the design frequency we take ( [ eq:1 ] ) to define the relationship between the design frequency and the plasma frequency even when and are nonzero but small . our attentionis primarily concerned with the design frequency since this is the only frequency at which a perfect focus could possibly be achieved . at a frequency other than the design frequencythe rate at which evanescent fields grow in the slab is not equal to the rate they decay in free space .thus for a source emitting multiple evanescent fields in front of the slab , there is no unique point on the other side of the slab at which the fields have obtained the same level as at the source ( i.e. , there is no unique image point ) .we assume that the material has abrupt edges and the change in constitutive parameters is instantaneous and collocated .the case of the change in the constitutive parameters of a bw slab not coinciding leads to a shift in the frequency of the nonreflecting wave and the bound modes of the slab .the fields of the bound modes decay exponentially away from the slab . the two bound modes with spatially decaying fields inside and outside the slab delimit a frequency interval that also contains a nonreflecting wave with exponential spatial dependence .for the material reaction to the field to be causal , the kramers - kronig relations show that for any deviation from free - space behavior the imaginary part of the permittivity or permeability can not vanish for all frequencies . 
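as a small numerical aside on the lorentz-type dispersion just introduced, the sketch below evaluates a textbook lorentz permittivity and locates the frequency at which its real part equals minus the free-space value; the exact functional form, sign conventions, and parameter values of the paper are not reproduced here, so the expression should be read as an assumed standard form rather than the paper's equation.

```python
import numpy as np

eps0 = 8.854e-12

def lorentz_eps(omega, omega_p, omega_0, gamma):
    """assumed textbook lorentz permittivity:
    eps(omega) = eps0 * (1 + omega_p^2 / (omega_0^2 - omega^2 - 1j*gamma*omega))"""
    return eps0 * (1.0 + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega))

# lossless, zero-resonance (plasma-like) case: Re(eps) = -eps0 at omega = omega_p/sqrt(2)
omega_p = 2 * np.pi * 15e9                      # placeholder plasma frequency
w = np.linspace(0.3, 1.0, 20001) * omega_p
eps = lorentz_eps(w, omega_p, omega_0=0.0, gamma=0.0)
idx = np.argmin(np.abs(eps.real + eps0))
print(w[idx] / omega_p)                         # approximately 1/sqrt(2) = 0.7071
```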
on the other hand , as far as causality is concerned , it can be arbitrarily small .the numerical schemes used here are applied directly to maxwell s equations rather than to a wave equation .the first - order partial differential equations are discretized in time and space and an update algorithm is developed .specifically the behavior of the traditional yee finite - difference time - domain ( fdtd ) algorithm and the pseudospectral time - domain ( pstd ) method are studied . in both schemesthe system is modeled as two dimensional and the open sides of the waveguide are modeled by using a causal uniaxial perfectly matched layer ( pml ) absorbing boundary .one significant difference between the two schemes lies in the grid location where the fields are sampled . in the pstd schemethe sample points of the fields are collocated while in the fdtd scheme they are staggered .the plates of the parallel plate waveguide are modeled as perfect electric conductors . on the pec boundary ,the tangential components of the electric field and the normal component of the magnetic field must vanish .the boundary conditions of the other components are given by surface charge density and surface current density , here is the surface normal unit vector . in 1966 , yee proposed a discretization scheme for maxwell s equations based on a fully staggered grid , where the electric field components are each sampled on one of the edges of a cubical unit cell and the magnetic components each on one of the face centers . in this mannerthe spatial derivatives can be approximated as centered differences and one achieves second - order accuracy in the spatial step size .the pec boundary conditions are implemented by placing the boundary in a plane that contains the tangential electric field sampling points and then setting these components to zero . in this case , the tangential magnetic field components and the normal electric field component are not sampled on the boundary and one does not need to enforce any boundary condition on them .the normal component of the magnetic field is sampled on the boundary and is updated in the usual fashion .its boundary condition is automatically fulfilled through its dependence on the tangential electric field .owing to the staggering of the field components , the precise location of an interface where both the permittivity and permeability change is ambiguous .the case where a single material parameter , such as the permittivity , changes has been well - studied in the literature , e.g. , .it is well established that for a tangential field component collocated with the interface , the best approach is to use the arithmetic average of the material properties to either side .( it can also be shown , at least for the case of normal incidence , that if one is free to assign the material interface to fall halfway between tangential nodes , an abrupt change in the material parameters is optimum and , in fact , superior to averaging . ) for the study here , the precise location of the interface is not a primary concern . instead, the important issue is whether the evanescent fields exhibit the proper growth within the bw slab ( whatever its thickness or location ) .as reported in , averaging of material parameters has been used for modeling bw slabs with the yee algorithm .( in that work surface waves are present and no subwavelength imaging is seen . 
)figure [ fig : boundaries ] shows a portion of the computational grid as well as the material properties associated with each node one must attribute a permeability to each magnetic field node and a permittivity to each electric field node .figure [ fig : boundaries](a ) shows the case of using abrupt discontinuities in the material parameters in the yee grid while ( b ) shows the case where averaging is used for the tangential node at the interface .figure [ fig : boundaries](c ) shows the grid for the pstd simulation which is described next . which uses permeability and the first node which uses permittivity .( b ) the yee grid using averaging .the interface is assumed to be collocated with the node which uses the average permittivity to either side even though the introduction of a new material ( ) arguably creates an additional interface further to the left .( c ) the pstd grid . because the grid is not staggered, the boundary between the two regions is unambiguously assumed to exist between the nodes which employ the different material parameters.,width=336 ] in pseudospectral techniques a discrete fourier transform is used to calculate the spatial derivatives in the discretized version of a partial differential equation .time integration is achieved using the same second - order time - stepping approach as used in the yee algorithm . unlike the yee algorithm where central - differences are used , in pstdthe spatial derivatives are calculated at the same location as the field .as shown in fig .[ fig : boundaries](c ) , for maxwell s equations this means that the electric and the magnetic field components are all sampled at the same location , i.e. , in the center of a cubic unit cell .an important problem with this approach is that the use of exponential fourier transforms inherently introduces periodic boundary conditions that are often undesired .this problem can be avoided by covering the numerical domain boundaries with a pml layer such that the waves are absorbed in the pml before they can contaminate the simulation through the periodic boundaries .this gives one a tool to simulate open domains .if the system contains pec boundaries , the problem becomes much more difficult .nonetheless , in a two - dimensional system in transverse electric ( te ) polarization , i.e. , perpendicular to the plane , which is source - free near the boundaries , one can show that the pec boundary conditions ( eqs .[ eq:2][eq:5 ] ) imply that here is the surface normal unit vector and is the derivative in that direction .conditions ( [ eq:2 ] ) and ( [ eq:6 ] ) can be easily enforced with fourier sine and cosine transforms if one chooses fourier sine transforms for the tangential electric field components and fourier cosine transforms for the tangential magnetic field components .no spatial derivatives of normal components appear and no boundary condition needs to be enforced on them .thus , in a two - dimensional system in te polarization , the pec boundary conditions are conveniently implemented through the choice of fourier transform .the discrete fourier transform has problems representing a delta function , discretized as a kronecker delta , correctly .for this reason it is advantageous to use a spatially smoothed rather than a highly localized source to represent the source screen of ( [ eq : source - profile ] ) .this problem also appears when a sudden change in material parameters leads to a sudden change in the fields . 
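the essential ingredient of pstd — a spectrally accurate spatial derivative evaluated at the field's own sample points, with no staggering — can be written in a few lines. the example below uses the ordinary periodic fft; as discussed above, the pec walls would instead call for sine/cosine transforms of the appropriate field components.

```python
import numpy as np

def pstd_derivative(f, dx):
    """spectral d/dx of a periodically sampled field f, evaluated at the
    same collocation points as f itself (no staggered grid)."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
f = np.sin(3.0 * x)
err = np.max(np.abs(pstd_derivative(f, x[1] - x[0]) - 3.0 * np.cos(3.0 * x)))
print(err)   # machine-precision accuracy for a band-limited field
```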
in the simulationsone observes wiggles or ripples on the signal that can become significant if the change in field is strong .this artifact is the gibbs phenomenon associated with the fourier transforms . to achieve negative and negative in a material ,this material must necessarily be dispersive . in our model system we chose lorentz dispersion characteristics ( [ eq : lorentz - material - e],[eq : lorentz - material - m ] ) .there are several commonly used methods to implement frequency dependence in a time - domain algorithm , e.g. , the auxiliary differential equation ( ade ) method , the recursive convolution method , and the z transform method . in this studywe use the ade method , the z transform method , and two frequency approximation methods . herewe present the update for the electric field ; analogous equations hold for the magnetic field . in the following , will denote the temporal step size of the discretization scheme .one approach to implement a frequency dependent response in a discrete time - domain method is to transform the frequency - domain constitutive parameters into the time domain and then approximate the derivatives with differences based on taylor series expansions .the frequency dispersion ( [ eq : lorentz - material - e ] ) at any given electric field node inside the material slab is implemented as {0pt}{20pt } 4 e^{n-1 } -\left[\left(\omega_p^2+\omega_0 ^ 2\right ) \delta_t^2-\gamma\delta_t+2\right ] e^{n-2 } \right\}\nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } \times \left[\left(\omega_p^2+\omega_0 ^ 2\right ) \delta_t^2+\gamma\delta_t+2\right]^{-1},\end{aligned}\ ] ] where is the electric flux density which must be stored as a separate quantity .the superscripts on the fields denote the time step at which the field values are taken .the z transform is essentially a more general version of the discrete fourier transform .it is strictly based on a sampled signal and provides an exact transformation between sample domain and z domain . for frequency dependencies of the debye , plasma , and lorentz type and even some types of non -linearities it allows one to derive exact update algorithms ( i.e. , exact in the sampled sense ) for the sampled signal .the frequency dispersion of ( [ eq : lorentz - material - e ] ) is implemented as where , , and is an auxiliary field .this implementation is applicable for real , i.e. , when the system response function is a damped sine wave .when is imaginary , the character of the response function changes and the z transform has to be implemented differently . another common method to derive discrete time - domain update equations for the frequency dependence of a systemis frequency approximation .here one starts with the frequency domain expression for the system response and makes the transition to the discretized time domain by replacing each occurrence of by , i.e. , a linear backward difference where represents a shift back in time of one temporal step .when the frequency dependence becomes complicated this method can make the derivation of update equations much easier than the z transform method . on the other hand , in general , while it does not significantly change the computational burden , it introduces new error . 
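to illustrate the bookkeeping of the ade method described above, a generic update for the lorentz polarization is sketched below. it advances an auxiliary polarization with centered differences; the coefficients follow the standard textbook discretization and are not transcribed from the paper's specific update equations.

```python
import numpy as np

eps0 = 8.854e-12

def make_lorentz_ade(omega_p, omega_0, gamma, dt):
    """centered-difference ade for the lorentz polarization
         d2P/dt2 + gamma dP/dt + omega_0^2 P = eps0 * omega_p^2 * E
    returns a function advancing (P^n, P^{n-1}) -> P^{n+1} given E^n."""
    denom = 1.0 + 0.5 * gamma * dt
    a = (2.0 - omega_0**2 * dt**2) / denom
    b = (0.5 * gamma * dt - 1.0) / denom
    c = eps0 * omega_p**2 * dt**2 / denom

    def step(p_now, p_prev, e_now):
        return a * p_now + b * p_prev + c * e_now

    return step
```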
a bilinear approximation , ,gives better accuracy but increases the computational burden .the frequency dispersion ( [ eq : lorentz - material - e ] ) using the linear approximation is implemented as {0pt}{20pt } \left(\gamma\delta_t+2\right ) i^{n-1 } + i^{n-2}\right\}\nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } \times \left[\left(\omega_p^2+\omega_0 ^ 2\right)\delta_t^2 + \gamma\delta_t + 1\right]^{-1},\\ \label{eq:10 } i^n \hspace{-0.095 in } & = & \hspace{-0.095 in } \left\{\omega_p^2\delta_t^2e^n + \left(\gamma\delta_t+2\right ) i^{n-1 } -i^{n-2}\right\ } \nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } \times \left[\omega_0 ^ 2\delta_t^2 + \gamma\delta_t + 1\right]^{-1}.\end{aligned}\ ] ] in the bilinear approximation the electric field is updated using {0pt}{20pt } \left(2\omega_0 ^ 2\delta_t^2 - 8\right ) i^{n-1 } + \left(\omega_0 ^ 2\delta_t^2 - 2\gamma\delta_t+4\right ) i^{n-2 } \right\ } \nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } \times \left [ \left(\omega_p^2+\omega_0 ^ 2\right)\delta_t^2 + 2\gamma\delta_t+4\right]^{-1},\\ \label{eq:12 } i^n \hspace{-0.095 in } & = & \hspace{-0.095 in } \left\{\omega_p^2\delta_t^2 e^n + 2\omega_p^2\delta_t^2 e^{n-1 } + \omega_p^2\delta_t^2 e^{n-2}\right .\nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } -\left .\left(2\omega_0 ^ 2\delta_t^2 - 8\right ) i^{n-1 } -\left(\omega_0 ^ 2\delta_t^2 - 2\gamma\delta_t+4\right)i^{n-2 } \right\ } \nonumber\\ \hspace{-0.095 in } & & \hspace{-0.095 in } \mbox { } \times \left [ \omega_0 ^ 2\delta_t^2 + 2\gamma\delta_t+4 \right]^{-1}.\end{aligned}\ ] ] in ( [ eq:8])([eq:12 ] ) is used as an auxiliary field .we have performed model calculations for a slab of bw material inside a waveguide using the geometry and the material described in sec .[ sec : model - system ] .we compare the results for the fdtd and the pstd schemes , using the ade method for the dispersion implementation .we also show a comparison of all four dispersion implementations of sec . [sec : dispersion - implementation ] with the pstd scheme . for the fdtd method one commonly uses the yee scheme to derive the update equations for the fields .because this scheme is based on a staggered grid , where all field components are sampled at different locations in a unit cell , it has inherent difficulties representing a collocated change in material parameters .this problem becomes clearly evident when modeling an interface between free space and a bw material . in the pstd methodall the field components are sampled at the same location and no transition layer exists .we find that , unless the dispersion implementation introduces phase error , the bound modes and the nonreflecting wave appear at frequencies very close to those found in the continuous world . in the simulations the design frequency is andthe spatial discretization level is 100 points per propagating free - space wavelength at the design frequency , i.e. 
, .using such a fine discretization is typically unnecessary when simulating propagating waves , where it is more customary to use approximately 20 points per wavelength , but because evanescent waves have rapid amplitude decay and also phase variations ( transverse to the amplitude decay ) that have wavelengths smaller than the propagating free - space wavelength , it can be necessary to use a finer discretization to model these waves accurately .the slab length is , and the source is located a distance in front of the bw slab .the waveguide width which determines the degree of evanescence was chosen to be .( for the waves in the waveguide would be propagating . )the material parameters were chosen as , , and .the values of and are small compared to and their influence on the results is weak .even though ( [ eq : ideal - condition ] ) can only be approximately fulfilled , due to the loss and the nonzero resonance frequency , we will assume that the design frequency ( [ eq:1 ] ) is unchanged .the results for the lorentz material are similar to those obtained for an unmagnetized plasma , i.e. , .the time dependence of the source current is a sinusoid with an exponential ramp , with carrier frequency and the ramping governed by . in the simulations we used .the graphs presented in this section show the spatial dependence of the magnitude of a certain frequency component of the temporal fourier transform of the fields in the direction .the temporal fourier transform was recorded between 15000 time steps and when the simulation stopped at a total of 67000 time steps .the solid vertical lines indicate the extent of the bw slab , the dotted vertical line indicates the source location , and the arrows show an arbitrarily chosen object location and its corresponding image location .ideally , at the design frequency the bw slab compensates exactly for the free - space propagation and decay of incident waves corresponding to the thickness of the slab .thus , referring to the geometry of fig .[ fig : schematic - waveguide ] , an object located at with , is imaged at . in the yee discretization scheme ,all field components are sampled at different locations in a unit cell .the permittivity and permeability are usually also sampled at these locations . a change in material parameters at the interface between two materials ,therefore , is extended in space and the algorithm essentially sees a thin transition layer , where some parameters are those of one material and the others are those of the other material . as discussed in sec .[ sec : yee - scheme ] , the interface may be approximated with or without an average of the material properties to either side .we identify the approach without averaging as the `` abrupt '' boundary . for a bw slab , the effect of such a transition layer between the change in and the change in previously been studied using a continuous - space model .our yee simulations employing abrupt boundaries have transition layers with the of the bw slab and the of the surrounding free space , similar to the situation in .for the front face of the slab , this corresponds to the situation shown in fig .[ fig : boundaries](a ) if one associates the subscript 1 with free space and the subscript 2 with the bw material .when averaging is used , the average permeability is applied to the nodes assumed to coincide with the interfaces . 
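the excitation and the post-processing described here can be mimicked in a few lines: a sinusoid multiplied by an exponential turn-on ramp as the source, and a running single-frequency discrete fourier sum to extract the steady-state amplitude at the design frequency. the ramp constant, time step, and recording window below are placeholders, not the values used in the paper.

```python
import numpy as np

f_design = 15e9                        # placeholder design frequency
omega_d = 2.0 * np.pi * f_design
dt = 1.0 / (f_design * 200.0)          # placeholder time step
tau = 50.0 / f_design                  # placeholder ramp time constant

def source(n):
    """sinusoidal current with an exponential turn-on ramp."""
    t = n * dt
    return (1.0 - np.exp(-t / tau)) * np.sin(omega_d * t)

# running single-frequency dft of a recorded field sample, started after the
# transient has died out (skip the first n_skip steps, accumulate to n_total)
n_skip, n_total = 15000, 67000
acc = 0.0 + 0.0j
for n in range(n_skip, n_total):
    field_sample = source(n)           # stand-in for the recorded field at one point
    acc += field_sample * np.exp(-1j * omega_d * n * dt)
amplitude = np.abs(acc) * 2.0 / (n_total - n_skip)
print(amplitude)
```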
to illustrate the effect of the transition layer in the yee scheme , we show in fig .[ fig : yee - design - freq ] the fields at the design frequency , where the source frequency equals , using abrupt boundaries and the ade dispersion implementation .the evanescent field grows inside the bw slab but a reflected field is clearly present , as can be seen from the change in slope of the fields between the source and the surface , as well as between the two slab surfaces . in fig .[ fig : yee - design - freq ] the field amplitudes at the image location are significantly different from those at the object location .figure [ fig : yee - design - avg ] again shows the results at the design frequency except now averaging of the permeability is used at the interface . as before, the results do not exhibit the desired behavior .the deep nulls in both these figures occur when the total field changes sign because the reflected field is of equal magnitude as the field of the source at that point but has opposite sign . , away from the design frequency , using the abrupt - boundary yee scheme with the ade dispersion implementation .the lines for the different fields have been normalized and offset to allow qualitative comparison .the system parameters are given in the text.,width=336 ] figure [ fig : yee - nonreflecting ] shows the nonreflecting wave in the yee scheme with the abrupt boundary .it occurs at , i.e. , below the design frequency of , and was found by scanning through the frequency range between the two bound mode frequencies .the field is clearly amplified exponentially inside the bw slab and decays exponentially outside of it . upon close inspectionone finds that the magnitude of the electric field at the object location is approximately 28 , while it is about 26 at the image location .this discrepancy is due to the slower speed of light in the bw material at this frequency .thus , the exponential growth inside the slab is slower than the decay outside of it and , though the field is amplified in the slab , the amplification does not accurately compensate for the decay in free space .the magnitude of this discrepancy as well as the frequency at which the nonreflecting wave occurs is dependent on the wavevector of the field .thus , it is not possible to define a single corrected image location that holds for all wavevectors ( this is true of both the propagating and evanescent components ) .figure [ fig : yeeavgnrw ] shows the nonreflecting wave when averaging is used for the boundary . here the nonreflecting wave was found to occur at , i.e. , above the design frequency .an object of field strength 28 was found to have a corresponding field strength at the image point of 33 ( i.e. , the field was overcompensated ) . using the yee scheme with material averaging and the ade dispersion implementation .the lines for the different fields have been normalized and offset to allow qualitative comparison .the system parameters are given in the text.,width=336 ] in it was found that transition layers shift the frequency of vanishing reflection coefficient upward from the design frequency . in the fdtd simulations we find that for the abrupt - boundary yee scheme the nonreflecting frequency is shifted downward from the design frequency .we attribute this difference to the effect the discretization has on the transmission and reflection coefficient of each interface .( this is in addition to introducing transition layers into the system . 
) when averaging is used , the nonreflecting frequency is above the design frequency , but the offset between the two frequencies is approximately the same as when the abrupt boundary is used .thus there does not appear to be an obvious advantage to using averaging when modeling a bw slab since neither yields the correct behavior at the design frequency .( however , again note that we are not concerned with the specific location of the boundary , but rather just the behavior of the evanescent fields . ) is a numerical artifact of the fourier transform due to finite spatial resolution.,width=336 ] in the pstd schemeall field components are collocated and there is no transition layer at a material interface .using the ade method to implement the frequency dispersion of the bw material we observe that the nonreflecting wave occurs at the design frequency , as shown in fig .[ fig : pstd - nonreflecting ] . because the wave vector magnitudes inside and outside the material are nearly equal ( with slight discrepancy due to loss and a nonzero resonance frequency ), the amplification of the evanescent wave inside the slab compensates for its free - space decay .the field amplitudes at the object location and its image location are within 0.5% of each other . the examples in figs .[ fig : yee - design - freq][fig : yeeavgnrw ] are at a discretization level of .arguably , for either the averaged or abrupt boundary , this provides a transition layer thickness of only in the yee scheme and yet it still produces a significant enough frequency shift for the nonreflecting wave that realization of a perfect planar lens is essentially impossible .the analysis in shows that in the limit of vanishing layer thickness the frequency of the nonreflecting wave goes to the design frequency in the continuous case .nevertheless , for all practical discretization levels the influence of the transition layer remains significant .there have been reports of the yee scheme being used to model evanescent fields in bw materials , but the level of discretization that had to be used was on the order of 566 cells per propagating wavelength which may be prohibitive for most applications .in general all finite difference schemes suffer numerical dispersion .the pstd scheme is more computationally expensive than the yee algorithm for the same level of discretization but suffers less numerical dispersion .the dispersive properties of the pstd scheme have been discussed in the literature ( although the dispersion relation given in the literature only pertains to the case of an infinite grid and thus only holds approximately for finite grids ) .the dispersive properties for evanescent fields in fdtd simulations have also be discussed in the literature .the fields being considered here are discretized such that numeric dispersion is not a significant concern .thus the major difference between the schemes considered here is the treatment of the interface and one can not attribute the superior results of the pstd scheme to its superior dispersion . in sec .[ sec : dispersion - implementation ] we discussed four different implementations of the frequency dependence of the permeability and permittivity . to compare their performance , we use each one in a simulation using the pstd scheme at the design frequency , with the parameters given at the beginning of sec .[ sec : results ] . in this configuration one expects the reflected wave at the first interface ( ) to vanish , as shown in fig .[ fig : pstd - nonreflecting ] . 
in fig .[ fig : compare - dispersion - pstd ] we show the electric field in these simulations .it is apparent that the ade , the z transform , and the bilinear frequency approximation all give good results .no reflected wave is visible and upon inspection one finds that the magnitudes at the object and image location are essentially equal . when plotted without offsetthe three curves fall on top of one another .the linear frequency approximation on the other hand does show a significant amplitude for the reflected wave .the field at the image location is also much weaker than at the object location .this effect is probably due to the numerical phase error inherent in this method .this shows that the use of the linear frequency approximation is problematic in simulations involving bw materials , while many of the other common dispersive material implementations work well .we have studied the finite - difference time - domain and the pseudospectral time - domain methods in the context of evanescent waves in the presence of a backward - wave material .our simulations verify that the predicted growth of the evanescent field inside the bw material , which is vital for imaging beyond the diffraction limit , does occur .we found that the yee fdtd method suffers from a transition layer effect due to the inherent staggered grid , even when using an averaging technique at the material interface , while the collocated grid used in the pstd method avoids this numerical artifact .the choice of numerical technique to implement the frequency dependence of the permittivity and the permeability is also important for the correct modeling of the bw material .thus , we have shown that the pstd method in concert with the ade technique , the z transform technique , or the bilinear frequency approximation , allows one to accurately model the interaction of a bw slab with an evanescent wave .the authors would like to thank christopher l. wagner of washington state university for useful discussion .d. r. smith , w. j. padilla , d. c. vier , s. c. nemat - nasser , and s. schultz , `` composite medium with simultaneously negative permeability and permittivity , '' _ phys ._ , vol .84 , no . 18 , pp . 41844187 , may 2000 .p. m. valanju , r. m. walser , and a. p. valanju , `` comment on `` wave refraction in negative - index media : always positive and very inhomogeneous '' reply , '' _ phys ._ , vol . 90 , no . 2 , art .029704 , jan . 2003 .z. s. sacks , d. m. kingsland , r. lee , and j .- f .lee , `` a perfectly matched anisotropic absorber for use as an absorbing boundary condition , '' _ ieee trans .antennas propagat ._ , vol .43 , no . 12 , pp . 14601463 , dec .m. kuzuoglu and r. mittra , `` frequency dependence of the constitutive parameters of causal perfectly matched anisotropic absorbers , '' _ ieee microwave guided wave lett ._ , vol . 6 , no . 12 , pp .447449 , dec .m. celuch - marcysiak and w. k. gwarek , `` higher - order modelling of media interfaces for enhanced fdtd analysis of microwave circuits '' , 24th european microwave conf . , pp . 15301535 , cannes , france , sept .hwang and a. c. cangellaris , `` effective permittivities for second - order accurate fdtd equations at dielectric interfaces '' , _ ieee micro .wireless components lett ._ , vol .11 , no . 4 , pp .158160 , apr .j. b. schneider and r. j. kruhlak , `` plane waves and planar boundaries in fdtd simulations '' , ieee antennas and propagat . soc .int . symposium and ursi radio sci .meeting , salt lake city , ut , july 2000 .r. m. 
joseph , s. c. hagness , and a. taflove , `` direct time integration of maxwell s equations in linear dispersive media with absorption for scattering and propagation of femtosecond electromagnetic pulses , '' _ opt ._ , vol . 16 , no . 18 , pp .14121414 , sept .r. j. luebbers , f. hunsberger , and k. s. kunz , `` a frequency - dependent finite - difference time - domain formulation for transient propagation in plasma , '' _ ieee trans .antennas propagat ._ , vol .39 , no . 1 ,2934 , jan . 1991 .j. b. schneider and r. j. kruhlak , `` dispersion of homogeneous and inhomogeneous waves in the yee finite - difference time - domain grid '' , _ ieee trans .microwave theory tech ._ , vol .49 , no . 2 , pp . 280287 , feb .michael w. feise received the m.s . and the ph.d .degree in physics from washington state university . from 2001 to 2003he was a research associate with the school of electrical engineering and computer science at washington state university . in 2003he became a research fellow in the nonlinear physics group of the research school of physical sciences and engineering at the australian national university .his research interests include electromagnetic and acoustic wave propagation and their linear and nonlinear interaction with matter , as well as thz radiation and low - dimensional semiconductor devices .john b. schneider received the b.s .degree in electrical engineering from tulane university and m.s . and ph.d .degrees in electrical engineering from the university of washington .he is presently an associate professor in the school of electrical engineering and computer science at washington state university .his research interests include the use of computational methods to analyze acoustic , elastic , and electromagnetic wave propagation .peter j. bevelacqua received the b.s .degree in electrical engineering ( summa cum laude ) from washington state university in 2002 . in 2002he worked as an intern at sandia national labs and was awarded a national science foundation fellowship in 2003 .he currently is a graduate student at stanford university .
backwards - wave ( bw ) materials that have simultaneously negative real parts of their electric permittivity and magnetic permeability can support waves where phase and power propagation occur in opposite directions . these materials were predicted to have many unusual electromagnetic properties , among them amplification of the near - field of a point source , which could lead to the perfect reconstruction of the source field in an image [ j. pendry , phys . rev . lett . * 85 * , 3966 ( 2000 ) ] . often systems containing bw materials are simulated using the finite - difference time - domain technique . we show that this technique suffers from a numerical artifact due to its staggered grid that makes its use in simulations involving bw materials problematic . the pseudospectral time - domain technique , on the other hand , uses a collocated grid and is free of this artifact . it is also shown that when modeling the dispersive bw material , the linear frequency approximation method introduces error that affects the frequency of vanishing reflection , while the auxiliary differential equation , the z transform , and the bilinear frequency approximation method produce vanishing reflection at the correct frequency . the case of vanishing reflection is of particular interest for field reconstruction in imaging applications . backwards - wave material , left - handed material , double - negative material , metamaterial , fdtd methods , pseudospectral time - domain method
as seen in fig . [ f : intro:1](a ) , asphalt mixtures represent in general a highly heterogeneous material with complex microstructure consisting at minimum of mastic binder , aggregates and voids .when limiting our attention to mastic asphalt mixtures , used typically in traffic arteries of substantial importance , the fraction of voids becomes negligible . a binary image of such a two - phase material system plotted in fig .[ f : intro:1](b ) is then readily available .the literature offers several distinct routes taking advantage of such a representation . [ cols="^,^,^ " , ] regarding the `` similarity '' of macroscopic response from various sepucs , the sepuc no .37 was selected to provide data needed in the calibration of the macroscopic gl model .virtual numerical tests identical to those in section [ bsec : leonov - mortar ] were again performed to give first the homogenized master curve displayed in fig .[ f : mam:1](b ) and consequently the model parameters of eqs . and available in table [ t : shift ] .note again the same temperature shift as obtained already for lower scales .this is evident in fig .[ f : mam:1](b ) showing only vertical shift of individual master curves attributed to an increasing stiffness when moving up individual scales .finally , fitting the inverse of the homogenized master curve to a particular chain of maxwell units then allows for estimating the macroscopic response of mam to a given load limiting the material symmetry to a macroscopic isotropy .the response of the homogenized asphalt mixture to the applied remote shear strain labeled as `` leonov macro '' appears in fig .[ f : mam:1](a ) . since derived from the solution of a unit cell problem , a relatively good agreement with these solutions has been expected .the difference is merely attributed to the specific approximation of the homogenized master curve via dirichlet series .it is fair to mention a considerable sensitivity of the predictions to a particular choice of pairs of parameters ( shear modulus and retardation time ) of the maxwell units . although promising , to fully accept the proposed uncoupled multiscale computational strategy will require further testing , both numerical and laboratory .this topic is still widely opened .although research interests on flexible pavements have been quite intense in the past two decades , the field is still very much in development and will certainly witness considerable activity in the coming decade particular in connection to hierarchical modeling and micromechanics . within this framework ,the present contribution provides theoretical tools for the formulation of macroscopic constitutive law reflecting the confluence of threads coming from experimental work , image analysis , statistical mechanics and traditional disciplines of micromechanics and the first order computational homogenization . 
here, the totally uncoupled multiscale modeling approach is favored to enable an inexpensive analysis of real world large scale structures .since much of the considered is primarily computational it would be audacious to brought this approach directly to points of application without additional experimental validation .large scale experiments of rut depth measurements due to moving wheel load are currently under way .the effect of microstructure anisotropy , influence of the first stress invariant or three - dimensional character of asphalt mixtures are just a few issues which need to be addressed , yielding flexible pavements a fertile field of future research .e. masad , l. tashman , d. little , h. zbib , viscoplastic modelling of asphalt mixes with effects of anisotropy , damage and aggregate characteristics , mechanics of materials 37 ( 2005 ) 12421256 . http://dx.doi.org/10.1016/j.mechmat.2005.06.003 [ ] .v. panoskaltsis , d. panneerselvam , an anisotropic hyperelastic - viscoplastic damage model for asphalt concrete materials and its numerical implementation , in : 5th gram international congress on computational mechanics , limassol , 2005 .j. sousa , s. weissman , j. sackman , c. monismith , nonlinear elastic viscous with damage model to predict permanent deformation of asphalt concrete mixes , transportation research record no .1384 : journal of the transportation research board , washington , d.c .( 1993 ) 8093available from : http://pubsindex.trb.org/document/view/default.asp?lbid=378429 .s. weissman , j. sackman , j. harvey , i. guada , a constitutive law for asphalt concrete mixes formulated within the realm of large strains , in : 16th engineering mechanics conference , university of washington , seattle , 2003 .available from : http://www.ce.washington.edu/em03/proceedings/papers/166.pdf .b. huang , l. mohammad , w. wathugala , application of temperature dependent viscoplastic hierarchical single surface model for asphalt mixtures , journal of materials in civil engineering 16 ( 2 ) ( 2004 ) 147154. http://dx.doi.org/10.1061/(asce)0899-1561(2004)16:2(147 ) [ ] .h. fang , j. haddock , t. white , a. hand , on the characterization of flexible pavement rutting using creep model - based finite element analysis , finite element in analysis and design 41 ( 2004 ) 4973 . http://dx.doi.org/10.1016/j.finel.2004.03.002 [ ] .a. papagiannakis , a. abbas , e. masad , microemchanical analysis of viscoelastic properties using image analysis , transportation research record no .1789 : journal of the transportation research board , washington , d.c .( 2002 ) 113120 .a. abbas , a. papagiannakis , e. masad , linear and nonlinear viscoelastic analysis of the microstructure of asphalt concretes , journal of materials in civil engineering 16 ( 2 ) ( 2004 ) 147154 .http://dx.doi.org/10.1061/(asce)0899-1561(2004)16:2(133 ) [ ] .z. you , s. adhikari , m. kutay , dynamic modulus simulation of the asphalt concrete using the x - ray computed tomography images , materials and structures 42 ( 2006 ) 617630 . http://dx.doi.org/10.1617/s11527-008-9408-4 [ ] .q. dai , m. saad , z. you , a micromechanical finite element model for linear and damage - coupled viscoelastic behaviour of asphalt mixture , international journal for numerical and analytical methods in geomechanics 30 ( 2006 ) 11351158 .http://dx.doi.org/10.1002/nag.520 [ ] .r. lackner , r. blab , a. jager , m. spiegl , k. kappl , m. wistuba , b. gagliano , j. 
a well - established framework of an uncoupled hierarchical modeling approach is adopted here for the prediction of macroscopic material parameters of the generalized leonov ( gl ) constitutive model , intended for the analysis of flexible pavements at both moderate and elevated temperature regimes . to that end , a recently introduced concept of a statistically equivalent periodic unit cell ( sepuc ) is employed to reflect the real microstructure of mastic asphalt mixtures ( mam ) . while the mastic properties are derived from an extensive experimental program , the macroscopic properties of mam are fitted to virtual numerical experiments performed on the basis of a first - order homogenization scheme . to enhance the feasibility of the solution of the underlying nonlinear problem , a two - step homogenization procedure is proposed : the effective material properties are first found for a mortar phase , a composite consisting of a mastic matrix and a fraction of small aggregates ; these properties are then introduced in place of the matrix in the actual unit cells to give estimates of the model parameters on the macroscale . a comparison with the mori - tanaka predictions is also provided , suggesting limitations of classical micromechanical models . keywords : mastic asphalt mixture , multiscale analysis , binary image , periodic unit cell , homogenization , generalized leonov model
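the abstract above refers to a comparison with mori - tanaka predictions . as a point of reference , the snippet below is a minimal sketch ( our own illustration , with made - up material parameters rather than values from the study ) of the classical mori - tanaka estimate for a two - phase composite with spherical inclusions ; in a two - step scheme of the kind described above , the homogenized mortar moduli obtained in the first step would simply replace the matrix moduli in the second step .

```python
# a minimal sketch (assumed values, not from the study): mori-tanaka estimate of the
# effective bulk and shear moduli of a matrix containing a volume fraction f of
# spherical stiff inclusions (e.g. a mastic matrix with small aggregates forming a mortar).
def mori_tanaka(K_m, G_m, K_i, G_i, f):
    # reference terms of the matrix phase (spherical inclusion problem)
    K_star = 4.0 * G_m / 3.0
    G_star = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
    K_eff = K_m + f * (K_i - K_m) / (1.0 + (1.0 - f) * (K_i - K_m) / (K_m + K_star))
    G_eff = G_m + f * (G_i - G_m) / (1.0 + (1.0 - f) * (G_i - G_m) / (G_m + G_star))
    return K_eff, G_eff

# illustrative numbers only: a soft viscoelastic matrix and stiff mineral aggregates (GPa)
K_eff, G_eff = mori_tanaka(K_m=2.0, G_m=0.5, K_i=40.0, G_i=25.0, f=0.35)
print(round(K_eff, 3), round(G_eff, 3))
```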
measurement - based quantum computing allows universal quantum computing only with adaptive single - qubit measurements on a certain entangled state such as the graph state .measurement - based quantum computing has recently been applied in quantum computational complexity theory .for example , ref . used measurement - based quantum computing to construct a multiprover interactive proof system for bqp with a classical verifier , and refs . used measurement - based quantum computing to show that the verifier needs only single - qubit measurements in qma and qam .it was also shown that the quantum state distinguishability , which is a qszk - complete problem , and the quantum circuit distinguishability , which is a qip - complete problem , can be solved with the verifier who can do only single - qubit measurements .the basic idea in these results is the verification of the graph state : prover(s ) generate the graph state , and the verifier performs measurement - based quantum computing on it . by checking the stabilizer operators, the verifier can also verify the correctness of the graph state .we call the test the stabilizer test " ( see also refs . in the context of the blind quantum computing ) .the idea of testing stabilizer operators was also used in refs . to construct multiprover interactive proof systems for local hamiltonian problems . what happens if in qma the quantum channel between merlin and arthur is noisy ?the first result of the present paper is that such a modification does not change the computational power as long as the noise is not too strong so that errors are correctable with high probability .the proof is simple : merlin encodes the witness state with a quantum error - correcting code , and sends it to arthur who can correct channel error by doing the quantum error correction .the problem becomes more nontrivial if we further assume that arthur can do only single - qubit measurements , since in this case arthur can not do the universal quantum computation by himself .the second result of the present paper is that the noisy qma with such an additional restriction for arthur is still equivalent to qma . to show it , we use measurement - based quantum computing :honest merlin sends the graph state to arthur , and arthur does fault - tolerant measurement - based quantum computing on it with only single - qubit measurements . by measuring stabilizer operators, arthur also checks the correctness of the graph state .note that the results of refs . can not be directly applied to the present case , since the stabilizer test used in these results is so strict that even honest merlin is rejected with high probability if the channel is noisy : even if honest merlin sends the ideal graph state , the state is changed due to the noise in the channel , and such a deviated state is rejected with high probability by the stabilizer test in spite that the correct quantum computing is still possible on such a state by correcting errors .we therefore introduce a more relaxed test that can accept not only the ideal graph state but also noisy graph states that are error - correctable .note that recently a similar relaxed stabilizer test was introduced and applied to blind quantum computing in ref .in this section , we define two noisy qma classes , and .first we define . 
* definition 1 * : let be a family of cptp maps , where is a cptp map acting on qubits .a language is in if and only if there exists a uniformly - generated family of polynomial - size quantum circuits such that * if then there exists an -qubit state such that the probability of obtaining 1 when the first qubit of v_x^\dagger ] is measured in the computational basis is .note that this definition reflects a physically natural assumption that malicious merlin can replace the channel , and therefore arthur should assume that any state can be sent in no cases .we can also consider another definition that assumes that even evil merlin can not modify the channel , but in this case we do not know how to show that the class is in qma , and therefore in this paper , we do not consider the definition .we can show that contains if is not too strong so that errors are correctable with high probability .( more details about the error correctability is given in sec .[ sec : fujii ] . ) throughout this paper , we assume that satisfies such property , since if the channel noise is too strong and therefore the witness state is completely destroyed , the noisy qma is trivially in bqp . * theorem 1 * : for any such that and any , * proof * : let us assume that a language is in .then , there exists a uniformly - generated family of polynomial - size quantum circuits such that * if then there exists an qubit state such that the probability of obtaining 1 when the first qubit of is measured in the computational basis is , where and . *if then for any qubit state , the probability is . according to the standard argument of the error reduction , for any polynomial , there exists a uniformly - generated family of polynomial - size quantum circuits such that * if then the probability of obtaining 1 when the first qubit of is measured in the computational basis is , where and . *if then for any qubit state , the probability is . from , we construct the circuit that first does the error correction and decoding , and then applies . if , honest merlin sends arthur , which is the encoded version of in a certain quantum error - correcting code . due to the noise , what arthur receives is , where is the size of . by definition , errors are correctable , and therefore , according to the theory of quantum error correction , for any polynomial , there exists a number of the repetitions of the concatenation such that and the state after the error correction and decoding on satisfies if is applied on , the acceptance probability is where we have taken sufficiently large and the number of the repetitions of the concatenation such that therefore , the probability that accepts is larger than . if , on the other hand , any state is accepted by with probability at most .it is also the case for the output of the error - correcting and decoding circuit on any input .therefore , the acceptance probability of on any state is hence we have shown that the language is in . we next define the class . * definition 2 * : the class is the restricted version of such that arthur can do only single - qubit measurements .our second result is the following theorem .* theorem 2 * : for any such that and any , the rest of the paper is devoted to show theorem 2 . 0 from theorem 2, we can again show since readers unfamiliar with measurement - based quantum computing , we here explain some basics .let us consider a graph , where .the graph state on is defined by , and is the gate on the vertices and . 
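to make the graph - state definition above concrete , the following numpy sketch ( our own illustration , not taken from the paper ) builds the graph state of a small graph by applying cz gates to the all - plus state and checks that every stabilizer s_v = x_v prod_{w in n(v)} z_w has expectation value +1 , which is exactly the property exploited by the stabilizer test discussed below .

```python
# a minimal sketch (our own): |G> = prod_{(a,b) in E} CZ_{ab} |+>^n for a small graph,
# followed by a check of the stabilizers S_v = X_v prod_{w in N(v)} Z_w.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def single(op, v, n):
    # op acting on qubit v, identity elsewhere
    return kron_all([op if i == v else I2 for i in range(n)])

def cz(a, b, n):
    # controlled-Z on qubits a and b (diagonal in the computational basis)
    diag = np.ones(2 ** n, dtype=complex)
    for x in range(2 ** n):
        if (x >> (n - 1 - a)) & 1 and (x >> (n - 1 - b)) & 1:
            diag[x] = -1.0
    return np.diag(diag)

n = 4
edges = [(0, 1), (1, 2), (2, 3)]                 # a 4-qubit linear cluster (path graph)
neigh = {v: [w for e in edges for w in e if v in e and w != v] for v in range(n)}

state = kron_all([plus] * n)                     # |+>^n
for a, b in edges:
    state = cz(a, b, n) @ state                  # apply CZ on every edge

for v in range(n):
    S_v = single(X, v, n)                        # S_v = X_v prod_{w in N(v)} Z_w
    for w in neigh[v]:
        S_v = S_v @ single(Z, w, n)
    print(v, round(np.vdot(state, S_v @ state).real, 6))   # each line should print 1.0
```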
according to the theory of measurement - based quantum computing , for any -width -depth quantum circuit , there exists a graph for and the graph state on it such that if we measure each qubit in , where is a certain subset of with , in certain bases adaptively , then the state of after the measurements is with uniformly randomly chosen and , where the operator is called a byproduct operator , and its effect is corrected , since and can be calculated from previous measurement results .hence we finally obtain the desired state .if we entangle each qubit of a state with an appropriate qubit of by using gate , we can also implement in measurement - based quantum computing .the graph state is stabilized by for all , where is the set of nearest - neighbour vertices of vertex . in other words , for all . for , we define the state by for all .( therefore , . )the set is an orthonormal basis of the -qubit hilbert space .in fact , if , there exists such that . then , and therefore .for the convenience of readers , we also review the stabilizer test used in refs .consider the graph of fig .[ stabilizer_test ] .( for simplicity , we here consider the square lattice , but the result can be applied to any reasonable graph . ) as is shown in fig .[ stabilizer_test ] , we define two subsets , and , of , where and .we also define a subset of by in other words , is the set of vertices in that are connected to vertices in .we further define two subsets of : finally , we define two subgraphs of : . is the set of vertices in the dotted red square , and is the set of other vertices .( b ) the subgraph .( c ) the subgraph ., scaledwidth=40.0% ] the stabilizer test is the following test : * randomly generate an -bit string .* measure the operator where is the stabilizer operator , eq .( [ stabilizer ] ) , of the graph state .* if the result is , the test passes ( fails ) . let be a pure state on .if the probability that passes the stabilizer test satisfies , where , then where here , is a certain state on and for a proof , see ref .according to the theory of fault - tolerant measurement - based quantum computing , if is not too strong , fault - tolerant measurement - based quantum computing is possible on the state for a certain -qubit graph .in particular , there exists a set of -bit strings such that fault - tolerant measurement - based quantum computing is possible on .( for more details , see sec . [sec : fujii ] . )if there is some noise in the quantum channel between merlin and arthur , the stabilizer test introduced in the previous section is so strict that even honest merlin is rejected with high probability .for example , let us assume that honest merlin sends arthur the correct state but due to the noise , what arthur receives is but . here , is the set of -bit strings such that fault - tolerant measurement - based quantum computing is possible on .( see sec .[ sec : fujii ] . )then , the probability that passes the stabilizer test is note that this value is the minimum value of , since for any state .let us try to prove theorem 2 by using the stabilizer test of the previous section .we first assume that a language is in .due to the error reducibility of qma , the assumption means that is in for any polynomial .we want to show that is in for any . to show it, we consider a similar protocol of ref . 
where arthur chooses the computation with probability and the stabilizer test with probability .let be the probability of accepting the computation result when he chooses the computation , and be that of passing the stabilizer test when he chooses the stabilizer test .first let us consider the case of . in this case, merlin sends the correct state , i.e. , the encoded witness state entangled with the graph state .according to the theory of fault - tolerant measurement - based quantum computing , arthur can do the correct quantum computing on the noisy graph state with probability and fails the correct computing with probability for any polynomial .the acceptance probability is therefore + ( 1-q)\frac{1}{2}\\ & = & q[(1 - 2^{-t})(1 - 2^{-s})]+ ( 1-q)\frac{1}{2 } \equiv\alpha.\end{aligned}\ ] ] next let us consider the case of . in this case , the acceptance probability is if malicious merlin sends a state such that .the gap is then +(1-q)\big(\epsilon-\frac{1}{2}\big)\le 0\end{aligned}\ ] ] for any , and therefore we can not show , which is necessary to show theorem 2 .a reason why the above proof does not work is that the probability that honest merlin passes the stabilizer test is too small .if merlin is honest and if the channel gives only a weak error that is correctable , what arthur receives should be accepted with high probability , since it is useful for the correct quantum computing .this argument suggests that the stabilizer test in the previous section is too strict for several practical situations such as the noisy channel case .hence we need a more relaxed test .now we give a proof of theorem 2 by introducing a more relaxed stabilizer test .let us assume that a language is in .due to the error reducibility of qma , this means that is in for any polynomial .therefore , without loss of generality , we take and for any polynomial .let be arthur s verification circuits , and be the yes witness that gives the acceptance probability larger than .we consider the bipartite graph of fig .[ protocol ] .( for simplicity , the graph is written as the two - dimensional square lattice , but the graph can be more complicated depending on the computation . ) . is the set of vertices in the dotted red square , and is the set of other vertices .two subgraphs and are defined as in sec .[ sec : st ] . in this example , the subgraph is equal to . , scaledwidth=15.0% ] our protocol runs as follows . *if merlin is honest , he generates the correct state \end{aligned}\ ] ] on the graph , where is the encoded version of and placed on .merlin sends each qubit of one by one to arthur .if merlin is malicious , he generates any state on and sends each qubit of it one by one to arthur .( due to the convexity , we can assume without loss of generality that malicious merlin sends pure states . ) * with probability , which will be specified later , arthur does the fault - tolerant measurement - based quantum computation that implements the fault - tolerant version of with input .if the result is accept ( reject ) , he accepts ( rejects ) .we denote the acceptance probability by . * with probability , arthur measures all black qubits of in and all white qubits of in .let and be the set of the measurement results and measurement results , respectively . if and only if the syndrome set satisfies certain condition , which will be explained later , arthur accepts . 
here , is the set of the nearest - neighbour vertices of vertex in terms of the graph , and is the set of black vertices in .we denote the acceptance probability by . * with probability , arthur measures all white qubits of in and all black qubits of in .let and be the set of the measurement results and measurement results , respectively .if and only if the syndrome set satisfies certain condition , which will be explained later , arthur accepts .here , is the set of the white vertices in .we denote the acceptance probability by .the conditions and are taken in such a way that if satisfies and satisfies then errors are correctable , and therefore fault - tolerant measurement - based quantum computing is possible . in this paper , we do not give the explicit expressions of and , since they are complicated and not necessary .at least , according to the theory of fault - tolerant quantum computing , we can define such and , and the membership of and can be decided in a polynomial time .a more detailed discussion is given in sec .[ sec : fujii ] .first we consider the case when . since is not too strong , and for certain (see sec .[ sec : fujii ] ) . therefore , the acceptance probability is + \frac{1-q}{2}(1-\delta)+\frac{1-q}{2}(1-\delta)\\ & = & q(1 - 2^{-s})a+(1-q)(1-\delta)\equiv\alpha.\end{aligned}\ ] ] next we consider the case when .there are four possibilities for : * , * , * , * , where is a certain parameter that will be specified later .let us consider each case separately .first , if and , second , if and , third , if and , finally , if and , +\frac{1-q}{2}+\frac{1-q}{2}\\ & = & q[(1 - 2^{-s})b+2^{-s}+\sqrt{4\epsilon-4\epsilon^2}]+1-q\equiv\beta_3.\end{aligned}\ ] ] here , we have used the fact that if and then where is the set of such that errors on are correctable , is the set of certain complex coefficients such that and is an orthonormal basis on . 
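as a rough numerical illustration of the completeness / soundness analysis above , the sketch below ( our own ; the expressions for alpha and beta_3 are transcribed from the partially garbled derivation above , and the parameter values are chosen to be consistent with the bounds quoted later in the text , with a small delta standing for the probability that an honest but correctably - noisy witness fails a test ) evaluates the honest acceptance probability alpha and the cheating bound beta_3 as functions of the probability q with which arthur chooses the computation . beta_3 is only one of the four cheating cases ; the full argument balances all of them through the choice of q .

```python
# a minimal numeric sketch (our own; treat the transcribed expressions and parameter
# values as assumptions): alpha = q(1-2^-s)a + (1-q)(1-delta) for honest merlin, and
# beta_3 = q[(1-2^-s)b + 2^-s + sqrt(4 eps - 4 eps^2)] + (1-q) for one cheating case.
import numpy as np

eps, delta = 1.0 / 64, 1e-3
s, t = 3, 4
a = 1 - 2.0**(-t)      # acceptance probability of the error-reduced circuit on the yes witness
b = 2.0**(-t)          # bound on that probability in the no case

alpha  = lambda q: q * (1 - 2.0**-s) * a + (1 - q) * (1 - delta)
beta_3 = lambda q: q * ((1 - 2.0**-s) * b + 2.0**-s + np.sqrt(4*eps - 4*eps**2)) + (1 - q)

for q in (0.2, 0.5, 0.8, 0.95):
    print(f"q={q:4.2f}  alpha={alpha(q):.4f}  beta_3={beta_3(q):.4f}  gap={alpha(q)-beta_3(q):+.4f}")
```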
a proof of eq .( [ noisy_stabilizer ] ) is given in the next section .let us define +(1-q ) \big(\frac{\epsilon}{2}-\delta\big),\\ \delta_2(q)&\equiv&\alpha-\beta_2=q[(1 - 2^{-s})a-1]+(1-q)(\epsilon-\delta),\\ \delta_3(q)&\equiv&\alpha-\beta_3= q[(1 - 2^{-s})(a - b)-2^{-s}-\sqrt{4\epsilon-4\epsilon^2}]-(1-q)\delta.\end{aligned}\ ] ] then , the value that gives is such that .therefore , and for this , the gap is } { 1+\frac{\epsilon}{2}-(1 - 2^{-s})b-2^{-s}-\sqrt{4\epsilon-4\epsilon^2 } } -\delta(1-q^*)\\ & \ge & \frac{\frac{\epsilon}{2}[(1 - 2^{-s})(a - b)-2^{-s}-\sqrt{4\epsilon } ] } { 1+\frac{\epsilon}{2 } } -\delta\times1\\ & = & \frac{\frac{\epsilon}{2}[(1 - 2^{-s})(1 - 2^{-t+1})-2^{-s}-\sqrt{4\epsilon } ] } { 1+\frac{\epsilon}{2 } } -\delta\\ & \ge & \frac{\frac{1}{64\times2}(\frac{7}{8}\times\frac{7}{8}-\frac{1}{8 } -\frac{2}{8 } ) } { 1+\frac{1}{64\times2 } } -\delta\\ & = & \frac{25}{64 ^ 2\times2 + 64}-\delta,\end{aligned}\ ] ] where we have taken , , and .hence is in with .it is easy to show that if we run the above protocol in parallel , and arthur takes the majority voting , then the error can be amplified to for any .the proof is almost the same as that of the standard error reduction in qma .one different point is , however , that when the channel is noisy , even the yes witness is not the tensor product of the original witness states , because the noise can generate entanglement among them .this means that unlike the standard qma case , the output of each run is not independent even in the yes case , and therefore the chernoff bound does not seem to be directly used .however , we can show that the probability of obtaining 0 in the run is upperbounded by whatever results obtained in the previous runs .therefore , the rejection probability is upperbounded by that of the case when each run is the independent bernoulli trial with the coin bias , where the standard chernoff bound argument works .( more precisely , the argument is as follows . in the first run ,the probability of obtaining 0 is \le 1-a ] , we can maximize the rejection probability .in the second run , the probability of obtaining 0 is \le 1-a ] , we can maximize the rejection probability .if we repeat it for all runs , we conclude that the the independent bernoulli trial with the coin bias achieves the maximum rejection probability .according to the chernoff bound , the maximum rejection probability is upperbounded by an exponentially decaying function . )in this section , we show eq .( [ noisy_stabilizer ] ) .let us define and .since , where is the set of -bit string such that satisfies , and is the stabilizer operator of on qubit . since is an orthonormal basis , we can write with certain complex coefficients such that . let be the set of stabilizer operators of the graph state .then , it is easy to check for all . therefore , from eq .( [ assumption ] ) , latexmath:[\ ] ] for all , is called a correctable error . the conditions , and , are given as the sets of syndromes on for all correctable errors .an explicit form of the povm depends on the fault - tolerant scheme chosen , and therefore so does the set of correctable errors .most fault - tolerant schemes in the measurement - based model are constructed by ( or at least can be regarded as ) simulation of circuit - based fault - tolerant schemes .for example , fault - tolerant schemes in ref . and refs . can be viewed as circuit - based fault - tolerant schemes using the steane 7-qubit code and the surface code , respectively . 
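returning to the majority - vote amplification argument made earlier in this section : since , for yes instances , the acceptance probability of each run is bounded below independently of the outcomes of previous runs , the rejection probability of the majority vote is dominated by that of an independent bernoulli experiment , and a chernoff - type bound makes it exponentially small in the number of repetitions . the sketch below ( our own ) computes the exact binomial tail together with the simpler hoeffding form of the bound for a few repetition counts .

```python
# a minimal sketch (our own) of the amplification bound: if every run accepts a yes
# instance with probability at least a > 1/2 regardless of earlier outcomes, the
# probability that the majority vote over m (odd) runs is wrong is at most the
# binomial tail below, which decays exponentially in m (chernoff/hoeffding).
from math import comb, exp

def majority_error_bound(a, m):
    # m odd; the vote errs when at most (m-1)/2 runs accept
    tail = sum(comb(m, k) * a**k * (1 - a)**(m - k) for k in range((m + 1) // 2))
    hoeffding = exp(-2 * m * (a - 0.5)**2)
    return tail, hoeffding

a = 2.0 / 3
for m in (11, 51, 101, 201):
    tail, hoe = majority_error_bound(a, m)
    print(f"m={m:4d}  exact tail <= {tail:.3e}   hoeffding bound {hoe:.3e}")
```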
in the fault - tolerant theory for the circuit model , a set of sparse errorsare defined such that they do not change the output of the quantum computation under fault - tolerant quantum error correction .therefore it is straightforward to find a correctable set of errors by directly translating the set of sparse errors in the existing circuit - based fault - tolerant schemes into errors on the graph state in the measurement - based model .a channel is not too strong so that errors are correctable with high probability if here , for natural noises .( in this paper , the proof holds even for sufficiently small constant . )according to the theory of fault - tolerant quantum computation , under a natural physical assumption like spatial locality of noise , if noise strength of each noisy operation is sufficiently smaller than a certain threshold value , the above condition is satisfied .tm is supported by grant - in - aid for scientific research on innovative areas no.15h00850 of mext japan , and the grant - in - aid for young scientists ( b ) no.26730003 of jsps .kf is supported by kakenhi no.16h02211 .hn is supported by the grant - in - aid for scientific research ( a ) nos.26247016 and 16h01705 of jsps , the grant - in - aid for scientific research on innovative areas no .24106009 of mext , and the grant - in - aid for scientific research ( c ) no.16k00015 of jsps .00 r. raussendorf and h. j. briegel , a one - way quantum computer .. lett . * 86 * , 5188 ( 2001 ) .m. mckague , interactive proofs for bqp via self - tested graph states .theory of computing * 12 * , 1 ( 2016 ) .t. morimae , d. nagaj , and n. schuch , quantum proofs can be verified using only single qubit measurements .a * 93 * , 022326 ( 2016 ) .t. morimae , quantum arthur - merlin with single - qubit measurements .a * 93 * , 062333 ( 2016 ) .t. morimae , quantum state and circuit distinguishability with single - qubit measurements .m. hayashi and t. morimae , verifiable measurement - only blind quantum computing with stabilizer testing .lett . * 115 * , 220502 ( 2015 ) .k. fujii and m. hayashi , in preparation .j. f. fitzsimons and t. vidick , a multiprover interactive proof system for the local hamiltonian problem . proc . of the 6th itcs , pp.103 - 112 ( 2015 ) .z. ji , classical verification of quantum proofs .proc . of the 48th stoc , pp.885 - 898 ( 2016 ) .m. a. nielsen and i. l. chuang , quantum computation and quantum information ( cambridge university press , cambridge 2000 ) .d. aharonov , and m. ben - or , fault - tolerant quantum computation with constant error .proc . of the 20th stoc , pp .176 - 188 ( 1997 ) .p. aliferis and d. w. leung , simple proof of fault tolerance in the graph - state model .a * 73 * , 032308 ( 2006 ) .
what happens if in qma the quantum channel between merlin and arthur is noisy ? it is not difficult to show that such a modification does not change the computational power as long as the noise is not too strong so that errors are correctable with high probability , since if merlin encodes the witness state in a quantum error - correction code and sends it to arthur , arthur can correct the error caused by the noisy channel . if we further assume that arthur can do only single - qubit measurements , however , the problem becomes nontrivial , since in this case arthur can not do the universal quantum computation by himself . in this paper , we show that such a restricted complexity class is still equivalent to qma . to show it , we use measurement - based quantum computing : honest merlin sends the graph state to arthur , and arthur does fault - tolerant measurement - based quantum computing on the noisy graph state with only single - qubit measurements . by measuring stabilizer operators , arthur also checks the correctness of the graph state . although this idea itself was already used in several previous papers , these results can not be directly used to the present case , since the test that checks the graph state used in these papers is so strict that even honest merlin is rejected with high probability if the channel is noisy . we therefore introduce a more relaxed test that can accept not only the ideal graph state but also noisy graph states that are error - correctable .
often in genetic applications , and in particular in immunogenetics , interest lies in detecting and quantifying the association of a given genomic region with a trait . here the genomic region might be a gene or a gene complex . typically , no bio - molecular or genomic information is directly available about this genomic region ; instead , a system of dna - markers located close to the region is used . an example of this situation is the study presented by schou _ et al _ ( 2007 , 2008 ) , where the possible association between the susceptibility to several parasites and the major histocompatibility complex ( mhc ) in chickens was investigated using the microsatellite lei0258 as a marker . another example involves the use of a tight system of snp markers to associate putative alleles in the mhc region with susceptibility to psoriasis in humans ( orru _ et al _ , 2002 ) . the association between alleles or haplotypes in the genomic region of interest and the trait is commonly characterized by a regression - like statistical model in which the trait enters as the dependent variable and factors representing the presence of the marker - alleles are included among the explanatory variables . a common practice is to declare association between the trait and the genomic region when at least one of the parameters representing the marker - alleles is statistically significant . in order to establish association between traits and putative haplotypes in the genomic region of interest it is necessary to use a representation of those haplotypes in terms of marker - alleles ( the only genomic information available ) . this representation is crucial to properly characterize the association . we argue that such a representation should be constructed with groups of marker - alleles instead of individual marker - alleles only . informally , our main point is that when considering only groups consisting of single marker - alleles ( as is usually done in a naive approach ) one might fail to represent alleles or haplotypes in the neighbouring genomic region . this leads to a significant reduction of efficiency or even to the complete loss of the capacity to detect certain associations . our approach requires a more complex statistical inference involving a search over a large number of possibilities . we show , however , that the statistical inference is feasible for a moderate number of marker - alleles ( 10 - 15 marker - alleles ) . the text is organized as follows . section [ section.2 ] contains the basic setup , including a description of the genetic and molecular - biological scenario , a formulation of the statistical model in terms of a generalized linear model and some discussion on the proper way of performing inference under those premises .
a phylogenetic based argument justifying our proposalis presented in the last part of section [ section.2 ] .the details of the implementation of the statistical inference are given in section [ section.3 ] and one examples is discussed in section [ section.4 ] .some discussion is provided in section [ section.5 ] .we assume that the data available consist of observations on diploid individuals from a given population for which we have the information on the values of a trait and a range of explanatory variables characterizing the individuals .the interest is in detecting and characterizing a possible association between the trait of interest and alleles or haplotypes in a given genomic region such as a gene or a gene complex which are not directly observable .we will refer to these alleles or haplotypes as the _ haplotypes in the genomic region of interest_. we assume additionally that data on dna markers located close to the genomic region is available .these markers are assumed to be tight linked so that they can be viewed as a single locus with several possible alleles ( _ e.g_. a microsatellite marker or a system of very close snp markers ) , called the _ marker - alleles_. the data available can be thought as composed of triplets , where indexes the individuals , is the value of the trait , is a vector of auxiliar explanatory variables and is a vector representing the values of allele - markers observed for the individual .we introduce below a suitable generalized linear model ( glim ) that will serve as a framework to expose our method .it is straightforward to extend the techniques presented to other regression - like statistical models .the generalized linear model describing the data is specified by choosing a distribution for the trait among the family of the exponential dispersion models ( jrgensen _ et al _ , 1996 ) ( typically , but not necessarily , a normal distribution ) and specifying a relationship between the expected value of the trait and the explanatory variables ( and ) .here we assume that there is a smooth monotone function , called the _ link function _ , and the parameters and such that \ , = \ , { \mbox{{\bf x}}}_i { \mbox{\boldmath{}}}+ \alpha_1 i_{m_{1i}}+ \dots + \alpha_h i_{m_{hi } } \ , , \end{aligned}\ ] ] where is an indicator variable taking the value 1 if the individual carries the marker - allele and 0 otherwise .we assume , for simplicity , that all the alleles act as completely dominant .that is , the effect of an allele in homozygous individuals carrying two copies of the allele is equal to the effect of the allele in heterozygous individuals carrying one copy of the allele .this assumption can easily be modified to include other genetic mechanisms by modifying the definition of the factors representing the marker - alleles ( _ eg _ by defining factors with more than two levels for representing partial dominance ) . 
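a minimal sketch ( our own , with made - up variable names and simulated data rather than anything from a real study ) of the model in ( [ eq.1 ] ) is given below : a generalized linear model for the trait with auxiliary covariates plus one 0/1 indicator per marker - allele , coded as "carries at least one copy" to reflect the complete - dominance assumption . here a gaussian family with the identity link is used , but any exponential - dispersion family and link function fits the same structure .

```python
# a minimal sketch (our own, simulated data and invented column names) of model (eq. 1):
# g(E[y_i]) = x_i beta + alpha_1 I_{m_1 i} + ... + alpha_h I_{m_h i}
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, marker_alleles = 200, ["195bp", "207bp", "219bp", "276bp"]

df = pd.DataFrame({"age": rng.normal(50, 10, n), "sex": rng.integers(0, 2, n)})
for m in marker_alleles:                      # dominant coding: carrier yes/no
    df["I_" + m] = rng.integers(0, 2, n)
df["trait"] = 1.0 + 0.02 * df["age"] + 0.8 * df["I_276bp"] + rng.normal(0, 1, n)

X = sm.add_constant(df[["age", "sex"] + ["I_" + m for m in marker_alleles]])
fit = sm.GLM(df["trait"], X, family=sm.families.Gaussian()).fit()
print(fit.summary().tables[1])                # one alpha per marker-allele indicator
```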
using standard techniques of generalized linear modelsit is possible to make inference on the parameters and .here our interest lies in the parameter representing the effects of the marker - alleles , while is considered as an auxiliary nuisance parameter .the association between the genomic region of interest and the trait is often verified by considering a test of hypothesis given by since represents a vector with all components equal to zero , the null hypothesis above is saying that all the components of the vector are equal to zero while the alternative hypothesis states that _ at least one _ of the marker alleles has a non - vanishing effect .this test can be easily carried out by comparing a model defined by ( [ eq.1 ] ) to a model defined by \ , = \ , { \mbox{{\bf x}}}_i { \mbox{\boldmath{}}}\ , , \end{aligned}\ ] ] using a likelihood ratio type test .rejection of the null hypothesis indicates association of the genomic region in study to the trait of interest .although this simple joint test detects association , it does not help to identify the associated haplotypes in the genomic region of interest .a naive procedure is to identify alleles or haplotypes in the genomic region by looking for the marker - alleles with statistically significant effects .we claim that this can be misleading .it might happen that some of the alleles or haplotypes in the genomic region are in close linkage disequilibrium ( _ i.e. _ are tight linked to ) with more than one marker - allele in such a a way that in some individuals the first marker - allele occur while the second do not occur ( and vice - verse ) . a phylogenetic - based argument presented below shows that this scenario can and indeed often occurs .under this situation , the tests for the effect of each single factor representing each of the marker - alleles would not have biological meaning and would imply in a loss of power due to a misclassification of individuals .therefore , the inference on haplotypes in the genomic region of interest should be performed using sets of marker - alleles instead of only individual marker - alleles . the precise formal statement of this ideais given below .let be pairwise disjoint non - empty subsets of the set of marker alleles ( with ) .define the model given by \ , = \ , { \mbox{{\bf x}}}_i { \mbox{\boldmath{}}}+ \alpha_1 i_{g_{1i}}+ \dots + \alpha_h i_{g_{hi } } \ , , \end{aligned}\ ] ] where is a variable taking value 1 if the individual carries at least one allele - marker belonging to the subset ( for ) .clearly , the simple model given by ( [ eq.1 ] ) is contained in the class of models in the form given by ( [ eq.3 ] ) , since the disjoint subsets can be all constituted of a single element .however , this class of models contains many other models ( any possible combination of non - empty disjoint subsets of ) , which opens the possibility of finding a model of this type that suitably describes the genetic phenomena in play .we discuss in section [ section.3 ] a strategy for searching for the best representation among the many possibilities .a number of special structures naturally appear during the evolution process of a population . as a consequence ,the information that dna - markers carry on neighbour loci is distributed according to some characteristic patterns . in this sectionwe illustrate this general claim using a simple phylogenetic - like construction based on dichotomous branching trees . 
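before turning to the evolutionary argument , here is a minimal sketch ( our own , again on simulated data ) of the joint likelihood - ratio test of ( [ eq.1 ] ) against the covariates - only model ( [ eq.2 ] ) , together with the construction of a group indicator of the kind used in ( [ eq.3 ] ) : a marker - allele group enters the model through a single variable that equals 1 when the individual carries at least one of the alleles in the group .

```python
# a minimal sketch (our own, simulated data): joint LRT of H0: alpha_1 = ... = alpha_h = 0,
# i.e. model (eq. 1) against the covariates-only model (eq. 2), plus a group indicator
# of the kind used in model (eq. 3).
import numpy as np, pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"age": rng.normal(50, 10, n)})
for m in ["195bp", "207bp", "276bp"]:
    df["I_" + m] = rng.integers(0, 2, n)
df["trait"] = 0.02 * df["age"] + 0.8 * df["I_276bp"] + rng.normal(0, 1, n)

X1 = sm.add_constant(df[["age", "I_195bp", "I_207bp", "I_276bp"]])   # model (eq. 1)
X0 = sm.add_constant(df[["age"]])                                    # model (eq. 2)
f1 = sm.GLM(df["trait"], X1, family=sm.families.Gaussian()).fit()
f0 = sm.GLM(df["trait"], X0, family=sm.families.Gaussian()).fit()
lr = 2.0 * (f1.llf - f0.llf)                                         # likelihood ratio statistic
print("LR =", round(lr, 2), " p =", chi2.sf(lr, X1.shape[1] - X0.shape[1]))

# a marker-allele group G = {195bp, 207bp} enters (eq. 3) through a single indicator:
df["I_G"] = df[["I_195bp", "I_207bp"]].max(axis=1)
```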
we will show how some motifs of association involving markers and alleles in the genomic region occur naturally .this will then be used to argue in favor of using a proper representation of the effect of dna - markers and to base the inference using statistical models defined with groups of marker - alleles as in ( [ eq.3 ] ) .consider a locus in the genomic region of interest and two observable markers and in a neighbourhood of .suppose , for exposition simplicity that , and are di - allelic with pairs of alleles , and respectively . assume , moreover , that recombination between these loci can be neglected due to a strong linkage disequilibrium around the region of interest .we can think of each of those alleles as the result of a single event occurred at some point in the evolutionary history of the population in play ( _ e.g. _ a single nucleotide mutation or the duplication of a small genomic region ) .the sequence of events that generated these alleles can be represented by a tree with three dichotomous branching , each branching corresponding to the event that generated one of the alleles .we use the convention that the alleles represented by capital letters are the results of events , while the alleles represented by small letters are the reference alleles , or wild types , corresponding to the states of the loci before the events . a marker - allele _ carries information _ about the locus when the knowledge of the occurrence of determines the allelic form of .if the allele can occur together with the allele and the allele , then is said to be _neutral _ with respect to the locus .for instance , if the branching that formed the locus occurred before the branching of the locus and the branching of occurred in the branch of the tree containing the allele ( see figure 1a ) , then the occurrence of the allele in the locus implies that the locus carries the allele .moreover , in that circumstances the occurrence of the allele does not imply neither that carries the allele nor .therefore the allele carries information on the locus and the allele is neutral with respect to .figure 2a illustrates a scenario where the branching of the locus occurred first , which was followed by the branching of the locus and then the branching of the locus .moreover , the branching of the locus occurred in the branch of the tree containing the allele while the branching of the locus occurred in the branch of the tree containing the marker - allele . under these circumstancesthere are only four possible haplotypes : , , , .the allele only occurs together with the allele , and it carries information about the locus .analogously , the allele carries also information on the locus . since the alleles and might occur together with both the allele and the allele , then both and are neutral with respect to the locus .we can still draw further conclusions about the distribution of the information on the locus .if we want to use a rule for detecting the presence of the allele based on of the occurrence of marker - alleles , then the rule occurs when or occurs would detect two out of the three occurrences of the allele . 
a rule based only on the occurrence of the marker - allele would only detect one out of the three possible occurrences of the allele and therefore would be less efficient in detecting than the rule using the alleles and together .the alleles and are both neutral and therefore the occurrence of the allele in the haplotype can not be detected using the information contained in the marker - alleles .we conclude that under the scenario described by figure 2a , one can only detect the occurrence of the allele using a rule based on the marker - alleles in two out of the three possible haplotypes containing .this maximum possible information recovery is attained only by the rule occurs when or occurs. a different scenario is described in figure 2b where the branching in the locus occurred first , followed by the branching in the locus , in the branch containing the allele , and then the branching in the locus in the branch of containing the allele . in this casethe four possible haplotypes are : , , and .therefore the marker - allele carries information on and the marker - allele is neutral .the alleles and necessarily occur together and both carry information on ( but the same information ) .here there are two cases in which the genotype of the locus is determined by the genotypes of the marker - alleles : occurrence of and occurrence of and implying the occurrence of or respectively .note that the last event occurrence of and is equivalent to the event not occurrence of or . under this scenario there are only two rules based on the marker - alleles genotypes that determine the genotype at the locus , both can be expressed as the effect of a combination of the occurrence of the marker - alleles and . the first rule ( occurrence of implies occurrence of )can also be expressed as the effect of a single allele - marker as in the traditional inference method , but the second rule ( not occurrence of or implies the occurrence of ) requires the use of groups of marker - alleles as in the models described in ( [ eq.3 ] ) to be properly represented in a statistical model .here sticking only to rules based on single markers would represent a loss of half of the possibilities for determining the allele at the locus , that is a loss of half of the information on the genotype of the locus that could be recovered with the knowledge of the marker - alleles .figure 3 displays the branching tree of a more complex scenario composed with the locus in the genomic region of interest and four marker - loci , , , and , with alleles , , , , , , and respectively ( we apply the same notational convention as before ) .the following six haplotypes are formed : , , , , and .therefore , the rule if or or or occur then occurs detects four out of the five haplotypes containing the allele .moreover , under the current scenario , this is the best possible rule that could be constructed with the information on the marker - alleles that we have at hand .although the allele carries information on the locus , this information is redundant since occurs always together with .we can then remove the occurrence of the allele from the rule and still detect the same cases where the allele occurs when using the rule including the marker - alleles of the four marker loci .the discussion above shows that in several situations the use of a naive representation of the effects of the marker - alleles is inefficient and that fully efficiency is obtained when using the approach involving the representation of groups of allele - markers .this 
phenomenon is not restricted to the few scenarios presented here . we claim , without giving a formal proof , that every time there is a branching after the branching that generated the allele in the locus , the new marker - allele formed will carry information on the locus . moreover , if the branching occurs in the branch that contains the wild - type allele of the last branching of a marker locus , then the new marker - allele formed will add new information on the locus that is not contained in the marker - alleles formed before . this progressive gain of information obtained while the new marker - alleles are being formed is only fully realized if we use a rule of the type if or or or ... occurs , then occurs .

[ figure 1 : branching trees for a locus in the genomic region of interest , with a variant allele and a wild - type allele , and one marker locus with two alleles ; the haplotypes formed at each ending branch are displayed at the bottom of each tree . ]

[ figure 2 : branching trees for a locus in the genomic region of interest , with a variant allele and a wild - type allele , and two marker loci ; the haplotypes formed at each ending branch are displayed at the bottom of each tree . ]

[ figure 3 : branching tree for a locus in the genomic region of interest , with a variant allele and a wild - type allele , and four marker loci ; the haplotypes formed at each ending branch are displayed at the bottom of the tree . ]

the strategy we propose for characterizing the association between a genomic region and a trait consists in exhaustively searching all the possible groupings formed with sets of marker - alleles and then choosing the best candidate among the many possibilities . here the best candidate is one that represents all the haplotypes of the genomic region of interest that are associated with the trait and that is not redundant . we define a _ grouping of the marker - alleles _ as a collection of non - empty disjoint subsets of the set of all marker - alleles . the subsets of a grouping are called _ marker - allele groups _ ( mag ) . the idea is to use mags to represent haplotypes in the genomic region of interest that might be associated with the trait . for each possible grouping of the marker - alleles , one statistical model of the type described by ( [ eq.3 ] ) containing factors representing each mag of this grouping is fitted . the grouping that generates the model with the best fit , according to a criterion to be defined below , is chosen to represent the association between the genomic region of interest and the trait . the grouping will be chosen in such a way that it does not contain redundancy , so each mag will represent one haplotype in the region of interest .
the number of mags in this grouping will be the number of detectable haplotypes associated with the trait .the magnitude of the effect of each mag will be then the component of the magnitude of the haplotype that is detectable through the marker - alleles ( which is smaller or equal to the magnitude of the effect of the unknown haplotype ) .this procedure is only feasible if the number of marker - alleles is not very large ( we were able to analyse a data with 9 allele markers in few minutes in an ordinary personal computer ) .it is convenient to make the exhaustive enumeration of all possible groupings of the marker - alleles in the following way .consider the class of all subsets of the set of marker - alleles .formally , is the class of parts of .we associate one statistical model to each element of by defining the model of the type defined by ( [ eq.3 ] ) that incorporate factors representing the groups of marker - alleles present in .for let be the class of all the subsets of containing exactly disjoint non - empty sub - sets .clearly is the disjoint union therefore we can search for the best models by proceeding in two steps : first we find the best model for each ( ) and then we find the best model among the candidates found in the first step .the selection of the best model related to a given ( ) is done by choosing the model with the largest value of the likelihood ( or equivalently the log - likelihood ) function . in this waythe set of values of the likelihoods of the chosen candidate for each is a profile set and plays a rule analogous to the rule of a profile likelihood curve for the number of marker - alleles .denote the model that attains the maximum of the likelihood for a given by and the value log - likelihood function of at the maximum by ( for ) .we refer to these quantities as the _ profile model _ and the _ profile log - likelihood _ of order .the next step in the procedure of inference is to choose the class ( ) that yields the best statistical model .if we assume that the haplotypes in the genomic region of interest are representable in terms of subsets of , then choosing the class that produces the best statistical model is equivalent to infer the number of detectable haplotypes in the genomic region of interest .the profile log - likelihood never decreases when the number of haplotypes assumed in the model increase , _i.e. _ since a model ( ) can be expressed as a sub - model of a model with mags in which a pair of mags present the same effect . as a consequence , it is not reasonable to estimate the number of haplotypes in the region of interest by choosing the with larger log - likelihood .we argue next that maximizing the negative akaike information ( or a variant of it ) is a reasonable procedure for inferring the number of haplotypes in the genomic region of interest .the inequality ( [ eq.5 ] ) does not extract all the information available on the development of the profile likelihood curve as the number of putative haplotypes of the genomic region of interest increases .indeed , the profile log - likelihood curve is expected to increase significantly with the number of putative haplotypes until the number of detectable haplotypes is reached .after that point the profile log - likelihood curve is supposed to remain approximately constant . 
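the search just described can be written down directly . the sketch below ( our own , on simulated data with four marker - alleles ) enumerates , for each order h , every grouping of h pairwise disjoint non - empty subsets of the marker - alleles , fits the corresponding model , records the profile log - likelihood of order h , and compares the orders through the ( profile ) akaike information , aic = 2 k - 2 log ( maximized likelihood ) ; only differences of aic across orders matter for the choice of h . in such runs the profile log - likelihood typically rises sharply up to the true number of groups and then flattens , which is the behaviour the akaike adjustment is meant to detect .

```python
# a minimal sketch (our own, simulated data): exhaustive search over groupings of
# marker-alleles, profile log-likelihood per order h, and selection of h by the aic.
import itertools
import numpy as np, pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
alleles = ["a1", "a2", "a3", "a4"]
n = 300
df = pd.DataFrame({("I_" + a): rng.integers(0, 2, n) for a in alleles})
# simulated truth: one "haplotype" tagged by carrying a1 OR a2, another tagged by a4
df["trait"] = 0.9 * df[["I_a1", "I_a2"]].max(axis=1) + 0.6 * df["I_a4"] + rng.normal(0, 1, n)

def loglik(groups):
    X = pd.DataFrame({"const": np.ones(n)})
    for j, g in enumerate(groups):            # indicator: carries >= 1 allele of the group
        X[f"G{j}"] = df[["I_" + a for a in g]].max(axis=1)
    return sm.GLM(df["trait"], X, family=sm.families.Gaussian()).fit().llf

def groupings(h):
    # all collections of h pairwise disjoint, non-empty subsets of the marker-alleles
    seen = set()
    for labels in itertools.product(range(h + 1), repeat=len(alleles)):   # 0 = unused
        groups = [frozenset(a for a, lab in zip(alleles, labels) if lab == k)
                  for k in range(1, h + 1)]
        key = frozenset(groups)
        if all(groups) and key not in seen:
            seen.add(key)
            yield [sorted(g) for g in groups]

for h in range(1, 4):
    best = max(groupings(h), key=loglik)
    ll = loglik(best)
    # parameter count k = intercept + h group effects + dispersion (a bookkeeping choice;
    # only differences across h matter)
    print(f"h={h}  best grouping={best}  profile logL={ll:.1f}  aic={2*(h+2) - 2*ll:.1f}")
```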
to see that ,consider the situation where there are associated haplotypes in the genomic region .if , then the model fails to represent at least one haplotype and then the profile log - likelihood should increase in a statistically significant way with the addition of the possibility to represent one more haplotype . once attained the number of haplotypes the gain obtained by increasing the capacity of the model to represent one more haplotype vanishes and only marginal gains in the profile log - likelihood are expected ( see figure 4 ) .we assume implicitly here that there are no significant mixtures in the data and that the model is not missing any important explanatory variable ( see figure 4 ) the informal argument above suggests we can infer the number of detectable haplotypes in the genomic region by searching for the point at which the profile log - likelihood curve remains ( approximately ) constant .one way to do that is to subtract a suitable quantity from the profile log - likelihood . by doing that , the new adjusted profile log - likelihood would decrease approximately linearly in the region where the original profile log - likelihood was constant ( _ i.e. _ when the assumed number of haplotypes is larger than the number of detectable haplotypes in the genomic region of interest ) .if the subtracted quantity is not too large , the adjusted profile log - likelihood curve will still increase in the region where the original profile log - likelihood is increasing significantly ( before attaining the number of detectable haplotypes ) .the so called akaike information criterion ( akaike , 1974 , burnham and anderson , 2002 ) explores this idea and is equivalent to subtract the number of parameters in the model from the log - likelihood .in fact the akaike information is defined as minus twice the difference of the log - likelihood and the number of parameters , more precisely , where is the number of parameters in the statistical model , and is the maximized value of the likelihood function for the estimated model . minimizing the akaike information is equivalent to maximizing the log - likelihood adjusted by subtracting the number of parameters in the model .this apparently arbitrary choice for the quantity subtracted from the log - likelihood can be justified as being equivalent to subtract from a likelihood ratio statistic its ( asymptotic ) expected value .alternatively , one might subtract from the profile log - likelihood which would be equivalent to perform a likelihood ratio test for incorporating the representation of an additional haplotype to the current model when working at a significance level of 5% .summing up , the procedure we propose is to maximize the log - likelihood in each class ( which is equivalent to minimize the aic in this class of models since all the models in has by construction the same number of parameters ) , for , and then choose the model with the smaller ( profile ) aic ( or equivalently , with the larger adjusted profile log - likelihood ) .the association between the susceptibility to the helminth _ ascaridia galli _ in chickens and the major histocompatibility ( gene ) complex ( mhc ) was investigated in two recent studies ( schou _ et al _ , 2007 , 2008 ) .these studies used the microsatellite lei0258 ( fulton et al . 
, 2006 ) , which is located in a non - coding region between two contiguous regions of the mhc gene complex ( the b - f / b - l and the bg loci ) , to obtain eight polymorphic marker - alleles here denoted 195bp , 207bp , 219bp , 251bp , 264bp , 276bp , 297bp and 324bp . since recombination within the chicken mhc is very rare ( plachy et al ., 1992 ; miller et al . , 2004 ) , the alleles of the microsatellite lei0258 are expected to be in tight linkage disequilibrium with haplotypes formed by alleles at the bf / bl and the bg loci ( _ i.e. _ the mhc haplotypes ) .moreover , we can discard the possibility of a direct effect of the lei0258 alleles since this marker is located in a non - coding region ( as any microsatellite marker ) .therefore it is reasonable to apply the techniques described above using the alleles of the microsatellite lei0258 as marker - alleles in the set - up described above . in the first study ( schou _ et al _ , 2007 ) the intensity of infection with _a. galli _ was determined for birds of two chicken breeds by counting the number of this worms found in the intestine of each of the birds examined .the counts were categorized as , zero , low ( up to 3 counts ) , intermediate ( 4 to 10 counts ) and high ( more than 10 counts ) .the cut - off points used for defining the categories above were chosen in such a way that the losses of the klback - leiber information about the counts due to a discretization were minimized .the association between the intensity of infection and the mhc haplotypes was studied by applying a baseline - category logits model for multinomial distributed data ( agresti , 1990 ) using the zero - category as a reference .inference in these models can be performed by fitting three logistic models constructed with a common reference category ( agresti , 1990 ) which can be done by using standard generalized linear models ( with a binomial distribution and a logistic link ) .the standard method was used in this study and an association was declared if the effect of a marker - allele , in the presence of the other marker - alleles , was statistically significant . using this procedureit was found that the occurrence of the marker - allele 276bp was associated with an increased resistance .no further associations were found with the standard procedure .a second study ( schou _ et al _ , 2008 ) was independently performed with different birds of the same two chicken breeds . in thisstudy the birds were inoculated with _a. galli _ under controlled experimental conditions and the fecal excretion of _galli _ eggs was monitored .each animal was classified based on the counts of eggs as presenting zero , low , intermediate or high infection level .a baseline - category logits model was applied but , differently from the first study , using the strategy of searching for marker - allele groups ( mag ) .three marker - allele groups , denoted mag-1 , mag-2 and mag-3 , were identified and found to be associated with the intensity of infection with _ a. galli_. 
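the baseline - category logits model mentioned above can , as noted in the text , be fitted as separate binary logistic models sharing the zero category as reference . the sketch below ( our own , on simulated data ; the mag indicators and category cut - offs are invented for illustration and are not the estimates of the study ) shows this construction with three marker - allele - group indicators as explanatory factors .

```python
# a minimal sketch (our own, simulated data): baseline-category logits fitted as three
# binary logistic GLMs (low / intermediate / high vs. the reference category "zero"),
# each including the marker-allele-group indicators mag1, mag2, mag3.
import numpy as np, pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({f"mag{j}": rng.integers(0, 2, n) for j in (1, 2, 3)})
# simulated infection level; mag2 pushes towards higher categories, mag1/mag3 towards "zero"
score = 0.8 * df["mag2"] - 0.6 * df["mag1"] - 0.6 * df["mag3"] + rng.normal(0, 1, n)
df["level"] = pd.cut(score, [-np.inf, 0.0, 0.6, 1.2, np.inf],
                     labels=["zero", "low", "intermediate", "high"])

X_cols = ["mag1", "mag2", "mag3"]
for cat in ["low", "intermediate", "high"]:
    sub = df[df["level"].isin(["zero", cat])]
    y = (sub["level"] == cat).astype(float)
    X = sm.add_constant(sub[X_cols].astype(float))
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    print(cat, "vs zero:", dict(zip(fit.params.index, np.round(fit.params.values, 2))))
```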
figure [ figura.4 ] displays the profile log - likelihood and the profile akaike information as functions of the number of mags assumed to be associated with the infection level . a joint likelihood ratio test indicated a statistically significant effect of these three mags on the intensity of infection ( p=0.0013 ) . mag-1 was formed by the lei0258 alleles 297bp and 324bp ; mag-2 was composed of the alleles 195bp , 207bp , 219bp and 264bp ; and mag-3 contained only the allele 276bp . detailed analyses revealed that animals carrying mag-1 or mag-3 presented higher resistance to _ a. galli _ , while mag-2 was associated with augmented susceptibility ( schou _ et al _ , 2008 ) . an _ a posteriori _ analysis of the data of the first study using the strategy of searching for marker - allele groups yielded the same significant marker - allele groups , with mag-1 and mag-3 associated with resistance and mag-2 associated with susceptibility to _ a. galli _ ( reported in schou _ et al _ , 2008 ) . note that when applying the standard strategy only the effect of the marker - allele 276bp was found significant , which is equivalent to detecting the marker - allele group mag-3 ( composed only of the allele 276bp ) . none of the marker - alleles composing the marker - allele groups mag-1 and mag-2 ( _ i.e. _ 297bp , 324bp , 195bp , 207bp , 219bp and 264bp ) presented an individually significant effect , which illustrates the loss of power to detect association when applying the standard modelling strategy .

[ figure 4 : negative profile akaike information and alternative negative profile akaike information as functions of the number of marker - allele groups ; the alternative criterion corresponds to testing the incorporation of one more marker - allele group at a 5% level of significance for each successive increase , and the value 10 was added to all values of the negative alternative profile akaike information for graphical convenience . both criteria present a maximum at 3 , suggesting the presence of 3 detectable marker - allele groups . ]

we presented a strategy for performing statistical inference that allows one to represent the occurrence of non - observable alleles or haplotypes in a genomic region of interest in terms of a range of observable marker - alleles at highly linked loci . the kernel idea presented here is that the natural unit with which to build statistical models in this context is not the individual marker - allele but the group of marker - alleles . as argued , due to the way haplotypes in a genomic region of relatively small size ( small enough to allow us to ignore recombination ) are formed during the evolution of a population , certain haplotypes formed with marker - alleles will occur naturally in tight linkage disequilibrium with ( non - observable ) haplotypes in the genomic region .
The way these marker-allele haplotypes are constituted implies that detection rules based on indicator functions of groups of marker-alleles are optimal, in the sense that they allow one to extract the maximum possible amount of information contained in the marker-alleles. Naive representations constructed exclusively with single-element groups of marker-alleles are bound to use the information inefficiently (if not to destroy it completely), as illustrated in the example with real data. The techniques presented here involve fitting many models and selecting a best candidate among the (very) many possibilities, following a sequence of models with an increasing number of assumed marker-allele groups. This ordering is chosen to facilitate the inference of the number of marker-allele groups with a _detectable_ effect on the trait of interest. However, this brute-force exhaustive search is not feasible for a large number of marker-alleles, since the number of possibilities to be checked increases very rapidly with the number of marker-alleles. An alternative is to use Monte Carlo based algorithms for maximization in a discrete parameter space, such as simulated annealing or genetic algorithms. We showed that the classical criterion of maximizing the log-likelihood cannot be used to estimate the number of detectable marker-allele groups, since the likelihood function cannot decrease with the number of MAGs. A way around this is to use the Akaike information criterion, which penalizes the log-likelihood of models with too many (unnecessary) parameters. Other forms of penalized likelihood could be applied, for instance the alternative information criterion proposed in the text, which is equivalent to performing a likelihood ratio test for the incorporation of one extra MAG into the model (at a 5% level of significance). The choice of penalized likelihood method depends on the type of basic statistical model used. We used generalized linear models to explain our ideas, but we stress that other models could have been used without essentially changing the procedure described here. Indeed, the example presented in fact uses a slight extension of generalized linear models. Probably one of the most relevant extensions would be the incorporation of random components allowing one to represent population structure, co-ancestry and polygenic effects. Another possibility to be explored is the incorporation of information on ancestors' genotypes and other mechanisms of inheritance. In conclusion, the techniques presented here are flexible and relatively easy to implement using classical statistical models and standard software.

Fulton, J.E., Juul-Madsen, H.R., Ashwell, C.M., McCarron, A.M., Arthur, J.A., O'Sullivan, N.P. and Taylor, R.L., Jr. (2006). Molecular genotype identification of the Gallus gallus major histocompatibility complex. _Immunogenetics_ *58* (5-6), 407-421.

Miller, M.M., Bacon, L.D., Hala, K., Hunt, H.D., Ewald, S.J., Kaufman, J., Zoorob, R. and Briles, W.E. (2004). Nomenclature for the chicken major histocompatibility (B and Y) complex. _Immunogenetics_ *56* (4), 261-279.

Orru, S., Giuressi, E., Casula, M., Loizedda, A., Murru, R., Mulargia, M., Masala, M.V., Cerimele, D., Zucca, M., Aste, N., Biggio, P., Carcassi, C. and Contu, L. (2002). Psoriasis is associated with a SNP haplotype of the corneodesmosin gene (CDSN). _Tissue Antigens_ *60* (4), 292-298. doi:10.1034/j.1399-0039.2002.600403.x

Schou, T.W., Permin, A., Juul-Madsen, H.R., Sørensen, P., Labouriau, R., Nguyen, T.L.H., Fink, M. and Pham, S.L. (2007). Gastrointestinal helminths in indigenous and exotic chickens in Vietnam: association of the intensity of infection with the major histocompatibility complex. _Parasitology_ *134*, 561-573.

Schou, T.W., Labouriau, R., Permin, A., Christensen, J.P., Sørensen, P., Cu, H.P., Nguyen, K.N. and Juul-Madsen, H.R. (2008). MHC haplotype and susceptibility to experimental infections (_Salmonella enteritidis_, _Pasteurella multocida_ or _Ascaridia galli_) in a commercial and an indigenous chicken breed. Paper III in the PhD thesis "Genetic diversity of poultry in extensive production systems and its implications for disease resistance" by Torben Wilde Schou, Department of Veterinary Pathobiology, Faculty of Life Sciences, University of Copenhagen, Denmark (submitted).
We consider the problem of detecting and estimating the strength of association between a trait of interest and alleles or haplotypes in a small genomic region (e.g. a gene or a gene complex) when no direct information on that region is available but the values of neighbouring DNA markers are at hand. We argue that the effects of the non-observable haplotypes of the genomic region can and should be represented by factors corresponding to disjoint groups of marker-alleles. A theoretical argument based on a hypothetical phylogenetic tree supports this general claim. The techniques described allow one to identify, and to infer the number of, detectable haplotypes in the genomic region that are associated with a trait. The methods proposed use an exhaustive combinatorial search coupled with the maximization of a version of the likelihood function penalized for the number of parameters. This procedure can easily be implemented with standard statistical methods for a moderate number of marker-alleles. Key words: _generalized linear models (GLIM), phylogenetic tree, Akaike information, genetic association, MHC_
according to the evolutionary theory the main mechanisms driving the evolution of natural populations are darwinian selection , genetic drift , mutation and migration .however , these mechanisms alone do not explain the emergence of more complex life forms from simpler units .the increase in complexity at the organism level is supposed to be related to the emergence of cooperation .cooperation goes against the nature of the individuals who are supposed to act selfishly and favor their own genes .the maintenance of the cooperative behavior is still an intriguing and open topic in evolutionary biology .one individual is said to display cooperative behavior if it provides a benefit to another individual or to a group at the expense of its own relative fitness . on the other hand , in a defecting behaviorthe recipient gets the benefit of the interaction without reciprocity and paying no cost for the action .the interaction of rna phage , a viral genotype that synthesises large quantities of products as a common good , and its mutant , a genotype that synthesises less but specialises in sequestering a larger share of the products made by the others , can be described as a cooperator - defector relationship .another example is seen in the evolution of metabolic pathways .pfeiffer et al . observed similar relationship when studying the trade - off between yield and rate in heterotrophic organisms .the main source of energy for survival and reproduction of heterotrophic organisms comes from atp ( adenosine triphosphate ) .the basic raw material for the synthesis of atp is glucose .the conversion of glucose in atp occurs mainly through one of two metabolic pathways : fermentation and respiration .although respiration is more efficient , i.e. , more energy produced per glucose unit ( high yield ) , it is slower than fermentation . in its turn ,fermentation inefficiently produces atp but can achieve high rate of growth ( high rate ) at the expense of depletion of the resource .therefore the conversion of resource into atp , and consequently growth , is driven by a trade - off between yield and rate .this trade - off gives rise to a social conflict .empirical studies demonstrate that the trade - off between resource uptake and yield is a common place in the microbial world , which occurs due to biophysical limitations preventing organisms to optimize multiple traits simultaneously .independent experiments have reported the existence of a negative correlation between rate and yield .the respiration mode of processing glucose was made possible about 2.5 billion years ago during the great oxygenation event that introduced free oxygen in the atmosphere .one important issue is to understand under which conditions the efficient mode of metabolism , which displays a lower growth rate , could arise and fixate .previous studies have tried to understand the mechanism that can favor the fixation and maintenance of the efficient strain . inspatially homogeneous populations the most influential factor determining the fate of the population is the resource influx rate . under resource scarcitythe efficient strain tends to dominate , while under the scenario of abundant resource the inefficient strain thrives . 
besides, it has been shown that under the scenario of spatially structured populations or populations structured in groups a more favorable scenario for the fixation of the efficient trait is created .following those investigations the coexistence is found under specific conditions in spatially structured populations , but not found in well - mixed populations .though , an experimental study with yeast populations in batch culture showed that , in the presence of a toxic metabolite , coexistence can be achieved . in the current work ,we survey the conditions under which a toxic metabolite can promote a stable coexistence of two strains that differ in their metabolic properties .this study is carried out by finding the solutions and performing a stability analysis of a discrete - time model .a well - mixed population of variable size with two competing strains is considered .the competing strains are described as cooperator , denoted by , or defector , represented by .the former makes efficient use of resource , as it converts resource into atp at high yield .though strain has a low uptake rate of resource . on the other hand , strain , defector ,is characterized by a rapid metabolism , and hence it can consume resource at high rate , however its machinery of conversion of resource into atp is inefficient .the influx of resource into the system ( e.g. glucose ) , hereafter , is kept constant over time and directly influences the population size at stationarity .the population size is not uniquely determined by the resource influx rate but also by the population composition , as the strains have distinct metabolic properties , thus changing the rate at which the cells divide . at each generationevery individual goes through the following processes : resource uptake , conversion of the caught resource into energy , which can lead to cell division , and stochastic death .first there is a competition for the resource . at this stage ,strains of type , characterized by a rapid metabolism , are stronger competitors , as they present a larger rate of consumption and thus seize a larger portion of the shared resource than strains type .the amount of resource captured by each strain of type is whereas denotes the amount of resource captured by a strain of type .the quantities and are , respectively , the consumption rates of strains and , while and are their population numbers . from eqs .( [ j ] ) and ( [ j1 ] ) follows that , as required . since by definition a defecting strain displays a larger consumption rate , .empirical studies suggests that the consumption rate of defectors is ten - fold higher than the consumption rate of cooperators . in the subsequent process , the resourceis then converted into energy ( atp ) , and so increasing the individuals energy storage . 
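The resource-uptake step just described can be written out explicitly. The forms of eqs. (j) and (j1) are not reproduced in the text, so the sketch below is one consistent reading of the description: each individual receives a share of the influx s proportional to its consumption rate, so that the total captured resource sums to s and the per-capita shares are in the ratio of the consumption rates.

```python
# Sketch of the resource-partition step (consumption rates and the ten-fold
# ratio mentioned in the text are illustrative assumptions).
def captured_resource(s, n_d, n_c, gamma_d, gamma_c):
    total_demand = gamma_d * n_d + gamma_c * n_c
    j_d = s * gamma_d / total_demand   # per-capita share of a defector (fast uptake)
    j_c = s * gamma_c / total_demand   # per-capita share of a cooperator (slow uptake)
    return j_d, j_c

# Sanity checks: n_d * j_d + n_c * j_c == s, and j_d / j_c == gamma_d / gamma_c,
# with gamma_d roughly ten-fold larger than gamma_c as suggested empirically.
```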
the increase in the internal energy of a given individual , , of type ,is given by the functions with determine how efficiently energy is produced from the captured resource for each strain type .as functional responses , , we choose a holling s type ii function , which displays a decelerating rate of conversion of resource into energy and follows from the assumption that the consumer is restricted by its capability to process the resource .as such , we propose as functional responses [ parametrization ] as they should be , the functions depend on the amount of seized resource , , , and the exponents , , tune the efficiency of the process of conversion of resource into energy .the smaller the is , the more inefficient the metabolic pathway becomes . as aforementioned , the efficient strain converts resource into internal energy efficiently but at a low consumption rate , while the one that metabolizes fast , strain , exhibits an opposed behavior .the above scenario is simulated by holding , assuring that strategy has a larger uptake rate , together with the condition .the latter requirement aims to warrant that the strategy is less efficient in producing energy from the resource .if is large even a small amount of resource already provides nearly the maximum amount of energy that can be carried out in one life cycle .the fact that the rapid metabolic pathway is inefficient does not ensue that the strain will not end up producing a amount of energy higher than strain .indeed , this situation can be achieved in case a large amount of resource is captured . as an example we mention unicellular eukaryotes like yeast , which concomitantly uses two different pathways of atp production , working as a respiro - fermenting cell .therefore it is of particular interest the condition , which allows the strain to end up with a larger amount of energy at the expense of a big amount of resource .under this situation , strain type faces the worst environmental situation and in the case it can be selected for in such conditions it will certainly thrive in more favourable scenarios .the number of individuals ( cells ) is not fixed but variable over time , being determined by the intrinsic dynamics of the population and the influx of resource into the system .every generation , each cell divides into two identical cells whenever its energy storage ( group index , ) reaches a threshold value , .the daughter cells are endowed with half of the energy of the parental cell .the model assumes that individuals can also die spontaneously with a small probability per generation .the model also assumes that the growth rates of both strain types are affected due to byproducts ( toxin ) produced by the inefficient strain .the effect of the toxins on strains and are not the same being simulated as a reduction in the growth rate which is density dependent , and for strains and , respectively .the assumption that only the inefficient strain produces the toxin is quite reasonable since the fermentation process gives rise to many byproducts , such as alcohol and acethic acid , which carry substantial amount of free energy . on the other hand, the respiration process leads to the production of water and carbon dioxide , which are easily released by the cell .as aforesaid , the model considers two types of strains : strain , which displays a high yield in energy production , and a second strain which is characterized by a high uptake rate of resource though achieves a lower yield in the process of energy conversion . 
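A short numerical sketch of the conversion of captured resource into energy is given below, using the saturating exponential form that appears in the growth terms of eqs. (modeleq); the parameter values are illustrative only. It shows the trade-off stated above: with a large efficiency exponent even a small amount of resource already yields nearly the maximum energy gain, while the inefficient strain only catches up if it seizes a much larger share of the resource.

```python
# Sketch of the saturating energy-conversion function f_i(J) = a_i*(1 - exp(-alpha_i*J)).
import numpy as np

def energy_gain(j, a, alpha):
    """Energy produced from a captured amount of resource j."""
    return a * (1.0 - np.exp(-alpha * j))

j = np.linspace(0.0, 5.0, 6)
print(energy_gain(j, a=1.0, alpha=4.0))   # efficient strain: saturates quickly
print(energy_gain(j, a=1.0, alpha=0.5))   # inefficient strain: needs much more resource
```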
additionally , the strain generates the toxin that directly reduces the growth rate of the two strain types . as an individual replicates at a rate which is proportional to the rate the cell generates atp , in a discrete - time model formulation the population numbers , and , can be written as [ modeleq ] \\ n_c(t+1 ) & = n_c(t ) \left[1+a_c \left(1-\text{e}^{-\alpha_c s_c}\right)-\nu -\beta\ , n_d(t)\right].\end{aligned}\ ] ] in the above equations , represents the population size of strain type at time , and is a necessary rescaling as a cell only divides after its energy storage surpasses the energy threshold .the other parameters are as defined before .the set of equations ( [ modeleq ] ) has several solutions . as standard , a solution is the set of values that satisfies the conditions and .the simplest ones are those corresponding to isogenic populations at the steady state , i.e. , either strain or strain remains with the exclusion of the other one .the extinction of both strain types can also be achieved under very simple assumptions , as discussed later .of particular interest to us , is the solution that guarantees a stable coexistence of both strain types .actually , this is the greatest motivation of the current study . below we will present the different situations and possible solutions . a stability analysis to delimit the region of the parameter space at which the solutions are stableis also carried out .the possible solutions of eqs .( [ modeleq ] ) are : i ) and ; ii ) and ; iii ) and ; and the coexistence solution iv ) and .a trivial solution is ; . calculating the eigenvalues of the jacobian matrix of the systemwe verify that this solution is stable only when and .these conditions just mean that extinction is reached when the death rate is larger than the growth rates of both strain types .if the strain inexists at the stationary regime , i.e. , it can be easily shown that the above solution is the same found in ref . , as in the absence of defectors no toxin is produced .an evolutionary invasion analysis enables us to determine the conditions under which the solution is evolutionary stable , i.e. , once the efficient trait is established it can not be invaded by the defecting trait . in order to accomplish this , the jacobian of the system ( [ modeleq ] )is evaluated at and .therefore , the jacobian matrix becomes & 0 \\ \frac{s\ , \alpha_c\ , \beta } { \log \left(1-\frac{\nu } { a_c}\right ) } + \epsilon a_c(1-\frac{\nu}{a_c } ) \log \left(1-\frac{\nu } { a_c}\right ) & 1 + \left(a_c-\nu\right ) \log \left(1-\frac{\nu } { a_c}\right ) \end{array } \right)\label{jacobiannd0}\end{aligned}\ ] ] the stability of the solution is determined by the eigenvalues of the jacobian matrix .due to its form ( eq . [ jacobiannd0 ] ) ) , the eigenvalues are simply given by its diagonal elements , i.e. \label{eigenvalue1}\\ \lambda_2 & = 1 + \left(a_c-\nu\right ) \log \left(1-\frac{\nu } { a_c}\right ) \label{eigenvalue2}\end{aligned}\ ] ] where and . to be stable, a solution in a discrete - time formulation requires that for .if we assume the biologically plausible assumption that , these eigenvalues can be approximated as = 1-\nu\left(1-\epsilon \delta\gamma\right)\label{eigenvalue1 - 1}\\ \lambda_2 & \approx 1 + \left(a_c-\nu\right ) \left(-\frac{\nu } { a_c}\right)=1-\nu\left(1-\frac{\nu}{a_c}\right ) \label{eigenvalue2 - 1}\end{aligned}\ ] ] where . 
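Before turning to the stability conditions below, the discrete-time map (modeleq) can also be probed directly by iteration. The sketch that follows uses one reading of the per-capita resource shares consistent with the Jacobians quoted later (s_d = εs/(n_c + εn_d), s_c = s/(n_c + εn_d)); all parameter values are illustrative, not fitted, and the numerical Jacobian check simply restates the criterion that a fixed point of a discrete-time map is stable when all eigenvalues have modulus below one.

```python
# Sketch: iterate eqs. (modeleq) and check local stability of the resulting state.
import numpy as np

p = dict(s=50.0, eps=10.0, a_d=0.5, a_c=0.5, alpha_d=0.05, alpha_c=0.5,
         nu=0.01, eta=1e-4, beta=5e-5)        # illustrative values only

def step(n_d, n_c, p):
    denom = n_c + p["eps"] * n_d
    s_d = p["eps"] * p["s"] / denom
    s_c = p["s"] / denom
    n_d_next = n_d * (1 + p["a_d"] * (1 - np.exp(-p["alpha_d"] * s_d))
                      - p["nu"] - p["eta"] * n_d)
    n_c_next = n_c * (1 + p["a_c"] * (1 - np.exp(-p["alpha_c"] * s_c))
                      - p["nu"] - p["beta"] * n_d)
    return n_d_next, n_c_next

n = np.array([10.0, 10.0])
for _ in range(20000):                        # iterate towards a (possible) fixed point
    n = np.array(step(n[0], n[1], p))

h = 1e-6
J = np.empty((2, 2))
for j in range(2):                            # finite-difference Jacobian at the fixed point
    dn = n.copy(); dn[j] += h
    J[:, j] = (np.array(step(*dn, p)) - np.array(step(*n, p))) / h
print(n, np.abs(np.linalg.eigvals(J)))        # stable iff all moduli are below 1
```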
as is needed to assure stability of the solution , looking at the eq .( [ eigenvalue1 - 1 ] ) some restrictions on the parameter values are settled : the second condition is always automatically verified as and . on the other hand, the condition ( [ firstcondition ] ) imposes that in order to turn the population of efficient strains evolutionarily stable against invasion of the selfish strain .the condition matches the one derived in ref . .the second eigenvalue lays down additional constraints to ensure stability of the solution the condition [ thirdcondition ] has a simple interpretation : if the probability of death exceeds the maximum reproduction rate of the strain , the solution is no longer stable and the population is doomed to extinction . the last condition ( [ fourthcondition ] ) is always satisfied as is much smaller than one .we notice that the stability of the solution with and is independent of the effect of the toxin on the strains , i.e. independent of and .this happens because the toxin is only produced by strain . introducing a small amount of bring an infinitesimal amount of toxin which is not relevant in a first order approximation ., with , , , , and .the population is in yellow and the population in blue .the full lines denote stable equilibria and the dashed lines unstable equilibria . to the left of blue dotted line a population of pure is stable and to the right of yellow dotted line a population of pure is stable . in the middle regionthe population is stable only in coexistence.,scaledwidth=45.0% ] the population size of the inefficient strain at equilibrium , and in the absence of strain , is given by the solution of the following equation =1-\frac{\nu + \eta\ , \hat{n}_d}{a_d } \label{transcendental - nd}.\end{aligned}\ ] ] the solution of this transcendental equation can be obtained numerically . since the left - hand side is a monotonically increasing function of whereas the right - hand side starts at and monotonically decreases the equation has a positive solution whenever .when the solution is no longer stable and population becomes extinct . the jacobian matrix is now given by \left[1+\frac{s\ , \alpha_d}{\hat{n}_d}\right]\right\ } & -\frac{a_d}{\epsilon}\frac{\alpha_d s}{\hat{n}_d } \exp\left[-\frac{s\ , \alpha_d } { \hat{n}_d}\right]\\ 0 & 1-\nu-\beta \hat{n}_d + a_c\left\{1-\exp\left[-\frac{s\ , \alpha_c}{\epsilon\ , \hat{n}_d}\right]\right\ } \\\end{array } \right)\label{jacobiannc0}\end{aligned}\ ] ] where is the numerical solution of eq .( [ transcendental - nd ] ) , and its eigenvalues are simply \right\}\\ \lambda_2 & = 1-\nu-\beta \hat{n}_d + a_c\left\{1-\exp\left[-\frac{s\ , \alpha_c}{\epsilon\ , \hat{n}_d}\right]\right\}.\end{aligned}\ ] ] in figure [ fig : donly ] we numerically solve eq .( [ transcendental - nd ] ) and present in the diagram vs the region of its stability .as expected , the toxin has a harmful effect on strain leading to reduced population sizes at equilibrium as increases .we observe the existence of a threshold value for , , above which the population can no longer be sustained .although the population size of individuals type is not influenced by the parameter ( see eq . 
[ transcendental - nd ] ) , the stability of the solution is severely affected .the larger the effect of the toxin on strain , , the wider is the region of stability of the solution .the green region , denoting the conditions that ensure the stability of the present solution is surrounded by two gray areas .the one to the right means that the solution and is no longer evolutionarily stable , whereas the coexistence solution becomes stable , as we will see next .the gray region to the left of the green one is not physically meaningful .for those set of values of and , the net maximum growth rate of the strain becomes negative , i.e. , .this occurs because for very small the population size of defectors at equilibrium can be large , and so does the term . [cols="^,^,^ " , ] there is at least another special case in which an exact coexistence solution can be determined , i.e. , .actually , this is a very restricted situation , because the means that the maximum energy produced by the efficient strain must be as large as the performance of the strain by uptaking the resource .this is certainly not the case for fermento - respiring cells .though , the condition enables us to simplify eqs .( [ coexistence1 ] ) and ( [ coexistence2 ] ) , thus obtaining }-\epsilon\,\nu\frac{\gamma-1}{\eta-\gamma\beta}.\end{aligned}\ ] ] it is worth noticing that when and , the coexistence is not possible as in this case equals which is always negative and so not physically sound .now we explore the dependence of the coexistence solution and its stability in terms of the parameters and which describe the effects of the toxin on the strains .figure [ fig : stability_vs_sugar ] shows the regions at which the coexistence solution is stable , here represented by the green area .the gray region represents the set of the parameter space at which the solution still makes sense though it is not stable . in the graph the dependence of the stability region on the amount of resourceis studied .interestingly , we observe that the stability region grows as the resource influx rate increases .this is a quite striking outcome since in the absence of the toxin , the increase of the amount of resource available to the system favours the fixation of the selfish strain , while harsh envorinmental conditions tends to favour the cooperative trait .this outcome evinces the role played by the toxin production as the coexistence is even enhanced as the resource becomes more abundant , thus warranting the maintenance of the efficient strain . in figure [ fig : population_size_analytical ] we see a heat map of the fraction and size of the population of both strains in terms of and in the set of parameters at which coexistence solution is stable . as expected , as increased the fraction and size of the population of the strain is reduced . 
the same happening to the population of strain as rises .although we also notice that , for fixed , an augment of favours the population of strain , the opposite situation , fixing and augmenting promotes a much higher variation in the population size of the efficient strain .we have studied the possible solutions ( extinction , only strain c , only strain d and coexistence ) of a discrete - time model that describes the evolution of two competing metabolic strategies .the model assumes that the population is unstructured .one of the strains produces a toxin that affects the net growth rate of both strains , yet with different strength .as empirically found , the model predicts that the coexistence between a high yield strain and a high rate strain is possible .this outcome corroborates the fact that structuring is not the only possible mechanism promoting coexistence . when the effect of the toxin on the efficient strain is negligible , the coexistence between the two strain types is possible for intermediate levels of the relative efficiency , , of the inefficient strain .as expected , for very low values of the efficient strain dominates while for high values of the inefficient strain dominates .the novelty comes about with the observation of a coexistence regime which takes place at intermediate values of , which is lacking if the toxin is not considered . in the following we explored a more general scenario in which the toxin also attenuates the growth rate of the efficient strain . at this point ,the analysis relied on values of relative efficiency at which the cooperative strain is not favored in the absence of toxin , and extensively probed the outcome of the model in the parameter space vs .a striking finding of the model is the observation that the resource influx rate has a prominent role in driving the fate of the population . 
in a situation where the resource is scarcethe population evolves to a pure population of only individuals of type ( inefficient strain ) and the coexistence solution is always unstable .however , as the resource influx rate rises a coexistence domain emerges .the domain of the coexistence solution starts just after the pure strain solution becomes unstable and widens out as the resource influx rate is augmented .this is quite surprising as the plenty of resource usually favors the defecting behavior .however , although a large amount of resource into the system means that the inefficient strain ends up seizing a large amount of resource and consequently has an immediate positive effect on the growth rate , subsequently it also enhances the production of byproducts ( toxin ) .our results corroborates the empirical observation that the coexistence between competing metabolic strategies is a possible outcome , and proves the effectiveness of the metabolite intermediate as the main driving mechanism for this purpose .counterintuitively , our results evince that this coexistence is enhanced in a scenario of plentiful of resource .prac is partially supported by conselho nacional de desenvolvimento cientfico e tecnolgico ( cnpq ) , and also acknowledges financial support from fundao de amparo cincia e tecnologia do estado de pernambuco ( facepe ) .fff gratefully acknowledges financial support from fapesp ( fundao de amparo pesquisa do estado de so paulo ) .jl is supported by fundao de amparo cincia e tecnologia do estado de pernambuco ( facepe ) and aa has a fellowship from conselho nacional de desenvolvimento cientfico e tecnolgico ( cnpq ) .the general form of the jacobian matrix for the analytical approximation is a with entries \left[1+\alpha_d s\frac{n_d(t ) \epsilon^2}{[n_c(t)+\epsilon n_d(t)]^2}\right]\right\}-2 \eta\,n_d(t ) \\ j_{12}&=-n_d(t)\left\{a_d \alpha_d\,s\frac{\epsilon \exp\left[-\alpha_d\frac{\epsilon s}{n_c(t)+\epsilon n_d(t)}\right]}{[n_c(t)+\epsilon n_d(t)]^2}\right\ } \\j_{21}&=-n_c(t ) \left\{a_c \alpha_c s\frac { \epsilon \exp\left[-\alpha_c\frac{s}{n_c(t)+\epsilon n_d(t)}\right]}{[n_c(t)+\epsilon n_d(t)]^2}+\beta \right\ } \\ j_{22}&=1-\nu + a_c \left\{1-\exp{\left[-\alpha_c\frac{s}{n_c(t)+\epsilon n_d(t)}\right ] } \left[1+\alpha_c s\frac{n_c(t)}{[n_c(t)+\epsilon n_d(t)]^2}\right]\right\}-\beta \ , n_d(t)\end{aligned}\ ] ]
Understanding why strains with different metabolic pathways that compete for a single limiting resource can coexist is a challenging issue from a theoretical perspective. Previous investigations rely on mechanisms such as group or spatial structuring to achieve a stable coexistence between competing metabolic strategies. Nevertheless, coexistence has been experimentally reported even in situations where it cannot be attributed to spatial effects [Heredity *100*, 471 (2008)]. According to that study, a toxin expelled by one of the strains can be responsible for the stable maintenance of the two strain types. We propose a resource-based model in which an efficient strain with a slow metabolic rate competes with a second strain type that presents a fast but inefficient metabolism. Moreover, the model assumes that the inefficient strain produces a toxin as a byproduct. This toxin affects the growth rates of both strains, with different strengths. Through an extensive exploration of the parameter space we determine the situations in which the coexistence of the two strains is possible. Interestingly, we observe that the resource influx rate plays a key role in the maintenance of the two strain types. In a scenario of resource scarcity the inefficient strain is favored, but as the resource influx rate is augmented coexistence becomes possible and its domain is enlarged.
the formation and evolution of a protein family is a problem of the same importance as that of protein folding and unfolding .we also think that the last problem will be solved or at least treated on a more general perspective , by concentrating the theoretical research on the joint formation and evolution of the whole set of proteins of each protein family .a probabilistic analysis of a model for protein family formation is then most welcome which is able to unveil the specific nature of this protein family formation process ( pffp ) and the consequent folding / unfolding process .a pictorial representation of the pffp to be seen as a game for the upsurge of life and its homeostasis can be introduced by thinking on consecutive trials of icosahedra , each face of them corresponding to a different amino acid .this sort of ideas has been already introduced on the scientific literature of the entropy maximization principle .our desiderata is then to translate the information contained on biological almanacs ( protein databases ) in terms of random variables in order to model a dynamics for describing the folding and unfolding of proteins , this means that we think on pffp instead of the evolution of a single protein as the key to understand the protein dynamics . in section 2, we introduce a description of the sample space of probability to be used in the calculations of the present contribution .we have also made a digression on generalized proposals of joint probabilities which should be used on future generalizations of the present work . in section 3 ,a poisson statistical receipt is derived from applications of a master equation method in order to derive one adequate probability distribution of our pffp problem in section 4 . in section 5, we describe the distribution of amino acids in protein families and the pattern recognition through level curves of the probability distribution .we also determine the domain of the variables for the present probabilistic model . in section 6, we introduce a scheme in terms of graphical cartesian representations of level curves with selected protein families from the pfam database .we also investigate the possibility of using this representation to justify the classification of protein families into clans .a final section of concluding remarks will comment on the introduction of alternative candidates to usual probability distributions , like the use of joint probabilities and the study of non - additive entropy measures .all these proposals which aim to improve the pattern recognition method will appear in forthcoming publications .the fundamental idea of the protein family formation process as introduced in section 1 , is of a dynamical game played by nature .the amino acids which are necessary to participate in the formation of a protein family are obtained from a `` universal ribossome deposit '' and the process will consist in the distribution on shelves of a bookshelf . in the first stage , amino acids are distributed on shelves of the first bookshelf . on a second stage ,the shelves of the second bookshelf should be fulfilled by the amino acids transferred from the first bookshelf and so on . 
after choosing a database , in order to implement all the future calculations with random variables , we select an array of protein domains ( rows ) .there are amino acids on the first column , in the second one , in the column .we then select the ( ) block where another way of introducing a ( ) block is to specify the number of columns a priori and to delete all proteins such that , and to also delete the last amino acids on the remaining proteins .in fig.1 below we present an example of a block , which is organized according to this second basic procedure .there is at least one block associated to a protein family .amino acids from pfam database . ]let be the probability of occurrence of the amino acid and a , c , d , e , f , g , h , i , k , l , m , n , p , q , r , s , t , v , w , y in the -th column , where is the number of occurrences of the -amino acid in the -th column of the block .we have and the probabilities will be considered as the components of a -component vector , .these vectors will be random variables w.r.t .the probability distribution already defined in eq. .we can also introduce the probability distribution corresponding to the occurrence of amino acids , in columns , , respectively .we can write , with a , c , d , e , f , g , h , i , k , l , m , n , p , q , r , s , t , v , w , y. this is the joint probability distribution and it can be understood in terms of the conditional probability as from bayes law [ 11 ] we can write analogously to eqs., , we can write and we can take on eq. , : these random variables will be arranged as square matrices of order . a straightforward generalization to a multiplet of amino acids , which occur on ordered columns ,can be done by introducing joint probabilities such as , where and with these random variables are objects of components each .analogously to eqs. , , we can now write , and eqs.- will be reserved for future developments . in the present work ,we restrict all calculations to simple probabilities given by eq.- .the temporal evolution of random variables such as the probabilities of occurrence introduced above can be modelled through a master equation approach .let be the probability of occurrences of the amino acids in the -th column of the ( ) block at time .the probability of observing the same amino acid at the -th column after an interval of time is given by : where is the transition probability per unit time between columns and .we now imagine that there is a column `` the universal ribosome deposit '' where all amino acids are present at time , as this also means that at the initial time no amino acid has been received by the columns the shelves of amino acid bookshelf ( the protein family ) , as after taking the limit on eq. , we get : we also have , for , from eqs. and , we have , where from eqs. and we can write for we also have for , from eqs. , , we have eq. will turn into : by finite induction on , we can write the poisson distribution : now introduce the marginal probability distributions associated to the poisson process given by eq. .we have : from eqs. , , we can write : where is an auxiliary parameter . in the present work, we make the assumption that , which leads to a linear approximation for through eq. , or , we then have from eqs., , let us write now as the time in which the -amino acid is seen to occur at the -th column of the block .we write , where is the time interval for the transition of the amino acid between consecutive columns , the marginal probability distribution function of eq. 
should be considered as a two - variable distribution : where we can then write , eq. can be written in a more feasible form for future calculations such as where is the gamma function . here , since is an integer . and is related to the incomplete gamma function .one should note that the real representations of probability distribution functions should be given by the restriction on the surfaces .in fig.2 we present these surfaces for several -values and the planes and . for and the planes , .in the present section we set up the experimental scenario of the paper as well as the fundamental equations for the statistical treatment of data . we will gave on table 1 below an example of probability distribution of amino acid occurrence for the a amino acid of the pfam family pf01051 on a block .the set of -values is partitioned into subsets of values .the -values belonging to a subset do correspond to a unique constant value of the function , as where is the number of occurrences of the a amino acid on each subset of the -values .these values do correspond to the level curves of the surfaces . .the values on each row of the second column belong to the subset of values for defining a level curve of the surfaces .data obtained from a block as a representative of the pfam family pf01051 . [ cols="^,^",options="header " , ] in order to introduce the cartesian representation of the level curves , we restrict the limits of the variables , of eq. to each cell of the array given in table 2 . a generic element , , associated to the limits : where , , , are given into eqs. and and is an arbitrary non - dimensional parameter to be chosen as for obtaining a convenient cartesian representation .the level curves corresponding to the values of ( the height of the planes ) which intersect the surfaces should be written for each cell as : where we have used eq. .in the present work we have introduced the idea of protein family formation process ( pffp ) and we now stress that the evolution of an `` orphan '' protein is not a special case of this process .we then consider that proteins do not evolute independently .their evolution is a collective evolution of all their `` relatives '' grouped into families . in order to derive a model of probability occurrence of amino acids ,we have started from a master equation and we have made a very simple assumption such as that of eq. .we expected to obtain a good method for identifying the clan association of protein families in terms of level curves of the probability distributions of amino acids .the examples given into section 6 , eqs. , , , in despite of their advantage over any conclusion derived from the usual analysis of histograms of figs.8 , 9 , 10 , do not seem to lead to an unequivocal conclusion about clan formation .we then propose the following research lines for future development : 1 .the consideration of other families and clans into the analysis reported in section 6 . 2 .the introduction of other assumptions for deriving the probability distributions .the saddle point approximation seems to be a most convenient one .3 . the consideration of models of distributions based on the joint probabilities of section 2 . 4 .the introduction of generalized entropy measures as the selected functions of random variables instead of studying the elementary probabilities distributions of section 2. 0 e.t .jaynes probability theory - the logic of science - cambridge university press , 2003 .j. 
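The elementary ingredients of this analysis, the per-column probabilities of occurrence and the partition of columns into subsets of constant probability that define the level curves, can be computed directly from an alignment block. The sketch below uses a tiny hypothetical block rather than the PF01051 data, and the function names are ours.

```python
# Sketch: per-column amino acid probabilities p_j(a) = n_j(a)/m for an m x n
# block, and the grouping of columns by equal probability of a chosen residue.
from collections import Counter, defaultdict

def column_probabilities(block):
    """block: list of m equal-length strings (rows of the m x n block)."""
    m, n = len(block), len(block[0])
    probs = []
    for j in range(n):
        counts = Counter(row[j] for row in block)
        probs.append({aa: counts[aa] / m for aa in "ACDEFGHIKLMNPQRSTVWY"})
    return probs

def level_sets(probs, amino_acid):
    """Partition column indices by the (constant) probability of one amino acid."""
    groups = defaultdict(list)
    for j, p in enumerate(probs):
        groups[round(p[amino_acid], 6)].append(j + 1)   # 1-based column labels
    return dict(groups)

block = ["ACDA", "ACDC", "AADG", "ACDA"]   # toy 4 x 4 block, not PF01051 data
probs = column_probabilities(block)
print(level_sets(probs, "A"))              # e.g. {1.0: [1], 0.25: [2], 0.0: [3], 0.5: [4]}
```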
harte from spatial pattern in the distribution and abundance of species to a unified theory of ecology .the role of maximum entropy methods - applied optimization , vol.102 , mathematical modelling of biosystems ( 2008 ) 243 - 272 , springer - verlag , r.p .mondaini , p.m. pardalos ( eds . ) .mondaini entropy measures based method for the classification of protein domains into families and clans - biomat 2013 ( 2014 ) 209 - 218 .mondaini , s.c . de albuquerque neto entropy measures and the statistical analysis of protein family classification - biomat 2015 ( 2016 ) 193 - 210 .n.g . van kampen stochastic process in physics and chemistry , elsevier b. v. 2007 , 3 edition .w. bialek biophysics - searching for principles , princeton university press , 2012 .finn et al . pfam : clans , web tools and services - nucleic acids research , 34 ( 2006 ) d247-d251 .m. punta et al . the pfam protein families database - nucleic acids research , 40 ( 2012 ) d290-d301 .finn et al . the pfam protein families database - nucleic acids research , 42 ( 2014 ) d222-d230 .finn et al . the pfam protein families database - nucleic acids research , 44 ( 2016 ) d279-d285 .degroot , m.j .schervish - probability and statistics , 4 edition , addison - wesley , 2012 .m. abramowitz , j.a .handbook of mathematical functions , dover publ ., n. york , 9 printing , 1972 .d. veberi having fun with lambert w function - ar : 1003.1628 v1 , 8 mar 2010 .sereda et al . reversed - phase chromatography of synthetic amphipathic -helical peptides as a model for ligand / receptor interactions .effect of changing hydrophobic environment on the relative hydrophilicity / hydrophobicity of amino acid side - chains - j. chromatogr .a 676 ( 1994 ) 139 - 153 r. p. mondaini , s. c. de albuquerque neto the pattern recognition of probability distributions of amino acids in protein families . the saddle point approximation ( 2016 ) in preparation
A pattern recognition of the probability distribution of amino acids is obtained for selected families of proteins. The mathematical model follows from a theory of protein family formation obtained by applying a Pauli master equation method.
a central result of shannon s classical theory of information is the _ noisy channel coding theorem_. this result provides an _ effective procedure _ for determining the _ capacity _ of a noisy channel - the maximum rate at which classical information can be reliably transmitted through the channel .there has been much recent work on quantum analogues of this result .this paper has two central purposes .the first purpose is to develop general techniques for proving upper bounds on the capacity of a noisy quantum channel , which are applied to several different classes of quantum noisy channel problems .second , we point out some essentially new features that quantum mechanics introduces into the noisy channel problem .the paper is organized as follows . in section [sect : channels ] we give a basic introduction to the problem of the noisy quantum channel , and explain the key concepts .section [ sect : qops ] reviews the _quantum operations _ formalism that is used throughout the paper to describe a noisy quantum channel , and section [ sect : entropy exchange ] reviews the concept of the _ entropy exchange _ associated with a quantum operation .section [ sect : classical ] shows how the classical noisy channel coding theorem can be put into the quantum language , and explains why the capacities that arise in this context are not useful for applications such as quantum computing and teleportation .section [ sect : entanglement fidelity ] discusses the _ entanglement fidelity _ , which is the measure we use to quantify how well a state and its entanglement are transmitted through a noisy quantum channel .section [ sect : coherent information ] discusses the _ coherent information _ introduced in as an analogue to the concept of _ mutual information _ in classical information theory .many new results about the coherent information are proved , and we show that quantum entanglement allows the coherent information to have properties which have no classical analogue .these properties are critical to understanding what is essentially quantum about the quantum noisy channel coding problem .section [ sect : noisy coding revisited ] brings us back to noisy channel coding , and formally sets up the class of noisy channel coding problems we consider .section [ sect : upper bounds ] proves a variety of upper bounds on the capacity of a noisy quantum channel , depending on what class of coding schemes one is willing to allow .this is followed in section [ sect : discussion ] by a discussion of the achievability of these upper bounds and of earlier work on channel capacity .section [ sect : observed channel ] formulates the new problem of a noisy quantum channel with measurement , allowing classical information about the environment to be obtained by measurement , and then used during the decoding process .upper bounds on the corresponding channel capacity are proved .finally , section [ sect : conc ] concludes with a summary of our results , a discussion of the new features which quantum mechanics adds to the problem of the noisy channel , and suggestions for further research .the problem of noisy channel coding will be outlined in this section .precise definitions of the concepts used will be given in later sections .the procedure is illustrated in figure [ fig : channel0 ] .3.4 in the noisy quantum channel , together with encodings and decodings .there is a _ quantum source _ emitting unknown quantum states , which we wish to transmit through the channel to some receiver .unfortunately , the channel 
is usually subject to noise , which prevents it from transmitting states with high fidelity . for example, an optical fiber suffers losses during transmission .another important example of a noisy quantum channel is the memory of a quantum computer .there the idea is to transmit quantum states _ in time_. the effect of transmitting a state from time to can be described as a noisy quantum channel .quantum teleportation can also be described as a noisy quantum channel whenever there are imperfections in the teleportation process .the idea of noisy channel coding is to encode the quantum state emitted by the source , , which one wishes to transmit , using some _ encoding operation _ , which we denote .the encoded state is then sent through the channel , whose operation we denote by .the output state of the channel is then _ decoded _ using some _ decoding operation _ , .the objective is for the decoded state to match with high fidelity the state emitted by the source . as in the classical theory, we consider the fidelity of large blocks of material produced by repeated emission from the source , and allow the encoding and decoding to operate on these blocks .a channel is said to transmit a source reliably if a sequence of block - coding and block - decoding procedures can be found that approaches perfect fidelity in the limit of large block size . what then is the _ capacity _ of such a channel - the highest rate at which information can be reliably transmitted through the channel ?the goal of a _ channel capacity theorem _ is to provide a procedure to answer this question .this procedure must be an _ effective procedure _ , that is , an explicit algorithm to evaluate the channel capacity .such a theorem comes in two parts .one part proves an upper bound on the rate at which information can be reliably transmitted through the channel .the other part demonstrates that there are coding and decoding schemes which attain this bound , which is therefore the channel capacity .we do not prove such a channel capacity theorem in this paper .we do , however , derive bounds on the rate at which information can be sent through a noisy quantum channel .what is a quantum noisy channel , and how can it be described mathematically ? this section reviews the formalism of quantum operations , which is used to describe noisy channels .previous papers on the noisy channel problem have used apparently different formalisms to describe the noisy channel .in fact , all the formalisms can be shown to be equivalent , as we shall see in this section .historically , quantum operations have also sometimes been known as _ completely positive maps _ or _superscattering operators_. the motivation in all cases has been to describe general state changes in quantum mechanics .a simple example of a state change in quantum mechanics is the unitary evolution experienced by a closed quantum system .the final state of the system is related to the initial state by a unitary transformation , although all closed quantum systems are described by unitary evolutions , in accordance with schrdinger s equation , more general state changes are possible for open quantum systems , such as noisy quantum channels .how does one describe a general state change in quantum mechanics ?the answer to this question is provided by the quantum operations formalism .this formalism is described in detail by kraus ( see also hellwig and kraus ) and is given short but detailed reviews in choi and in the appendix to . 
in this formalismthere is an _ input state _ and an _ output state _ , which are connected by a map map is a _ quantum operation _ , , a linear , trace - decreasing map that preserves positivity .the trace in the denominator is included in order to preserve the trace condition , . the most general form for that is physically reasonable ( in addition to being linear and trace - decreasing and preserving positivity, a physically reasonable must satisfy an additional property called complete positivity ) , can be shown to be the system operators , which must satisfy , completely specify the quantum operation . in the particular case of a unitary transformation , there is only one term in the sum , , leaving us with the transformation ( [ eqtn : unitary operation ] ) .a class of operations that is of particular interest are the _ trace - preserving _ or _ non - selective _ operations .physically , these arise in situations where the system is coupled to some environment which is not under observation ; the effect of the evolution is averaged over all possible outcomes of the interaction with the environment .trace - preserving operations are defined by the requirement that this is equivalent to requiring that for all density operators , explaining the nomenclature `` trace - preserving '' . notice that this means the evolution equation ( [ eqtn : general evolution ] ) reduces to the simpler form when is trace - preserving . the following _ representation theorem _ is proved in , , and .it shows the connection between trace - preserving quantum operations and systems interacting unitarily with an environment , and thus provides part of the justification for the physical interpretation of trace - preserving quantum operations described above ._ theorem _ ( representation theorem for trace - preserving quantum operations ) : suppose is a trace - preserving quantum operation on a system with a -dimensional state space. then it is possible to construct an `` environment '' of at most dimensions , such that the system plus environment are initially uncorrelated , the environment is initially in a pure state and there exists a unitary evolution on system and environment such that here and elsewhere in the paper a subscript on a trace indicates a partial trace over the corresponding system ( in this case ) .conversely , given any initially uncorrelated environment ( possibly of more than dimensions , and initially impure ) , a unitary interaction between the system and the environment gives rise to a trace - preserving quantum operation , this theorem tells us that any trace - preserving quantum operation can always be _ mocked up _ as a unitary evolution by adding an environment with which the system can interact unitarily .conversely , it tells us that any such unitary interaction with an initially uncorrelated environment gives rise to a trace - preserving quantum operation .both these facts are useful in what follows .the picture we have of a quantum operation is neatly summarized by the following diagram . ( 8,5)(0,0 ) ( 2,3)(1,1) ( 3,3.5)(1,0)3 ( 6,3)(1,1) ( 1.25,3.5)(0,0) ( 4,0.5)(1,1) ( 4.4,1.7)(0,1)1.6 ( 4.6,3.3)(0,-1)1.6 ( 4.8,2.5)(0,0)[cl] here , denotes the state of the system before the interaction with the environment , and the state of the system after the interaction . unless stated otherwise we follow the convention that and are -dimensional . 
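A small numerical sketch of the operator-sum representation may help fix the notation: a channel is applied by conjugating the input density operator with each operation element, and trace preservation corresponds to the completeness relation on those elements. The amplitude-damping channel is used here purely as a familiar single-qubit example; it is not a channel singled out in the text.

```python
# Sketch: E(rho) = sum_i A_i rho A_i^dagger, with sum_i A_i^dagger A_i = I.
import numpy as np

def apply_channel(rho, kraus):
    return sum(a @ rho @ a.conj().T for a in kraus)

gamma = 0.3                                        # illustrative damping strength
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]

# Completeness relation (trace preservation): sum_i A_i^dagger A_i == identity
assert np.allclose(sum(a.conj().T @ a for a in kraus), np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])           # the pure state |+><+|
rho_out = apply_channel(rho, kraus)
print(rho_out, np.trace(rho_out))                  # output state; trace stays 1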
the environment system and the operator might be chosen to be the actual physical environment and its interaction with , but this is not necessary .the only thing that matters for the description of noisy channels is the dynamics of . for any given quantum operation are many possible representations of in terms of environments and interactions .we always assume that the initial state of is a _ pure state _ , and regard as a mathematical artifice .of course , the actual physical environment , , may be initially impure , but the above representation theorem shows that for the purposes of describing the dynamics of , it can be replaced by an `` environment '' , , which is initially pure and gives rise to exactly the same dynamics . in what followsit is this latter that is most useful .shannon s classical noisy coding theorem is proved for _ discrete memoryless channels_. discrete means that the channel only has a finite number of input and output states . by analogywe define a discrete quantum channel to be one which has a finite number of hilbert space dimensions . in the classical case, memoryless means that the output of the channel is independent of the past , conditioned on knowing the state of the source .quantum mechanically we take this to mean that the output of the channel is completely determined by the encoded state of the source , and is not affected by the previous history of the source .phrased in the language of quantum operations , we assume that there is a quantum operation , , describing the dynamics of the channel . the input of the channel is related to the output by the equation for the majority of this paper we assume , as in the previous equation , that the operation describing the action of the channel is trace - preserving .this corresponds to the physical assumption that no classical information about the state of the system or its environment is obtained by an external classical observer .all previous work on noisy channel coding with the exception of has assumed that this is the case , and we do so for the majority of the paper . in section [sect : observed channel ] we consider the case of a noisy channel which is being observed by some classical observer .in addition to the environment , , it is also extremely useful to introduce a _ reference system _ , , in the following way .one might imagine that the system is initially part of a larger system , , and that the total is in a pure state , , satisfying such a state is called a _ purification _ of , and it can be shown that such a system and purifications always exist . from our point of view introduced simply as a mathematical device to purify the initial state .the joint system evolves according to the dynamics given by the overall picture we have of a trace - preserving quantum operation thus looks like : ( 8,7)(0,0 ) ( 2,3)(1,1) ( 3,3.5)(1,0)3 ( 6,3)(1,1) ( 2,5)(1,1) ( 2.4,4.10)(0,1).05 ( 2.6,4.10)(0,1).05 ( 2.4,4.25)(0,1).05 ( 2.6,4.25)(0,1).05 ( 2.4,4.40)(0,1).05 ( 2.6,4.40)(0,1).05 ( 2.4,4.55)(0,1).05 ( 2.6,4.55)(0,1).05 ( 2.4,4.70)(0,1).05 ( 2.6,4.70)(0,1).05 ( 2.4,4.85)(0,1).05 ( 2.6,4.85)(0,1).05 ( 1,4.5)(0,0) ( 4,0.5)(1,1) ( 4.4,1.7)(0,1)1.6 ( 4.6,3.3)(0,-1)1.6 ( 4.8,2.5)(0,0)[cl] the picture we have described thus far applies only to _ trace - preserving _ quantum operations . 
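The reference system R and the purification used above can be made concrete numerically: diagonalize the input state and attach one reference basis vector to each eigenvector, weighting by the square root of the eigenvalue. This is the generic textbook construction, not code taken from the paper; the example state is arbitrary.

```python
# Sketch: purify rho_Q on R (x) Q and verify tr_R |psi><psi| = rho.
import numpy as np

def purify(rho):
    lam, v = np.linalg.eigh(rho)
    d = rho.shape[0]
    psi = np.zeros(d * d, dtype=complex)
    for k in range(d):
        e_k = np.zeros(d); e_k[k] = 1.0            # reference basis state |k>_R
        psi += np.sqrt(max(lam[k], 0.0)) * np.kron(e_k, v[:, k])
    return psi                                      # state vector on R (x) Q

def partial_trace_R(psi, d):
    m = psi.reshape(d, d)                           # index order (R, Q)
    return m.T @ m.conj()                           # reduced state on Q

rho = np.array([[0.75, 0.25], [0.25, 0.25]])
psi = purify(rho)
assert np.allclose(partial_trace_R(psi, 2), rho)    # recovers the original rho
```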
later in the paperwe will also be interested in quantum operations which are not trace - preserving .that is , they do not satisfy the relation , and thus in general .such quantum operations arise in the theory of _generalized measurements_. to each outcome , , of a measurement there is an associated quantum operation , , with an operator - sum representation , the probability of obtaining outcome is postulated to be the completeness relation for probabilities is equivalent to the completeness relation for the operators appearing in the operator - sum representations thus for each , as an aside , it is interesting to note that the formulation of quantum measurement based on the projection postulate , taught in most classes on quantum mechanics , is a special case of the quantum operations formalism , obtainable by using a single projector in the operator - sum representation for . the formalism of positive operator valued measures ( povms ) is also related to the generalized measurements formalism : are the elements of the povm which is measured . a result analogous to the earlier representation theorem for trace - preserving quantum operations can be proved for general operations . _( general representation theorem for operations ) suppose is a general quantum operation .then it is possible to find an environment , , initially in a pure state uncorrelated with the system , a unitary , a projector onto the environment alone , and a constant , such that furthermore , in the case of a generalized measurement described by operations it is possible to do so in such a way that for each the corresponding constant , and the projectors form a complete orthogonal set , , .conversely , any map of the form ( [ eqtn : general unitary rep ] ) is a quantum operation .once again , introducing a reference system which purifies we are left with a picture of the dynamics that looks like this : ( 8,7)(0,0 ) ( 2,3)(1,1) ( 3,3.5)(1,0)3 ( 6,3)(1,1) ( 2,5)(1,1) ( 2.4,4.10)(0,1).05 ( 2.6,4.10)(0,1).05 ( 2.4,4.25)(0,1).05 ( 2.6,4.25)(0,1).05 ( 2.4,4.40)(0,1).05 ( 2.6,4.40)(0,1).05 ( 2.4,4.55)(0,1).05 ( 2.6,4.55)(0,1).05 ( 2.4,4.70)(0,1).05 ( 2.6,4.70)(0,1).05 ( 2.4,4.85)(0,1).05 ( 2.6,4.85)(0,1).05 ( 1,4.5)(0,0) ( 4,0.5)(1,1) ( 4.4,1.7)(0,1)1.6 ( 4.6,3.3)(0,-1)1.6 ( 4.8,2.5)(0,0)[cl] a few miscellaneous remarks will be useful later on . 1. a prime always denotes a _normalized _ state .for instance , 2 .the total state of the system starts and remains pure .that is , is a pure state .purity gives very useful relations amongst von neumann entropies , such as and all other permutations amongst and .these are used frequently in what follows .generically we denote quantum operations by and the dimension of the quantum system by .trace - preserving _ quantum operations arise when a system interacts with an environment , and _ no measurement _ is performed on the system plus environment .non trace - preserving operations arise when classical information about the state of the system is made available by such a measurement . 
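The sketch below illustrates the generalized-measurement bookkeeping described above: one quantum operation per outcome, outcome probability given by the trace of the un-normalized output, and the completeness relation summed over all outcomes. The projective measurement in the computational basis is used as the simple special case mentioned in the text (a single projector per operator sum); the example state is arbitrary.

```python
# Sketch: p(m) = tr[E_m(rho)] and post-measurement state E_m(rho)/p(m).
import numpy as np

def measure(rho, operations):
    """operations: dict mapping outcome -> list of operation elements A_{m,i}."""
    results = {}
    for m, kraus in operations.items():
        e_rho = sum(a @ rho @ a.conj().T for a in kraus)
        p = np.trace(e_rho).real
        results[m] = (p, e_rho / p if p > 0 else e_rho)
    return results

P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
ops = {0: [P0], 1: [P1]}

# Completeness over all outcomes: sum_{m,i} A_{m,i}^dagger A_{m,i} = I
assert np.allclose(sum(a.conj().T @ a for ks in ops.values() for a in ks), np.eye(2))

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
for m, (p, post) in measure(rho, ops).items():
    print(m, round(p, 3))                          # p(0) = 0.7, p(1) = 0.3
```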
for most of this paperthe noisy quantum channel is described by a trace - preserving quantum operation .sometimes we consider the composition of two ( or more ) quantum operations .generically we use the notation for the different operations , and the notation to denote composition of operations , furthermore it is sometimes useful to use the picture of quantum operations to discuss compositions .we denote the environment corresponding to operation by , and assume environments corresponding to different values of are independent and initially pure .so , for example , the initial state for a two - stage composition would be a single prime denotes the state of the system after the application of , and a double prime denotes the state of the system after the application of , and so on .this section briefly reviews the definition and some basic results about the _ entropy exchange _ , which was independently introduced by schumacher and lloyd .the entropy exchange turns out to be central to understanding the noisy quantum channel .the _ entropy exchange _ of a quantum operation with input is defined to be where is the state of an initially pure environment ( the `` mock '' environment of the previous section ) after the operation , and is the von neumann entropy .if then a convenient form for the entropy exchange is found by defining a matrix with elements it can be shown that the last equation is frequently useful when performing calculations .in this section we show how classical noisy channels can be formulated in terms of quantum mechanics .we begin by reviewing the formulation in terms of classical information theory .a classical noisy channel is described in terms of distinguishable channel states , which we label by .if the input to the channel is symbol then the output is symbol with probability .the channel is assumed to act independently on each input . for each , the probability sum rule is satisfied .these _ conditional probabilities _ completely describe the classical noisy channel .suppose the input to the channel , , is represented by some classical random variable , , and the output by a random variable .the mutual information between and is defined by where is the shannon information of the random variable , , defined by with .shannon showed that the capacity of a noisy classical channel is given by the expression where the maximum is taken over all possible distributions for the channel input , .notice that although this is not an explicit expression for the channel capacity in terms of the conditional probabilities , the maximization can easily be performed using well known techniques from numerical mathematics .that is , shannon s result provides an effective procedure for computing the capacity of a noisy classical channel .all these results may be re - expressed in terms of quantum mechanics .we suppose the channel has some preferred orthonormal basis , , of signal states . for conveniencewe assume the set of input states , , is the same as the set of output states , , of the channel , although more general schemes are possible .for the purpose of illustration the present level of generality suffices .a classical input random variable , , corresponds to an input density operator for the quantum channel , the statistics of are recoverable by measuring in the basis . 
defining operators from the conditional probabilities, we find that the channel operation so defined is a trace-preserving quantum operation, and that its output is the density operator corresponding to the random variable that would have been obtained from the input given a classical channel with the stated probabilities. this gives a quantum mechanical formalism for describing classical sources and channels.

it is interesting to see what form the mutual information and channel capacity take in the quantum formalism. next we compute the entropy exchange associated with the channel operating on a given input, by computing the matrix given by ( [ eqtn : w matrix ] ). the matrix corresponding to the channel with this input has entries determined by the joint distribution of the channel input and output; the matrix is diagonal, with the joint probabilities as eigenvalues, so the entropy exchange is the shannon entropy of the joint distribution. it follows that the mutual information can be written in terms of the entropies just computed, and thus the shannon capacity of the classical channel is given in the quantum formalism by maximizing this expression, where the maximization is over all input states for the channel which are diagonal in the given basis.

the problem we have been considering is that of transmitting a discrete set of orthogonal states through the channel. in many quantum applications one is not only interested in transmitting a discrete set of states, but also arbitrary superpositions of those states. that is, one wants to transmit entire _subspaces_ of states. in this case, the capacity of interest is the maximum rate of transmission of subspace dimensions. this may occur in quantum computing, cryptography and teleportation. it is also interesting in these applications to transmit the _entanglement_ of states. this can not be done by considering the transmission of a set of orthogonal pure states alone. it is not difficult to see that the classical expression is not correct as a measure of how many subspace dimensions may be reliably transmitted through a quantum channel. for example, consider the classical noiseless channel defined on an orthonormal set of basis states for the channel. it is easily seen that the classical expression attains the logarithm of the number of channel dimensions. yet it is intuitively clear, and is later proved in a more rigorous fashion, that such a channel can not be used to transmit any non-trivial subspace of state space, nor can it be used to transmit any entanglement, and thus its capacity for transmitting these types of quantum resources is zero.

in this section we review a quantity known as the _entanglement fidelity_. it is this quantity which we use to study the effectiveness of schemes for sending information through a noisy quantum channel. the entanglement fidelity is defined for a _process_, specified by a quantum operation acting on some initial state. the concerns motivating the definition of the entanglement fidelity are twofold:

1. it measures how well the _state_ is preserved by the operation. an entanglement fidelity close to one indicates that the process preserves the state well.

2. it
measures how well the _ entanglement _ of with other systems is preserved by the operation .an entanglement fidelity close to one indicates the process preserves the entanglement well .conversely , an entanglement fidelity close to zero indicates that the state or its entanglement were not well preserved by the operation .formally , the entanglement fidelity is defined by that is , the entanglement fidelity is the overlap between the initial purification of the state _ before _it is sent through the channel with the state of the joint system _ after _ it has been sent through the channel .the entanglement fidelity depends only on and , not on the particular purification of that is used . if has operation elements then the entanglement fidelity has the expression this expression simplifies for trace - preserving quantum operations since the denominator is one .the entanglement fidelity has the following properties .1 . .2 . if and only if for all pure states lying in the support of , 3 .the entanglement fidelity is a lower bound on the fidelity defined by jozsa in the following sense , 4 .suppose is an ensemble realizing , then the entanglement fidelity is a lower bound on the average fidelity for the pure states , 5 .again suppose is an ensemble realizing then if the pure state fidelity for all in the support of , ( knill and laflamme . )there are several reasons for using the entanglement fidelity as our measure of success in transmitting quantum states .if we succeed in sending a source with high entanglement fidelity , we can send _ any _ ensemble for with high average pure - state fidelity , by item 4 above .entanglement fidelity is thus a more severe requirement of quantum coherence than average pure - state fidelity .moreover , the ability to preserve entanglement is of great importance in applications of quantum coding to , say , quantum computation , where one would like to be able to apply error - correction in a modular fashion to small portions of a quantum computer despite the fact that they may , in the course of quantum computation , become entangled with other parts of the computer .( of course , the general problem of finding the capacity of a noisy quantum channel for a _ given _ ensemble with average pure - state fidelity as the reliability measure is also worth investigating . ) an appropriate measure of how well a _subspace _ of quantum states is transmitted is the _ subspace fidelity_ where the minimization is over all pure states in the subspace whose projector is item 5 above implies that if the subspace fidelity is close to one , the entanglement fidelity is also close to one .the converse is not in general true .that is , reliable transmission of subspaces is a more stringent requirement than transmission of entanglement . therefore using entanglement fidelity as our criterion for reliable transmission yields capacities at least as great as those obtained when subspace fidelity is used as the criterion .we conjecture that these two capacities are identical . as an alternative measure of subspace fidelity, one might consider the average pure state fidelity where the integration is done using the unitarily invariant measure on the subspace of interest . 
by item 4 above , the capacity resulting from this measure of reliability is at least as great as that which results when entanglement fidelity is used as the measure of reliability .we do not know whether these two capacities are equal .the lesson to be learnt from this discussion is that there are many different measures which may be used to quantify how reliably quantum states are transmitted , and different measures may result in different capacities .which measure is used depends on what resource is most important for the application of interest . for the remainder of this paper, we use the entanglement fidelity as our measure of reliability .there is a very useful inequality , the _ quantum fano inequality _ , which relates the entropy exchange and the entanglement fidelity .it is : where is the dyadic shannon information associated with .it is useful to note for our later work that and , so from the quantum fano inequality , the proof of the quantum fano inequality , ( [ eqtn : quantum fano ] ) , is simple enough that for convenience we repeat it here .consider an orthonormal set of basis states , , for the system .this basis set is chosen so .if we form the quantities , then it is possible to show ( see for example , page 240 ) where is the shannon information of the set .but by easily verified grouping properties of the shannon entropy , and it is easy to show that . combining these results and noting that by definition of the entanglement fidelity , which is the quantum fano inequality . for applicationsit is useful to understand the continuity properties of the entanglement fidelity . to that endwe prove the following lemma : _ lemma _ ( continuity lemma for entanglement fidelity ) suppose is a trace - preserving quantum operation , is a density operator , and is a hermitian operator with trace zero .then to prove the lemma we apply ( [ eqtn : f_e calculation form ] ) to obtain applying a cauchy - schwarz inequality to each sum , the first with respect to the complex inner product , the second with respect to the hilbert - schmidt inner product , we obtain where . applying ( [ eqtn : f_e calculation form ] ) and to the first sum andthe trace - preserving property of to the final sum gives one final application of the cauchy - schwarz inequality and the trace - preserving property of gives as required .this result gives bounds on the change in the entanglement fidelity when the input state is perturbed .note , incidentally , that during the proof a coefficient was dropped from the first term on the right hand side of the inequality .for some applications it may be useful to apply the inequality with this coefficient in place .in this section we investigate the _ coherent information_. the coherent information was defined in , where it was suggested that the coherent information plays a role in quantum information theory analogous to the role played by mutual information in classical information theory in the following sense . 
consider a classical random process , in which the random variable is used as the input to some process which produces as output the random variable .the distributions of and are related by a linear map , , determined by the conditional probabilities of the process .an example of such a process is a noisy classical channel with input and output .as discussed earlier , an important quantity in information theory is the mutual information , , between the input and the output of the process .note that can be regarded as a function of the input and the map only , since the joint distribution of and is determined by these .quantum mechanically we can consider a process defined by an input , and output , with the process described by a quantum operation , , we assert that the coherent information , defined by plays a role in quantum information theory analogous to that played by the mutual information in classical information theory .this is not obvious from the definition , and one goal of this section is to make it appear plausible that this is the case .of course , the true justification for regarding the coherent information as the quantum analogue of the mutual information is its success as the quantity appearing in results on channel capacity , as discussed in later sections .this is the true motivation for all definitions in information theory , whether classical or quantum : their success at quantifying the resources needed to perform some interesting physical task , not some abstract mathematical motivation . in subsection [ subsect : data processing inequality ] we review the data processing inequality which provides motivation for regarding the coherent information as a quantum analogue of the mutual information , and whose application is crucial to later reasoning . subsection [ subsect : i properties ] studies in detail the properties of the coherent information .in particular , we prove several results related to convexity that are useful both as calculational aids , and also for proving later results .subsection [ subsect : fidelity lemma ] proves a lemma about the entanglement fidelity that glues together many of our later proofs of upper bounds on the channel capacity .finally , subsections [ subsect : example 1 ] and [ subsect : example 2 ] describe two important ways the behaviour of the coherent information differs from the behaviour of the mutual information when quantum entanglement is allowed .the role of coherent information in quantum information theory is intended to be similar to that of mutual information in classical information theory .this is not obvious from the definition , but can be given an operational motivation in terms of a procedure known as _data processing_. the classical data processing inequality states that any three variable markov process , satisfies a data processing inequality , the idea is that the operation represents some kind of `` data processing '' of to obtain , and the mutual information after processing , , can be no higher than the mutual information before processing , . furthermore, suppose we have a markov process , such that . 
intuitively , one might expect that it be possible to do data processing on to recover .it is not difficult to show that it is possible , using alone , to construct a third variable , , forming a third stage in the markov process , such that with probability one , if and only if .an analogous quantum result has been proved by schumacher and nielsen .it states that given trace - preserving quantum operations and defining a quantum process , then furthermore , it was shown in that given a process it is possible to find an operation which reverses if and only if the close analogy between the classical and quantum data processing inequalities provides a strong operational motivation for considering the coherent information to be the quantum analogue of the classical mutual information .the proof of the quantum data processing inequality is repeated here in order to address the issue of what happens when and are not trace - preserving .the proof of the first inequality is to apply the subadditivity inequality in the picture of operations to obtain it is clear that this part of the inequality need not hold when is not trace - preserving .the reason for this is that it is no longer necessarily the case that , and thus it may not be possible to make the identification .for example , suppose we have a three dimensional state space with orthonormal states , and .let be the projector onto the two dimensional subspace spanned by and , and the projector onto the subspace spanned by .let , where , and .then by choosing small enough we can make , but , so we have an example of a non trace - preserving operation which does not obey the data processing inequality .the proof of the second part of the data processing inequality is to apply the strong subadditivity inequality , where we are now using an picture of the operations .from purity of the total state of it follows that neither of the systems or are involved in the second stage of the dynamics in which and interact unitarily .thus , their state does not change during this stage : .but from the purity of after the first stage of the dynamics , the remaining two terms in the subadditivity inequality are now recognized as entropy exchanges , making these substitutions into the inequality obtained from strong subadditivity ( [ eqtn : strong subadditivity ] ) yields which can be rewritten as the second stage of the data processing inequality , notice that this inequality holds provided is trace - preserving , and does not require any assumption that is trace - preserving .this is very useful in our later work .the set of completely positive maps forms a positive cone , that is , if is a collection of completely positive maps and is a set of non - negative numbers then is also a completely positive map . in this sectionwe prove two very useful properties of the coherent information .first , it is easy to see that for any quantum operation and non - negative , this follows immediately from the definition of the coherent information . 
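to make the entropy exchange, the coherent information and the data processing inequality concrete, here is a minimal numerical sketch (our own; the depolarizing channels and their parameters are assumed purely for illustration):

```python
# Minimal sketch: coherent information I(rho, E) = S(E(rho)) - S_e(rho, E), with the
# entropy exchange S_e computed from the matrix W_ij = tr(A_i rho A_j^dag), plus a
# spot-check of the data processing inequality I(rho, E2 after E1) <= I(rho, E1).
import numpy as np

def entropy(M):
    """von Neumann entropy in bits (Shannon entropy of the eigenvalues)."""
    evals = np.linalg.eigvalsh(M)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def coherent_info(rho, kraus):
    """Assumes the Kraus operators describe a trace-preserving operation."""
    rho_out = sum(A @ rho @ A.conj().T for A in kraus)
    W = np.array([[np.trace(Ai @ rho @ Aj.conj().T) for Aj in kraus] for Ai in kraus])
    return entropy(rho_out) - entropy(W)

def depolarizing(p):
    """Kraus operators of a qubit depolarizing channel (assumed standard form)."""
    I = np.eye(2); X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
    return [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

rho = np.diag([0.5, 0.5]).astype(complex)
E1, E2 = depolarizing(0.1), depolarizing(0.2)
E2_after_E1 = [B @ A for B in E2 for A in E1]   # Kraus operators of the composition

print(coherent_info(rho, E1), coherent_info(rho, E2_after_E1))
# The second number should not exceed the first (quantum data processing inequality).
```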
a slightly more difficult property to prove is the following ._ theorem _ ( generalized convexity theorem for coherent information ) .suppose are quantum operations .then this result is extremely useful in our later work .an important and immediate corollary is the following : _ corollary _ ( convexity theorem for coherent information ) if a trace - preserving operation , is a convex sum ( ) of trace - preserving operations , then the coherent information is convex , the proof of the corollary is immediate from the theorem .the theorem follows from the concavity of the _ conditional _ entropy ( see references cited in , pages 249250 ) , which for two systems , and , is defined by this expression is concave in . now notice that the theorem now follows from the concavity of the conditional entropy . a further useful result concerns the additivity of coherent information , _ theorem _ ( additivity for independent channels )suppose are quantum operations and are density operators .then the proof is immediate from the additivity property of entropies for product states .the following lemma is the glue which holds together much of our later work on proving upper bounds to channel capacities . in this sectionwe prove the lemma only for the special case of trace - preserving operations .a similar but more complicated result is true for general operations , and is given in section [ sect : observed channel ] .we begin by repeating the proof of a simple inequality that was first proved in , which states that the decrease ( if any ) in system entropy must be bounded above by the increase in the entropy of a pure environment .this applies only for trace - preserving operations .applying the subadditivity inequality and the relationship which follows from purity we obtain rewriting this slightly gives for any trace - preserving quantum operation ._ lemma _ ( entanglement fidelity lemma for operations ) suppose is a trace - preserving quantum operation , and is some quantum state .then for all trace - preserving quantum operations , this lemma is extremely useful in obtaining proofs of bounds on the channel capacity . in order for the entanglement fidelity to be close to one ,the quantity appearing on the right hand side must be close to zero .this shows that the entropy of can not greatly exceed the coherent information if the entanglement fidelity is to be close to one . to prove the lemma , notice that by the second part of the data processing inequality , ( [ eqtn : data processing ] ) , applying inequality ( [ eqtn : araki - lieb ] ) gives and combining the last two inequalities gives where the second step follows from the quantum fano inequality ( [ eqtn : quantum fano ] ) .but the dyadic shannon entropy is bounded above by and , so this completes the proof .this inequality is strong enough to prove the asymptotic bounds which are of most interest for our later work .the somewhat stronger inequality ( [ eqtn : stronger bounds ] ) is also useful when proving one - shot results , that is , when no block coding is being used .there are at least two important respects in which the coherent information behaves differently from the classical mutual information . 
in this subsection andthe next we explain what these differences are .classically , suppose we have a markov process , intuitively we expect that and , indeed , it is not difficult to prove such a `` pipelining inequality '' , based on the definition of the mutual information .the idea is that any information about that reaches must go through and therefore is also information that has about .however , the quantum mechanical analogue of this result fails to hold .we shall see that the reason it fails is due to quantum entanglement .* example 1 : * suppose we have a two - part quantum process described by quantum operations and . then , in general an explicit example showing that this is the case is given below .it is not possible to prove a general inequality of this sort for the coherent information - examples may be found where a or sign could occur in the last equation .we now show how the purely quantum mechanical effect of entanglement is responsible for this property of coherent information .notice that the truth of the equation is equivalent to this last equation makes it easy to see why ( [ eqtn : qm subadditivity ] ) may fail .it is because the entropy of the joint environment for processes and ( the quantity on the right - hand side ) may be less than the entropy of the environment for process alone ( the quantity on the left ) .this is a property peculiar to quantum mechanics , which is caused by entanglement ; there is no classical analogue .an example of this type of phenomenon is provided by an epr pair , where the entropy of either system alone ( one bit ) is greater than that of the entire system , which is pure and thus has zero bits of entropy .an example of ( [ eqtn : example 1 ] ) is as follows . for conveniencewe use the language of coding and channel operations , since that language is most convenient later . is to be identified with the coding operation , , and is to be identified with the channel operation , .suppose we have a four dimensional state space .we suppose that we have an orthonormal basis , and that is the projector onto the space spanned by and , and is the projector onto the space spanned by and .let be a unitary operator defined by the channel operation is and we use an encoding defined by it is easily checked that for any state whose support lies wholly in the space spanned by and , it follows that it is also easy to verify that thus there exist states such that providing an example of ( [ eqtn : example 1 ] ) .the second important difference between coherent information and classical mutual information is related to the property known classically as _ subadditivity of mutual information_. suppose we have several independent channels operating .figure [ fig : subadditivity ] shows the case of two channels . 
(figure: two classical channels operating independently on their respective inputs and producing corresponding outputs)

these channels are numbered and take as inputs random variables. the channels might be separated spatially, as shown in the figure, or in time. the channels are assumed to act independently on their respective inputs, and produce outputs. it is not difficult to show that the mutual information between the joint inputs and joint outputs is bounded above by the sum of the mutual informations for the individual channels; this property is known as the _subadditivity_ of mutual information. it is used, for example, in proofs of the weak converse to shannon's noisy channel coding theorem. we now show that the corresponding quantum statement about coherent information fails to hold.

*example 2:* there exists a quantum operation and a density operator such that the coherent information for the two channels used jointly exceeds the sum of the coherent informations for the channels used separately, where the separate terms are computed from the usual reduced density operators for the two systems. an example of ( [ eqtn : example 2 ] ) is the following. suppose the first system consists of two qubits, and the second system consists of two more qubits. as the initial state we choose a state containing a bell state shared between the two systems. the action of the channel on its two input qubits is as follows: it sets one bit to some standard state, and allows the other through unchanged. this is achieved by swapping the state of that bit out into the environment. the same channel is now set to act on the two qubits of the second system. a straightforward though slightly tedious calculation shows that with this channel setup the claimed inequality holds, and thus this setup provides an example of ( [ eqtn : example 2 ] ).

in this section we return to noisy channel coding. recall the basic procedure for noisy channel coding, as illustrated in figure [ fig : channel1 ].

(figure: the noisy quantum channel, together with encodings and decodings)

suppose a quantum source has a given output state. a quantum operation, the encoding, is used to _encode_ the source output, giving the input state to the channel. the encoded state is used as input to the noisy channel, giving a channel output. finally, a decoding quantum operation is used to decode the output of the channel, giving a _received state_. the goal of noisy channel coding is to find out what source states can be sent with high entanglement fidelity. that is, we want to know for what states encoding and decoding operations can be found such that the entanglement fidelity of the total process is close to one. if large blocks of source states with a given entropy per use of the channel can be sent through the channel with high fidelity, we say the channel is transmitting at the corresponding rate. shannon's noisy channel coding theorem is an example of a _channel capacity_ theorem. such theorems come in two parts:

1. an _upper bound_ is placed on the rate at which information can be sent reliably through the channel. there should be an effective procedure for calculating this upper bound.

2. it is proved that a reliable scheme for encoding and decoding exists which comes arbitrarily close to _attaining_ the upper bound found in 1.

this maximum rate at which information can be reliably sent through the channel is known as the _channel capacity_.
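to make the encode-transmit-decode pipeline described above and its figure of merit concrete, the following minimal sketch (our own toy example, with a trivial encoder and decoder wrapped around an assumed dephasing channel) computes the total entanglement fidelity from the kraus operators of the composed operation:

```python
# Minimal sketch of the coding pipeline D(N(C(rho))) and its entanglement fidelity
# F_e(rho, D o N o C) = sum_k |tr(rho K_k)|^2, valid for a trace-preserving total map.
import numpy as np

def compose(*kraus_sets):
    """Kraus operators of a composition of qubit channels, applied left to right."""
    ops = [np.eye(2, dtype=complex)]
    for ks in kraus_sets:
        ops = [A @ B for A in ks for B in ops]
    return ops

def entanglement_fidelity(rho, kraus):
    return float(sum(abs(np.trace(rho @ K)) ** 2 for K in kraus))

p = 0.1  # assumed dephasing probability, for illustration only
encode = [np.eye(2, dtype=complex)]                               # trivial encoder
channel = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]
decode = [np.eye(2, dtype=complex)]                               # trivial decoder

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)           # source state |+><+|
print(entanglement_fidelity(rho, compose(encode, channel, decode)))  # 0.9 < 1
```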
in this paper we consider only the first of these tasks , the placing of upper bounds on the rate at which quantum information can be reliably sent through a noisy quantum channel .the results we prove are analogous to the weak converse of the classical noisy coding theorem , but can not be considered true converses until attainability of our bounds is demonstrated .we do consider it likely that our bounds are equal to the true quantum channel capacity .up to this point the procedure for doing noisy channel coding has been discussed in broad outline but we have not made all of our definitions mathematically precise .this subsection gives a precise formulation for the most important concepts appearing in our work on noisy channel coding .define a _ quantum source _ to consist of a hilbert space and a sequence where is a density operator on , a density operator on , and a density operator on etc ... using , for example , `` '' to denote the partial trace over the third and fourth copies of we require as part of our definition of a quantum source that for all and all i.e. that density operators in the sequence be consistent with each other in the sense that earlier ones be derivable from later ones by an appropriate partial trace .the -th density operator is meant to represent the state of emissions from the source , normally thought of as taking units of time .( we could have used a single density operator on a countably infinite tensor product of spaces but we wish to avoid the technical issues associated with such products . )we define the _ entropy rate _ of a general source as a special case of this general definition of a quantum source is the i.i.d .source for some fixed such a source corresponds to the classical notion of an _ independent , identically distributed _ classical source , thus the term i.i.d .the entropy rate of this source is simply a _ discrete memoryless channel _ , consists of a finite - dimensional hilbert space , , and a trace - preserving quantum operation .-th extension _ of that channel is given by the pair , where is used to denote -fold tensor products .the memoryless nature of the channel is reflected in the fact that the operation performed on the copies of the channel system is a tensor product of independent single - system operations .define an _ from into to consist of a trace - preserving quantum operation , , from to , and a trace - preserving quantum operation from to .we refer to as the _ encoding _ and as the _ decoding_. the _ total coding operation _ is given by the measure of success we use for the total procedure is the _ total entanglement fidelity _ , in practice we frequently abuse notation , usually by omitting explicit mention of the hilbert spaces and .note also that in principle the channel could have different input and output hilbert spaces . to ease notational clutterwe do not consider that case here , but all the results we prove go through without change. given a source state and a channel , the goal of noisy channel coding is to find an encoding and a decoding such that is close to one ; that is , and its entanglement is transmitted almost perfectly . 
in general thisis not possible to do .however , shannon showed in the classical context that by considering blocks of output from the source , and performing block encoding and decoding it is possible to considerably expand the class of source states for which this is possible .the quantum mechanical version of this procedure is to find a sequence of -codes , such that as , the measure of success approaches one , where .( we will sometimes refer to such a sequence as a _ coding scheme_. ) suppose such a sequence of codes exists for a given source in this case the channel is said to transmit reliably .we also say that the channel can transmit reliably at a _ rate _ ( note that this definition does not require that the channel be able to transmit reliably _ any _ source with entropy rate less than or equal to ; that is a different potential definition of what it means for a channel to transmit reliably at rate .we conjecture that the two definitions are equivalent in the contexts considered in this paper . ) a noisy channel coding theorem enables one to determine , for any source and channel , whether or not the source can be transmitted reliably on that channel .classically , this is determined by comparing the entropy rate of the source to the capacity of the channel .if the entropy rate of the source is greater than the capacity , the source can not be transmitted reliably .if the entropy rate is less than the capacity , it can .the conjunction of these two statements is precisely the noisy channel coding theorem .( the case when the entropy rate of the source equals the capacity requires separate consideration ; sometimes reliable transmission is achievable , and sometimes not . )we expect that in quantum mechanics , the entropy rate of the source will play the role of the classical entropy rate. a channel will be able to transmit reliably any source with entropy rate less than the capacity ; furthermore , _ no _ source with entropy rate greater than the capacity will be reliably transmissible ( i.e. , the channel will be unable to transmit reliably at a rate greater than the capacity . )the first part of this would constitute a quantum noisy channel coding theorem ; the second , a `` weak converse '' of the theorem .( a `` strong converse '' would require not just that no source with entropy rate greater than the capacity can be reliably transmitted , i.e. transmitted with asymptotic fidelity approaching unity , but would require that all such sources have asymptotic fidelity of transmission approaching zero . )in this section we investigate a variety of upper bounds on the capacity of a noisy quantum channel .this subsection is concerned with the case where the encoding , , is unitary . for this subsectiononly we define where the maximization is over all inputs to copies of the channel .the bound on the channel capacity proved in this section is defined by it is not immediately obvious that this limit exists . to see that it does ,notice that and and apply the lemma proved in appendix [ appendix : limit lemma ] .notice that is a function of the channel operation only .we begin with a theorem that places a limit on the entropy rate of a source which can be sent through a quantum channel . _theorem _ suppose we consider a source and a sequence of unitary encodings for the source .suppose further that there exists a sequence of decodings , such that then this theorem tells us that we can not reliably transmit more than qubits of information per use of the channel . 
for unitary have and thus by ( [ eqtn : fidelity lemma ] ) with , and the fact that , it now follows that ( note that here is the dimension of a single copy of the source hilbert space , so that we have inserted for the overall dimension of ( [ eqtn : fidelity lemma ] ) ) .taking on both sides of the equation completes the proof of the theorem .it is extremely useful to study this result at length , since the basic techniques employed to prove the bound are the same as those that appear in a more elaborate guise later in the paper . in particular , what features of quantum mechanics necessitate a change in the proof methods used to obtain the classical bound ?suppose the quantum analogue of the classical subadditivity of mutual information were true , namely where is any density operator that can be used as input to copies of the channel , and is the density operator obtained by tracing out all but the -th channel .then it would follow easily from the definition that for all , and thus this expression is exactly analogous to the classical expression for channel capacity as a maximum over input distributions of the mutual information between channel input and output .if this were truly a bound on the quantum channel capacity then it would allow easy numerical evaluations of bounds on the channel capacity , as the maximization involved is easy to do numerically , and the coherent information is not difficult to evaluate .unfortunately , it is not possible to assume that the quantum mechanical coherent information is subadditive , as shown by example ( [ eqtn : example 2 ] ) , and thus in general it is possible that we will later discuss results of shor and smolin which demonstrate that the channel capacity can exceed .notice that to evaluate the bound involves taking the limit in ( [ eqtn : unitary bound ] ) . to numerically evaluate this limit directly is certainly not a trivial task , in general .the result we have presented , that ( [ eqtn : unitary bound ] ) is an upper bound on channel capacity , is an important theoretical result , that may aid in the development of effective numerical procedures for obtaining general bounds .but it does not yet constitute an effective procedure .we now consider the case where something more general than a unitary encoding is allowed . in principle, it is always possible to perform a non - unitary encoding , , by introducing an extra ancilla system , performing a joint unitary on the source plus ancilla , and then discarding the ancilla .we define where the maximization is over all inputs to the encoding operation , , which in turn maps to copies of the channel .the bound on the channel capacity proved in this section is defined by once again , to prove that this limit exists one applies the lemma proved in appendix [ appendix : limit lemma ] . 
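as a rough illustration of the single-copy version of the maximizations entering these bounds (our own sketch: it evaluates only the one-shot term for an assumed amplitude-damping channel, scans only diagonal inputs, and therefore says nothing about the limit over block sizes appearing in the bound itself):

```python
# Minimal sketch: approximate the maximum over inputs of the one-shot coherent
# information for a single qubit channel, by scanning a family of diagonal states.
import numpy as np

def entropy(M):
    e = np.linalg.eigvalsh(M); e = e[e > 1e-12]
    return float(-np.sum(e * np.log2(e)))

def coherent_info(rho, kraus):
    out = sum(A @ rho @ A.conj().T for A in kraus)
    W = np.array([[np.trace(Ai @ rho @ Aj.conj().T) for Aj in kraus] for Ai in kraus])
    return entropy(out) - entropy(W)

gamma = 0.2  # assumed damping parameter
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
         np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

# Restricting to diagonal inputs is an assumption made here to keep the scan simple;
# in general the maximization runs over all density operators.
best = max((coherent_info(np.diag([q, 1 - q]).astype(complex), kraus), q)
           for q in np.linspace(0.01, 0.99, 99))
print(best)  # (approximate one-shot maximum, maximizing diagonal entry q)
```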
to prove that this quantity is a bound on the channel capacity , one applies almost exactly the same reasoning as in the preceding subsection .the result is : _ theorem _ suppose we consider a source and a sequence of encodings for the source .suppose further that there exists a sequence of decodings , such that then again , this result places an upper bound on the rate at which information can be reliably transmitted through a noisy quantum channel .the proof is very similar to the earlier proof of a bound for unitary encodings .one simply applies ( [ eqtn : fidelity lemma ] ) with and to give : taking on both sides of the equation completes the proof .it is instructive to see why the proof fails when the maximization is done over channel input states alone , rather than over all source states and encoding schemes .the basic idea is that there may exist source states , , and encoding schemes , for which this possibility stems from the failure of the quantum pipelining inequality , ( [ eqtn : example 1 ] ) .it is clear that the existence of such a scheme would cause the line of proof suggested above to fail .classically the pipelining inequality holds , and therefore the complication of having to maximize over encodings does not arise .having proved that is an upper bound on the channel capacity , let us now investigate some of the properties of this bound .first we examine the range over which can vary .note that since if is pure then for any encoding , and for all and , , since the channel output has dimensions .it follows that this parallels the classical result , which states that the channel capacity varies between and , where is the number of channel symbols .the upper bound on the classical capacity is attained if and only if the classical channel is noiseless . in the case when takes a constant value , for all channel inputs , , it is not difficult to verify that .this is consistent with the obvious fact that the capacity for quantum information of such a channel is zero .the `` completely decohering channel '' is defined by with a complete orthonormal set of one - dimensional projectors .this channel is classically noiseless , yet a straightforward application of ( [ eqtn : abstract subadditivity ] ) yields and therefore this channel has zero capacity for the transmission of entanglement . more generally , if , where , then by the same argument , and thus the channel capacity for such a channel is zero .as a special case of this result , it follows that the capacity of _ any _ classical channel as defined in section [ sect : classical ] to transmit entanglement is zero .provided the input and output dimensions of the channel are the same , it is not difficult to show that if and only if is unitary .it is also of interest to consider what happens when channels and are composed , forming a concatenated channel , . 
from the data processing inequalityit follows that it is clear by repeated application of the data - processing inequality that this result also holds if we compose more than two channels together , and even holds if we allow intermediate decoding and re - encoding stages .classical channel capacities also behave in this way : the capacity of a channel made by composing two ( or more ) channels together is no greater than the capacity of the first part of the channel alone .although ( [ eqtn : example 1 ] ) might seem to suggest otherwise , in fact for let us suppose that is the encoding which achieves so that the total operation is as our encoding for the channel , we may use and decode with hence achieving precisely the same total operation . inequalities analogous to ( [ eqtn : albert einstein ] ) and ( [ eqtn : ahristopher fuchs ] ) may also be stated for the actual channel capacity . clearly these statements are true as well .so far we have considered two allowed classes of encodings : encodings where a general unitary operation can be performed on a block of quantum systems , and encodings where a general trace - preserving quantum operation can be performed on a block of quantum systems .if large - scale quantum computation ever becomes feasible it may be realistic to consider encoding protocols of this sort .however , for present - day applications of quantum communication such as quantum cryptography and teleportation , it is realistic to consider much more restricted classes of encodings . in this sectionwe describe several such classes .we begin by considering the class involving local unitary operations only .we refer to this class as - .it consists of the set of operations which can be written in the form where are local unitary operations on systems through .another possibility is the class of encodings involving local operations only , i.e. operations of the form : in other words , the overall operation has a tensor product form .a more realistic class is - encoding by local operations with one way classical communication .the idea is that the encoder is allowed to do encoding by performing arbitrary quantum operations on individual members ( typically , a single qubit ) of the strings of quantum systems emitted by a source .this is not unrealistic with present day technology for manipulating single qubits .such operations could include arbitrary unitary rotations , and also generalized measurements .after the qubit is encoded , the results of any measurements done during the encoding may be used to assist in the encoding of later qubits .this is what we mean by one way communication - the results of the measurement can only be used to assist in the encoding of later qubits , not earlier qubits .another possible class is - - encoding by local operations with two - way classical communication .this may arise in a situation where there are many identical channels operating side by side in space .once again it is assumed that the encoder can perform arbitrary local operations , only this time two - way classical communication is allowed when performing the encoding . 
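the distinction between local and global encodings can be made concrete with a small sketch (our own; the particular unitaries are arbitrary choices): the kraus operators of a local encoding factor as tensor products across the systems in a block, whereas a global encoding need not factor in this way.

```python
# Minimal sketch: a "local" block operation is a tensor product of single-system
# operations, built here with np.kron; a global unitary such as a CNOT is not.
import numpy as np

def tensor_channel(kraus_a, kraus_b):
    """Kraus operators of the product operation acting independently on two systems."""
    return [np.kron(A, B) for A in kraus_a for B in kraus_b]

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # arbitrary local unitary
local_encoding = tensor_channel([H], [np.eye(2, dtype=complex)])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)  # a global two-qubit unitary
print(local_encoding[0].shape, CNOT.shape)  # both act on the four-dimensional block
```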
for any class of encodings arguments analogous to those used above for general and for unitary block coding , ensure that the expression where is an upper bound to the rate at which quantum information can be reliably transmitted using encodings in .thus , in addition to the bounds for general and unitary encodings , there are bounds and which provide upper bounds on the rate of quantum information transmission for these types of encodings .a priori it is not clear what the exact relationships are amongst these bounds , although various inequalities may easily be proved , furthermore , note that these bounds allow general decoding schemes .it is possible that much tighter bounds may result if we restrict the decoding schemes in the same way we have restricted the encoding schemes .an interesting and important question is whether there are closed - form characterizations of the sets of quantum operations corresponding to particular types of encoding schemes such as - and - .for example , in the cases of - and there are explicit forms ( [ eqtn : local unitary encodings],[eqtn : local encodings ] ) for the classes of encodings allowed .for - the operations take the form : a drawback to this expression is that it is not written in a closed form , making it difficult to perform optimizations over - .it would be extremely valuable to obtain a closed form for the set of operations in - .one possible approach to doing this is to limit the range of the indices in the previous expression .this is related to the number of rounds of classical communication which are involved in the operation .similar remarks to these also apply to the class - . indeed , it is not yet clear to us if there is an expression analogous to ( [ eqtn : 1-l explicit])for - encodings .one possibility is : however , although all - operations involving a finite number of rounds of communication can certainly be put in this form , we do not presently see why all operations expressible in this form should be realizable with local operations and two - way classical communication .the classes we have described in this subsection are certainly not the only realistic classes of encodings .many more classes may be considered , and in specific applications this may well be of great interest .what we have done is illustrated a general technique for obtaining _bounds _ on the channel capacity for different classes of encodings .a major difference between classical information theory and quantum information theory is the greater interest in the quantum case in studying different classes of encodings .classically it is , in principle , possible to perform an arbitrary encoding and decoding operation using a look - up table .however , quantum mechanically this is far from being the case , so there is correspondingly more interest in studying the channel capacities that may result from considering different classes of encodings and decodings .what then can be said about the status of the quantum noisy channel coding theorem in the light of comments made in the preceding sections ?while we have established upper bounds , we have not proved achievability of these bounds .how might one prove that these bounds are achievable ?lloyd has also proposed an expression involving a maximum of the coherent information as the channel capacity , and outlines a technique involving random coding for achieving rates up to this quantity .the criterion for reliable transmission used by lloyd appears to be the subspace fidelity criterion of eqtn .( [ eqtn : 
subspace fidelity ] ). as noted earlier, this criterion is at least as strong as the criterion based on entanglement fidelity which we have been using; that is, asymptotically good coding schemes with respect to subspace fidelity are also asymptotically good with respect to the entanglement fidelity. suppose one applies coding schemes to achieve rates up to ( [ eqtn : lloyd ] ), but with the basic system used in blocking taken to be blocks of the old systems. then it is clear that rates up to the corresponding blocked maximum may be achieved using such coding schemes, where the maximization is done over density operators for blocks of copies of the source. it follows that rates up to the limiting value may be achieved. this quantity is simply the bound ( [ eqtn : unitary bound ] ) which we found earlier for noisy channels with the class of encodings restricted to be unitary. as remarked in the last section, it is in general not possible to identify the quantity appearing in the previous equation with the quantity ( [ eqtn : lloyd ] ), because the coherent information is not, in general, subadditive, cf. ( [ eqtn : example 2 ] ). the coding schemes considered by lloyd appear to be restricted to be projections followed by unitaries. we call such encodings _restricted encodings_, since they do not cover the full class of encodings possible. for the purposes of proving upper bounds it is not sufficient to consider a restricted class of encodings, since it is possible that other coding schemes may do better, and therefore that the capacity is somewhat larger than ( [ eqtn : unitary bound 2 ] ). we suspect that this is not the case, but have been unable to provide a rigorous proof. a heuristic argument is provided in subsection [ subsect : heuristics ].

in the light of these remarks it is interesting that the coding scheme of shor and smolin provides an example where rates of transmission exceeding ( [ eqtn : lloyd ] ) are obtained. nevertheless, the general bound ( [ eqtn : general bound ] ) must still be obeyed by a coding scheme of their type. however, one can still make progress towards a proof that the expression ( [ eqtn : general bound ] ), which bounds the channel capacity, is the correct capacity. if we accept that it is possible to attain rates up to ( [ eqtn : unitary bound 2 ] ), then the following four-stage construction shows that ( [ eqtn : general bound ] ) is a correct expression for the capacity; i.e. that in addition to being an upper bound as shown in section [ sect : upper bounds ], it is also achievable.

(figure: the noisy quantum channel with an extra stage, a restricted _pre-encoding_)
for a fixed block size , , one finds an encoding , , for which the maximum in is achieved .one then regards the composition as a single noisy quantum channel , and applies the achievability result on restricted encodings to the joint channel to achieve an even longer block coding scheme with high entanglement fidelity .this gives a joint coding scheme which for sufficiently large blocks and can come arbitrarily close to achieving the channel capacity ( [ eqtn : general bound ] ) .an important open question is whether ( [ eqtn : general bound ] ) is equal to ( [ eqtn : unitary bound ] ) .it is clear that the former expression is at least as large as the latter ; we give a heuristic argument for equality in the next subsection , but rigorous results are needed .thus , we think it likely that the expression ( [ eqtn : unitary bound ] ) will turn out to be the maximum achievable rate of reliable transmission through a quantum channel . butthis is still not satisfactory as an expression for the capacity , because of the difficulty of evaluating the limit involved . at a minimum, we would like to know enough about the rate of convergence of to its limit to be able to accurately estimate the error in a numerical calculation of capacity , thus providing an effective procedure for calculating the capacity to any desired degree of accuracy .it would also be useful to know whether the coherent information is concave in the input density operator ; if this were so , it would greatly aid in establishing that one has reached a maximum , since in this case a local maximum is guaranteed to be a global maximum ( appendix [ appendix : convexity ] ) . for the purposes of obtaining a capacity theorem for general encodings and decodings ,a restriction on the class of encodings is clearly unacceptable .for example , given a source density operator whose eigenvalues are not all equal , we may not even be able to send it reliably through a noiseless channel whose capacity is just greater than the source entropy rate without doing non - unitary compression as described in .this compression , which is essentially projection onto the _ typical subspace _ of the source , is not a unitary operation , and thus we expect that nonunitary operations will be essential to showing achievability of the noisy channel capacity .we conjecture that once the projection onto the typical source subspace is accomplished , nonunitary operations are of no further use in achieving reliable transmission through a noisy channel .although we have not yet rigorously shown this , we give a heuristic argument below .if the conjecture is true then it can be used to show that expressions ( [ eqtn : unitary bound ] ) and ( [ eqtn : general bound ] ) are equal .our heuristic argument applies only to sources for which a _ typical subspace _ exists .this includes all i.i.d .sources , for which the output is of the form .let be the projector onto the typical subspace after uses of the source , and the projector onto the orthogonal subspace .given any positive it is true that for sufficiently large , defining the restriction of the source to the typical subspace , and applying the continuity lemma for entanglement fidelity , ( [ eqtn : continuity lemma ] ) , we see that for any trace - preserving operation . 
by choosing sufficiently large be made arbitrarily small , and thus we see that for the entanglement fidelity for the source to be high asymptotically , it is necessary and sufficient that the entanglement fidelity be high asymptotically for the restriction of the source to the typical subspace .we now come to the heuristic argument . in order that the entanglement fidelity for the total channel be high ,it is asymptotically necessary and sufficient that the composite operation have high entanglement fidelity when the source is restricted to the typical subspace , . hence , if an encoding is nonunitary on it must be `` close to reversible '' on and must be close to reversing it . in is shown that perfect reversibility of an operation on a subspace is equivalent to the statement that the operation , restricted to that subspace , may be represented by operators where and is the projector onto . that is, the operation randomly ( with probabilities ) chooses a unitary which moves the state into one of a mutually orthogonal set of subspaces .hence _ in its action on the source s typical subspace , _ is close to some perfectly reversible operation consisting of `` randomly picking a unitary into an orthogonal subspace . ''hence the entanglement fidelity of the total operation is close to that of in which the encoding is replaced with the linearity of the entanglement fidelity in the operation implies that for at least one of the unitaries in the random - unitaries representation of the perfectly reversible operation , the entanglement fidelity is at least as good if the unitary is substituted for .therefore , arbitrary encodings are close to unitary encodings of into a subspace of the channel s hilbert space .thus the only nonunitarity which it is necessary to consider is the restriction to the source s typical subspace .in this section we consider a generalized version of the quantum noisy channel coding problem .suppose that in addition to a noisy interaction with the environment there is also a classical observer who is able to perform a measurement .this measurement may be on the channel or the environment of the channel , or possibly on both .the result of the measurement is then sent to the decoder , who may use the result to assist in decoding .we assume that this transmission of classical information is done noiselessly , although it is also interesting to consider what happens when the classical transmission also involves noise .it can be shown that the state received by the decoder is again related to the state used as input to the channel by a quantum operation , where is the measurement result recorded by the classical observer , the basic situation is illustrated in figure [ fig : channel4 ] .the idea is that by giving the decoder access to classical information about the environment responsible for noise in the channel it may be possible to improve the capacity of that channel , by allowing the decoder to choose different decodings depending on the measurement result . 
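in symbols (our own notation, introduced here only for concreteness), the observed-channel setup may be summarized by associating with each measurement result $m$ a quantum operation $\mathcal{E}_m$, occurring with probability $p(m)=\mathrm{tr}\,\mathcal{E}_m(\rho)$, so that $\sum_m \mathrm{tr}\,\mathcal{E}_m(\rho)=1$; the decoder, upon learning $m$, applies a trace-preserving decoding $\mathcal{D}_m$, and the average received state is $\rho'=\sum_m \mathcal{D}_m(\mathcal{E}_m(\rho))$.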
(figure: the noisy quantum channel with a classical observer)

we now give a simple example which illustrates that this can be the case. suppose we have a two-level system in some state and an initially uncorrelated four-level environment in the maximally mixed state, so the total state of the joint system is a product state. we fix an orthonormal basis for the environment, in terms of which the quantum operation describing the channel can be given two particularly useful forms, and it is not difficult to show from the second form that the channel capacity for the unobserved channel is equal to zero. suppose now that an observer is introduced, who is allowed to perform a measurement on the environment. this measurement is a von neumann measurement in the basis just fixed, and yields a corresponding measurement result. the quantum operations corresponding to these four measurement outcomes are each unitary, up to a constant multiplying factor, so conditioned on knowing the measurement result the corresponding channel capacity is perfect; that is, quantum information can be sent perfectly for all measurement outcomes. this is an example where the capacity of the observed channel is strictly greater than for the unobserved channel.

this result is particularly clear in the context of teleportation. nielsen and caves showed that the problem of teleportation can be understood as the problem of a quantum noisy channel with an auxiliary classical channel. in the single qubit teleportation scheme of bennett _et al_ there are four quantum operations relating the state alice wishes to teleport to the state bob receives, corresponding to each of the four measurement results. in that scheme it happens that those four operations are the ones we have described above. furthermore, in the absence of the classical channel, that is, when alice does not send the result of her measurement to bob, the channel is described by the single unobserved operation above. clearly, in order that causality be preserved we expect that the capacity of this unobserved channel be zero.
on the other hand , in order that teleportation be able to occur we expect that the channel capacity , as was shown above .teleportation understood in this way as a noisy channel with a classical side channel offers a particularly elegant way of seeing that the transmission of quantum information may sometimes be greatly improved by making use of classical information .the remainder of this section is organized into three subsections .subsection [ subsect : upper bounds on observed channel ] proves bounds on the capacity of an observed channel .this requires nontrivial extensions of the techniques developed earlier for proving bounds on the capacity of an unobserved channel .subsection [ subsect : relationship to unobserved channel ] relates work done on the observed channel to the work done on the unobserved channel .subsection [ subsect : observed discussion ] discusses possible extensions to this work on observed channels .we now prove several results bounding the channel capacity of an observed channel , just as we did earlier for the unobserved channel .the following lemma generalizes the earlier entanglement fidelity lemma for quantum operations , which was the foundation of our earlier proofs of upper bounds on the channel capacity ._ lemma _ ( generalized entanglement fidelity lemma for operations ) suppose are a set of quantum operations such that is a trace - preserving quantum operation .suppose further that is a trace - preserving quantum operation for each .then where by the second step of the data processing inequality , ( [ eqtn : data processing ] ) , for each , and noting also that by the trace - preserving property of , , we obtain .\end{aligned}\ ] ] applying the generalized convexity theorem for coherent information ( [ eqtn : abstract subadditivity ] ) gives we obtain but is trace - preserving since is trace - preserving and is trace - preserving , and thus by ( [ eqtn : araki - lieb ] ) , finally , an application of the quantum fano inequality ( [ eqtn : quantum fano ] ) along with the observations that the entropy function appearing in that inequality is bounded above by one , and , gives as we set out to prove .if we define we may use ( [ eqtn : fidelity lemma for general operations ] ) to easily prove that is an upper bound on the rate of reliable transmission through an observed channel , in precisely the same way we earlier used ( [ eqtn : fidelity lemma ] ) to prove bounds for unobserved channels .we may derive the same bound in another fashion if we associate observed channels with tracepreserving unobserved channels in the following fashion suggested by examples in . to an observed channel we associate a single tracepreserving operation from to a larger hilbert space , where is a `` register '' hilbert space .each dimension of corresponds to a different measurement result , .the operation is specified by : where is some set of orthogonal states corresponding to the measurement results which may occur .this map is an `` all - quantum '' version of the observed channel . since our upper bounds to the capacity of an unobserved channel apply also to channels with output hilbert spaces of different dimensionality than the input space , they apply to this map as well .it is easily verified that the coherent information for the map acting on is the same as the average coherent information for the observed channel acting on , which appears in ( [ eqtn : fidelity lemma for general operations ] ) and in the bound ( [ eqtn : observed capacity ] ) . 
to show this , define where we are again working in the picture of operations. then is given by ( [ eqtn : equivmap ] ) , so that since the density matrices are mutually orthogonal .also , where by definition thus hence the coherent information is , \nonumber \\ & & \end{aligned}\ ] ] which can be rewritten as the average coherent information for so an application of the bound ( [ eqtn : general bound ] ) on the rate of transmission through the unobserved channel shows the expression on the right hand side of ( [ eqtn : observed capacity ] ) which bounds the capacity of the observed channel also bounds the capacity of .this result provides some evidence for the intuitively reasonable proposition that and are equivalent with respect to transmission of quantum information .bennett et .al derive capacities for three simple channels which may be viewed as taking the form ( [ eqtn : equivmap ] ) .quantum erasure channel _ takes the input state to a fixed state orthogonal to the input state with probability ; otherwise , it transmits the state undisturbed .an equivalent observed channel would with probability replace the input state with a standard pure state within the input subspace , and also provide classical information as to whether this replacement has occurred or not .the _ phase erasure channel _ randomizes the phase of a qubit with probability , and otherwise transmits the state undisturbed ; it also supplies classical information as to which of these alternatives occurred .the _ mixed erasure / phase - erasure channel _ may either erase or phase - erase , with exclusive probabilities and .bennett et .note that the capacity of the erasure channel is in fact the one - shot maximal coherent information .we have verified that the capacities they derive for the phase - erasure channel ( ) and the mixed erasure / phase - erasure channel are the same as the one - shot maximal average coherent information for the corresponding observed channels , lending some additional support to the view that the bounds we have derived here are in fact the capacities .suppose a quantum system passes through a channel , interacts with an environment , and then measurements are performed on the _ environment alone_. how is the capacity of this observed channel related to the capacity of the channel which results if _ no measurement _ had been performed on the environment ?physically , it is clear that the capacity when measurements are performed must be at least as great as when no measurements on the environment are performed , since the decoder can always ignore the result of the measurement . in this subsectionwe show that bounds we have derived on channel capacity have this same property : observation of the environment can never decrease the bounds we have obtained .suppose are the operations describing the different possible measurement outcomes .then the operation describing the same channel , but without any observation of the environment , is recall the expressions for the bound on the capacity of the unobserved channel , and the observed channel , but the generalized convexity theorem ( [ eqtn : abstract subadditivity ] ) for coherent information implies that and thus to see that this inequality may sometimes be strict , return to the example considered earlier in this section in the context of teleportation . 
in that caseit is not difficult to verify that what these results show is that our bounds on the channel capacity are never made any worse by observing the environment , but sometimes they can be made considerably better .this is a property that we certainly expect the quantum channel capacity to have , and we take as an encouraging sign that the bounds we have proved in this paper are in fact achievable , that is , the true capacities .all the questions asked about the bounds on channel capacity for an unobserved channel can be asked again for the observed channel : questions about achievability of bounds , the differences in power achievable by different classes of encodings and decodings , and so on .we do not address these problems here , beyond noting that they are important problems which need to be addressed by future research .many new twists on the problem of the quantum noisy channel arise when an observer of the environment is allowed .for example , one might consider the situation where the classical channel connecting the observer to the decoder is noisy .what then are the resources required to transmit coherent quantum information ? it may also be interesting to prove results relating the classical and quantum resources that are required to perform a certain task .for example , in teleportation it can be shown that one requires not only the quantum channel , but also two bits of classical information , in order to transmit quantum information with perfect reliability .in this paper we have shown that different information transmission problems may result in different channel capacities for the same noisy quantum channel .we have developed some general techniques for proving upper bounds on the amount of information that may be transmitted reliably through a noisy quantum channel .perhaps the most interesting thing about the quantum noisy channel problem is to discover what is new and essentially _ quantum _ about the problem .the following list summarizes what we believe are the essentially new features : 1 .the insight that there are many essentially different information transmission problems in quantum mechanics , all of them of interest depending on the application .these span a spectrum between two extremes : * the transmission of a discrete set of mutually orthogonal quantum states through the channel .such problems are problems of transmitting classical information through a noisy quantum channel . * the transmission of entire subspaces of quantum states through the channel , which necessarily keeps all other quantum resources , including entanglement , intact .this is likely to be of interest in applications such as quantum computation , cryptography and teleportation where superpositions of quantum states are crucial .such problems are problems of transmitting coherent quantum information through a noisy quantum channel . + both these cases and a variety of intermediate cases are important for specific applications . 
for each case, there is great interest in considering different classes of allowed encodings and decodings .for example , it may be that encoding and decoding can only be done using local operations and one - way classical communication .this may give rise to a different channel capacity than occurs if we allow non - local encoding and decoding .thus there are different noisy channel problems depending on what classes of encodings and decodings are allowed .the use of quantum entanglement to construct examples where the quantum analogue of the classical pipelining inequality for a markov process , fails to hold ( cf . eqtn .( [ eqtn : example 1 ] ) ) .the use of quantum entanglement to construct examples where the subadditivity property of mutual information , fails to hold ( cf . eqtn .( [ eqtn : example 2 ] ) ) .there are many more interesting open problems associated with the noisy channel problem than have been addressed here .the following is a sample of those problems which we believe to be particularly important : 1 .the development of an effective procedure for determining channel capacities .we believe that this is the most important problem remaining to be addressed . assuming our upper bound is , in fact , the channel capacity for general encodings , it still remains to find an effective procedure for evaluating this quantity .both maximizations can be done relatively easily , since they are of a continuous function over a compact set .however , we do not yet understand the convergence of the limit well enough to have an effective procedure for evaluating this quantity .once an effective procedure has been obtained for evaluating channel capacities , it still remains to develop _ good _ numerical algorithms for performing the evaluation .assuming that the evaluation involves some kind of maximization of the coherent information , it becomes important to know whether the coherent information is concave in combined with the convexity of the coherent information in the operation this would give a powerful tool for the development of numerical algorithms for the determination of channel capacity .3 . estimation of channel capacities for realistic channels .this work could certainly be done theoretically and perhaps also experimentally .recent work on _ quantum process tomography _ points the way toward experimental determination of the quantum channel capacity .a related problem is to analyze how stable the determination of channel capacities is with respect to experimental error .as suggested in subsection [ subsect : other encoding protocols ] it would be interesting to see what channel capacities are attainable for different classes of allowable encodings and / or decodings , for example , encodings where the encoder is only allowed to do local operations and one - way classical communication , or encodings where the encoder is allowed to do local operations and two - way classical communication . we have showed how to prove bounds on the channel capacity in these cases ; whether these bounds are attainable is unknown .5 . the development of rigorous general techniques for proving attainability of channel capacities , which may be applied to different classes of allowed encodings and decodings .6 . finding the capacity of a noisy quantum channel for classical information .a related problem arises in the context of _ superdense coding _, where one half of an epr pair can be used to send two bits of classical information . 
it would be interesting to know to what extent this performance is degraded if the pair of qubits shard between sender and receiver is not an epr pair , but rather the sharing is done using a noisy quantum channel , leading to a decrease in the number of classical bits that can be sent . given a noisy quantum channel , what is the maximum amount of classical information that can be sent in this way ?all work done thus far has been for discrete channels , that is , channels with finite dimensional state spaces .it is an important and non - trivial problem to extend these results to channels with infinite dimensional state spaces .8 . a more thorough study of noisy channels which have a classical side channel .can the classical information obtained by an observer be related to changes in the channel capacity ?what if the classical side channel is noisy ?many other fascinating problems , too many to enumerate here , suggest themselves in this context .there are many other ways the classical results on noisy channels have been extended - considering channels with _ feedback _, developing _ rate - distortion _ theory , understanding _ networks _ consisting of more than one channel , and so on .each of these could give rise to highly interesting work on noisy quantum channels .it is also to be expected that interesting new questions will arise as experimental efforts in the field of quantum information develop further . perhaps of chief interest to us is to develop a still clearer understanding of the essential differences between the quantum noisy channel and the classical noisy channel problem .we thank carlton m. caves , isaac l. chuang , richard cleve , david p. divincenzo , christopher a. fuchs , e. knill , and john preskill for many instructive and enjoyable discussions about quantum information .this work was supported in part by the office of naval research ( grant no .n00014 - 93 - 1 - 0116 ) .we thank the institute for theoretical physics for its hospitality and for the support of the national science foundation ( grant no .phy94 - 07194 ) .man acknowledges financial support from the australian - american educational foundation ( fulbright commission ) .this appendix contains a lemma that can be used to prove the existence of several limits that appear in this paper ._ lemma _ : suppose is a nonnegative sequence such that for some , and for all and . then exists and is finite . _proof _ define this always exists and is finite , since for some .fix and choose sufficiently large that suppose is any integer strictly greater than . then by ( [ eqtn : abstract superadditivity ] ) , using the fact that ( an immediate consequence of ( [ eqtn : abstract superadditivity ] ) ) with gives where is the integer immediately below . plugging the last inequality into ( [ eqtn : intermediate appendix ] ) gives but and , so this equation holds for all sufficiently large , and thus but was an arbitrary number greater than , so letting we see that it follows that exists , as claimed .various convexity and concavity properties are useful in calculating classical channel capacities , and the same is true in the quantum situation .this appendix is devoted to an explication of the basic properties of convexity and concavity related to the coherent information and the relation of these properties to expressions such as ( [ eqtn : general bound ] ) . 
a _ convex set _ , , is a subset of a vector space such that given any two points and any such that , then the _ convex combination _ , , is also an element of .geometrically , this means that given any two points in the set , the line joining them is also in the set .an _ extremal point _ of is a point which can not be formed from the convex combination of any other two points in the set .a _ convex function _ on is a real - valued function such that for any satisfying , a concave function satisfies the same condition but with the inequality reversed .this follows by supposing that and are distinct local maxima .if , say , then by concavity of . by choosing sufficiently small values of see that this violates the fact that is a local maximum .thus has the same value for all local maxima , from which it follows that all local maxima are also global maxima for the function .if the coherent information turns out to be concave in the input density operator , this property will be useful in evaluating capacity bounds such as ( [ eqtn : general bound ] ) .the proof is obvious .the reason for our interest in the proof is because for fixed and trace - preserving operations , the coherent information is a convex , continuous function of the operation .the set of trace - preserving quantum operations forms a compact , convex set , and thus by the convexity lemma , attains its maximum for a quantum operation which is extremal in the set of all trace - preserving quantum operations .this result provides a considerable saving in the class of quantum operations that must be optimized over in order to numerically calculate expressions of the form ( [ eqtn : general bound ] ) .unfortunately , this only takes us part of the way towards proving that the expressions ( [ eqtn : general bound ] ) and ( [ eqtn : unitary bound ] ) are identically equal , or , alternatively , it suggests a starting point for a search for counterexamples to the proposition that the two quantities are equal .if the extremal points of the set of quantum operations were the unitary operations we would be done .however that is not the case , as the above theorem shows .
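as a quick numerical illustration of the convexity just discussed (not a proof), the following sketch reuses the coherent_information helper from the earlier sketch: it mixes two randomly chosen unitary channels and checks that the coherent information of the mixture does not exceed the corresponding convex combination. the random-unitary construction is standard and all names are ours.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_unitary(d):
        z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        q, r = np.linalg.qr(z)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))   # phase fix for uniformity

    d = 2
    rho = np.eye(d) / d
    u1, u2 = random_unitary(d), random_unitary(d)
    lam = 0.3
    i1, _ = coherent_information(rho, [u1])
    i2, _ = coherent_information(rho, [u2])
    imix, _ = coherent_information(rho, [np.sqrt(lam) * u1, np.sqrt(1.0 - lam) * u2])
    print(imix, lam * i1 + (1.0 - lam) * i2)   # convexity: the first value should not exceed the second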
noisy quantum channels may be used in many information carrying applications . we show that different applications may result in different channel capacities . upper bounds on several of these capacities are proved . these bounds are based on the _ coherent information _ , which plays a role in quantum information theory analogous to that played by the mutual information in classical information theory . many new properties of the coherent information and entanglement fidelity are proved . two non - classical features of the coherent information are demonstrated : the failure of subadditivity , and the failure of the pipelining inequality . both properties arise as a consequence of quantum entanglement , and give quantum information new features not found in classical information theory . the problem of a noisy quantum channel with a classical observer measuring the environment is introduced , and bounds on the corresponding channel capacity proved . these bounds are always greater than for the unobserved channel . we conclude with a summary of open problems .
blackouts , traffic gridlocks , and floods are all malfunctions of infrastructures which drastically affect their performance . in many situations ,they occur abruptly and might propagate through the network as shock waves .these waves can either weaken by shedding their impact among branches or interfere constructively when two or more branches meet at the same node .the global consequences of these perturbations will strongly depend on the propagation dynamics and the capacity of each network element to bear abrupt changes .the identification of vulnerable spots is a challenging scientific and technological question and this is precisely what we address here .propagation of failures and cascading in complex networks have been subject of much scientific interest .examples are the use of the theory of self - organized criticality to study the propagation of failures in power grids and water transport on reservoir networks , the olami feder christensen model for earthquakes , traffic and financial networks .typically , the focus is on the cascading of failures resulting from an initial triggering event .however , it is also crucial to understand the dynamics preceeding these failures and identify the vulnerable spots where they can possibly be triggered . to describe the propagation of shock waves on directed networks we use the burgers equation .this equation describes flow when the flux depends quadratically on the load ( e.g. , voltage , traffic density , and water level ) .the range of applications of the burgers equation goes beyond fluid dynamics as it is applied in many propagation processes , such as traffic jams , glacier avalanches or chemical processes .here we show that , in the case of perturbations randomly distributed in space , the dynamics of the solutions of the dissipative burgers equation converges to a steady state in which the load distribution is strongly heterogeneous .surprisingly , we find that the load of some nodes can exceed the average load by one order of magnitude .one might expect that the location of such nodes mainly depends on the propagation dynamics .yet , we show that their fate is deeply imprinted in the network topology .we propose a new topological measure which allows to identify the most vulnerable nodes without solving the dynamics ._ dynamics_to describe the propagation of load ( e.g. , traffic density or water level ) on a directed network , we consider on each link the one - dimensional burgers equation which we solve using godunov s scheme .the details of the discretization and numerical solution are presented in the section methods . _perturbation_initially , the load on all directed edges and nodes is set to zero .perturbations are described as local changes in the load according to the following procedure .first , we choose a node at random and set to a fixed value ( ) during a time interval .the load on the corresponding edges and on the other nodes is determined by solving the burgers equation as described in the section methods .after , the constraint on the load of node is released and its load is determined by the dynamics . 
a new node is selected and perturbed and the procedure is iterated .in addition to the perturbations , at each iteration step , 0.1% of every node load is dissipated .this dissipation would correspond , for example , to the evaporation of water from a river network , cars leaving the streets , or a potential drop due to joule heating ._ directed networks_the dynamics is investigated on the european high - voltage power grid and two network models : the configuration model with power - law degree distribution and the watts - strogatz model , with small - world features . in the case of the model networks ,the length of the edges are random variables chosen uniformly from the interval [ 3:20 ] .initially , the power grid and the model networks are undirected .inspired by the fact that in power grids the direction of the current depends on the node voltages , we use the following method to define the direction of the link .to each node , a random value ( the node voltage ) is assigned uniformly from the interval [ 0:1 ] and the edge between two nodes is directed from the node with higher voltage to the one with lower voltage ( i.e. , ) .this method for generating directed edges automatically prevents the presence of loops .since fluctuations are always present in the network , the direction of the current can vary in time .our results are averaged over different voltage distributions as well ._ steady state_[subsec : steady_state]at each time step , we measure the temporal load correlation , defined as : where is the load averaged over all nodes at time .the brackets represent an average over the last ten consecutive time intervals of length , i.e. , with . starting with all loads equal to zero , we observe that decays in time towards a steady state in which the dissipation balances the total incoming load .when drops below 1% of the relative standard deviation of the loads within the network , we assume that the steady state is reached . in the steady state ,the load at each node has a well - defined average value with small fluctuations . the spatial distribution for a given realization of voltages in the european power grid is shown in fig .[ fig : power_grid_steady_state ] .the size of the dots represents the average load over a time window of measured in the steady state .most nodes accumulate negligible load ( ) , while surprisingly a small fraction of the nodes are overloaded ( , where is the magnitude of each perturbation ) . the observed load exemplarily shown in fig .[ fig : power_grid_steady_state ] corresponds to one realization of the voltage distribution and thus for one configuration of the direction of the edges . assuming small temporal changes in the network ( e.g. , fluctuations of voltage in the power grid or number of cars entering a road junction ) , the direction of the edges changes in time .thus , we also consider different realizations of the voltage distribution and , for each realization , we determine the steady state load distribution . figure [ fig : steady_state_density_distribution ] shows the relative load distribution averaged over realizations . 
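for reference, the directed substrate described above (random node voltages, each edge pointing from its higher-voltage end to its lower-voltage end, which automatically excludes directed cycles) can be generated in a few lines. the sketch below assumes networkx, uses placeholder sizes and rewiring probability, and the function name is ours.

    import random
    import networkx as nx

    def orient_by_voltage(undirected, rng):
        # draw a voltage for every node and direct each edge from the
        # higher-voltage endpoint to the lower-voltage one; since voltages
        # strictly decrease along every edge, no directed cycle can appear
        voltage = {v: rng.random() for v in undirected.nodes()}
        directed = nx.DiGraph()
        directed.add_nodes_from(undirected.nodes())
        for u, v in undirected.edges():
            if voltage[u] > voltage[v]:
                directed.add_edge(u, v)
            else:
                directed.add_edge(v, u)
        return directed, voltage

    # illustrative substrate: a ring with first- and second-neighbour links, partially rewired
    g = nx.watts_strogatz_graph(n=1000, k=4, p=0.1, seed=1)   # n and p are placeholders
    d_graph, volts = orient_by_voltage(g, random.Random(1))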
in order to compare the load distribution of different networks (power grid, watts-strogatz and scale-free networks), loads in each curve are rescaled by the magnitude of each perturbation. the strongly inhomogeneous behavior of the steady-state load seen in fig. [ fig : power_grid_steady_state ] is also visible in the load distribution. the distributions are bimodal, defining two different types of nodes: those with a negligible load compared to the perturbation and those with a larger load. the latter ones are typically overloaded in the steady state, suggesting that the incoming perturbations interfere constructively at them. (caption of fig. [ fig : steady_state_density_distribution ] : the power grid is shown with its full numbers of nodes and edges, while the watts-strogatz network has the same number of nodes and a comparable average degree; all loads are in units of the amplitude of the perturbation; each curve is an average over 5000 voltage realizations and, in the case of the model networks, also an average over 100 different networks; the magnitude of the standard deviation of the curves is comparable to the size of the symbols.) the plots for the two network models (watts-strogatz and scale-free) in fig. [ fig : steady_state_density_distribution ] are obtained for networks with the same number of nodes as the power grid. the average degrees are also kept close to that of the power grid, with the same number of edges in the scale-free network and in the watts-strogatz graph. in both cases, a bimodal distribution is also observed. the power-grid network is constructed from real data and its size corresponds to the real network size. thus, a finite-size study is not possible. yet, in the case of the model networks one can systematically study the effect of the network size on the load distribution. figure [ fig : size_effects ]a shows the load distribution for watts-strogatz networks of different network sizes. the majority of the nodes always has a negligible load, while the load of the remaining nodes follows a broad distribution, characterized by a decay in the relative frequency with increasing load and a cut-off at large loads. the bimodal distribution smoothens out for larger network sizes. for scale-free networks the qualitative picture is slightly different. as shown in fig. [ fig : size_effects ]b, for all network sizes one observes two power-law regimes with a crossover. nevertheless, note that for both network models there is always a significant fraction of nodes with a non-negligible load. the load value of the cutoff suggests that, at large system sizes, consecutive shock waves that enter the network are separated so that they attenuate their amplitude before being able to interfere. (caption of fig. [ fig : size_effects ] : load distributions for different system sizes; the insets show the respective data collapse, in which the load and the relative frequency are rescaled with the size of the network; loads are divided by the magnitude of the applied perturbation to ensure comparability; each data set is averaged over at least 100 different graphs and 100 voltage realizations; the breaks in the distributions are due to the logarithmic binning.) the specific nodes that exhibit these high load values typically change from realization to realization. however, after averaging over different voltage distributions, we still find some nodes which are consistently overloaded.
for each distribution of voltages , we classify as `` overloaded nodes '' the ones with a load at least ten times larger than the average .we define _ vulnerability _ of a node as the probability that it is an overloaded node .figure [ fig : power_grid_map]a shows the spatial distribution of vulnerability in the european power grid where the color and size of the nodes denotes their vulnerability .the vulnerability of green nodes is lower than 0.1% , while the one of the red nodes is larger than 5% .all the other nodes ( about 30% of the nodes , in dark olive color ) have a vulnerability between 0.1% and 5% . in comparison ,the highly vulnerable nodes are at least 50 times more frequently exposed to large incoming fluxes . in the case of random perturbations ,vulnerable nodes are more likely to fail or be congested .it is therefore crucial to identify these nodes to improve their capacity and to mitigate the risk of failure .figure [ fig : vulnerability_distribution ] shows the vulnerability distribution corresponding to the map in fig .[ fig : power_grid_map]a and to other network topologies . in the case of the watts strogatz and scale - free networks , a vulnerability distribution for networks of size are presented ._ identifying vulnerable nodes_[subsec : identifying_vulnerable_nodes]next we will introduce a simple topological property of the nodes to identify the vulnerable spots without solving the dynamics of the burgers equation . according to the burgers equation ,each node sheds the incoming shock wave among its out - edges and the total load agregated at the node at a time is the sum of incoming loads .hence , if we track the path of a given shock wave , it is fragmented at each node with multiple out - edges and will stop at any node that does not have any out - edges .assuming that the time average of the load at a node is proportional to the number of incoming shock waves , we determine the _ basin _ corresponding to that node .for this purpose , let us consider one realization of the edge directions ( see fig .[ fig : weighted_basin_size ] ) . for a given node ( the one in red in fig .[ fig : weighted_basin_size ] ) , the corresponding basin is defined as the smallest subgraph of the network containing this node ( as the sink ) , which is connected to the rest of the network by outgoing edges . following the procedure as illustrated in fig [ fig : weighted_basin_size ] , we go through the nodes following the opposite direction of the in - edges , starting from the red node ( fig . [fig : weighted_basin_size]a ) , and add nodes to the basin that has only out - edges at the end of the process ( marked by the blue region in fig .[ fig : weighted_basin_size]b ) .the resulting subgraph is the basin of the red node , and the contribution of the load of the nodes in the basin is simply the inverse of their out - degree ( fig .[ fig : weighted_basin_size]c ) , except for the initial ( red ) node , that contributes to the load with unity .this choice of the contribution of the nodes in the basin is based on the fact that eq .( [ eq : network_discretization ] ) conserves the flux at each node . 
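the basin just described can be computed directly from the topology, without solving the dynamics, by walking against the edge directions. a minimal sketch (networkx digraph assumed; the function name is ours, and d_graph refers to the directed graph from the previous sketch):

    from collections import deque

    def basin_size(directed, sink):
        # collect every node that can reach `sink` along directed paths
        seen = {sink}
        queue = deque([sink])
        while queue:
            node = queue.popleft()
            for upstream, _ in directed.in_edges(node):
                if upstream not in seen:
                    seen.add(upstream)
                    queue.append(upstream)
        # the sink itself contributes 1; every other basin member contributes the
        # inverse of its out-degree, the fraction of its load shed along each out-edge
        return 1.0 + sum(1.0 / directed.out_degree(v) for v in seen if v != sink)

    sizes = {v: basin_size(d_graph, v) for v in d_graph.nodes()}

averaging these sizes over many voltage realizations and ranking them against the measured vulnerability (for example with scipy.stats.spearmanr) gives the rank-rank comparison described below.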
for simplicity, we assume here that the amplitude of the shock waves leaving a node is on average the same for each out - edge .we calculated the size of the basins for each node , defined as the sum of contributions from all nodes inside the basin of the corresponding node .this basin size ( which is determined for one voltage distribution ) is then averaged over different voltage configurations .the resulting basin size distribution is depicted in fig .[ fig : power_grid_map]b for the power - grid network .one sees that for this network , the distribution of the node - basin size is very similar to the distribution of the vulnerability .a quantitative comparison of the two properties can be given by their correlation .thus , we plot the rank - rank scatter plots in fig . [fig : scatter_plot ] of the indices of the nodes after being sorted in ascending order by vulnerability and node - basin size .the plots are the average over 5000 voltage distributions and over 100 different topological realizations of model networks .the corresponding product - moment correlations of the ranks ] where and are the mean and and are the corresponding standard deviation of the two quantities and . ]are given above the plots , showing strong correspondence between the ranked vulnerability and node - basin size for the power grid and scale - free networks .the crucial nodes are the ones with large vulnerability since they are more exposed to large loads .the basin size shows a strong correlation with the vulnerability for these large values ( top right corner of the scatter plots ) , meaning that it is a good estimator of vulnerability . watts strogatz network exhibits much less correlation because the degree distribution is extremely narrow , that is , deviations from the average degree are negligible .thus , for each realization , the differences in the loads from node to node are very small . this conclusion is also supported by fig .[ fig : vulnerability_distribution ] showing a narrow vulnerability distribution for watts strogatz networks .we study the propagation of shock waves on directed networks using the burgers equation .under sequentially applied perturbations and constant dissipation , the dynamics approaches a steady state . 
in this steady state ,most of the nodes have negligible average load and a significant fraction of the total load is localized on a few nodes .we found that some nodes are more likely to accumulate load even after averaging over many edge direction configurations .these nodes ( the vulnerable nodes ) are more likely to fail , when there is a finite capacity of the load they can bear .unexpectedly we find for the european power grid a broad pronounced bimodal distribution for the loads , while for scale - free network the distribution resembles more a power law .the steady state and thus the probability distribution of vulnerability among the network is determined by solving numerically the partial differential equations of eq .( [ eq:1d_burgers_equation ] ) on each edge .the propagation velocity of the shock waves depends on their amplitude , which can vary rapidly throughout the network .we propose a simpler alternative based on the node - basin size to estimate the vulnerability of the nodes and identify the most vulnerable ones .simulations on a real network ( european high - voltage power grid ) and on scale - free networks show that the node - basin size can predict very accurately the location of vulnerable nodes while it performs worse for the watt - strogatz network due to its narrow degree distribution .our results suggest that it is possible to establish a remarkable connection between dynamics and network structure .although for many networks the node - basin size seems to be an accurate tool in predicting the distributions and detecting vulnerability , it is only the first step towards a complete description of the steady state .more might be understood by studying the relation between the most vulnerable nodes : under what circumstances are they separated or forming connected subgraphs ? is any local property of the network responsible for a node being highly vulnerable ?this information would provide the tools to mitigate the risk of systemic failure .further investigation may involve the removal of nodes that reach their capacity . in this case, the study of the time evolution of the network structure or optimal strategies of dynamical node / edge addition or deletion can be of relevance .dynamics_in this section , we describe the generalization of eq .( [ eq:1d_burgers_equation ] ) on a directed network .the numerical solution of the one - dimensional burgers equation can be discretized using godunov s scheme , \label{eq : numerical_1d_burgers}\ ] ] where is the load at the mesh point at time , and are the spatial and temporal discretizations , and is the flux .the value of is given as follows : if then otherwise , to solve this equation on a network , one needs to fix the direction of each link in order to have a precise definition of the in- and out - flux of a mesh point .for practical purposes , this is a realistic approach as water always flows downhill and the current follows a decreasing gradient in electric potential . our discretization model for the edges and nodesis illustrated in fig .[ fig : discretization ] . the edges of a network are one dimensional and thereby eq . 
( [ eq : numerical_1d_burgers ] ) holds. the number of mesh points in each edge is proportional to its length and the direction is defined by the direction of the edge. furthermore, a load value is assigned to each node. nodes interact with the nearest mesh points of their incident edges according to the following equation, in which the two sets denote the in- and out-edges of the node and the corresponding quantities are the loads at the last (first) mesh point of the respective in-edge (out-edge). the resulting dynamics conserves the total mass and, at each node, the total incoming flux is equal to the outgoing one. (caption of fig. [ fig : discretization ] : (a) each edge carries a one-dimensional chain of mesh points between its source and target nodes, with the load values defined on the edge mesh points; (b) the nodes interact only with the nearest mesh points of their edges, with the flux given according to eq. ( [ eq : numerical_1d_burgers ] ).) also, we should add an important remark on the various constraints of the investigated model. first, in the numerical solution of a nonlinear pde on a network, the degree of a node corresponds to the local dimension of the space in which one solves the equations. as the size of the scale-free network increases, the frequency of nodes with very large degrees also grows. considering numerical stability, the appearance of larger degrees sets an upper limit on the magnitude of the applied perturbations. however, if the perturbations are small (which is required by the numerical treatment), shock waves tend to vanish while travelling on the edges and are not able to interfere constructively. therefore, in the finite-size study, we considered only networks below a limiting size. _networks_ we consider three network models: the european power grid, the watts-strogatz small-world network and scale-free network models. the watts-strogatz network is constructed by considering first a one-dimensional chain with first- and second-neighborhood connections and periodic boundary conditions, and then rewiring each edge with a fixed probability (with undirected edges, this corresponds to a fixed undirected average degree). after the voltages are set and edge directions are introduced, the resulting network has a fixed directed average degree. the scale-free network is constructed by the configuration model: first we assign the degrees for each node according to a power law with a prescribed exponent, and then connect randomly chosen nodes. finally, further rewiring of the edges is carried out in order to eliminate degree correlations. note that the number of edges in the watts-strogatz network is different from that in the power grid and the scale-free network. the authors would like to thank the swiss national science foundation under contract 200021 126853 and the eth zürich risk center for financial support. this work was also supported by the european research council (erc) advanced grant 319968-flowccs and the eu erc fp7 collmot grant no. 227878.
power grids , road maps , and river streams are examples of infrastructural networks which are highly vulnerable to external perturbations . an abrupt local change of load ( voltage , traffic density , or water level ) might propagate in a cascading way and affect a significant fraction of the network . almost discontinuous perturbations can be modeled by shock waves which can eventually interfere constructively and endanger the normal functionality of the infrastructure . we study their dynamics by solving the burgers equation under random perturbations on several real and artificial directed graphs . even for graphs with a narrow distribution of node properties ( e.g. , degree or betweenness ) , a steady state is reached exhibiting a heterogeneous load distribution , having a difference of one order of magnitude between the highest and average loads . unexpectedly we find for the european power grid and for finite watts - strogatz networks a broad pronounced bimodal distribution for the loads . to identify the most vulnerable nodes , we introduce the concept of node - basin size , a purely topological property which we show to be strongly correlated to the average load of a node .
pattern formation in living systems has attracted much attention since the pioneering work of darcy thompson . in recent yearsspecial attention has been given to patterns emerging from cell colony growth in hostile environments .these systems tend to exhibit complex growth patterns when the growth is limited by the diffusion of a nutrient that is necessary for the growth of the cells .the morphologies obtained from these living systems resemble that of many non - living systems like electrodeposition , crystal growth and viscous fingers .all of these non - living systems obey the same underlying growth principle , which is that of laplacian growth , in which the interface between the two phases is advanced at a rate proportional to the gradient of a potential field . in the case of electrodepositionit is the electric field around the substrate , in crystal growth it is the temperature field and in viscous fingering the pressure in the liquid .this similarity between biological and non - living diffusion limited patterns has led to the hypothesis that the biological patterns could be explained with the same basic principles .perhaps the most studied example of cell colony growth is the growth of bacterial colonies subject to low nutrient levels .bacteria are usually grown in petri dishes at high nutrient concentration .these conditions give rise to colonies with simple compact morphologies , but when the growth occurs in more hostile low nutrient concentrations the morphologies of the colonies take on very complex shapes . this phenomena was first reported by matsushita et al . and since then several models have been suggested to explain the observed mophologies .the main modelling approach that has been used is to consider the growth via a system of reaction - diffusion equations .these models are able to reproduce the observed patterns , ranging from eden - like and dense branched morphologies to dla - like patterns .another approach by ben - jacob et al . is to model the bacteria as clusters of discrete walkers which obey dynamical rules .this model also agrees well with experimental data and is more biologically realistic compared to the reaction - diffusion approach .avascular tumours also grow under similar nutrient limited conditions as bacteria cultured in petri dishes . in the early stages of cancer developmentthe tumour has yet to acquire its own vasculature and the cancer cells therefore have to rely on diffusion as the only means of nutrient transport .when the tumour reaches a critical size the diffusion of nutrients is not enough to supply the inner parts of the tumour with oxygen , this leads to cell death or necrosis in the core of the tumour . 
surrounding the necrotic core is a rim of quiescent cells and on the outer boundary there is a thin rim of proliferating cells .the mitotic activity therefore only takes place in a small fraction of the tumour , while the majority of the tumour consists of cells that are either quiescent or dead .although the growth of a tumour is a much more complex process compared to the growth of bacteria in petri dishes there is still evidence from both experiments and mathematical models that tumours exhibit fingering and fractal morphologies driven by diffusion limited growth .another biological system that displays complex patterns under diffusion limited growth are fungal colonies .complex patterns with fractal morphologies have been observed for both multi - cellular filamentous growth and for yeast - like unicellular growth .these patterns arise in low nutrient conditions or when there is a build up of metabolites which inhibit the growth of the colony and have successfully been modelled using both continuous and discrete techniques .bacterial colonies exhibit branches which have a width of approximately 0.5 mm , which is of the order of 100 cells .this is very different from viscous fingers for example , where the disparity of length scales between the molecules and pattern is much larger .we believe that in order to fully capture the dynamics of such systems , where the characteristic length scale of the pattern is similar to that of the cells which constitute the pattern , it is necessary to model them at the level of single cells . in this paperwe therefore present a simple hybrid cellular automaton model of nutrient dependent cell colony growth where each automaton element represents a single cell .the aim of this model is not to represent any specific biological system , but rather to show that complex growth patterns can emerge from a very simple model with minimal assumptions about the cell behaviour .the simplicity of the model presented in this paper ensures both generality of the results discussed as well as allowing us to carry out a stability analysis .this analysis we hope will shed light on the growth instabilities observed in the above mentioned systems .the domain of the colony is restricted to a 2-dimensional substrate and the growth is assumed to be limited by some essential nutrient which is required for cell division .the substrate on which the cells grow is represented by a cellular automaton with lattice constant .each automaton element can be in three different states : ( i ) empty , ( ii ) hold an active cell or ( iii ) hold an inactive cell and each element is identified by a coordinate .the cellular automaton is coupled with a continuous field that describes the nutrient concentration . in the case of bacteriathe nutrient represents peptone , for tumours oxygen and for fungi some sort of carbon source like glucose .the transition from an active cell to an inactive occurs if the nutrient concentration falls below some critical threshold .this inactive state corresponds to sporulation or cell quiescence and is assumed to be irreversible .an active cell divides when it has reached maturation age , it then places a daughter cell at random in an empty neighbouring grid point ( using a von neumann neighbourhood ) .if none of the neighbouring grid points are empty the cell division fails , but the cell remains in the active state .after cell division has occurred the age of the parent cell is reset , which means that is has to reach maturation age again to divide . 
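a minimal sketch of this division step (python; the lattice encoding, the bookkeeping of ages and the function names are ours, and draw_age stands for whatever rule supplies a daughter cell's maturation age, as specified next):

    import random

    EMPTY, ACTIVE, INACTIVE = 0, 1, 2
    NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # von neumann neighbourhood

    def attempt_division(grid, age, maturation_age, draw_age, cell, rng):
        # grid maps (i, j) -> state; called for an active cell only
        if age[cell] < maturation_age[cell]:
            return False                                     # not yet mature
        i, j = cell
        empty = [(i + di, j + dj) for di, dj in NEIGHBOURS
                 if grid.get((i + di, j + dj)) == EMPTY]
        if not empty:
            return False                                     # division fails, cell stays active
        daughter = rng.choice(empty)
        grid[daughter] = ACTIVE
        age[daughter] = 0
        maturation_age[daughter] = draw_age()                # drawn as described in the text below
        age[cell] = 0                                        # the parent must mature again
        return True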
in order to account for variation in maturation age between different cells, the maturation age of each cell is chosen randomly from a normal distribution whose mean is the average maturation age and whose variance is fixed. for simplicity we will consider non-motile cells (which for bacteria corresponds to high agar concentrations), which implies that the growth of the colony is driven by cell division. active cells are assumed to consume nutrients at some fixed rate, while inactive cells do not consume any nutrients. the nutrient is assumed to diffuse in the substrate with a constant diffusion coefficient. the nutrient concentration field therefore obeys a diffusion equation with a consumption term that is switched on if the automaton element holds an active cell and switched off if it holds an inactive cell or is empty. the boundary conditions satisfied by the nutrient field are dirichlet with a constant value. this represents a continuous and fixed supply of nutrient from the boundary of the system. in order to simplify the system we introduce new non-dimensional variables; using these new variables the equation describing the nutrient concentration becomes (omitting the primes for notational convenience) a diffusion equation with a single non-dimensional consumption rate. this equation is discretised using standard five-point finite central-difference formulas and solved on a grid with the same spatial step size as the cellular automaton. each time step of the simulation the nutrient concentration is solved using the discretised equation and all the active cells on the grid are updated in a random order. using this simple model of cell colony growth we have investigated how the nutrient consumption rate of the cells affects the growth dynamics of the colony. note that varying the non-dimensional consumption rate is equivalent to either varying the dimensional consumption rate or the boundary concentration, see eq. ( [ eq : nondim ] ). all simulations were started with an initial circular colony (with radius 10 cells) of active cells at the centre of the grid and an initial homogeneous nutrient concentration. figure [ fig : col ] shows the colony after 300 cell generations for three consumption rates. (caption of fig. [ fig : col ] : cell colony plots after 300 cell generations for three nutrient consumption rates, shown in panels (a), (b) and (c); the other parameters and the grid size were fixed; empty ca elements are coloured white, inactive cells grey and active cells black; this shows that the colony morphology depends on the nutrient consumption rate, where a high consumption rate gives rise to a fractal colony morphology; the insets in (b) and (c) show log-log plots of the density-density correlation function, which decays as a power law at small length scales in the fractal growth regime.) from this figure it is obvious that the consumption rate of the cells affects the morphology of the colony. for the lowest consumption rate the colony grows with a compact eden-like morphology; the colony consists mostly of inactive cells with an outer rim of active cells at the boundary. for the intermediate consumption rate the morphology is no longer compact but exhibits a pattern similar to the dense branching morphology (dbm) observed in viscous fingering. again the colony consists mostly of inactive cells and the few active cells reside on the tips of the branches. for the highest consumption rate the branched morphology is even more evident and it exhibits thinner branches.
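one update step of the model described above can be sketched in code as follows. the sketch assumes the non-dimensional form dc/dt = laplacian(c) - k*a(x,y), with a = 1 at active sites and zero elsewhere, a fixed dirichlet value on the boundary, and an explicit five-point update (stable for dt <= dx^2/4); it reuses attempt_division and the state constants from the previous sketch, and all parameter names are ours.

    import numpy as np

    def nutrient_step(c, active, k, dt, dx=1.0, boundary_value=1.0):
        # one explicit step of dc/dt = laplacian(c) - k * active  (assumed non-dimensional form)
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx ** 2
        c = c + dt * (lap - k * active)
        c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = boundary_value   # dirichlet boundary
        return np.clip(c, 0.0, None)                               # concentration cannot go negative

    def colony_step(grid, age, maturation_age, c, k, dt, threshold, draw_age, rng):
        active = np.zeros_like(c)
        for (i, j), state in grid.items():
            if state == ACTIVE:
                active[i, j] = 1.0
        c[:] = nutrient_step(c, active, k, dt)
        cells = [p for p, s in grid.items() if s == ACTIVE]
        rng.shuffle(cells)                                         # active cells updated in random order
        for cell in cells:
            if c[cell] < threshold:
                grid[cell] = INACTIVE                              # irreversible quiescence
                continue
            age[cell] += dt
            attempt_division(grid, age, maturation_age, draw_age, cell, rng)
        return c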
in order to characterise the morphologiesfurther we measured the fractal dimension of the colonies by measuring how the number of cells ( active and inactive ) depends on the radius of the pattern . for a compact morphologywe expect that , which is what we find for , but for the two other morphologies we find that , where for and for .for both and the colony thus grows with a fractal morphology .this was also confirmed by measuring the density - density correlation function for the colonies ( see inset of fig .[ fig : col].b and c ) . at smalllength scales decays as , where the fractal dimension of the colony is given by .for we find and for we have , which gives fractal dimensions in close agreement with the previous method .the eden - like growth pattern observed for is to be expected , as all cells on average divide at uniform speed , but what is interesting is that as the nutrient consumption rate is increased it leads to a branched morphology .the intuitive explanation of why this type of growth occurs is because the high nutrient consumption ca nt sustain the growth of a smooth colony boundary . if a cell on the boundary divides and places the daughter cell outside the existing colony boundary it reduces the chances of neighbouring cells to divide , as the daughter cell `` steals '' nutrient at the expense of the cells that are closer to the centre of the colony , effectively creating a screen from access to the nutrients .it is this screening effect that amplifies perturbations to the colony interface and leads to the branched morphology .this implies that the branched colony morphology is a result of nutrient limited growth , which is in agreement with the previously discussed experiments and models of colonies of bacteria , tumour cells and fungi .the dynamics of this model are essentially that of a diffusion limited growth process , in which the diffusing nutrient is transformed by the cells into biomass in the form of daughter cells .it is therefore not surprising that it exhibits morphologies similar to non - living diffusion limited growth phenomena like viscous fingers , electrodeposition and crystal growth . in the next sectionwe will quantify the growth instabilities observed in the system by performing a linear stability analysis on the model .in order to analyse the stability of the discrete model we have to construct an analogous model that captures the essential dynamics but is amenable to mathematical analysis . the analogous model will be constructed by considering the colony boundary as a sharp interface that moves in two dimensions with a velocity determined by the maturation age and the size of the individual cells , such that .the nutrient consumption of the cells is taken to be in the active part of the colony , where , zero in the inactive part , where , and zero outside the colony .if we consider the growth of a planar front , growing in the -direction and stretching infinitely in the -direction , the equations describing the nutrient concentration are given by , where is the heaviside step - function and is the position of the interface along the -axis . in order to make the analysis simplerwe make a change of coordinates to a moving frame that travels along with the interface , i.e. 
we define a new coordinate , where corresponds to , the position of the interface .this change of coordinates plus the fact that there is no dependence on reduces ( [ eq : init1 ] -[eq : init2 ] ) to a system of three ode s given by , where ( [ eq : ode1 ] ) describes the nutrient concentration outside the colony ( [ eq : ode2 ] ) in the active region of the colony and ( [ eq : ode3 ] ) in the inactive region , and where the width of the active region is determined from the solution of the ode s .the boundary condition for the nutrient concentration at is that it should reach the boundary value .moreover we want the solution to be smooth across the interface so we require that the solutions to ( [ eq : ode1 ] ) and ( [ eq : ode2 ] ) have the same value as do their derivatives at .we also require that the solutions to ( [ eq : ode2 ] ) and ( [ eq : ode3 ] ) take the value at and that the derivative is zero at that point .if we let be the external solution , the solution in the active region and in the inactive region we formally require that , a solution to ( [ eq : ode1 ] - [ eq : ode3 ] ) with boundary conditions ( [ eq : bc ] ) is given by where is the thickness of the boundary layer .an example of the solution with appropriate parameter values can be seen in fig .[ fig : conc ] , where the circles represents the nutrient profile measured radially in a simulation with corresponding parameter values .the agreement between the two nutrient profiles shows that the planar front approach approximates the nutrient profile very well .the nutrient profile plotted in the moving -frame for , and in the inactive part the active part and outside the colony .the circles represent the radial nutrient profile from a simulation and the solid line is the analytic nutrient profile ( [ eq : sol ] ) ., width=302 ] we will shortly analyse the stability of this simple model , but before doing so we will discuss the growth dynamics of the cells in more detail . in the hybrid modelthe cells divide at a uniform speed , only varied by the stochastic difference in maturation age .although this is the case the model still gives rise to interesting growth patterns .the reason behind this is that the cells in the model become inactive if the nutrient concentration falls below the threshold .if a cell on the boundary becomes inactive before cell division occurs the interface velocity becomes zero at that point , and the inactive cell may become the starting point of the development of a fjord .this scenario is only possible if the nutrient consumption rate is sufficiently high compared to the nutrient concentration at the boundary .if this is the case the cells on the boundary have to rely on the flux of nutrient in order to survive long enough to go through cell division .our interpretation of this is that in the limit of high consumption rates the velocity of the interface becomes proportional to the flux of nutrient , a mechanism already suggested by matsushita & fujikawa .mathematically this means that the local interface velocity is given by , where is the normal of the interface. this observation will be the basis for our stability analysis , which means that our treatment of the system will not be rigourously related to our model , but rather aimed more at understanding the dynamics of the model from a qualitative point of view . 
in the above solution ([eq:sol]) of ([eq:ode1]-[eq:ode3]) we assumed that the interface at was flat. we now introduce a small oscillating perturbation of amplitude to the interface, giving , where . this changes the nutrient field in the vicinity of the interface, and we need to find this perturbed field to determine the stability. in the following analysis we will assume that the interface velocity , which means that there is a separation in the time-scales between the movement of the interface and the dynamics of the nutrient field. this allows for a number of simplifications: firstly, it implies that the nutrient field is in a quasi-stationary state, which means that the nutrient concentration approximately satisfies outside the colony, and it implies that we can approximate the nutrient profile by a linear function in the vicinity of the interface. this is generally the case for the types of biological system discussed here, where the nutrient diffuses on a time-scale much faster than the growth of the colony. for example, the reproduction time of a bacterial cell is of the order of hours, while the diffusion constant of nutrient in agar is of the order 10 /s. this means that the diffusion time across one bacterium is s (assuming that a bacterium is approx. 10 μm), which is considerably smaller than the reproduction time. cancer cells are approximately 25 μm in diameter and the diffusion constant of oxygen in tissue is /s, giving a diffusion time of s across one cell, which is several orders of magnitude smaller than the reproduction time of a cancer cell, which is of the order of 10-20 hours.

secondly, the quasi-stationary assumption allows us to omit any time dependence in the solutions for the perturbed field. it also implies that the iso-concentration curve defined by will be stationary in the moving frame. further, we will assume that this curve is given by displacing the interface by in the -direction, i.e. (cf. fig. [fig:layer]). this is of course only valid when is small and when the wave number of the oscillation is small. the values of which give rise to branching patterns are of the order of one cell size, and the interesting range of wave numbers will be small, as we are not interested in perturbations of wave length smaller than a cell size ( ). this means that this assumption will be valid within the dynamically interesting range. the equation of the perturbed nutrient field can now be written as , where the linear part is given by and the constant is determined from the boundary condition . this field satisfies and the boundary condition (to first order in ) and is therefore an approximate solution for the perturbed interface.

[fig:layer caption: this figure shows the structure of the interface. it is assumed that the curve is given by displacing the interface by in the -direction.]

as the nutrient field now depends on , the growth of the interface is, as argued above, proportional to , where . but as the interface velocity in the -direction is negligible, the gradient-dependent growth velocity can be approximated by , where is the constant of proportionality. the velocity can also be written as . taking the derivative in the -direction and equating the two expressions for the velocity gives (keeping only terms of first order in ). the growth rate of the perturbation is therefore given by . from this we can see that the thickness of the boundary layer affects the stability of the interface. the wave number which has the highest growth rate is , and when is large (
is small) only modes with a small (long wavelength) have a significant growth rate, but for smaller (larger ) the maximum is shifted to larger (smaller wavelengths) and the growth rates of these wavelengths increase (cf. fig. [fig:growth]). qualitatively, the dispersion relation ([eq:disp]) can be explained in the following way: a perturbation of the colony interface gives rise to an identical perturbation in the iso-concentration curve. as the perturbed field is quasi-stationary, the perturbations decay exponentially in the direction of the interface ([eq:perc]). the larger the distance is between the curve and the interface, the more the perturbations in the nutrient field decay. since the interface velocity is proportional to the gradient of the nutrient field, this implies that the thicker the boundary layer is, the more uniform the interface velocity is, and consequently the interface is less sensitive to perturbations.

[fig:growth caption: this plot shows the dispersion relation ([eq:disp]) for and , . it can be seen that both the fastest growing mode and its growth rate depend on the consumption rate.]

in the case the dispersion relation reduces to , which is the dispersion relation for laplacian growth without ultra-violet regularisation. in this type of growth the interface velocity is proportional to a potential field which is zero on the interface. in the above derivation, the field which governs the growth of the interface instead takes on a zero value at a distance from the interface. with the dispersion relation of laplacian growth in mind, we can conclude that by introducing a boundary layer the interface is stabilised for high wave numbers, but that this stabilising effect decreases as the thickness of the boundary layer is reduced. as mentioned before, the thickness of the boundary layer , which means that the stability of the colony growth depends directly on the consumption rate of the individual cells. a low nutrient consumption results in a wide boundary layer that stabilises the growth, while a high consumption gives rise to a thin boundary layer which leads to unstable branched growth.

the reason why all wave numbers have a positive growth rate (for all ) is that the dynamics do not contain any stabilising mechanism. in the mullins-sekerka instability, which also describes diffusion-limited growth, the growth is stabilised by surface tension, which inhibits the growth of perturbations with a large wave number. a similar type of effect can be observed in a reaction-diffusion model describing bacterial growth. in the reaction-diffusion model a protrusion into the nutrient side of the interface results in enhanced local growth, but the bacterial diffusion flow is reduced at the protrusion. it is the relative strength between these two effects that determines the stability of the growth. this type of stabilisation does not occur in our system because the cells are immobile and the growth does not depend on the local curvature of the interface. consequently there are no perturbations that have a negative growth rate. another system which exhibits a mullins-sekerka-like instability is the fisher equation with a cut-off in the reaction term for low concentrations of bacteria. this is motivated by the fact that bacteria are discrete entities, which means that at some small concentration the continuum formulation breaks down.
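all of the qualitative features listed above are captured by a dispersion relation of the schematic form (illustrative symbols: q for the wave number of the perturbation, \delta for the thickness of the boundary layer; prefactors are omitted since the exact expression is not reproduced here)

\[
\omega(q) \;\propto\; q\, e^{-q\delta},
\]

which is positive for every q because no surface-tension-like term enters, has its fastest-growing mode at q = 1/\delta, shifts that maximum to larger q with increasing growth rate as \delta shrinks, and reduces to the unregularised laplacian-growth relation \omega \propto q in the limit \delta \to 0.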
because our model considers single cells rather than concentrations, such a cut-off effect is already incorporated in it. on the other hand, we also have a cut-off in the growth due to low nutrient concentrations, i.e. no cells divide in regions where . although this cut-off is due to cells becoming inactive rather than to finite particle numbers, it might have a similar effect on the stability of the continuous model, and this is certainly a question worth investigating.

the above derivation of the dispersion relation ([eq:disp]) contains a number of simplifications and assumptions, and it is therefore important to verify the analytic result by comparing it to simulation results from the discrete model. this was done by measuring the average branch width in the colony and how it depends on the consumption rate. the consumption rate affects the branching of the colony as it determines the linear stability of the interface. when a branch grows it is constantly subject to perturbations, and when it reaches a critical width it becomes linearly unstable and splits, similar to what occurs in the splitting of viscous fingers. as we do not have any detailed information about the dynamics of the tip splitting, we will consider an idealised version of the process. we will assume that the branches grow to the critical width at which they split and that each splitting gives rise to two branches of equal width. if we assume that no branches are annihilated and that they grow at a constant speed, then an estimate of the average branch width in the colony is . this is of course a highly idealised picture of the branching process, but it contains the essential dynamics, as it is clear from figure [fig:col] that the branch width decreases with increasing .

the results from the simulations were analysed in the following way: first the colony was allowed to grow long enough for the morphology to be properly established (approx. 400 time steps); the cell density was then measured on a circle of radius , where is the distance to the most distant cell in the colony (cf. fig. [fig:tumour]). the resulting cell density was stored in a vector , where if the automaton element at distance and angle from the centre holds a cell (active or inactive) and if it is empty, and from this vector the average branch width was calculated. in order to make sure that the measurements were not biased by the choice of radius, we also measured how the average branch width depended on the radius. the results show that the average branch width depends on the radius for small , but that this bias is negligible for the values of we consider (data not shown).

[fig:tumour caption: the circle around which the average branch width was measured.]

the average branch width was calculated for several values in the range of (averaged over 20 simulations for each value of ) where branching occurs, and the results can be seen in fig. [fig:width], where it is plotted together with the analytic result ([eq:anwidth]).
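a minimal sketch (not the authors' code) of the branch-width measurement just described: the occupancy of the measurement circle is stored as a binary vector sampled at equally spaced angles, and each contiguous run of occupied samples is counted as one branch whose width is taken to be its arc length. the variable names and the run-length definition of a branch are assumptions made for illustration.

```python
import numpy as np

def average_branch_width(rho, r_c):
    """rho: 0/1 occupancy sampled at n equally spaced angles on a circle of radius r_c."""
    rho = np.asarray(rho, dtype=int)
    n = rho.size
    if rho.all():                       # no gaps at all: a single closed ring
        return 2.0 * np.pi * r_c
    if not rho.any():                   # the circle lies entirely outside the colony
        return 0.0
    # rotate the vector so that it starts inside a gap; runs then never wrap around
    rho = np.roll(rho, -int(np.argmin(rho)))
    padded = np.concatenate(([0], rho, [0]))
    edges = np.flatnonzero(np.diff(padded))
    run_lengths = edges[1::2] - edges[0::2]        # samples per contiguous branch
    return float(np.mean(run_lengths) * (2.0 * np.pi / n) * r_c)
```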
from fig. [fig:width] we can see that the average branch width from the simulations agrees with the analytic result obtained from the linear stability analysis of the model. the agreement is not perfect, but the simulation results exhibit an approximate decay of the average branch width as predicted by the stability analysis. one should also bear in mind that our analysis contains a number of simplifications, which means that we cannot expect to capture the exact dynamics of the system; but at least our analysis predicts the general behaviour of the system.

[fig:width caption: the average branch width as a function of the consumption rate. the circles show the average result from simulations and the solid line represents the analytic result. the error bars show the standard deviation of the simulation results.]

in this paper we have presented a simple hybrid cellular automaton model of cell colony growth, which exhibits interesting growth patterns. we have investigated how the nutrient consumption rate of the cells (or equivalently the nutrient concentration) affects the growth dynamics. the results show that for low consumption rates the colony grows with an eden-like morphology, while for higher consumption rates the colony breaks up into a branched morphology. by observing that the local growth of the colony is proportional to the gradient of the nutrient field we were able to derive a dispersion relation, which shows that the thickness of the boundary layer in the colony determines the stability of the growth. when the nutrient consumption rate is low the colony exhibits a wide boundary layer, which stabilises the growth, while when the consumption is high the width of the boundary layer is reduced and the growth becomes unstable, leading to a branched morphology. when the boundary layer vanishes, the derived dispersion relation is reduced to the one describing laplacian growth without ultra-violet regularisation. analysis of colonies obtained from the discrete model shows good agreement between simulations and theory.

some cells are known to use chemotactic signalling under harsh growth conditions. this has for example been observed in bacterial colonies, which under very low nutrient conditions exhibit densely packed radial branches. it has been suggested that this occurs because stressed cells in the interior of the colony secrete a signal which repels other cells. this could be included in the model either by introducing a bias towards placing the daughter cell in the neighbouring automaton element that has the lowest (or highest) level of the chemotactic substance, or by allowing cells to move down (or up) gradients of the substance. introducing a chemorepellent secreted by cells exposed to low nutrient concentrations would most likely lead to a more directed growth away from the colony centre, and thus to a more ordered morphology with straighter branches. another approach could be to introduce a direct chemotactic response to the nutrient, which would probably have a similar effect on the colony morphology. it should be noted that the introduction of chemotaxis would make the dynamics of the model more complex and would render the stability analysis much more difficult.
the instabilities described in this paper should be observable in any system where cell colony growth is limited by some nutrient that diffuses and which leads to inhibition if it falls below some critical threshold. the nutrient field also has to be in a quasi-stationary state, which corresponds to a separation in time scales between the cell division and the dynamics of the nutrient field. additionally, we require that the colony expansion occurs mainly by cell division at the colony boundary and not by movement of individual cells. these conditions apply to growth of avascular tumours, bacterial colonies grown in high agar concentrations, yeast colonies and fungal growth. all of these systems exhibit branched or fractal morphologies under certain growth conditions, and these growth patterns may be explained by the dispersion relation presented in this paper.

we would like to thank olivier lejeune and fordyce davidson for helpful discussions. this work was funded by the national cancer institute, grant number: u54 ca113007.
cell colonies of bacteria, tumour cells and fungi, under nutrient-limited growth conditions, exhibit complex branched growth patterns. in order to investigate this phenomenon we present a simple hybrid cellular automaton model of cell colony growth. in the model the growth of the colony is limited by a nutrient that is consumed by the cells and which inhibits cell division if it falls below a certain threshold. using this model we have investigated how the nutrient consumption rate of the cells affects the growth dynamics of the colony. we found that for low consumption rates the colony takes on an eden-like morphology, while for higher consumption rates the morphology of the colony is branched, with a fractal geometry. these findings are in agreement with previous results, but the simplicity of the model presented here allows for a linear stability analysis of the system. by observing that the local growth of the colony is proportional to the flux of the nutrient we derive an approximate dispersion relation for the growth of the colony interface. this dispersion relation shows that the stability of the growth depends on how far the nutrient penetrates into the colony. for low nutrient consumption rates the penetration distance is large, which stabilises the growth, while for high consumption rates the penetration distance is small, which leads to unstable branched growth. when the penetration distance vanishes, the dispersion relation is reduced to the one describing laplacian growth without ultra-violet regularisation. the dispersion relation was verified by measuring how the average branch width depends on the consumption rate of the cells, and the comparison shows good agreement between theory and simulations.
the evolution equation for the nucleation period has been derived in . here we shall analyze the properties of the evolution equation and the uniqueness and existence of the solution of special problems appearing in the theory of nucleation.

consider the initial problem formulated in : *the following equation is given with positive parameters , , , , which are chosen to satisfy condition .* equation ([i]) and condition ([ii]) form the initial problem. this problem, as well as all other ones, is considered in a class of continuous functions. instead of the function we introduce the function . then we get . we change the variables ; the new variables will be marked by the same letters. then we get . we denote also as . the value will be denoted as . we come to the following problem: *the following equation is given with the positive parameter , chosen to satisfy .* the equation ([p]) and the condition ([pp]) form the reduced problem. so, here only one parameter remains. it is more convenient to consider instead of the function ; then we come to the following problem (problem a): *the following equation is given with the positive parameter , chosen to satisfy .* equation ([1]) and the condition ([1l]) form the problem a. this problem will be the auxiliary problem. consider now the problem b: *the following equation is given .* only the equation ([g]) forms the problem b. we see that the problem b has no parameters. the main goal of the investigation will be to see the uniqueness of the solution of the problem b.

at first we consider ([1]) with some positive . we rewrite ([1]) for the function in the form . one can see the following property: *if the solution exists, then for any .* this property follows from the positivity of the sub-integral function. we introduce the iterated equation . we also introduce the nonlinear operator according to . one can prove the following fact: *if for any the following inequality takes place, then takes place for any .* it follows from the explicit form of . one can also see the following fact: *if the solution exists then for any .* it follows from the positivity of the exponent in the sub-integral function. we construct iterations according to . as the initial approximation we choose . *we see that for any value of the argument .* it follows from the positivity of the exponent and of the whole sub-integral function. from the statements formulated above follows the following chain of inequalities for any value of the argument . so, the odd iterations converge and the even iterations converge. so, the solution of the iterated equation exists, but the uniqueness is not yet proven. for any initial approximation , which is always positive, the iterations with initial estimate them from above. since the r.h.s. is positive and the solution has to be positive, the iterations with any initial approximation have to be positive starting from some number (the asymptotics at small arguments are written explicitly). correspondingly, later the iterations will be estimated by the already constructed iterations. so, the convergence can be reduced to the convergence of the constructed iterations. remark: analogous iterations have been constructed by f. m. kuni in 1984, but for the problem b.
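a minimal numerical sketch of the iteration scheme just described. the specific kernel used below, g(z) = \int_{-\infty}^{z} (z-x)^3 e^{x-g(x)} dx, is an assumption made purely for illustration (a cubic kernel of this type is a standard free-molecular form in nucleation theory); only the construction itself — start from the zero initial approximation and apply the nonlinear operator repeatedly — follows the text.

```python
import numpy as np

z = np.linspace(-15.0, 5.0, 801)      # -15 stands in for the lower limit -infinity
dz = z[1] - z[0]

def apply_operator(g):
    """one application of the nonlinear operator, g -> A[g], via the trapezoidal rule."""
    src = np.exp(z - g)               # exp(x - g(x)) on the grid
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        out[i] = np.trapz((zi - z[:i + 1]) ** 3 * src[:i + 1], dx=dz)
    return out

g = np.zeros_like(z)                  # initial approximation: identically zero
for _ in range(10):
    g = apply_operator(g)
# even iterates approach the limit from below and odd iterates from above,
# in line with the chain of inequalities quoted in the text.
```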
for the problem b the monotonic functions are absent and it is impossible to prove in such a simple way the convergence of iterations with . now we shall prove the existence and uniqueness of the solution of the non-iterated equation ([1]) (in the class of continuous functions). this can be done in several ways. one of the methods is to introduce a cut-off from the side of small , i.e. to consider only with a big and positive parameter . it was done in . here we shall use a more rigorous method. we shall show that for given at , with some small positive fixed , the solution of ([1]) exists and is unique. we construct iterations . the starting approximation is not important. we note that they do not go out of the class of positive continuous functions. under an arbitrary initial approximation, all iterations starting from the first one will be positive. we have for approximations ; then in the functional space . since , one can come to . since we see that , and having calculated the integral we get . the recurrent application of the last estimate (with the explicit integration of ) leads to , or simply to . summation of the r.h.s. of the last relation leads to . then this sequence is a cauchy sequence. then it must have a limit. this limit will be the unique one and it is the unique precise solution of equation ([1]). the uniqueness in the global sense can be proven by the approach used in the investigation of the iterated solution. it has to be repeated every time we come to an analogous situation. the existence and the uniqueness of the solution of equation ([1]) for are proven. generally speaking, this proof is sufficient for uniqueness at every finite , but here we have estimated by . it is possible to give a more precise estimate, which will be done below.

now we shall prove the existence and uniqueness for the rest. we construct iterations of the special type. the procedure is absolutely analogous to the special type, but now the iterations are the precise solution until . all properties mentioned for the special type remain here. now we rewrite the equation for and have , where is the power. now we keep the parameter because in further investigations it will be necessary to prove everything for an arbitrary positive (and not integer) power. for the integer power the task is simpler because, having differentiated several times, we can kill the integral term and reduce the equation to a differential equation. then we can use all standard theorems for differential equations. then one can write with some fixed ; then one can see that . having applied this estimate times, with explicit integration, one can come to the following estimate with some constant and two functions and . these functions have the properties , where is the minimal integer number greater than and is some constant which is independent of . now we can make the following remark: we can assume that earlier we proved the uniqueness until . then . then we come to the following estimate, whose r.h.s. contains the factor (z - z_{21r})^{const_1}/i! and is a term of the taylor series for the function . so, the conclusion about the uniqueness and existence of the solution can also be proven.

one can rewrite the equation ([1]) by the substitution in the following form with . this representation is more convenient for showing the uniqueness of the dependence of on . suppose that there are two parameters and . let it be .
then for we construct iterations with initial approximation and a recurrent procedure then so , with finite positive small .one can see the following important fact * the following formula takes place . *this is clear if one notes that grows when grows .really this proves the inequality .one can also prove the following statement : * there exists a point until which * this follows from the explicit estimates for the first and the second iterations with zero initial approximations .it is known that at they come very close and estimate the solution very precise . really , then since we see that asymptotically then the necessary point exists .this proves the statement . as the resultwe see that at extremely big negative the following hierarchy takes place so , all of them are between and at extremely big negative .later iterations can go away from this frames .we shall mark the coordinate when iteration number attains the mentioned boundary by .one can easily see that either goes to infinity or exists .one can also see that it is easy to see because the integration is going only for the values of argument smaller than the current one .this proves the estimate .a question appears whether the limit exists .the existence of a limit means that at this limiting point ( let it be ) the limit of odd iterations differs from the limit of even iterations . but both the limit of odd iterations and the limit of even iterations belong to the solutions of the iterated equation .but we have already proved that the solution of the iterated equation is the unique one .so , the contradiction is evident .we come to the conclusion that there is no such a limit .the last conclusion particularly means that really , the analogous reasons show that the possible points of crossing , , etc . with for every can not have a finite limit .so , for every finite we see that since for every finite there is such which provides we see that this proves the necessary inequality .the last inequality leads to the important relation for any given value of the argument .we analyze the question : how many solutions with different ( or ) can have the maximum of ( or ) at ?the most evident answer is that there will be only one solution but this has to be proven . to prove it we shall use -representation . the equation for the coordinate of the spectrum will be the following now we can calculate then we shall take it at and if we are able to prove that then the necessary property will be established . the possibility to take here ensured by existence of the root which was proven a few sections earlier .now we see that the proof of existence of the root was really necessary . for the derivative one can get the following equation where the direct calculation gives in the last relation one has to consider as a function of .one can also get for the following equation now we shall calculate .one can get the following equation one can rewrite the last relation as since ( it has been already proven ) then the sub - integral function is positive and the we have then now we shall return to the function .one can rewrite it as \exp(x - g(x ) ) dx\ ] ] then \exp(x - g(x ) ) dx\ ] ] the sign of is important .then we consider the function \ ] ] we have to recall that takes here only negative values. then \ ] ] for one can take the estimate going from the first iteration with initial zero approximation. then \ ] ] one can easily see that the function has only one minimum at where this function is . 
then then and it means that the uniqueness of the root is proven .* suppose that and are two different solutions of the problem b. then ( this is the statement ) * we shall prove this equality .suppose that these integrals do not equal , let it be then for the problem a one can see that and .then ( because there is only one root of equation ) we come to the conclusion that at least one solution can not have maximum of at .so , at least one solution is not a solution of the problem b. we come to the contradiction .this completes the proof .we can formulate another statement : * the solution of the problem b is unique .* it is clear because as it is proven has one fixed value .then it is the problem of solution of equation ( [ 1 ] ) with fixed .the last problem has the unique solution .this completes the justification of the uniqueness of solution of the problem b. * one can also see that the solution of the problem a is the solution of the problem b. * to see this one can simply differentiate the solution and then use the condition for derivative .since the solution of the problem a is unique and solution of the problem b is unique one can easily see that : * the solution of the problem a and the problem b coincide . *having established the uniqueness of solution of equation ( [ 1 ] ) with given ( or ) , the uniqueness of the solution of the problem a and the uniqueness of the solution of the problem b we can investigate the correspondence between solutions with different .so , the spectrum , i.e. the function has one and same form , which differs only by rescaling of the argument and the rescaling of the amplitude .the numerical simulations confirm this conclusion ( see the figure 2 ) the certain question whether there can be simply multiple repeating of solutions appears here .but the uniqueness of solution of equation ( [ 1 ] ) with given and the property established earlier ensure the absence of such repeating .
the properties of the evolution equation have been analyzed. the uniqueness and the existence of the solution of the evolution equation with a special value of the parameter characterizing the intensity of change of the external conditions, and of the corresponding iterated equation, have been established. on the basis of these facts, and taking into account some properties of the behavior of the solution, the uniqueness of the solution of the equation appearing in the theory of homogeneous nucleation has been established. the equivalence of the auxiliary problem and the real problem is shown.
joint orthogonal _ projection measurements _ are an essential tool in quantum communication .the most prominent example is the bell measurement that is used , for instance , in quantum teleportation .the canonical way to perform these measurements relies on signal interaction .an example is the optical interaction of light pulses .the latter is particularly relevant for practical applications , since light , traveling at high speed through an optical fiber and allowing for an efficient broadband information encoding , is the most convenient medium for the implementation of quantum communication protocols . in discrete - variable implementations based on single photons ,the required strong nonlinear optical interactions are hard to obtain .alternatively , it is a promising approach to replace interaction by interference , readily available via _ linear optics _ , and by feedback after detection .there are important cases , however , where linear optics is not sufficient to enable specific projective measurements exactly .for instance , a complete measurement in the qubit polarization bell basis is not possible within the framework of linear optics including beam splitters , phase shifters , auxiliary photons and conditional dynamics utilizing photon counting .however , using non - trivial entangled states of auxiliary photons and conditional dynamics , a perfect projection measurement can be approached asymptotically with a failure rate scaling as or , in a modified version of the scheme of ref . based on similar resources and tools , with an intrinsic error rate scaling as . in any case , no - go statements for exact implementations always indicate whenever finite ( and cheaper ) resources and less sophisticated tools , such as a fixed array of linear optics , are not sufficient for an arbitrarily good efficiency . in this article , we propose a new approach to the problem of projective measurements with linear optics and photon counting . since orthogonal states remain orthogonal after linear optical mode transformations , the inability of exactly discriminating orthogonal states is due to the measurements in the fock basis . in the new approach ,we replace the actual detections by a dephasing of the ( linearly transformed ) signal states .in other words , the detection mechanism is mimicked by destroying the coherence of the signal states and turning them into mixtures diagonal in the fock basis . with the resulting density operators ,the distinguishability is then expressible in terms of quantum mechanical states . by considering exact distinguishability, this yields a hierarchy of simple conditions for a complete projection measurement .we give a few examples where we employ these conditions in order to make general statements and to derive ( some known and some new ) no - go theorems on linear - optics state discrimination .moreover , projection measurements based on detection schemes other than photon counting can also be described within the framework of our formalism . 
in this respect ,we include a brief discussion on homodyne - detection based quadrature measurements .however , the essence of our work is the proposal of a new universal method .the unified perspective upon which our approach is based shall open the path to new results and applications including more general measurements than projective ones .let us define the vectors and representing the annihilation and creation operators of all the electromagnetic modes involved , respectively .a linear - optics circuit can be described via the input - output relations or with a unitary matrix .conversely , the mixing of optical modes due to any unitary matrix is realizable with beam splitters and phase shifters .this excludes linear mixing between annihilation and creation operators , as it results from squeezing transformations .those require nonlinear optical interactions . on the hamiltonian level ,arbitrary states are unitarily transformed via linear optics such that is an hermitian matrix .we consider projection measurements that operate on subspaces of the hilbert space defined over some signal modes .the orthogonal projection measurement is characterized by one - dimensional projectors such that for , and the completeness relation on the subspace is fulfilled as . in thissetting , the problem of implementing the projection measurement is equivalent to the unambiguous discrimination of the orthogonal states . the state discrimination may be aided by an auxiliary state that is supported on auxiliary modes .the states to be distinguished then are .the entire discrimination process now consists of two steps , , where the first step is due to linear optics , . in the second step, the detection of the output modes in the fock basis is mimicked through dephasing , with and the diagonal matrix , .the distinguishability can then be analyzed on the level of the density operators .since exact discrimination is considered , this leads to a huge simplification , as we shall explain now . in order to decide on the exact distinguishability of any two states , where and are the corresponding states after linear optics and dephasing .we obtain the condition for exact distinguishability , where and . due to the positivity of the integrand ,this is equivalent to where the effect of linear optics is now put into the operators or .let us define , .since the derivatives of must also vanish , in particular , at , we obtain the set of conditions for exact state discrimination , these conditions are _ necessary _ for a complete projection measurement onto the basis .however , if the entire set of conditions is satisfied , this is in general also a _sufficient _ condition , since is an analytic function of the relative phases .note that orthogonality , , is the `` 0th - order condition '' . 
by exploiting that is of the form with some coefficients , and that = 0 for , the higher-order conditions can be rewritten in an equivalent normally ordered form, provided the lower-order conditions are satisfied. this leads to the hierarchy of conditions . in this form, one can directly see that the hierarchy breaks off for higher-order terms if the number of photons in the states is bounded. hence, for finite photon number, we end up having a finite hierarchy of necessary and sufficient conditions for complete projective measurements. the states of an orthogonal set are, in principle, exactly distinguishable via a _ fixed array of linear optics _ represented by , if and only if these conditions hold for the complete set of modes. the subset of conditions referring only to a particular mode operator represents _ necessary _ conditions for exact discrimination based on _ conditional dynamics _ after detecting that mode. they are given by . already the failure to find some fulfilling eq. ([condcond]) means that as soon as one output mode is selected and measured, this will make exact discrimination of the states impossible. conversely, one may also use the conditions of eq. ([condcond]) in a constructive way. the recipe is to find one that satisfies eq. ([condcond]), to calculate the corresponding conditional states of the remaining modes, and to test them for their distinguishability. it is instructive to view this in terms of the partially dephased states. after dephasing only one mode, we obtain , where is the probability to find photons in the measured mode for a given input state , and is the corresponding (normalized) conditional state of the remaining modes. failure to fulfill eq. ([condcond]) implies that the conditional states form a non-orthogonal set in for each fixed combination of . for such sets, we know that a further exact discrimination is impossible. we will show now that the condition in eq. ([condcond]) for suffices to reproduce easily all known no-go theorems for projective measurements with linear optics including auxiliary photons and conditional dynamics.
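for orientation in the examples that follow, the structure of the criteria can be summarised schematically (the notation here is a reconstruction and may differ in detail from the suppressed formulas above): photon counting is mimicked by the dephasing map

\[
\rho \;\longmapsto\; \sum_{\mathbf n} |\mathbf n\rangle\langle \mathbf n|\, \rho\, |\mathbf n\rangle\langle \mathbf n| ,
\]

with |\mathbf n\rangle the multimode fock states of the output modes b_k = \sum_j U_{kj} a_j, and two (signal plus ancilla) states |\chi_1\rangle, |\chi_2\rangle remain exactly distinguishable after dephasing only if the normally ordered moments

\[
\langle \chi_1 |\, b_1^{\dagger n_1} b_1^{\,n_1} \cdots b_N^{\dagger n_N} b_N^{\,n_N} \,| \chi_2 \rangle \;=\; 0
\]

vanish for all multi-indices (n_1, \ldots, n_N). orthogonality is the zeroth-order member of this family, the first-order conditions used repeatedly below read \langle \chi_1 | b_k^\dagger b_k | \chi_2 \rangle = 0, and for states with a bounded photon number the hierarchy terminates at finite order.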
in any situation where either the signal states or the auxiliary state have a fixed photon number , the two middle terms vanish , and the first - order condition only depends on the signal states alone , , .the trivial case can be omitted without loss of generality .it is straightforward to extend this derivation to any order in eq .( [ condcond ] ) by inserting a mode operator decomposed into a signal and an auxiliary part ._ hence for signal states with a fixed photon number , auxiliary systems never help , and for signal states with an unfixed number , adding an auxiliary state may help , but only provided the auxiliary state has unfixed number too . _ the no - go theorem for the qubit bell states , is obtainable now in a very simple way . in order to check for the existence of a mode satisfying eq .( [ condcond ] ) for , let us again drop the index and use the ansatz , by defining .we have six conditions these conditions imply it can be easily seen that these conditions have only trivial solutions , , which proves the no - go theorem for the bell states including auxiliary photons and conditional dynamics . a similar no - go theorem is known for an orthogonal set of separable two - qutrit states the entire set of 36 first - order conditions for one mode with now leads to again , only trivial solutions exist . going beyond ref . , we can now easily investigate subclasses of the set .the full no - go theorem also applies to the eight states when leaving out state . for other subclasses, this example illustrates the role of conditional dynamics .for instance , leaving out state , the conditions remain exactly those in eq .( [ qutritconditions ] ) except that does not occur in the first line .the only nontrivial solution is now where and , .the interpretation is that in order to enable discrimination of the conditional states for the entire subset , mode 1 must be detected first .this can be seen intuitively in eq .( [ qutritstates ] ) and in fig .[ fig1 ] .= 1.0 in with the help of the hierarchy of conditions , one can now easily find new no - go theorems .consider the orthogonal set of four two - qubit states , if all four states are entangled , and , only trivial solutions exist for the six first - order conditions eq .( [ condcond ] ) with .hence , the full no - go statement applies including auxiliary photons and conditional dynamics . for only two entangled states , e.g. and , one mode always exists which satisfies eq .( [ condcond ] ) . however , there are only trivial solutions to the second - order condition in eq .( [ pickj ] ) for some pairs of modes and ( ) , if the two states are _ nonmaximally _ entangled .in fact , a fixed array of linear optics is not sufficient in this case , but a conditional - dynamics solution exists . if the two states are _ maximally _ entangled , any order in eq .( [ pickj ] ) is satisfied with a beam splitter .a particularly simple example is the following _ pair _ of orthogonal states , described in the fock basis .we find that the and conditions of eq .( [ condcond ] ) can be simultaneously satisfied only trivially , .thus , there is no linear optical discrimination scheme for the two states of eq .( [ cuteexample ] ) , not even with the help of conditional dynamics and auxiliary photons , since the two states have fixed photon number .in fact , this no - go statement applies to the whole family of pairs of orthogonal states , and , for . what about quantitative statements beyond the no - go theorems for exact state discrimination ? 
a linear - optics network with photon counting yields for each input state a classical probability distribution for the pattern of photon detections in the output modes .this distribution can be used to estimate the input state . a possible measure in the context of estimating an input state is the probability of minimum error .for four equally probable output distributions , it can be written as \;,\end{aligned}\ ] ] where is the conditional probability for obtaining the result ( pattern of the photon detections ) given the distribution . using the classical distributions of the results in the totally dephased states with the two - photon bell states of eq .( [ bellstates ] ) as the input states ( parametrized by an arbitrary unitary matrix ) , we found numerically that .this bound can be attained by using a beam splitter , where corresponding to the optimal partial bell measurement without auxiliary photons and conditional dynamics .so far , the dephasing approach has been solely used to describe the decohering effect of photon detections , i.e. , measurements in the fock basis .however , it is worthwhile pointing out that this method is applicable to other kinds of measurements too .we may also consider , for example , homodyne detections , i.e. , measurements in a continuous - variable basis . in that case , the appropriate replacement in the dephasing formula of eq .( [ dephasingmechanism ] ) is where are the quadratures of mode . for example , for and , we obtain respectively the position and momentum associated with the mode s harmonic oscillator . the derivation of a set of necessary and sufficient conditions for exact state discrimination , eqs .( [ derivation1]-[hierarchy ] ) , also follows through with the replacement in eq .( [ dephasingmechanismcv ] ) .the resulting conditions in that case become ( we drop the superscript ) where denotes the quadratures of mode after the linear - optics circuit with .a continuous - variable bell measurement discriminates between the two - mode eigenstates of the relative position and total momentum .this can be achieved with a simple beam splitter and subsequent and measurements at the two output ports .conditional dynamics is not needed .however , in order to satisfy the above conditions for all ( that is two ) modes , two conjugate quadratures must be detected , for example , and . here , and are the two conjugate quadratures of the input modes .hence , due to the orthogonality of the continuous - variable bell states , the described scheme represents a solution to the above conditions . in a very intuitive way , this explains why a fixed linear - optics scheme suffices to perform a continuous - variable bell measurement with arbitrarily high efficiency , in contrast to a qubit bell measurement : the continuous - variable bell states are eigenstates of the detected quadratures , whereas the qubit bell states are no eigenstates of the detected photon numbers .in summary , we have presented a new approach to describe the processing of quantum states via linear optics including photon counting or other measurements such as homodyne detection .the advantage of this approach is that the detection mechanism is included in the transformation from the input quantum states to the output quantum states . 
for the case of a complete projection measurement onto a (joint) orthogonal basis, we obtained a hierarchy of necessary and sufficient conditions. when photon counting is considered, this hierarchy breaks off and yields a finite set of simple conditions for states with finite photon numbers. apart from homodyne detection, our universal approach can also be used to include other "continuous-variable tools" such as displacements and squeezing. it also provides a promising method to treat more general scenarios, e.g. the realization of general measurements (povms) with linear optics. any povm can be described via naimark extension as an orthogonal von neumann measurement in a larger hilbert space. the extended signal states may then be analyzed using the criteria derived in this paper. this generalization is particularly significant, because it would extend our approach from qualitative statements on exact projection measurements to quantitative statements on approximate projection measurements. although progress is being made in enhancing the effective strength of nonlinear optical interactions, it appears reasonable to exploit the entire toolbox of linear optics first and explore it, in order to be aware of its capabilities, but also of its limitations. in the recent work of ref. , the authors demonstrate that the capabilities of linear optics are unexpectedly broad; however, unfeasibly many extra resources may be needed for a good performance. we hope that the question of the trade-off between these extra resources and the performance can be attacked utilizing our criteria.

we are grateful to john calsamiglia and bill munro for useful comments. we also acknowledge the financial support of the dfg under the emmy-noether programme, the eu fet network ramboq (ist-2002-6.2.1), and the network of competence qip of the state of bavaria (a8).

c. h. bennett _et al._, phys. rev. lett. * 70 *, 1895 (1993).
n. lütkenhaus, j. calsamiglia, and k.-a. suominen, phys. rev. a * 59 *, 3295 (1999).
l. vaidman and n. yoran, phys. rev. a * 59 *, 116 (1999).
e. knill, r. laflamme, and g. j. milburn, nature * 409 *, 46 (2001).
j. d. franson _et al._, phys. rev. lett. * 89 *, 137901 (2002).
m. reck _et al._, phys. rev. lett. * 73 *, 58 (1994).
p. törmä, s. stenholm, and i. jex, phys. rev. a * 52 *, 4853 (1995).
a. carollo and g. m. palma, j. mod. opt. * 49 *, 1147 (2002).
a. carollo _et al._, phys. rev. a * 64 *, 022318 (2001).
c. h. bennett _et al._, phys. rev. a * 59 *, 1070 (1999).
c. a. fuchs, phd thesis, univ. of new mexico (1995).
h. weinfurter, europhys. lett., 559 (1994); s. l. braunstein and a. mann, phys. rev. a * 51 *, r1727 (1995).
j. calsamiglia and n. lütkenhaus, appl. phys. b * 72 *, 67 (2001).
s. l. braunstein and h. j. kimble, phys. rev. lett. * 80 *, 869 (1998).
j. calsamiglia, phys. rev. a * 65 *, 030301(r) (2002).
we derive a set of criteria to decide whether a given projection measurement can be , in principle , exactly implemented solely by means of linear optics . the derivation can be adapted to various detection methods , including photon counting and homodyne detection . these criteria enable one to obtain easily no - go theorems for the exact distinguishability of orthogonal quantum states with linear optics including the use of auxiliary photons and conditional dynamics .
turbulence is characterized by the transfer of an inviscid constant , such as kinetic energy or a passive scalar , between different length scales .one of the primary goals of turbulence research is to understand the mechanisms that drive this transfer process .numerical simulations performed by farge _ et al ._ suggest that in both two - dimensional ( 2d ) and three - dimensional ( 3d ) turbulence it is a relatively small number of `` coherent structures '' that dominate the turbulent dynamics . as a result, it is important to understand how the existence and interaction of these coherent structures affect the transfer processes .thus , the _ spatially - local _ transfer properties of the flow need to be measured and correlated with these structures . in this manuscripta tool for obtaining local information about the scale - to - scale transfer of inviscid constants from experimental / numerical data is examined .the method , called the `` filter approach '' ( fa ) , has traditionally been applied to large - eddy numerical simulations ( les ) , but is developed here in the context of experimental data analysis . by applying a low - pass spatial filter to the equations of motion ( the incompressible navier - stokes equation ) , separate equations for the filtered , or large - scale , fields and the remaining small - scale fields can be derived . withinthe resulting equations are coupling terms that represent the interaction of the large- and small - scale fields with each other . in lesschemes the coupling terms in the large - scale equations are modeled thereby eliminating the necessity of directly computing the small scales . in our application of the filter approach , however , data from direct numerical simulations ( dns ) or high resolution experiments are used to directly evaluate the coupling terms and obtain a quasi - local measure of the inter - scale transfer . for readers familiar with les and turbulence modeling itshould be stressed that the philosophy driving our use of the filter approach is different from the _ a priori _ development of les models discussed in . therethe objective was to determine empirically which of several les modeling schemes for the large- to small - scale coupling terms most successfully emulates physical data . in that case , fa was used primarily as a benchmark , and only went as far as measuring the inter - scale coupling term . rather than producing results for numerical benchmarking , fa can also be used as an analysis probe to determine where and when in a flow scale - to - scale transfer of inviscid constants takes place .in this way one can isolate important interaction events and form an understanding of turbulence inter - scale transfer mechanisms .of course , this understanding could eventually be incorporated into les models .the filter approach is applied in this paper to both experimental and numerical data .the intent is not to investigate the underlying physics of the turbulence , which will be presented in later papers , but rather to determine the appropriate interpretation of the results and the limitations imposed by different filters , spatial boundaries , and finite measurement resolution .experimental measurements were carried out in a flowing soap - film channel , a quasi-2d system in which decaying turbulence of low to moderate reynolds number can be generated ( ) .the channel was cm wide and was inclined at an angle of with respect to vertical .the mean flow was and the film thickness was about m . 
a more detailed description of the channel can be found in . using the empirical relationships measured in , the film's kinematic viscosity was /s. the turbulence-generating grid consisted of rods of cm diameter with cm spacing between the rods. thus, the blocking fraction is around , which is typical for turbulence in 2d soap film flows. the resulting reynolds number, , was based on the mean-flow velocity and an injection scale of cm. the turbulent velocity, , and vorticity, , fields generated by the grid were obtained by tracking m polystyrene spheres (density approximately g/cc) within a region located cm downstream from the grid (20-30 eddy rotation times). the particles were illuminated with a double-pulsed nd:yag laser and their images captured by a 12-bit, pixel camera. around particles were individually tracked for each image pair and their velocities and local shears were interpolated to a discrete grid. one thousand velocity and vorticity fields were obtained in this way and were used to compute ensemble averages of the statistical measures described below. typical velocity and vorticity fields are shown in fig. [fig:typical-fields].

[fig:typical-fields caption: typical velocity and vorticity fields, shown in mm increments; the mean flow (subtracted out) is in the direction. the top of the image is cm downstream from the energy injection grid.]

to supplement the experimental data, a direct numerical simulation of the 2d navier-stokes equation was performed. computational details are presented in . the equation was simulated in a square domain with side and periodic boundary conditions. here, is the -th component of the velocity; is the vorticity; and is a stirring force applied to wave numbers . the einstein summation convention is used throughout. two values of are considered: corresponding to laplacian viscosity ( ), and corresponding to hyper-viscosity ( ), which has the effect of extending the inertial range. the equation was solved using a fully de-aliased, parallel pseudo-spectral code with second-order adams-bashforth time-stepping. the resolution was . a statistically stationary state was achieved after about 200 large-eddy turn-over times. representative examples of vorticity fields generated using laplacian and hyper-viscosity are shown in fig. [fig:numerical-fields].

[fig:numerical-fields caption: vorticity fields generated using a) laplacian viscosity and b) hyper-viscosity.]
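written out, the simulated equation has the standard generalised-dissipation form (a schematic reconstruction from the verbal description above; the value of the hyper-viscous exponent actually used is not reproduced in the text):

\[
\partial_t \omega + u_j \partial_j \omega \;=\; (-1)^{p+1}\, \nu_p\, \nabla^{2p} \omega \;+\; f_\omega ,
\]

with p = 1 corresponding to ordinary laplacian viscosity, larger p to the hyper-viscosity that extends the inertial range, and the forcing f_\omega confined to a narrow band of wave numbers.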
applying to the evolution equation for yields .adding zero , written as , results in : eq .( [ eq : general - filter - space ] ) indicates that the large - scale field , , evolves in exactly the same manner as the full field , , up to a coupling term .this coupling term , which can be considered as an external forcing or damping , arises from the small - scale field , , interacting with .thus , provides a measure of the interaction between large and small scales .the measurement of requires no assumptions about the field , , such as homogeneity or isotropy .moreover , is a _ field _ quantity , not simply an average , and can reveal not only information about the coupling between scales , but also when and where such interactions are taking place and with what strength .the field nature of the coupling term derived above , , makes fa very valuable in the context of studying turbulence . as mentioned in the introduction , turbulence is characterized by an average scale - to - scale flux of quantities such as the kinetic energy .many possible mechanisms underlying the transfer processes have been suggested , _e.g. _ , the stretching of vortex tubes into thin filaments may account for some fraction of the down - scale transfer of energy in 3d . in 2d, vortex merger has been postulated as a way of transferring energy to larger scales .the difficulty is that neither of these pictures has been conclusively correlated with topological flow structures or the underlying transfer dynamics , though some attempts have been made . by using fa to measure the inter - scale transfer and correlating this field with the position of flow structures ( identified by other means ), one can determine the veracity of these hypothesized transfer mechanisms .there are a number of subtleties to consider when applying fa to experimental data .first , there is the choice of the filter function , .the interpretation of the scale - to - scale transfer depends on the selection of the filter function .for example , if the convolution function is gaussian defined by length scale , the interpretation of the filtered functions is as given above . on the other hand , if the convolution is with the kernel , where again is gaussian and is a dirac delta function , the resulting interpretation of the convolved fields , , as `` large - scale '' is incorrect .indeed , the function produces `` small - scale '' fields . other ramifications of changing the filter function will be explored shortly but the convention is adopted that the filter function is always low - pass .two additional considerations when applying fa to real data are finite measurement resolution and the existence of spatial boundaries .the extent to which the measurements are sensitive to either of these factors depends on the quality of the data and on the form of the filter used .significant research in the context of les has addressed similar concerns , but always for the purpose of approximating the physical system in a numerical simulation .these issues are explored in some depth in a later section . 
in two - dimensional turbulence kinetic energy , , and enstrophy , ,are conserved in the inviscid limit .theory , numerics , and experiments all indicate that energy is transferred on _ average _ from small to large length scales ( up - scale ) and that enstrophy is transferred in the opposite direction ( down - scale ) .fa , however , can be used to obtain much more detailed information about the energy and enstrophy transfer within the flow .we begin by examining the scale - to - scale coupling of energy in the 2d euler equation ( viscous terms are linear and , hence , cause no direct scale - to - scale transfer , eliminating the need to examine the full navier - stokes equation ) : where is the -th component of the velocity field , is the density normalized pressure field , and summation over repeated indices is assumed .contracting this evolution equation with a filter function , , and extracting the coupling term as described above yields : the notation , or simply , will be used to denote the large - scale field .( [ eq : large - scale - euler - velocity ] ) is almost the equivalent of eq .( [ eq : general - filter - space ] ) .the one delicacy is that the term is not the large - scale pressure field , _ i.e. _ , it is not the field obtained using the gradients of .this is not an important issue here , however , because , as will be demonstrated later , does not contribute to the inter - scale transfer . from eq .( [ eq : large - scale - euler - velocity ] ) the scale - to - scale coupling term for the velocity is given by where is the subgrid - scale stress tensor . to obtain the equation for the evolution of large - scale energy , one multiplies eq .( [ eq : large - scale - euler - velocity ] ) by .( note that the notation is used rather than since . )the resulting equation , for the energy contained at scales larger than , is identical to the full energy evolution equation up to the coupling term on the right hand side ( again ignoring the pressure term ) .this coupling term is not yet in the form to directly yield scale - to - scale energy transfer information .there are two ways in which the small - scale velocities can change the large - scale energy : by physically transporting it from point - to - point or by transferring it between scales . to separate the two , the leibnitz rule is used to rewrite the right hand side of eq .( [ eq : large - scale - energy-1 ] ) as notice that the latter of these two terms is galilean invariant , whereas the former is not .boosts to the reference frame should not change the scale - to - scale transfer of energy but will change the point - to - point transport .therefore , the former term is attributed to the point - to - point transport and the latter to the scale - to - scale transfer . another way to contrastthe point - to - point coupling with scale - to - scale coupling is to consider the limit of a homogenous system . in this casetaking an ensemble average , , should eliminate all point - to - point transport terms , leaving only inter - scale transfer contributions . 
since the action of ensemble averaging commutes with the derivative operation , and since the spatial derivative of an ensemble average is zero in the limit of homogeneity , the ensemble average of eq .( [ eq : large - scale - energy-1 ] ) is simply not only does this demonstrate that $ ] is a point - to - point term , but it also demonstrates that all of the terms on the left , other than the time derivative , are point - to - point as well .this fact allows us to ignore the delicacy with respect to the pressure term : it does not affect scale - to - scale transfer . for simplicity[ eq : large - scale - energy-1 ] is rewritten as where and are defined as where is the large - scale strain tensor .the negative sign in the definition of is added so that down - scale transfer has a positive value whereas up - scale transfer is negative .an almost identical method can be used to determine the scale - to - scale enstrophy transfer of the flow .the starting point , however , is the 2d euler equation for vorticity , as above , the equation is contracted with a filter function , , and the coupling term is extracted , where is the subgrid scale vorticity transport vector .notice that the term defined in eq .[ eq : large - scale - euler - vorticity ] has an almost identical form to in the energy equations .this general form is typical of the filter approach .the coupling terms between large- and small - scale fields often take the form for quadratic nonlinearities . to change the large - scale vorticity equation to an equation for large - scale enstrophy evolution, one must multiply by .this yields where again the leibniz rule was used to separate point - to - point transport from inter - scale transfer .grouping the appropriate terms , as was done for the energy equation , leads to the final form , where where the negative sign in front of the scale - to - scale coupling term , , again makes down - scale transfer positive . to make the above derivations more concrete ,some examples are provided of filtered fields . for purposes of illustration , vorticity and enstrophy fieldsare presented and an analysis of velocity and energy fields is left to a later publication .figure [ fig : enstrophy - flux - pictorial ] displays steps in the calculation of the scale - to - scale enstrophy transfer , , for a typical vorticity field extracted from experimental data . for this calculation the filter function was a gaussian with fourier - space definition where .these figures illustrate that the application of fa is straightforward : ( 1 ) compute the secondary field , , from the measured velocity and vorticity fields ; ( 2 ) perform a convolution of these fields to obtain ; ( 3 ) take the scalar product of with the appropriate gradient of the large - scale fields , namely .there is , however , a caveat for the general case . it may not always be possible to separate the point - to - point transport from the scale - to - scale transfer terms . for the energy equation , the termswere determined by using the leibniz rule to separate out the galilean invariant part of the energy flux .for the enstrophy , the separation was obtained by analogy with the energy equation rather than by a strict application of galilean invariance .there is no _ a priori _ expectation that such a separation will be as simple , or even possible , for arbitrary nonlinear systems . 
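To make the recipe above concrete, the following sketch computes both the energy-flux field and the enstrophy-flux field for doubly periodic 2D data, using a Gaussian kernel and spectral derivatives. The kernel normalization is an assumption, the sign convention follows the text (down-scale transfer positive), and this is a simplified illustration rather than the authors' analysis code.

import numpy as np

def lowpass(f, ell, dx):
    # Gaussian low-pass filter applied in Fourier space (periodic domain).
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.exp(-(kx**2 + ky**2) * ell**2 / 2.0)))

def ddx(f, dx, axis):
    # Spectral derivative of a periodic 2D field along one axis.
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    shape = [1, 1]
    shape[axis] = n
    return np.real(np.fft.ifft2(1j * k.reshape(shape) * np.fft.fft2(f)))

def energy_flux(u, v, ell, dx):
    # Pi(x; ell) = -tau_ij S_ij^ell, with tau_ij = (u_i u_j)^ell - u_i^ell u_j^ell.
    ul, vl = lowpass(u, ell, dx), lowpass(v, ell, dx)
    txx = lowpass(u * u, ell, dx) - ul * ul
    tyy = lowpass(v * v, ell, dx) - vl * vl
    txy = lowpass(u * v, ell, dx) - ul * vl
    sxx, syy = ddx(ul, dx, 0), ddx(vl, dx, 1)
    sxy = 0.5 * (ddx(ul, dx, 1) + ddx(vl, dx, 0))
    return -(txx * sxx + tyy * syy + 2.0 * txy * sxy)

def enstrophy_flux(u, v, w, ell, dx):
    # Z(x; ell): (1) secondary fields u_j * omega, (2) filtering, (3) contraction
    # of the subgrid vorticity transport vector with grad(omega^ell).
    wl = lowpass(w, ell, dx)
    sx = lowpass(u * w, ell, dx) - lowpass(u, ell, dx) * wl
    sy = lowpass(v * w, ell, dx) - lowpass(v, ell, dx) * wl
    return -(sx * ddx(wl, dx, 0) + sy * ddx(wl, dx, 1))

Given measured velocity and vorticity fields on a uniform grid, these routines return the transfer fields at a chosen scale, which can then be ensemble averaged or correlated with flow structures as described in the text.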
in compressible flows , for example , it is possible to measure the coupling terms , but the point - to - point transport caused by small scales is tied to the scale - to - scale transfer in a non - trivial way .the interpretation of such results must , therefore , be done carefully .this section will address the effect of varying the filter , , on the interpretation of the results , the consequences of the data being limited in spatial extent ( _ i.e. _ by boundaries ) , and variations in the results caused by experimental or numerical limitations on the resolution of the data . that are used in the paper ( see eq .[ eq : sharpfourier ] and eq .[ eq : sharpreal ] for details ) for both plots the real space convolution function is presented with the root of the fourier spectrum inset .( a ) fourier filter convolution kernels , for orders n=2 ( solid ) , n=3 ( dash ) , n=4 ( dotted ) , n=5 ( dash - dot ) .( b ) real space filters for orders n=2 ( solid ) , n=3 ( dash ) , n=4 ( dotted),n=5 ( dash - dot).,title="fig:",width=307 ] that are used in the paper ( see eq .[ eq : sharpfourier ] and eq .[ eq : sharpreal ] for details ) for both plots the real space convolution function is presented with the root of the fourier spectrum inset .( a ) fourier filter convolution kernels , for orders n=2 ( solid ) , n=3 ( dash ) , n=4 ( dotted ) , n=5 ( dash - dot ) .( b ) real space filters for orders n=2 ( solid ) , n=3 ( dash ) , n=4 ( dotted),n=5 ( dash - dot).,title="fig:",width=307 ] the interpretation of the results of fa may depend on the form of the filter function , .a gaussian kernel was used in eq .( [ eq : gaussiankernel ] ) , but this particular choice was made only because it has a simple interpretation in both real- and fourier - space , taking the same form in both .the fa technique imposes no such constraints in general . in some cases, one may want to preferentially constrain the filter in real - space to a well - defined length scale or , instead , may want to filter so as to select only a sharp band of wave numbers .either possibility can be explored using this tool .it must be kept in mind , however , that sharpening the filter in real- or fourier - space causes a corresponding broadening of the filter in the other .the impact and interpretation of varying the form of the filter on the resulting fields is considered here .the form of the fourier filters used is where is the filter order .the case is the gaussian filter considered earlier .as increases , the filter sharpens around the fourier mode corresponding to filter length .similarly , the real - space filters are defined by : note that for , the real filter is equivalent to the fourier filter . as the real filter is sharpened it approaches an area average over a box of diameter . both of these sets of filtersare shown in fig .[ fig : filter - kernels ] . other types of filter are possible , but are not considered here . from an experimental point of view , the sharper real - space filters are more attractive than the fourier filters because the real - space envelope of the fourier filters grows as the order of the filter is increased . since experimental datais invariably windowed to the cross - section of the measurement apparatus , this means that sharper fourier filters quickly grow to interact with boundaries . for sharp real filtersthis is not a problem : as the filter increases in sharpness it becomes more spatially compact . 
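Since the exact functional forms of the two filter families (eq. [eq:sharpfourier] and eq. [eq:sharpreal]) did not survive extraction, the sketch below uses one plausible family with the stated limiting behavior: order 2 reduces to the Gaussian, higher orders sharpen the Fourier-space cutoff near k ~ 1/ell, and the real-space family approaches a sharp average over a region of diameter ~ell. The constants and exponents are assumptions.

import numpy as np

def fourier_filter(k, ell, order):
    # Increasingly sharp low-pass filter in Fourier space; order = 2 is a Gaussian.
    return np.exp(-0.5 * (np.abs(k) * ell) ** order)

def real_space_filter(r, ell, order):
    # Analogous real-space family; large orders approach a top-hat of diameter ~ ell.
    return np.exp(-0.5 * (2.0 * np.abs(r) / ell) ** order)

k = np.linspace(0.0, 3.0, 400)   # wavenumber, in units of 1/ell
r = np.linspace(0.0, 1.5, 400)   # radius, in units of ell
fourier_kernels = {n: fourier_filter(k, 1.0, n) for n in (2, 3, 4, 5)}
real_kernels = {n: real_space_filter(r, 1.0, n) for n in (2, 3, 4, 5)}

Plotting these kernels and their transforms reproduces the qualitative trade-off discussed above: sharpening a filter in one space broadens its envelope in the other.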
for the purpose of comparing with theory ,however , sharper fourier filters approach the ideal ; see , for example , the discussion given in frisch in which the fields are filtered by an infinitely sharp cutoff in fourier - space . in fig .[ fig : enstrophyflux - ordered ] , the results of using a gaussian , two low - order fourier - space filters , and two low - order real - space filters in the calculation of the enstrophy transfer are shown ( the associated vorticity field is also displayed ) .superficially , the fields are fairly similar .the strength of the fluctuations , however , changes with filter ( the grey scale limits of the fields are noted in the captions ) . the change in magnitude is stronger for fourier filters than real - space filters , with the fourier - space filter experiencing larger fluctuations than for .the similarity in the fields seems to indicate that the _ relative _ magnitudes of enstrophy flux remain more or less constant . in particular , the stronger values of flux associated with powerful vortices ( middle top and middle left ) have almost identical forms , though there may be a slight sharpening of the lobes for both higher order real - space and fourier - space filters ( which , we note , have a quadrupolar form ) .higher - order fourier - space filters increase the symmetry of many features , _i.e. _ they appear less ovular ( see for example the lobes indicated by the white arrows in fig . [fig : enstrophyflux - ordered ] ) , whereas the contrast between features and the background is enhanced by higher - order real - space filters . on the whole , however , the qualitative features are fairly insensitive to the form of filter used , maybe surprisingly so .one might expect that the beating of the sharper fourier kernel in real space would show up more strongly in the fields .for the stronger enstrophy transfer signals this does not appear to be the case .weaker signals are more sensitive to the particular choice of filter . consider the weak lobe in the lower middle of the field indicated by the black arrow in fig .[ fig : enstrophyflux - ordered ] ( or the one in the middle right also indicated by a black arrow ) . for the gaussian filter ,the lobe is barely visible . for sharper filters in both real- and fourier - space ( ) ,however , the prominence of this lobe with respect to the stronger signals in the system is enhanced .the most important observation is that the qualitative structure of the field is relatively unaffected by the choice of filter .sharper filters increase the `` contrast '' , but neither eliminate nor create structural features in the flow . for the purposes of correlating scale - to - scale transfer to topological features , this is an important feature of fa .the above comparison is only qualitative .a more accurate quantitative comparison is presented in fig .[ fig : enstrophypdforder ] , where the probability distribution function of the enstrophy transfer is presented for the same set of filters used in fig .[ fig : enstrophyflux - ordered ] .the agreement between the different filters is quite good , but a little deceptive .note , first , that the magnitude of the rms fluctuations has been normalized out .second , there may be ( though it is , perhaps , below the noise level ) a slight increase in asymmetry , in particular in the negative tail of the pdf for real - space filters ( the open symbols ) . 
given that in fig .[ fig : enstrophyflux - ordered ] the large values of enstrophy transfer , corresponding to the tails of the pdf , are relatively insensitive to changes in the filter , this collapse of the pdfs is reasonable .obtained using a gaussian ( solid line ) two fourier - space filters of order (solid squares ) and ( solid circles ) and two real - space filters of order (open squares ) and ( open circles ) .the filter length was mm.,width=307 ] and ( b ) ( see text ) for a range of length scales calculated using data from the soap film and a gaussian filter function ( solid line ) , two fourier space filters of order (solid squares ) and ( solid circles ) and two real - space filters of order ( open squares ) and ( open circles ) . inset in ( a ) is the average fluxes normalized by the rms fluctuation size for the respective filters.,title="fig:",width=307 ] and ( b ) ( see text ) for a range of length scales calculated using data from the soap film and a gaussian filter function ( solid line ) , two fourier space filters of order (solid squares ) and ( solid circles ) and two real - space filters of order ( open squares ) and ( open circles ) . inset in ( a ) is the average fluxes normalized by the rms fluctuation size for the respective filters.,title="fig:",width=307 ] on the other hand , in fig .[ fig : enstrophyflux - ordered ] the weaker values of inter - scale transfer have somewhat increased contrast .this change is emphasized in the lowest - order moments of the distribution , rather than in the tails .the average , , and fractional sign - probability comparison , , are shown in fig .[ fig : avgenstrophyorder ] .here , there is a significant difference in the average enstrophy flux as a function of filter order . in particular , the average rises and falls more sharply for the higher - order fourier - space filters than it does for the real - space filters .also , the area fraction saturates at a smaller value , then falls more quickly .the signs of the average flux and the area difference , however , are quite robust , although the magnitudes seem to vary ( in some places by a factor of two over this range of filter orders ) .the rise in the peak of average scale - to - scale transfer is reminiscent of ringing such as takes place in the gibbs phenomenon .it is not clear whether or not this is the source of the change . at this point, one might ask : for which filter is the result closest to the `` real '' enstrophy flux ?this question depends entirely on what one means by `` enstrophy flux '' .the scale - to - scale transfer between wave numbers is most closely approximated by higher - order fourier filters . because of the associated broadening of the filter in real - space , however , the resulting fields are not as good a measure of the spatially - local enstrophy transfer . on the other hand , for the flux to be localized in physical space for comparison with real - space structures , sharp real - space filters are preferable . 
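The filter- and scale-dependent statistics discussed here reduce to a few lines of post-processing once a transfer field is in hand. In the sketch below the sign-asymmetry measure is taken as the difference between the area fractions of positive (down-scale) and negative (up-scale) transfer; the paper's exact definition of its fractional sign-probability comparison was lost in extraction, so treat that as an assumption.

import numpy as np

def flux_statistics(z):
    # z : scale-to-scale transfer field at one filter scale and filter order
    mean_flux = z.mean()                              # average transfer
    rms = np.sqrt(np.mean(z**2))                      # used to normalize the PDFs
    sign_asymmetry = np.mean(z > 0.0) - np.mean(z < 0.0)
    return mean_flux, rms, sign_asymmetry

def normalized_pdf(z, bins=100):
    # Probability distribution of z / z_rms, as used in the filter comparison.
    zn = z / np.sqrt(np.mean(z**2))
    hist, edges = np.histogram(zn, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist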
in this limit, the enstrophy flux can no longer be defined as the movement of enstrophy from small fourier modes to larger ones but is actually a measure of the flux out of some bands and into others ( which do not necessarily have larger wave number ) .the strength of fa does not lie in its ability to measure _ the exact magnitude of the transfer _ , but rather that the sign of the transport and the qualitative form of the fields is robust to changes in the filter .an important consideration when using fa , one that is also a major issue in les , is the presence of boundaries ( either physical or as limits of the viewing window ) . as a fourier - space filter grows in order , and correspondingly in real - space extent , less and less of the datacan be used as the boundaries begin to affect the computation of the convolution in regions farther and farther into the interior . to investigate the influence of finite system size a random periodic streamfunction was generated on a grid .this streamfunction was then used to obtain velocity and vorticity fields from which the enstrophy flux field , , was computed .the left half of the fields was then set to zero and the enstrophy field , , recalculated .the normalized rms difference , is shown in fig . [fig : filter - convergence ] as a function of , the distance from the introduced boundary normalized by the filter size .the various plots are given for fourier - space filters of different orders ( see eq .[ eq : sharpfourier ] ) . .the graphs denote the rms difference between the enstrophy flux measured in the presence of a boundary and that without .the various curves are for a gaussian filter ( solid - line ) , and fourier filters of order ( dashed ) , ( dash - dot ) , ( dotted ) , ( dash - dot - dot).,width=307 ] with the exception of the order filter there is a continuous increase in the boundary affects . taking a nominal error rate of as acceptable , a boundary of is appropriately sized for the data .this results in a loss of in linear box size since one must apply the condition to left - right ( top - bottom ) boundaries .this effective boundary has been adhered to throughout this paper . for real - space filtersthe boundaries become less of an issue . indeed , for the sharpest real - space filter ( a simple average over a circle ) , the boundary is .= ( dashed ) , ( dotted ) , ( dash - dot ) , ( dash - dot - dot ) . also displayed is the total enstrophy flux ( solid line ) in the simulations.,title="fig:",width=307 ] = ( dashed ) , ( dotted ) , ( dash - dot ) , ( dash - dot - dot ) .also displayed is the total enstrophy flux ( solid line ) in the simulations.,title="fig:",width=307 ] in this section the effect of finite measurement resolution on the ability of fa to resolve the behavior of the enstrophy flux for a given filter function is explored . in the preceding sectionsexperimental data was used in the analysis , but to estimate the effects of finite resolution it is necessary to consider numerical data where the range of scales and the uncertainty in the measured values is fairly well known .the numerical data also eliminates any concerns about the effect of boundaries since the boundary conditions are periodic .pre - filtering at the limit of the data resolution should result in no apparent difference in the transfer from that computed from the original field . 
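The boundary test described above can be reproduced with synthetic data along the following lines. The grid size, filter scale, and smoothing of the random streamfunction are placeholders, and the flux routine is the simplified Gaussian-filter version sketched earlier rather than the authors' implementation.

import numpy as np

def lowpass(f, ell, dx):
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.exp(-(kx**2 + ky**2) * ell**2 / 2.0)))

def fields_from_streamfunction(psi, dx):
    # u = d(psi)/dy, v = -d(psi)/dx, omega = -laplacian(psi), all periodic.
    n = psi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    ph = np.fft.fft2(psi)
    u = np.real(np.fft.ifft2(1j * ky * ph))
    v = -np.real(np.fft.ifft2(1j * kx * ph))
    w = np.real(np.fft.ifft2((kx**2 + ky**2) * ph))
    return u, v, w

def enstrophy_flux(u, v, w, ell, dx):
    n = w.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    wl = lowpass(w, ell, dx)
    sx = lowpass(u * w, ell, dx) - lowpass(u, ell, dx) * wl
    sy = lowpass(v * w, ell, dx) - lowpass(v, ell, dx) * wl
    wlh = np.fft.fft2(wl)
    wx = np.real(np.fft.ifft2(1j * k[:, None] * wlh))
    wy = np.real(np.fft.ifft2(1j * k[None, :] * wlh))
    return -(sx * wx + sy * wy)

# Random periodic streamfunction; then zero the left half of the input fields.
rng = np.random.default_rng(1)
n, dx, ell = 256, 1.0, 8.0
psi = lowpass(rng.standard_normal((n, n)), 4.0, dx)
u, v, w = fields_from_streamfunction(psi, dx)
z_full = enstrophy_flux(u, v, w, ell, dx)

mask = np.ones((n, n))
mask[: n // 2, :] = 0.0                               # artificial boundary at mid-domain
z_cut = enstrophy_flux(u * mask, v * mask, w * mask, ell, dx)

rms = np.sqrt(np.mean(z_full**2))
err = np.sqrt(np.mean((z_full - z_cut) ** 2, axis=1)) / rms   # rms-difference profile along x
dist = (np.arange(n) - n // 2) * dx / ell                     # distance from the boundary, in units of ell

Plotting err against dist for filters of increasing order gives curves analogous to fig. [fig:filter-convergence], from which an acceptable boundary margin can be read off.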
as the pre - filter length , ,is increased above the resolution scale differences in the measured transfer will increase .the question this raises is : how far above are these finite - resolution effects felt ? to answer this question a series of order real - space filters with varying applied to two data sets simulating the local averaging inherent in experimental measurements ( such as image acquisition and particle tracking ) . from these pre - filtered fields the enstrophy transfer , , is calculated and compared with the full flux .the results of these calculations are shown for both data sets in fig .[ fig : resolutionanalysis ] . for both, the viscous cut - off scale is between and .for pre - filters with at or below this length there is little change in the magnitude of the average flux ( slightly more prominent in the case of hyper - viscosity ) .once the filter size exceeds the viscous scale , _i.e. _ , penetrates at all into the inertial range , the effects are quickly felt up to almost the injection scale , .this may be a general feature of fa , or it may be a peculiarity of the 2d enstrophy transfer process being non - local . in either case , it demonstrates the importance of resolving the entire inertial range , including the viscous scale , for an accurate representation of the enstrophy transfer to be possible .whether or not enstrophy transfer is local in fourier - space is an important question in its own right , and one which is sufficiently complex to warrant exploration in a separate paper .in the above discussion and use of fa , familiar turbulence assumptions , such as homogeneity , isotropy , and inertial range , were not requirements of the measurement .this is perhaps the greatest strength of fa : it can be applied to any type of turbulent flow , regardless of that particular flow s properties .all that is needed is the evolution equation for the system from which the inter - scale transfer can be obtained .the flow does nt have to be turbulent , fa can be applied to laminar or periodic flow and reasonable results obtained . in other words ,the interpretation of fa _ does not rely on a pre - existing theory_. rather , it is a tool that can directly test notions of how inter - scale transfer takes place in systems and can be used to build appropriate theories .this `` bottom up '' approach to physics ( rather than a theoretical trickle down ) comes at a price : one must have highly resolved fields of data with a significant range of spatial scales .the former is a requirement imposed by the resolution issues discussed above , which can only be relaxed if one assumes a wavenumber local inter - scale transfer process .the latter is a limitation imposed by the data windowing and interaction of the filters with the measurement boundary . of course, the standard techniques of measuring velocity fields in fluids , namely particle imaging velocimetry or particle tracking , more or less ensure that the measurements are not far from the viscous scale , and thus of high enough resolution to not assume local transfer .this is because these techniques rely on groups of particles moving coherently , which only holds when the scales being probed are small enough that local taylor expansions describe the flow .the difficulty is in simultaneously measuring a significant range of scales above the viscous scale for meaningful information to be obtained . with standard piv practices this is possible in 2d . 
only with holographic pivis this possible in 3d .there , however , the limitation is in the amount of data that can be obtained ( order 10 fields is possible ; currently , 1000 fields are not ) .as discussed in the introduction , our ultimate intention for the fa technique is to probe the mechanisms driving the inter - scale transfer in 2d turbulence , with a particular eye towards the role of coherent structures in the transfer process .this will be done both via statistical analysis of fields in the laboratory frame of reference ( eularian frame ) , as well as following the motion of fluid parcels ( lagrangian frame ) .fa , however , is not limited to these measurements alone , but could find application to the inter - scale transfer and mixing of passive scalars , or other quantities not necessarily conserved in the inviscid limit ( _ i.e. _ local topology ) . and as hinted at previously , fa can be applied ( carefully ! ! ! ) to general non - linear systems where an equation of motion is known and inter - scale transfer is of interest .we thank greg eyink , phil marcus , misha chertkov and boris shraiman for interesting discussions and suggestions .
Many questions remain in turbulence research and related fields about the underlying physical processes that transfer scalar quantities, such as the kinetic energy, between different length scales. Measurement of an ensemble-averaged flux between scales has long been possible using a variety of techniques, but instantaneous, spatially local realizations of the transfer have not. The ability to visualize scale-to-scale transfer as a field quantity is crucial for developing a clear picture of the physics underlying the transfer processes and the role played by flow structure. A general technique for obtaining these scale-to-scale transfer fields, called the filter approach, is described. The effects of different filters, finite system size, and limited resolution are explored for experimental and numerical data of two-dimensional turbulence.
stationary points are the most interesting and most important points of potential energy surfaces .the relative energies of local minima and their associated configuration space volumes determine thermodynamic equilibrium properties. according to transition state theory , dynamical properties can be deduced from the energies and the connectivity of minima and transition states. therefore , the efficient determination of stationary points of potential energy surfaces is of great interest to the communities of computational chemistry , physics , and biology .clearly , optimization and in particular minimization problems are present in virtually any field .this explains why the development and mathematical characterization of iterative optimization techniques are important and longstanding research topics , which resulted in a number of highly sophisticated methods like for example direct inversion of the iterative subspace ( diis), conjugate gradient ( cg), or quasi - newton methods like the broyden - fletcher - goldfarb - shanno ( bfgs ) algorithm and its limited memory variant ( l - bfgs). since for a quadratic function newton s method is guaranteed to converge within a single iteration , it is not surprising that the bfgs and l - bfgs algorithms belong to the most efficient methods for minimizations of atomic systems. if the potential energy surface can be computed with an accuracy on the order of the machine precision , the above mentioned algorithms usually work extremely well . in practice , however , computing the energy surface at this high precision is not possible for physically accurate but computationally demanding levels of theory like for example density functional theory ( dft ) . at dft level, this is due the finitely spaced integration grids and self consistency cycles that have to be stopped at small , but non - vanishing thresholds .therefore , optimization algorithms that are used at these accurate levels of theory must not only be computationally efficient but also tolerant to noise in forces and energies .unfortunately , the very efficient l - bfgs algorithm is known to be noise - sensitive and therefore , frequently fails to converge on noisy potential energy surfaces . for this reason, the fast inertial relaxation engine ( fire ) has been developed. fire is a method of the damped molecular dynamics ( md ) class of optimizers. it accelerates convergence by mixing the velocity at every md step with a fraction of the current steepest descent direction .a great advantage of fire is its simplicity .however , fire does not make use of any curvature information and therefore usually is significantly less efficient than the newton or quasi - newton methods .potential energy surfaces are bounded from below and therefore descent directions guarantee that a local minimum will finally be found .furthermore , the curvature at a minimum is positive in all directions .this means , all directions can be treated on the same footing during a minimization .the situation is different for saddle point optimizations .a saddle point is a stationary point at which the potential energy surface is at a maximum with respect to one or more particular directions and at a minimum with respect to all other directions .close to a saddle point it is therefore not possible to treat all directions on the same footing . 
instead one has to single out the directions that have to be maximized .furthermore , far away from a saddle point it is usually impossible to tell , which search direction guarantees to finally end up in a saddle point .therefore , saddle point optimizations typically are more demanding and significantly less reliable than minimizations . in this contributionwe present a technique that allows to extract curvature information from noisy potential energy surfaces .we explain how to use this technique to construct a stabilized quasi - newton minimizer ( sqnm ) and a stabilized quasi - newton saddle finding method ( sqns ) . using benchmarks, we demonstrate that both optimizers are robust and efficient .the comparison of sqnm to l - bfgs and fire and of sqns to an improved dimer method reveals that sqnm and sqns are superior to their existing alternatives .the potential energy surface of an -atomic system is a map that assigns to each atomic configuration a potential energy .it is assumed that a second order expansion of about a point is possible : ^{t}{\boldsymbol{\mathbf{\nabla } } } e\left({\boldsymbol{\mathbf{r}}}^{i}\right)\notag\\ & \phantom{{}=e\left({\boldsymbol{\mathbf{r}}}^{i}\right)}+ \frac{1}{2 } \left[{\boldsymbol{\mathbf{r}}}- { \boldsymbol{\mathbf{r}}}^{i}\right]^{t } h_{{\boldsymbol{\mathbf{r}}}_{i } } \left[{\boldsymbol{\mathbf{r}}}- { \boldsymbol{\mathbf{r}}}^i\right]\\ { \boldsymbol{\mathbf{\nabla}}}e\left({\boldsymbol{\mathbf{r}}}\right ) & \approx { \boldsymbol{\mathbf{\nabla } } } e\left({\boldsymbol{\mathbf{r}}}^{i}\right ) + h_{{\boldsymbol{\mathbf{r}}}_{i}}\left[{\boldsymbol{\mathbf{r}}}- { \boldsymbol{\mathbf{r}}}^i\right ] , \label{eq : secant}\end{aligned}\ ] ] here , is the hessian of the potential energy surface evaluated at . if is a stationary point , the left hand side gradient of eq .[ eq : secant ] vanishes and newton s optimization method follows : in the previous equation was renamed to in order to emphasize the iterative character of newton s method for non - quadratic potential energy surfaces . in practice , it is in most cases either impossible to calculate an analytic hessian or it is too time consuming to compute it numerically by means of finite differences at every iteration. therefore , quasi - newton methods use an approximation to the exact hessian that is computationally less demanding . using a constant multiple of the identity matrix as an approximation to the hessian results in the simple steepest descent method . in most cases ,such a choice is a very poor approximation to the true hessian .however , improved approximations can be generated from local curvature information which is obtained from the history of the last displacements and gradient differences , where . in noisy optimization problems ,the noisy components of the gradients can lead to displacement components that correspond to erratic movements on the potential energy surface .consequently , curvature information that comes from the subspace spanned by these displacement components must not be used for the construction of an approximate hessian .in contrast to this , the non - noisy gradient components promote locally systematic net - movements , which do not tend to cancel each other . in this sense the displacement components that correspond to these well defined net - movement span a significant subspace from which meaningful curvature information can be extracted and used for building an approximate hessian . 
]the situation is depicted in fig .[ fig : removenoise ] where the red solid vectors represent the history of normalized displacements and the blue dashed vectors constitute a basis of the significant subspace .all the red solid vectors in fig .[ fig : removenoise]a point into similar directions .therefore , curvature information should only be extracted from a one - dimensional subspace , as , for example , is given by the blue dashed vector .displacement components perpendicular to this blue dashed vector come from the noise in the gradients .in contrast to fig .[ fig : removenoise]a , fig .[ fig : removenoise]b shows a displacement that points into a considerably different direction than all the other displacements . for this reason , significant curvature information can be extracted in the full two - dimensional space . to define the significant subspacemore rigorously , we first introduce the set of normalized displacements where . with , linear combinations of the normalized displacementsare defined as : furthermore , we define a real symmetric overlap matrix as it can be seen from , that is made stationary by coefficient vectors that are eigenvectors of the overlap matrix . in particular the longest and shortest vectors that can be generated by linear combinations with normalized coefficient vectors correspond to those eigenvectors of the overlap matrix that have the largest and smallest eigenvalues . as motivated above, the shortest linear combinations of the normalized displacements correspond to noise . from now on , let the be eigenvectors of and let be the corresponding eigenvalues .with we finally define the _ significant subspace _ as where . in all applications presented in this work, has proven to work well .henceforth , we will refer to the dimension of as . by constructionit is guaranteed that .it should be noted that at each iteration of the optimization algorithms that are introduced below , the significant subspace and its dimension can change .the history length usually lies between 5 and 20 .our procedure is analogous to lwdins canonical orthogonalization, which is used in the electronic structure community to remove linear dependencies from chemical basis sets .we define the projection of the hessian onto as where for all and using eq .[ eq : secant ] and defining where , one obtains an approximation for each matrix element : in practice , we explicitly symmetrize in order to avoid asymmetries introduced by anharmonic effects : because the projection is the identity operator on , the curvature on the potential energy surface along a normalized is given by given the normalized eigenvectors and corresponding eigenvalues of the matrix , one can write normalized eigenvectors of with eigenvalues as where is the k - th element of . as can be seen from eq .[ eq : curv ] , the give the curvatures of the potential energy surface along the directions .the gradient can be decomposed into a component lying in and a component lying in its orthogonal complement : where , and . in this sectionwe motivate how the can be used to precondition .furthermore , we explain how can be scaled appropriately with the help of a feedback that is based on the angle between two consecutive gradients .let us assume that the hessian at the current point of the potential energy surface is non - singular and let and be its eigenvalues and normalized eigenvectors . in newton s method ( eq . [ eq : newton ] ) , the gradients are conditioned by the inverse hessian . 
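The constructions in this and the preceding section translate almost line by line into code. The sketch below builds the significant subspace from the displacement history, estimates curvatures and modes within it, and evaluates a Weinstein-type residue for each mode. The relative eigenvalue cutoff and the exact form of the projected Hessian matrix elements are assumptions on our part, since the corresponding formulas did not survive extraction.

import numpy as np

def subspace_curvatures(displacements, grad_diffs, eps=1e-4):
    # displacements : list of dr_i = r_{i+1} - r_i from the history (flat arrays)
    # grad_diffs    : list of y_i  = g(r_{i+1}) - g(r_i)
    norms = np.array([np.linalg.norm(dr) for dr in displacements])
    d = np.array([dr / s for dr, s in zip(displacements, norms)]).T   # normalized displacements (columns)
    y = np.array([dg / s for dg, s in zip(grad_diffs, norms)]).T      # gradient differences, same scaling

    lam, c = np.linalg.eigh(d.T @ d)            # overlap matrix of the normalized displacements
    keep = lam / lam.max() > eps                # discard near-noise directions (relative cutoff assumed)
    v = d @ c[:, keep] / np.sqrt(lam[keep])     # orthonormal basis of the significant subspace
    w = y @ c[:, keep] / np.sqrt(lam[keep])     # the same linear combinations of gradient differences

    h = v.T @ w
    h = 0.5 * (h + h.T)                         # explicit symmetrization of the projected Hessian
    kappa, q = np.linalg.eigh(h)                # curvature estimates within the subspace
    modes = v @ q                               # corresponding directions in the full space
    # Weinstein-type residues measuring the quality of each curvature estimate.
    residues = np.array([np.linalg.norm(w @ q[:, j] - kappa[j] * modes[:, j])
                         for j in range(modes.shape[1])])
    return modes, kappa, residues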
for the significant subspace component it follows : \label{eq : firstidea}\end{aligned}\ ] ] as outlined in the previous section , we know the curvature along .therefore , at a first thought , eq .[ eq : firstidea ] suggests to simply replace by where and .indeed , if the optimization was restricted to the subspace this choice would be appropriate. however , with respect to the complete domain of the potential energy surface , one is at risk to underestimate the curvature if the overlap is non - vanishing .in particular , if is far from being negligible , underestimating the curvature can be particularly problematic because coordinate changes in the direction of might be too large .this can render convergence difficult to obtain in practice .we therefore replace in eq .[ eq : firstidea ] by where is chosen in analogy to the residue of weinstein s criterion as using equations [ eq : deltag ] , [ eq : curv ] and [ eq : eigenvec ] , this residue can be approximated by - \kappa_{j } \overset{\sim}{{\boldsymbol{\mathbf{v}}}}^{j}\right|.\end{aligned}\ ] ] with this choice for , the preconditioned gradient is finally given by : for and . is a measure for the quality of the estimation of the eigenvalue of the exact hessian .panel b ) shows the bin - averaged overlap .the frequency of severe curvature underestimation drops quickly in the region .the histogram in panel a ) peaks in the region of good estimation ( ) which coincidences with the region of large overlap , shown in panel b ) . the data for this figure come from 100 minimizations of a system described by the lenosky - silicon force field .] clearly , the residue can only alleviate the problem of curvature underestimation , but it does not rigorously guarantee that every single is estimated appropriately .however , in practice this choice works very well .the reason for this can be seen from fig .[ fig : weinstein ] . in fig .[ fig : weinstein]a , a histogram of the quality and safety measure is shown . if , the curvature is underestimated , if the curvature is well estimated and finally , if , the curvature is overestimated .overestimation leads to too small step sizes , and therefore to a more stable algorithm , albeit at the cost of a performance loss .critical underestimation of the curvature ( ) is rare .[ fig : weinstein]b shows the averages of the overlap in the corresponding bins .if has on average a large overlap with , the curvature along is estimated accurately ( histogram in fig .[ fig : weinstein]a peaks at ) .what remains to discuss is how the gradient component should be scaled . by construction, lies in the subspace for which no curvature information is available .we therefore treat this gradient component by a simple steepest descent approach that adjusts the step size at each iteration . for the minimizer that is outlined in section [ sec : findmin ], the adjustment is based on the angle between the complete gradient and the preconditioned gradient .if the cosine of this intermediate angle is larger than , is increased by a factor of , otherwise is decreased by a factor of . for the saddle search algorithmthe feedback is slightly different and will be explained in section [ sec : findsad ] . in conclusion , the total preconditioned gradient is given by in the next section , we explain how this preconditioned gradient can be further improved for biomolecules .the preconditioned subspace gradient was obtained under the assumption of a quadratic potential energy surface . 
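A compact sketch of the total preconditioned gradient assembled above, together with the angle-based feedback on the steepest-descent step size, is given below. The softening of the curvatures by sqrt(kappa^2 + rho^2), the feedback threshold, and the scaling factors are plausible choices rather than the paper's exact values.

import numpy as np

def preconditioned_gradient(g, modes, kappa, residues, alpha):
    # g        : current gradient (flat array)
    # modes    : orthonormal significant-subspace directions (columns)
    # kappa    : curvature estimates along those directions
    # residues : per-mode quality measures (see the previous sketch)
    # alpha    : steepest-descent step size for the orthogonal complement
    coeffs = modes.T @ g
    g_perp = g - modes @ coeffs                                   # component outside the subspace
    softened = np.sqrt(np.asarray(kappa) ** 2 + np.asarray(residues) ** 2)
    g_sub = modes @ (coeffs / softened)                           # quasi-Newton-like subspace part
    return g_sub + alpha * g_perp

def update_alpha(g, g_prec, alpha, cos_threshold=0.2, up=1.1, down=0.85):
    # Angle-based feedback: enlarge alpha when the preconditioned gradient stays
    # roughly parallel to the raw gradient, shrink it otherwise.
    cosang = g @ g_prec / (np.linalg.norm(g) * np.linalg.norm(g_prec) + 1e-30)
    return alpha * up if cosang > cos_threshold else alpha * down

A minimization step then displaces the coordinates by the negative of this preconditioned gradient, subject to the energy-based accept/reject logic described in the pseudocode further below.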
however , if the gradients at the current iteration are large , this assumption is probably not satisfied .displacing along in these cases can reduce the stability of the optimization . hence ,if the exceeds a certain threshold , it can be useful to set the dimension of to zero for a certain number of iterations .this means that and therefore . in that case , is also adjusted according to the above described gradient feedback .however , as this fallback to steepest descent is intended as a last final fallback , it should have the ability to deal with arbitrarily large forces .therefore , we also check that does not displace any atom by more than a user - defined trust radius .however , to our experience , this fallback is not necessary in most cases .indeed , all the benchmarks presented in section [ sec : bench ] were performed without this fallback . many large molecules like biomolecules or polymers are floppy systems in which the largest and smallest curvatures can be very different from each other .steepest descent optimizers are very inefficient for these ill - conditioned systems , because the high curvature directions force to use step sizes that are far too small for an efficient optimization in the directions of small curvatures .put more formally , the optimization is inefficient for those systems , because the condition number , which is the fraction of largest and smallest curvature , is large. for biomolecules , the high - curvature directions usually correspond to bond stretchings , that is , movements along inter - atomic displacement vectors of bonded atoms . for the current purposewe regard two atoms to be bonded if their inter - atomic distance is smaller than or equal to times the sum of their covalent radii . for ,let be the coordinate vector of the i - th atom .for a system with bonds we define for each bond a bond vector , where the , are defined as the are sparse vectors with six non - zero elements .we separate the total gradient into its bond - stretching components and all the remaining components : let be coefficients that allow the bond - stretching components to be expanded in terms of the bond vectors using definition eq .[ eq : stretchgrad ] , left - multiplying eq . [ eq : stretchsep ] with a bond vector and requiring the to be orthogonal to all the bond vectors , one obtains the following linear system of equations , which determines the coefficients and , with it , the bond stretching gradient defined in eq .[ eq : stretchgrad ] : for the optimization of a biomolecule , the bond - stretching components are minimized in a simple steepest descent fashion .the atoms are displaced by .the bond - stretching step size is a positive constant , which is adjusted in each iteration of the optimization by simply counting the number of projections that have not changed signs since the last iteration .if more than two thirds of the signs of the projections have remained unchanged , the bond - stretching step size is increased by 10 percent .otherwise , is decreased by a factor of .the non - bond - stretching gradients are preconditioned using the stabilized quasi - newton approach presented in sections [ sec : sigsub ] to [ sec : precondsiggrad ] .it is important to note that in sections [ sec : sigsub ] to [ sec : precondsiggrad ] all have to be replaced by when using this biomolecule preconditioner . 
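The bond-stretching separation just described amounts to a small linear solve. Below is a sketch; it assumes that each bond vector carries the normalized inter-atomic displacement on the two bonded atoms with opposite signs, since the exact definition of the six non-zero entries was lost in extraction.

import numpy as np

def split_bond_stretch(grad, coords, bonds):
    # grad   : gradient, shape (n_atoms, 3)
    # coords : atomic positions, shape (n_atoms, 3)
    # bonds  : list of (i, j) index pairs of bonded atoms
    n_atoms = coords.shape[0]
    b = np.zeros((len(bonds), 3 * n_atoms))       # sparse bond vectors, six non-zero entries each
    for k, (i, j) in enumerate(bonds):
        d = coords[i] - coords[j]
        d = d / np.linalg.norm(d)
        b[k, 3 * i:3 * i + 3] = d
        b[k, 3 * j:3 * j + 3] = -d
    g = grad.reshape(-1)
    # Choose coefficients so that the remainder is orthogonal to every bond vector.
    c = np.linalg.solve(b @ b.T, b @ g)
    g_stretch = (b.T @ c).reshape(n_atoms, 3)
    return g_stretch, grad - g_stretch            # bond-stretching part, remaining part

The bond-stretching part is then relaxed by the sign-counting steepest descent described above, while the remainder is handed to the subspace preconditioner.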
in particular , this is also true for the gradient feedbacks that are described in sections [ sec : precondsiggrad ] and [ sec : findsad ] .the pseudo code below demonstrates how the above presented techniques can be assembled into an efficient and stabilized quasi - newton minimizer ( sqnm ) .the pseudo code contains 4 parameters explicitly . and are initial step sizes that scale and , respectively . is the maximum length of the history list from which the significant subspace is constructed . is an energy - threshold that is used to determine whether a minimization step is accepted or not .it should be adapted to the noise level of the energies and forces .the history list is discarded if the energy increases , because an increase in energy is an indication for inaccurate curvature information . in this case , the dimension of the significant subspace is considered to be zero . furthermore , line 17 implicitly contains the parameter , which is described in section [ sec : sigsub ] .the optimization is considered to be converged if the norm of the gradient is smaller than a certain threshold value . of course ,other force criteria , like for example using the maximum force component instead of the force norm , are possible . & _ ; _ s _ ; & + & ; & + & k 1 ; & + & _ k & + & e_k e(_k ) ; & + & & + & & + & & + & & & + & & & + & _ k e(_k ) - e _ ; & + & _ k _ k - _ s e _ ; & + & & + & & + & _ k e(_k ) ; & + & & + & & + & _ k+1 _ k - e^ ; & + & & + & & + & & + & & + & & + & & + & & + & & + & k > m & + & & + & & + & k k + 1 ; & + & & + & & + in this section we describe a stabilized quasi - newton saddle finding method ( sqns ) that is based on the same principles as the minimizer in the previous section .sqns belongs to the class of the minimum mode following methods. for simplicity , we will denote the hessian eigenvector corresponding to the smallest eigenvalue as minimum mode .broadly speaking , a minimum mode following method maximizes along the direction of the minimum mode and it minimzes in all other directions .the optimization is considered to be converged if the curvature along the minimum mode is negative and if the norm of the gradient is smaller than a certain threshold . as for the minimization , other force criteria are possible .the minimum mode of the hessian can be found by minimizing the curvature function where along with the following definitions were used : and .the vector is the position at which the hessian is evaluated at . for the minimization of , we use the algorithm described in section [ sec : findmin ] where the energy as objective function is replaced by . in the pseudocodebelow , the here discussed minimization is done at line 6 . under the constraint of normalization, the gradient is given by blindly using the biomolecule preconditioner of section [ sec : bioprec ] for the minimization of would mean that the gradient of eq .[ eq : curvgraddiff ] was projected on the bond vectors of .obviously , the bond vector as defined in section [ sec : bioprec ] has no meaning for .therefore , eq . [ eq : curvgraddiff ] instead is projected onto the bond vectors of . at a stationary point ,systems with free boundary conditions have six vanishing eigenvalues .the respective eigenvectors correspond to overall translations and rotations. instead of directly using eq .[ eq : curvgraddiff ] for the minimization of the curvature of those systems , it is advantageous to remove the translations and rotations from and in eq .[ eq : curvgraddiff]. 
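In practice, the curvature along a trial direction and the gradient needed for its minimization can be obtained from one extra force evaluation by finite differences. Below is a sketch with a forward-difference formula and a placeholder step size (the text notes that this step must be enlarged when the forces are noisy, as at the DFT level).

import numpy as np

def mode_curvature_and_gradient(gradient, r, v, delta=1e-3):
    # gradient : callable returning the gradient of the potential energy surface
    # r        : current configuration (flat array)
    # v        : trial direction for the minimum mode
    v = v / np.linalg.norm(v)
    g0 = gradient(r)
    g1 = gradient(r + delta * v)
    hv = (g1 - g0) / delta                  # finite-difference estimate of H @ v
    curvature = v @ hv                      # curvature of the surface along v
    curv_grad = 2.0 * (hv - curvature * v)  # gradient of the curvature on the unit sphere
    return curvature, curv_grad

Minimizing the curvature with respect to v, using the minimizer of the previous section with the energy replaced by the curvature, yields the minimum mode; for free boundary conditions the overall translations and rotations would additionally be projected out of the vectors entering this calculation, as described above.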
the convergence criterion for the minimization of has a large influence on the total number of energy and force evaluations needed to obtain convergence .it therefore must be chosen carefully .the minimum mode is usually not computed at every iteration , but only if one of the following conditions is fulfilled : 1 .[ item : first ] at the first iteration of the optimization 2 .[ item : pathlength ] if the integrated length of the optimization path connecting the current point in coordinate space and the point at which the minimum mode has been calculated the last time exceeds a given threshold value 3 .[ item : it ] if the curvature along the minimum mode is positive and the curvature has not been recomputed for at least iterations 4 .[ item : fnrm ] if the curvature along the minimum mode is positive and the norm of the gradient falls below the convergence criterion 5 .[ item : tighten ] at convergence ( optional ) in the pseudocode , these conditions are checked in line 5 . among these conditions ,condition no .[ item : pathlength ] is , with respect to the performance , the most important one .the number of energy and gradient evaluations needed for converging to a saddle point can be strongly reduced if a good value for is chosen .condition [ item : it ] and [ item : fnrm ] can be omitted for most cases . however , for some cases they can offer a slight reduction in the number of energy and gradient evaluations . for example for the alanine dipeptide system used in section [ sec : bench ] , these two conditions offered a performance gain of almost 10% .although possible , we usually do not tune , but typically use . in our implementation ,condition [ item : tighten ] is optional .it can be used if very accurate directions of the minimum mode at the saddle point are needed . in this case, this last minimum mode computation can also be done at a tighter convergence criterion .further energy and gradient computations are saved in our implementation by using the previously computed minimum mode as the starting mode for a new curvature minimization .as stated above , a saddle point is found by maximizing along the minimum mode and minimizing in all other directions .this is done by inverting the preconditioned gradient component that is parallel to the minimum mode .this is shown at line 19 of the pseudocode below .for the case of biomolecules , the component of the bond - stretching gradient that is parallel to the minimum mode is also inverted ( line 13 ) . as already mentioned in section [ sec : precondsiggrad ] , the feedback that adjusts the stepsize of is slightly different in case of the saddle finding method . let be the normalized direction of the minimum mode . then , in contrast to minimizations , the stepsize that is used to scale is not based on the angle between the complete and , but only on the angle between and .these are the components that are responsible for the minimization in directions that are not the minimum mode direction . otherwise , the gradient feedback is absolutely identical to that described in section [ sec : precondsiggrad ] .a saddle point can be higher in energy than the configuration at which the optimization is started at .therefore , in contrast to a minimization , it is not reasonable to discard the history , if the energy increases . as a replacement for this safeguard , we restore to a simple trust radius approach in which any atom must not be moved by more than a predefined trust radius . 
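The saddle-directed step described here differs from a minimization step only in the sign of the component along the minimum mode; a minimal sketch of that inversion:

import numpy as np

def invert_along_minimum_mode(g, v_min):
    # g     : (preconditioned or bond-stretching) gradient component, flat array
    # v_min : current estimate of the minimum mode
    v = v_min / np.linalg.norm(v_min)
    return g - 2.0 * (g @ v) * v            # energy maximized along v, minimized elsewhere

Displacing the atoms with this modified gradient maximizes the energy along the minimum mode and minimizes it in all other directions, with the per-atom trust radius mentioned above limiting the step.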
a displacement exceeding this trust radiusis simply rescaled .if the curvature is positive and the norm of the gradient is below the convergence criterion , we also rescale displacements that do not come from bond - stretchings .the displacement is rescaled such that the displacement of the atom that moved furthest , is finally given by .this avoids arbitrarily small steps close to minima . on very rare occasions, we could observe for some cluster systems that over the course of several iterations a few atoms sometimes detach from the main cluster . to avoid this problem, we identify the main fragment and move all neighboring fragments towards the nearest atom of the main fragment .below , the pseudocode for sqns is given .it contains 3 parameters explicitly . and are initial step sizes that scale and , respectively . is the maximum length of the history list from which the significant subspace is constructed .the path - length threshold that determines the recomputation frequency of the minimum mode is implicitly contained in line 5 .lines 14 and 21 imply the trust radius .besides all the parameters that are needed for the minimizer of section [ sec : findmin ] , line 6 additionally implies the finite difference step size that is used to compute the curvature and its gradient .line 18 implicitly contains the parameter , which is described in section [ sec : sigsub ] & _ ; _s _ ; & + & l 1 ; & + & _ l & + & & + & & + & & + & & + & & + & & & + & & & + & _s e _ ; & + & _ l e(_l ) - e _ ; & + & _ l _ l - + 2 ( _ ) _ ; & + & & + & & + & _ l e(_l ) ; & + & & + & & + & _ l+1 _ l - e^ + 2(e^ _ ) _ ; & + & & + & & + & l > m & + & & + & & + & l l + 1 ; & + & & +[ cols="^,^,^ , > , > , > , > , > , > , > , > , > , > " , ] ( [ yshift=2ex]13.north ) ( 14 ) ; ( [ yshift=2ex]15.north ) ( 16 ) ; ( [ yshift=2ex]17.north ) ( 18 ) ; ( [ yshift=2ex]19.north ) ( 19 ) ; ( [ yshift=2ex]20.north ) ( 20 ) ; [ tab : benchsad ] the sqns method was compared to an improved version of the dimer method as described in ref . andas implemented in the eon code. in this improved version , the l - bfgs algorithm is used for the rotations and translation of the dimer .furthermore , the rotational force and the dimer energy are evaluated by means of a first order forward finite difference of the gradients. the same force fields as for the minimization benchmarks were used . for the dft calculations , sqns was coupled to the bigdft code .the eon codes offers an interface to vasp, which consequently was used .the same test sets as for the minimizer benchmarks were used .in particular this means that the starting configurations are not close to a saddle point and therefore these test sets are comparatively difficult for saddle finding methods .again , parameters were only tuned for a subset of 100 configurations at force field level . with exception to the finite difference step size that is needed to calculate the curvature and its gradient, we used the same parameters at force filed and dft level .because of noise , the finite difference step size must be chosen larger at dft level . the same force norm convergence criteria as for the minimization benchmarks were used . 
in all sqns optimizations the minimum mode was recalculated at convergence ( condition 5 of section [ sec : findsad ] ) .the test results are given in table [ tab : benchsad ] .in contrast to the minimization benchmarks , we do not give averages for the number wavefunction optimization iterations , because the two saddle finding methods were coupled to two different electronic structure codes .therefore , the number of wavefunction optimizations is not comparable . in particular in case of the system ,both methods converged only seldom to the same saddle points and therefore the statistical significance of the corresponding numbers given in table [ tab : benchsad ] is limited .however , averages over large sets could be made in the case of convergence to an arbitrary saddle point . in the cases we considered ,the dimer method needed between and times more energy and force evaluations than the new sqns method . in particular for alanine dipeptide , the sqns approach was far superior to the dimer method . due to its inefficiency, it was impossible to obtain a significant number of saddle points for alanine dipeptide at dft level when using the dimer method .for this reason , only benchmark results for the sqns method are given for alanine dipeptide at dft level .optimizations of atomic structures belong to the most important routine tasks in fields like computational physics , chemistry , or biology .although the energies and forces given by computationally demanding methods like dft are physically accurate , they are contaminated by noise .this computational noise comes from underlying integration grids and from self - consistency cycles that are stopped at non - vanishing thresholds .the availability of optimization methods that are not only efficient , but also noise - tolerant is therefore of great importance . in this contributionwe have presented a technique to extract significant curvature information from noisy potential energy surfaces .we have used this technique to create a stabilized quasi - newton minimization ( sqnm ) and a stabilized quasi - newton saddle finding ( sqns ) algorithm .sqnm and sqns were demonstrated to be superior to existing efficient and well established methods . until now, the sqnm and the sqns optimizers have been used over a period of several months within our group . during this timethey have performed thousands of optimizations without failure at the dft level . because of their robustness with respect to computational noise and due to their efficiency, they have replaced the default optimizers that have previously been used in minima hopping and minima hopping guided path search runs .implementations of the minimizer and the saddle search method are made available via the bigdft electronic structure package .the code is distributed under the gnu general public license and can be downloaded free of charge from the bigdft website. 
Optimizations of atomic positions are among the most commonly performed tasks in electronic structure calculations. Many simulations, such as global minimum searches or characterizations of chemical reactions, require hundreds or thousands of minimizations or saddle-point computations. To automate these tasks, optimization algorithms must be not only efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem for the stability of efficient optimization methods like the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. We present here a technique that allows significant curvature information to be obtained from noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle-finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle-finding approach are superior to comparable existing methods.
in quantum mechanics a physical system is described by its wave function ( quantum state ) , whose knowledge allows us to make predictions about the odds of the results of any measurement implemented on the system .also , all physical properties associated with the system are in principle derivable from its wave function .the goal of the quantum teleportation protocol is to transfer the wave function describing a system in one location ( alice ) to another system in a different place ( bob ) , without knowing the wave function . at the end of the protocol alice s systemis no longer described by its original wave function , which now describes bob s system .a key ingredient in quantum teleportation is a quantum channel connecting alice and bob that is supposed to be a maximally entangled bipartite pure state .however , in any realistic implementation of the protocol noise is inherently present and it affects the entangled state during its transmission to alice and bob .the main effect of noise is to turn pure states into mixed ones . a standard solution to overcome the effects of noiseis called entanglement distillation , where several copies of non - maximally entangled mixed states are needed to `` distill '' a pure one via local operations and classical communication ( locc ) .another important strategy to overcome this limitation is based on the direct use of the quantum channel connecting alice and bob , without resorting to distillation techniques , in which modifications in the standard teleportation protocol are made to maximize its performance .the formal solution to the optimal protocol when the most general quantum operations are allowed is given in ref . . however , it is not , in general , an easy experimental task to implement the optimal quantum operations , and more feasible options leading to an enhancement in the performance of the standard teleportation protocol are desired .in this paper we present a strategy to improve the efficiency of the teleportation protocol when both the qubit to be teleported and the quantum channel are subjected to noise .our strategy consists of minimalistic modifications in the standard teleportation protocol and an additional ingredient , the cheapest one , namely , more noise .we want to take advantage of the inevitable fact that noise is always present in any real world implementation of quantum communication protocols .we want to beat the decrease in the efficiency of the protocol due to noise with noise .indeed , we show for several realistic situations that it is possible by adding more noise or by choosing the right noisy environment to increase substantially the performance of the quantum teleportation protocol when compared to the less noisy case . 
in order to be as general as possible, we analyze the most common noise channels one usually encounters in the laboratory as well as many scenarios in which one , two , or all three qubits employed in the teleportation protocol are acted by noise .we show several cases in which more noise , less entanglement , or different noise acting on different qubits can enhance the efficiency of the teleportation protocol ( sec .[ results ] ) .we also show that different channels with initially the same amount of entanglement give different efficiencies when subjected to the same type of noise ( sec .[ difchannel ] ) .the physical ideas and mathematical formalism needed to understand those results are given in the first part of this paper , secs .[ seci ] , [ noise ] , and [ fidelity ] , where we present , respectively , the standard teleportation protocol in the density matrix formalism , list all noise channels we use and explain how they affect the qubits of the protocol , and show how to compute its efficiency in an input - state - independent way .in what follows we present the standard teleportation protocol in the density matrix formalism .this formalism allows us to easily implement the quantum operations representing the several kinds of noise that may affect the qubits involved in the protocol .the qubit to be teleported from alice to bob , hereafter called input qubit , is given by , with .its density matrix is where the subscript means `` input '' and denotes complex conjugation .the quantum channel shared between alice and bob is and its density matrix in the base is here the subscript means `` channel '' , the first qubit is with alice , and the second one with bob .note that when we have the bell state , a maximally entangled state whose entanglement can be quantified by the entanglement monotone called concurrence ( c ) . ] and .the normalization condition for the probability density is and by assuming a uniform probability distribution , , we immediately get noting that the average value of any function of and is , we define as the input state independent quantifier for the efficiency of the noisy teleportation protocol . here is given by eq .( [ f1 ] ) and by eq .( [ uniform ] ) .assuming for the moment that the quantum channel is protected from noise ( ) while the input state lies in a noisy environment ( ) as described in sec .[ noise ] , the average fidelity , eq .( [ f2 ] ) , for each type of noise can be written as \hspace{-.1 cm } , \label{bfzz}\\ \langle\overline{f}_{_{phf}}\rangle \hspace{-.1 cm } & = & \hspace{-.1 cm } \frac{2}{3 } \left[1+\frac{1 - 2p_{\!_i}}{2 } \sin(2\theta ) \sin(2\varphi)\right]\hspace{-.1 cm } , \label{phfzz}\\ \langle\overline{f}_{_{d}}\rangle \hspace{-.1 cm } & = & \hspace{-.1 cm } \frac{2}{3 } \left[1-\frac{p_{\!_i}}{4}+\frac{1-p_{\!_i}}{2}\sin ( 2\theta ) \sin ( 2\varphi)\right]\hspace{-.1 cm } , \label{dzz}\\ \langle\overline{f}_{_{ad}}\rangle \hspace{-.1 cm } & = & \hspace{-.1 cm } \frac{2}{3 } \left[\hspace{-.05cm}1-\frac{p_{\!_i}}{4}+\frac{1}{2 } \sqrt{1-p_{\!_i } } \sin ( 2\theta ) \sin ( 2\varphi)\hspace{-.05cm}\right]\hspace{-.1cm}. \label{adzz}\end{aligned}\ ] ] the subscripts in the left hand side of eqs .( [ bfzz ] ) to ( [ adzz ] ) represent the particular type of noise that the input state is subjected to , i.e. 
, bit flip , phase flip , depolarizing , and amplitude damping .the first thing worth noticing analyzing eqs .( [ bfzz ] ) to ( [ adzz ] ) is that for all of them but the optimal and are such that .this is true because and the maximum is obtained if . in the expression for , however , is multiplied by , which changes sign at . thus , for the optimal settings are while for we have .this means that if the input qubit is subjected to the phase flip noise for a considerable amount of time ( ) , alice can counterattack and improve the efficiency of the teleportation protocol by changing the measuring basis ( ) .alternatively , alice can keep using the original measuring basis and either she or bob applies a operation to the channel , changing it from to . using the above optimal settings , eqs .( [ bfzz ] ) to ( [ adzz ] ) simplify to ( color online ) efficiency of the teleportation protocol when only the input qubit is affected by a noisy environment , with representing the probability for the noise to act on the qubit .the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency .,width=302 ] looking at fig .[ fig1 ] , where we plot eqs .( [ bfzzopt])-([adzzopt ] ) , we see that the bit flip noise is the most severe noise for all values of and that for the phase flip noise is as bad as the bit flip noise . on the other hand , from to ,the amplitude damping is the least severe noise , followed by the depolarizing channel . for high values of ,the phase flip noise gives the greatest average fidelity .let us now investigate what happens if one of the qubits belonging to the quantum channel is also subjected to noise . for definitenesswe choose bob s qubit , in addition to the input qubit , to lie in a noisy environment .however , it is not difficult to show that the same results follow if we choose alice s qubit of the quantum channel .we also introduce the following notation in order to make it clear which qubits are subjected to noise . in the present case the optimal efficiency of the protocol , eq .( [ f2 ] ) , is written as , where the first subindex means that the input qubit is acted by noise , the second one denotes that alice s qubit of the quantum channel is not acted by noise , and the third subindex means that bob s qubit is acted by noise . here and can be any one of the four kinds of noise described previously .we start studying the case where the input qubit is always subjected to the bit flip noise while bob s qubit may lie in one of the four different types of noisy environments given in sec .[ noise ] . the optimal efficiencies in those four cases are , \label{bfzphfopt}\\ \langle\overline{f}_{_{bf,\varnothing , d}}\rangle & = & 1-\frac{p_{\!_b}}{2 } - \frac{2}{3 } p_{\!_i}(1 - p_{\!_b } ) , \label{bfzdopt}\\ \langle\overline{f}_{_{bf,\varnothing , ad}}\rangle & = & \frac{2}{3}-\frac{1}{3}\left[p_{\!_i}+\frac{p_{\!_b}}{2}(1 - 2p_{\!_i})(1-\cos ( 2\theta))\right .\nonumber \\ & & \left .-(1-p_{\!_i})\sqrt{1-p_{\!_b}}\sin ( 2\theta)\right ] , \label{bfzadopt}\end{aligned}\ ] ] where the optimal parameters leading to eqs .( [ bfzbfopt ] ) and ( [ bfzdopt ] ) are and to eq .( [ bfzphfopt ] ) are for and when . in eq .( [ bfzadopt ] ) we have and the optimal given by the solution to , namely , such that , for , and for . 
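The qualitative ordering of the four noise channels can be checked numerically. The following Python sketch applies each channel to the input qubit only (perfect, maximally entangled channel and the standard protocol, in which case the teleported state is simply the noisy input state) and averages the fidelity over randomly drawn pure input states by Monte Carlo. The Kraus-operator parameterizations and the Haar-uniform input distribution are assumptions of this illustration and may differ in convention from the definitions and input measure used above.

```python
import numpy as np

# Pauli matrices used to build the Kraus operators of the noise channels.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus(channel, p):
    """Kraus operators for common single-qubit channels (conventions assumed)."""
    if channel == "bit flip":
        return [np.sqrt(1 - p) * I2, np.sqrt(p) * X]
    if channel == "phase flip":
        return [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]
    if channel == "depolarizing":
        return [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * M for M in (X, Y, Z)]
    if channel == "amplitude damping":
        return [np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex),
                np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)]
    raise ValueError(channel)

def average_fidelity(channel, p, n_samples=20000, seed=1):
    """Monte-Carlo average of <psi| E(|psi><psi|) |psi> over random pure inputs."""
    rng = np.random.default_rng(seed)
    ks = kraus(channel, p)
    total = 0.0
    for _ in range(n_samples):
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        v /= np.linalg.norm(v)                      # Haar-random input state
        rho = np.outer(v, v.conj())
        rho_out = sum(K @ rho @ K.conj().T for K in ks)
        total += np.real(v.conj() @ rho_out @ v)
    return total / n_samples

for ch in ("bit flip", "phase flip", "depolarizing", "amplitude damping"):
    print(f"{ch:18s}  <F> at p = 0.3 : {average_fidelity(ch, 0.3):.3f}")
```

Because this sketch uses neither the paper's input measure nor the optimal measurement settings, its numbers should be read only as an indication of how the different channels degrade the input state, not as a reproduction of the optimized fidelities quoted above.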
an interesting result worth mentioning here is that we have a scenario where _ less _ entanglement means _more _ efficiency .this happens when bob s qubit is subjected to the amplitude damping noise ( eq .( [ bfzadopt ] ) ) , which was first noticed in ref . when the input state is always pure . herewe show that even if the input state is mixed we still have the same feature .we can see that less entanglement means more efficiency looking at eq .( [ opttheta ] ) , which tells us that the optimal is , the only solution meaning an initially maximally entangled channel , only when . for any other value of the optimal is not .hence , whenever , less entanglement leads to a better performance for the teleportation protocol in this case .( color online ) efficiency of the teleportation protocol when both the input qubit ( ) and bob s qubit ( ) are affected by a noisy environment .the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency . herethe input qubit is always subjected to the bit flip ( ) noise while bob s qubit may suffer from several types of noise ., width=302 ] in fig .[ fig2 ] we plot eqs .( [ bfzbfopt ] ) to ( [ bfzadopt ] ) as a function of for several values of .looking at the two bottom panels of fig .[ fig2 ] we see another interesting and surprising result . whenever we have a scenario where _ more _noise means _ more _ efficiency .indeed , for we can see that ( solid - black curve ) is below the classical limit ( dashed curve ) for all values of ( the quantum teleportation protocol is useless in this situation ) .however , by adding more noise to the protocol , i.e. , by putting bob s qubit in a noisy environment described by the bit flip map , we can increase the efficiency of the protocol and surpass the limit for .this is illustrated looking at the curve for , the red - circle curve in fig .[ fig2 ] . on the other hand ,for the bit flip noise when acting on bob s qubit always decreases the efficiency of the protocol .it is worth mentioning that a similar fact occurs when the input qubit is not subjected to noise but the two qubits of the quantum channel are . in this scenario ref . shows that when both qubits of the channel are subjected to the amplitude damping noise we have a better performance than the case where only one of the qubits of the channel is acted by this type of noise .another interesting scenario happens when noise is unavoidable but bob can choose the noisy environment in which he keeps his qubit during the execution of the teleportation protocol . in this casethe optimal noise depends in a non - trivial way on the values of and .for example , when the protocol achieves a better performance if bob s qubit is subjected to the amplitude damping noise whenever does not exceed . however , if is greater than we get a better result if bob s qubit is subjected to the phase flip noise ( see the top panels of fig .[ fig2 ] ) .let us now move to the case where the input qubit is always subjected to the phase flip noise while bob s qubit can suffer any one of the four kinds of noise given in sec .[ noise ] . 
the optimal efficiencies are now , \label{phfzphfopt}\\ \langle\overline{f}_{_{phf,\varnothing , bf}}\rangle&= & \frac{2}{3 } \left[1-\frac{p_{\!_b}}{2}+\frac{|1 - 2p_{\!_i}|(1-p_{\!_b})}{2}\right ] , \label{phfzbfopt}\\ \langle\overline{f}_{_{phf,\varnothing , d}}\rangle & = & \frac{2}{3 } \left[1-\frac{p_{\!_b}}{4}+\frac{|1 - 2p_{\!_i}|(1-p_{\!_b})}{2}\right ] , \label{phfzdopt}\\ \langle\overline{f}_{_{phf,\varnothing , ad}}\rangle & = & \frac{2}{3 } \left[1-\frac{p_{\!_b}}{4}+\frac{p_{\!_b}\cos(2\theta)}{4}\right . \nonumber \\ & & \left .+ \frac{(1 - 2p_{\!_i})\sqrt{1-p_{\!_b}}\sin(2\theta)}{2}\right ] .\label{phfzadopt}\end{aligned}\ ] ] the optimal parameters leading to eq .( [ phfzphfopt ] ) are if and if . in eqs .( [ phfzbfopt ] ) and ( [ phfzdopt ] ) we have if and if . in eq .( [ phfzadopt ] ) a possible set of optimal parameters is such that and given by the solution to , i.e , where , if , and if .note that eq .( [ opttheta2 ] ) implies that the greatest efficiency is achieved with less entanglement whenever . in fig .[ fig3 ] we plot eqs .( [ phfzphfopt ] ) to ( [ phfzadopt ] ) as a function of for several values of .( color online ) efficiency of the teleportation protocol when both the input qubit ( ) and bob s qubit ( ) are affected by a noisy environment .the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency . herethe input qubit is always subjected to the phase flip ( ) noise while bob s qubit may suffer from several types of noise.,width=302 ] now , contrary to the case where the input qubit is subjected to the bit flip noise , the addition of more noise to the protocol by putting bob s qubit in a noisy environment ( ) does not improve its efficiency when compared to the noiseless case ( ) .however , if noise is inevitable and bob can choose among different noise channels , he can improve the efficiency of the protocol by properly selecting the right noise .his choice depends on the probability for the noise to act on his qubit or , equivalently , on the time his qubit is subjected to a particular noisy environment .this is illustrated in fig .[ fig3 ] , where we note that for ranging from zero to , the best performance is achieved if bob s qubit is subjected to the amplitude damping noise ( orange - star curve ) . on the other hand , from to one ,it is better to have both the input and bob s qubit subjected to same type of noise ( red - circle curve ) .we have also investigated the two remaining cases , i.e. , when the input qubit is subjected either to the depolarizing or to the amplitude damping noise .the qualitative behavior for the efficiency of the teleportation protocol are similar to the phase flip noise just described and specific quantitative details are given in the appendix .we now want to investigate the scenario where all qubits with alice are subjected to the same type of noise .this scenario is relevant , for example , when the teleportation protocol is employed for quantum communication tasks and it is alice that generates the input and the entangled channel . 
in this casethe input qubit and alice s share of the entangled state always lie in the same environment and are thus subjected to the same type of noise during the same time span .this latter fact means that .bob s qubit , on the other hand , travels from alice to bob and may suffer a different type of noise .if alice s qubits are subjected to the bit flip noise we have the following optimal efficiencies according to the noise suffered by bob s qubit , |1 - 2p_{\!_b}|}{3 } , \label{bfbfphfopt}\\ \langle\overline{f}_{_{bf , bf , d}}\rangle & = & 1-\frac{p_{\!_b}}{2}-\frac{4p(1-p)(1-p_{\!_b})}{3 } , \label{bfbfdopt}\\ \langle\overline{f}_{_{bf , bf , ad}}\rangle & = & \frac{2}{3}-\frac{p_{\!_b}}{6}-\frac{2p(1-p)(1-p_{\!_b})}{3 } \nonumber \\ & + & \frac{(1 - 2p)^2p_{\!_b}\cos(2\theta)}{6 } \nonumber \\ & + & \frac{[1 - 2p(1-p)]\sqrt{1-p_{\!_b}}\sin(2\theta)}{3}. \label{bfbfadopt}\end{aligned}\ ] ] the optimal parameters giving eqs .( [ bfbfzopt ] ) , ( [ bfbfbfopt ] ) , and ( [ bfbfdopt ] ) are while for eq .( [ bfbfphfopt ] ) we have if and if .( [ bfbfadopt ] ) a possible set of optimal parameters are with given by \sqrt{1-p_{\!_b}}}{(1 - 2p)^2p_{\!_b}},\ ] ] where and . in fig .[ fig4 ] we plot eqs .( [ bfbfzopt ] ) to ( [ bfbfadopt ] ) as a function of for several values of .( color online ) efficiency of the teleportation protocol when alice s qubits suffer the bit flip noise ( ) and bob s qubit ( ) are affected by several types of noise . the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency ., width=302 ] the first thing we note is that the addition of more noise does not improve the efficiency of the protocol ( no curve crosses the solid - black one ) .this is in contrast to the case where only the input qubit is affected by the bit flip noise and we put bob s qubit in a noisy environment too ( fig . [ fig2 ] ) .second , if noise is unavoidable bob can improve the efficiency of the protocol by selecting the right noise channel according to the values of and .we also studied the cases where alice s qubits are affected by the other three types of noise and we found that the qualitative behavior of all those cases are similar to what is shown in fig .indeed , we did not find any situation in which more noise ( allowing bob s qubit to be acted by noise ) improves the overall efficiency .however , for all three cases we noted that whenever bob s noise channel differs from alice s we may have significant improvement in the efficiency of the protocol . in particular , if bob s qubit is subjected to the amplitude damping noise we achieve the best performance for low values of . for high values of ,on the other hand , the phase flip channel is the one furnishing the best performance . finally , when the amplitude damping channel acts on at least one qubit we have observed the same trend described before concerning the degree of entanglement of the optimal quantum channel , namely , less entanglement means better performance .another important case is the one where the channel qubits are subjected to the same type of noise .this may occur in the implementation of quantum communication protocols in which the quantum channel is generated by a third party symmetrically positioned between alice and bob . 
in the case where both qubits find similar noisy environments during their trip to alice and bob , they will be acted by the noise during the same amount of time , which implies .the input qubit , on the other hand , may suffer a different type of noise .if the channel qubits are subjected to the bit flip noise we have the following optimal efficiencies depending on the type of noise acting on the input qubit , }{3 } , \label{phfbfbfopt}\\ \langle\overline{f}_{_{d , bf , bf}}\rangle & = & 1-\frac{p_{\!_i}}{2}-\frac{4(1-p_{\!_i})p(1-p)}{3 } , \label{dbfbfopt}\\ \langle\overline{f}_{_{ad , bf , bf}}\rangle & = & \frac{2}{3}-\frac{p_{\!_i}}{6}-\frac{2(1-p_{\!_i})p(1-p)}{3 } \nonumber \\ & + & \frac{\sqrt{1-p_{\!_i}}[1 - 2p(1-p)]}{3}. \label{adbfbfopt}\end{aligned}\ ] ] the optimal parameters giving eqs .( [ zbfbfopt ] ) , ( [ bfbfbfopt2 ] ) , ( [ dbfbfopt ] ) , and ( [ adbfbfopt ] ) are and for eq .( [ phfbfbfopt ] ) we have if and if . comparing eqs .( [ bfbfzopt])-([bfbfadopt ] ) with eqs .( [ zbfbfopt])-([adbfbfopt ] ) we see that , , if and in eq .( [ bfbfadopt ] ) .this means that the same analysis and discussions given in the previous section apply here .the only quantitative difference occurs for eq .( [ adbfbfopt ] ) since now , contrary to ( [ bfbfadopt ] ) , the optimal channel is a maximally entangled state .however , the qualitative behavior for the efficiency given by eq .( [ bfbfadopt ] ) applies here too , in particular the fact that it is the combination of noise channels leading to the best performance for low whenever is no greater than .a direct calculation also shows that and , , , , , , if and in the expressions for and . on the other hand, this symmetry does not hold for .however , its qualitative behavior still resembles the one for , in the sense that gives the best performance for low while for high the best scenario is .another aspect affecting the efficiency of the teleportation protocol in a noisy environment is related to the choice of the quantum channel , even if we choose among those having the same amount of entanglement . throughout this articlewe have employed the generalized bell state , which reduces to the maximally entangled bell state when ( see eq .( [ b1 ] ) ) .however , if we use the state , which approaches the maximally entangled bell state as ( see eq .( [ b3 ] ) ) , we may get a different efficiency .for example , studying the case where both qubits of the quantum channel are acted by the same type of noise during the same time ( ) , and employing either the quantum channel or , we obtain for all but a different efficiency when .this last fact is illustrated in eqs .( [ zadadphiopt ] ) and ( [ zadadpsiopt ] ) , }{3 } , \label{zadadphiopt}\\ \langle\overline{f}_{_{\varnothing , ad , ad}}\rangle_{\!_{|\psi^+\rangle } } & = & 1-\frac{2p}{3 } , \label{zadadpsiopt}\end{aligned}\ ] ] where the optimal parameters giving eq .( [ zadadphiopt ] ) are and , with , and the optimal ones leading to eq .( [ zadadpsiopt ] ) are such that .it is not difficult to see that for . 
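The difference between the two Bell channels under amplitude damping can be reproduced with a few lines of code. The sketch below (Python) applies amplitude damping of the same strength to both channel qubits and then uses the relation, valid for the standard, unoptimized protocol, that the average fidelity is (2 F_s + 1)/3 with F_s the overlap of the noisy channel with the Bell state on which the protocol's corrections are built. It does not include the optimization over measurement settings and channel entanglement discussed above, so it illustrates only the qualitative difference between the two Bell states, not the optimal efficiencies quoted in the equations.

```python
import numpy as np

def ad_kraus(p):
    """Amplitude-damping Kraus operators (parameterization assumed)."""
    return [np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex),
            np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)]

def apply_single_qubit(rho, ks, which):
    """Apply a single-qubit channel to qubit `which` (0 or 1) of a 2-qubit state."""
    out = np.zeros_like(rho)
    for K in ks:
        op = np.kron(K, np.eye(2)) if which == 0 else np.kron(np.eye(2), K)
        out += op @ rho @ op.conj().T
    return out

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
psi_plus = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)   # (|01>+|10>)/sqrt(2)

def standard_protocol_fidelity(bell, p):
    """Average fidelity (2*F_s + 1)/3 of the unoptimized protocol built on `bell`,
    with amplitude damping of strength p on both channel qubits."""
    rho = np.outer(bell, bell.conj())
    for q in (0, 1):
        rho = apply_single_qubit(rho, ad_kraus(p), q)
    F_s = np.real(bell.conj() @ rho @ bell)
    return (2 * F_s + 1) / 3

for p in (0.1, 0.3, 0.5):
    print(p, standard_protocol_fidelity(phi_plus, p), standard_protocol_fidelity(psi_plus, p))
```

In this unoptimized setting the channel built on (|01>+|10>)/sqrt(2) degrades as 1 - 2p/3, matching the expression quoted above, while the (|00>+|11>)/sqrt(2) channel degrades more slowly, consistent with the qualitative argument that amplitude damping leaves the ground-state component untouched.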
in fig .[ fig5 ] we plot eqs .( [ zadadphiopt ] ) and ( [ zadadpsiopt ] ) as a function of .( color online ) efficiency of the teleportation protocol for different channels subjected to the same noise , namely , the amplitude damping noise ( ) , with the input qubit in a noiseless environment .the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency .,width=302 ] we can qualitatively understand why we have different efficiencies using different bell states if we note , first , that is a superposition of and while is given by the superposition of and and , second , that the amplitude damping channel models energy dissipation and thus does not affect the ground state . with that in mind , we see that both states that make , and , are affected by this type of noise when it acts on both qubits . for , however , only is affected , which makes it more robust for this type of noise .we studied how the efficiency of the quantum teleportation protocol is affected when the qubits employed in its execution lie in a noisy environment . in order to model the noisy environment we employed the most common noise channels that one encounters in any realistic implementation of quantum communication protocols , namely , the bit flip , phase flip ( phase damping ) , depolarizing , and amplitude damping channels .we also studied many noise scenarios where one , two or all three qubits required for the implementation of the teleportation protocol are subjected to noise .the first remarkable result we obtained is related to the entanglement needed to get the optimal efficiency when noise is present in both the input qubit and in the quantum channel .specifically , we observed a non - trivial interplay among which quantum channel is used , its amount of entanglement , and the type of noise acting on it .indeed , for certain types of quantum channels acted by the amplitude damping noise , we showed that _less _ entanglement means _ more _efficiency , a similar behavior observed in ref . when noise acts only on the channel qubits . for other channels with the same initial entanglement we showed that this behavior is not seen .the results given here and in ref . are a counterintuitive fact since for pure state inputs and pure state channels ( no noise ) more entanglement leads to more efficiency .second , we showed a scenario where _ more _ noise leads to _ more _ efficiency .this fact occurred when the input qubit , the one to be teleported from alice to bob , lies in a noisy environment described by the bit flip noise and bob s qubit , his share of the entangled channel , is also affected by this same noise .the efficiency in this case is considerably greater when compared to the situation where only the input qubit is subjected to this type of noise .this kind of behavior was also observed in ref . when the channel qubits are subjected to the amplitude damping noise .third , when noise is unavoidable but either alice or bob can choose the kind of noise that acts on their qubits , we showed that the optimal combination of noisy environments leading to the highest efficiencies is related in a non - trivial way to the time ( or probability ) the qubits are affected by the noise . 
in many scenarios we showed that alice and bob should keep their qubits in different noisy environments in order to get the best performance for the teleportation protocol .furthermore , we showed that sometimes the optimal efficiency is obtained by letting one of the qubits be subjected for a longer time to noise than the other one .fourth , in a noisy environment we showed that the choice of the quantum channel can affect the efficiency of the teleportation protocol , even if we deal with channels having the same amount of entanglement . being more specific , we showed that if the quantum channel is a bell state and the amplitude damping noise acts on both qubits of the channel , different bell states lead to different performances for the teleportation protocol . in summary , the main message we can draw from the results given in this paper , and from the complementary ones given in refs . , is quite clear : when noise is taken into account in analyzing the performance of a quantum communication protocol we should not expect the optimal settings for the noiseless case to hold anymore .moreover , sometimes counterintuitive optimal settings occur and , therefore , we should always analyze the types of noise we are going to face in any realistic implementation of a particular protocol and determine those optimal settings on a case - by - case basis .rf thanks capes ( brazilian agency for the improvement of personnel of higher education ) for funding and gr thanks the brazilian agencies cnpq ( national council for scientific and technological development ) and cnpq / fapesp ( state of so paulo research foundation ) for financial support through the national institute of science and technology for quantum information. the optimal efficiencies assuming the input qubit is subjected to the depolarizing noise and bob s qubit to the four different types of noise are }{6 } \nonumber \\ & & + \frac{(1-p_{\!_i})\sqrt{1-p_{\!_b}}\sin(2\theta)}{3}. \label{dzadopt}\end{aligned}\ ] ] the optimal settings for eqs . ( [ dzdopt ] ) and ( [ dzbfopt ] ) are , for eq .( [ dzphfopt ] ) are if and if , and for eq .( [ dzadopt ] ) we have and such that and . in fig .[ figa1 ] we illustrate the behavior of eqs .( [ dzdopt ] ) to ( [ dzadopt ] ) for a particular value of .it is worth mentioning that as we increase the value of we obtain the same curves with all points translated to lower values of . for values of greater than we do not have any curve above the classical limit .( color online ) efficiency of the teleportation protocol when both the input qubit ( ) and bob s qubit ( ) are affected by a noisy environment .the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency . herethe input qubit is always subjected to the depolarizing ( ) noise while bob s qubit may suffer from several types of noise.,width=302 ] ( color online ) efficiency of the teleportation protocol when both the input qubit ( ) and bob s qubit ( ) are affected by a noisy environment . the dashed line , , marks the value below which classical protocols ( no entanglement ) give the same efficiency . 
herethe input qubit is always subjected to the amplitude damping ( ) noise while bob s qubit may suffer from several types of noise.,width=302 ] finally , the optimal efficiencies when the input qubit is subjected to the amplitude damping noise are }{6 } \nonumber \\ & & + \frac{\sqrt{(1-p_{\!_i})(1-p_{\!_b})}\sin(2\theta)}{3 } , \label{adzadopt}\\ \langle\overline{f}_{_{ad,\varnothing , bf}}\rangle&= & \frac{2}{3 } - \frac{p_{\!_i}}{6}-\frac{(1-p_{\!_i})p_{\!_b}}{3 } \nonumber \\ & & + \frac{\sqrt{1-p_{\!_i}}(1-p_{\!_b})}{3 } , \label{adzbfopt}\\ \langle\overline{f}_{_{ad,\varnothing , phf}}\rangle & = & \frac{2}{3 } - \frac{p_{\!_i}}{6}+\frac{\sqrt{1-p_{\!_i}}|1 - 2p_{\!_b}|}{3 } , \label{adzphfopt}\\ \langle\overline{f}_{_{ad,\varnothing , d}}\rangle & = & \frac{2}{3 } - \frac{p_{\!_b}}{6 } + \frac{1-p_{\!_b}}{3}\hspace{-.1cm}\left(\hspace{-.1 cm } \sqrt{1\!-\!p_{\!_i } } \!-\!\frac{p_{\!_i}}{2}\hspace{-.1 cm } \right)\hspace{-.1cm}. \label{adzdopt}\end{aligned}\ ] ] the optimal settings for eq .( [ adzadopt ] ) are and such that and , for eqs .( [ adzbfopt ] ) and ( [ adzdopt ] ) are , and for eq .( [ adzphfopt ] ) are if and if . in fig .[ figa2 ] we plot eqs .( [ adzadopt ] ) to ( [ adzdopt ] ) for a specific value of . for other values of have similar curves and similar relations among the different curves . for than we do not have any curve crossing the classical limit .w .- li li , c .- feng li , and g .- c .guo , phys .a * 61 * , 034301 ( 2000 ) ; p. agrawal and a. k. pati , phys .a * 305 * , 12 ( 2002 ) ; g. gordon and g. rigolin , phys . rev .a * 73 * , 042309 ( 2006 ) ; g. gordon and g. rigolin , phys .a * 73 * , 062316 ( 2006 ) ; g. gordon and g. rigolin , eur .j. d * 45 * , 347 ( 2007 ) ; g. rigolin , j. phys .b : at . mol .. phys . * 42 * , 235504 ( 2009 ) ; g. gordon and g. rigolin , opt . commun . * 283 * , 184 ( 2010 ) ; r. fortes and g. rigolin , ann . phys .( n.y . ) * 336 * , 517 ( 2013 ) . in order to save space , we abuse notation and use the same symbol to denote , where is the identity matrix acting on bob s qubit . is the correct mathematical representation of alice s projective operator when acting on a three qubit state .s. massar and s. popescu , phys .74 * , 1259 ( 1995 ) ; s. l. braunstein , ch .a. fuchs , and h. j. kimble , j. mod . opt . * 47 * , 267 ( 2000 ) ; h. barnum , phd thesis ( university of new mexico , albuquerque , 1998 ) .
We investigate how the efficiency of the quantum teleportation protocol is affected when the qubits involved in the protocol are subjected to noise or decoherence. We study all types of noise usually encountered in real-world implementations of quantum communication protocols, namely the bit flip, phase flip (phase damping), depolarizing, and amplitude damping noise. Several realistic scenarios are studied in which some or all of the qubits employed in the execution of the quantum teleportation protocol are subjected to the same or different types of noise. We find noise scenarios not previously known in which more noise or less entanglement leads to greater efficiency. Furthermore, we show that if noise is unavoidable it is better to subject the qubits to different noise channels in order to obtain an increase in the efficiency of the protocol.
Gravitational lensing reveals the mass distribution of the universe by detecting deflections of photons in the gravitational potential generated by the mass. Along most lines of sight, we can best measure the _gradient_ of the deflection, characterized by an apparent shear and magnification of background sources. The lensing shear is detectable as a coherent alignment induced on nominally randomly oriented, resolved background galaxies. Reliable measurement of this shear opens the door to a wealth of astrophysical and cosmological information, including the most direct measures of the dark components of the universe; see the recent reviews of the power of this weak gravitational lensing technique.

The full power of the weak lensing technique can only be realized, however, if we are able to infer the shear from real image data without significant systematic error. This apparently straightforward measurement is complicated by several factors:

* The shear is weak, amounting to a change of roughly 2% in a galaxy's axis ratio on a typical cosmological line of sight. In a full-sky experiment it is possible to measure shear with statistical errors far below this 2% signal, so systematic errors must be extremely small else they will dominate the error budget.

* The galaxy is viewed through an instrument (possibly including the atmosphere) which convolves the lensed appearance with a point spread function (PSF) that typically induces larger coherent shape changes than the gravitational lensing, and that can vary with time and with position on the sky. This instrumental effect must be known and removed.

* The received image of the galaxy is pixelized, meaning it has finite sampling. Even if the sampling meets the Nyquist criterion, so that the image is unambiguous, our shear extraction algorithm must handle the sampling and any other signatures of the detector.

* The unlensed appearance of any individual galaxy is unknown, and galaxies have an infinite variety of intrinsic shapes. No finite parameterization can fully describe the unlensed galaxies.

* The received image includes photon shot noise and additional noise from the detector. The shear inference must maintain exceptionally low bias even when targeting galaxies of low signal-to-noise ratio if we are to extract the bulk of the shear information available from typical optical sky images.

A series of community challenges has invited participants to infer the shear from simulated sky images, as a means of assessing our abilities to measure shear in the face of the above difficulties; the publications describing these challenges also summarize the impressive variety of techniques that have been proposed for shear inference.

A useful parameterization for errors in shear inference is (using a simplified scalar notation) that the measured shear is related to the true shear through a multiplicative bias $m$ and an additive bias $c$, $\hat{g} = (1+m)\,g_{\rm true} + c$. Ambitious weak-shear surveys must keep the multiplicative error very small to retain their full statistical power. The additive bias, which can arise when the PSF or another element of the analysis chain is not symmetric under 90-degree rotation, must be kept at a level 100-300 times smaller than the typical PSF ellipticity. The literature contains no demonstrations of robust shear algorithms yielding the required accuracy at the modest signal-to-noise ratios typical of survey galaxies.
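As a concrete illustration of this parameterization, the multiplicative and additive biases of an estimator are typically measured by running it on simulations with known input shear and fitting a straight line to recovered versus true shear. The short Python sketch below does exactly that for one shear component; all numbers are invented purely for illustration.

```python
import numpy as np

# Hypothetical calibration data: true input shears of a simulation suite and
# the shears recovered by some estimator (one component shown; values invented).
rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=50)
g_meas = 1.02 * g_true + 3e-4 + rng.normal(scale=1e-4, size=g_true.size)

# Least-squares fit of g_meas = (1 + m) * g_true + c.
A = np.vstack([g_true, np.ones_like(g_true)]).T
(slope, c), *_ = np.linalg.lstsq(A, g_meas, rcond=None)
m = slope - 1.0
print(f"multiplicative bias m = {m:+.4f}, additive bias c = {c:+.2e}")
```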
demonstrates the ability to attain at high , overcoming all but the last of the problems itemized above .this `` fdnt '' method has a rigorous formulation for noiseless data .it has , however , proven more difficult to derive a shear inference that is rigorously correct in the presence of noise . the lowest - order noise - induced bias in galaxy - shape estimates that are produced via maximum - likelihood fitting to parametric galaxy models .but it is not clear that this correction yields the necessary accuracy on real data .the most common approach to biases induced by noise ( or other systematic errors ) has been to use simulated sky data to infer and then apply a correction to the real - sky result .the accuracy of this approach is of course limited by the extent to which the simulated data reproduces the salient characteristics of the real sky . propose a somewhat more general scheme of calibrating shape - measurement biases vs a few parameters of the galaxy and measurement , _e.g. _ the level , resolution , sersic index , etc . , and then applying bias corrections on a galaxy - by - galaxy basis . but note that this galaxy - by - galaxy bias correction is inaccurate at low , and it is better to determine a single overall correction to the shear using prior high- information on a random subsample of the source population .all these avenues lead us back to the situation of requiring prior empirical high- information on the underlying source galaxy population in order to produce the noise - bias corrections .once we need an empirical prior on galaxy information , we should look to bayesian techniques to produce a rigorously correct shear estimator .the lensfit method of ( * ? ? ?* lensfit ) produces a bayesian posterior distribution for the ellipticity of galaxy given its pixel data , assuming that galaxies take a known functional form and given a prior distribution of and the other parameters of the presumed galaxy model .but in lensfit the inference of applied _ shear _ from an ensemble of these posterior densities for galaxy _ shapes _ is done using some _ ad hoc _ weighting and averaging schemes .testing of the lensfit codes on simulated data in yields at , necessitating an empirical multiplicative correction to shears derived from real data . in this paper we will derive a rigorous bayesian treatment of the inference of weak _ shear _ , not just galaxy shapes , from pixel data .the general approach is outlined in section 2 .then we examine three ways to apply this approach to real data : section [ method1 ] examines bayesian model - fitting ( bmf ) approaches and extends the lensfit bayesian method to shear estimation .section [ method2 ] then shows how to apply bayesian techniques without assuming a parametric model for unlensed galaxies , yielding an adaptation of the venerable ( * ? ? ?* ksb ) weighted - moment method that treats convolution , sampling , and noise rigorously .we consider this bayesian fourier - domain ( bfd ) method to be our most promising algorithm for high - accuracy shear inference . in section [ nulltests ] we explore whether the fdnt method that is successful at high can be embedded in a bayesian framework for accuracy at finite . in sections[ compare ] and [ conclude ] we compare to some extant shear - measurement algorithms and conclude .we begin by assuming that we wish to infer the posterior distribution of a constant shear from an image containing data that can be subdivided into statistically independent subsets , each covering a single galaxy . 
for a prior probability of the shear ,the standard bayesian formulation for the posterior is our fundamental assumption is that the posterior will be well approximated by a quadratic taylor expansion in , _i.e. _ the posterior will be gaussian in .this assumption will fail when the number of galaxies is small and the shear is poorly constrained , but should become valid when we combine information from a large number of galaxies on a weak shear . in appendix [ cubic ] we give a prescription for including 3rd - order terms in as perturbations , since terms are necessary if the estimate of is to be accurate to part in for .the bayesian formulation ( [ bayes1 ] ) produces an exact posterior distribution of the shear .the full posterior distribution could be propagated into cosmological inferences , but most current analyses require instead an estimator for the shear ( and an uncertainty ) . by adopting a quadratic taylor expansion for implicitly assume that the maximum of the posterior distribution is coincident with the mean of the posterior , so it would be natural to adopt the location of this posterior peak as a shear estimator .below we will consider whether this estimate is biased . for quadratic order , we need the 6 scalars that make up the scalar , vector , and symmetric matrix defined as : the dependence of the posterior on can now be expressed as \cdot { \mbox{\boldmath }}.\ ] ] if we presume that the data are much more informative than the prior on , we may drop the and find that the posterior for the shear is gaussian with covariance matrix and mean defined via note that we have _ not _ made any assumption about the gaussianity of the likelihood for each galaxy , nor about any priors , etc . ; only about the posterior of the applied shear . before describing methods of calculating and its derivatives ,we discuss generalization of ( [ pg ] ) to non - constant shear fields .once the quantities and are calculated for each galaxy , the posterior for any shear model can be calculated as long as the model predicts shears in the regime where ( [ pquad ] ) holds .consider a model with parameter vector which predicts shear values at each galaxy .we can slightly modify our formulation and proceed as before to derive the posterior \cdot { \mbox{\boldmath }}_i({\mbox{\boldmath }}),\ ] ] if is a linear function of ( and if the prior is approximated as quadratic ) , then the solution for the maximum - posterior is a closed - form matrix equation .one potentially interesting application is a fourier decomposition . 
if the source galaxies are uniformly distributed on the plane and we can make the approximation that then dependence of the posterior on the fourier coefficients becomes \notag\\ & + \frac{1}{4}\sum_\mu { \mbox{\boldmath }}_\mu \cdot { { \mbox{}}}^{-1}_g \cdot { \mbox{\boldmath }}_\mu^\star , \end{aligned}\ ] ] with as from ( [ cg ] ) .this posterior separates into a gaussian over each 2-component with identical covariance on each of the real and imaginary parts of each fourier coefficient and a simple matrix solution for the most probable .a similarly simple posterior could be derived for any decomposition of the shear field into orthogonal functions .bayesian estimation of -point correlations of a shear field should also be similarly straightforward , at least in the limit where galaxy shape noise and measurement noise are dominant over the sample variance of the shear field .we leave this derivation for future work .is the mean from an unbiased estimator of the true shear ?the bayes formalism does not guarantee that the mean of the posterior is an unbiased estimator .it does assure , however that the posterior converges to the input value if the posterior is narrow , _i.e. _ the bayesian posterior is not __ define as the expectation of the summand in over the population of target galaxies , and define for each galaxy we can define and .when we ignore the prior as weakly informative relative to the data from a large number of galaxies , the expectation of the mean of the posterior in becomes + o(n^{-2}).\end{aligned}\ ] ] the last line arises from expanding and using as , and the posterior narrows to a delta function at . to the extent that our quadratic approximation is valid , bayes theorem demands that this equal the true input .we find then that the bias for a finite set of galaxy is , to leading order in , .\ ] ] the first bracketed term is roughly a multiplicative bias ( not quite , because the distributions of will depend weakly on ) .if we have , then this multiplicative bias on shear is .this bias will be smaller than the statistical error that arises from shape noise in a weak lensing measurement .the forms of and suggest that in magnitude , so the second term can be expected to scale as .in addition , this term involving covariance between and will vanish by symmetry at if the psf and noise are isotropic .therefore we expect this bias to have another factor of or of , the psf ellipticity , in front of it , leading to an additive bias of perhaps . this will again always be below shape noise and insignificant for shear statistics constrained by galaxy measurements .these terms do represent a kind of noise bias on the mean of the shear posterior taken as a shear estimator . the principal difference from previous techniques , however , is that the size of this bias scales as the square of the measurement error on the _ shear _ , and is not affected by measurement errors on individual galaxy shapes .implementation of the bayesian method depends on assigning a probability to the pixel data given the shear . to doso we introduce some finite set of quantities describing the appearance of galaxy . we must be able to assign a likelihood of the pixel data being produced by a galaxy with properties .we also need to know the distribution of for the true galaxy population viewed through shear. then we can assign the derivatives with respect to needed to define and in ( [ pqr ] ) all propagate purely to the prior inside the integral . 
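Once the per-galaxy quantities are in hand, assembling the posterior for a constant shear is a small linear-algebra exercise. In the sketch below (Python), Q_i and R_i are taken to be the gradient and Hessian of ln P(D_i|g) evaluated at g = 0 for galaxy i; this notation is an assumption of the sketch and may differ from the exact definitions used above, but the structure of the resulting Gaussian posterior is the same, with the prior on g taken as uninformative.

```python
import numpy as np

def combine_shear(Q, R):
    """
    Combine per-galaxy derivatives of the log-likelihood into the Gaussian
    posterior for a constant weak shear (prior assumed uninformative).
    Q: (N, 2) gradients of ln P(D_i|g) at g = 0.
    R: (N, 2, 2) Hessians of ln P(D_i|g) at g = 0.
    """
    Qsum = Q.sum(axis=0)
    Rsum = R.sum(axis=0)
    cov = np.linalg.inv(-Rsum)   # posterior covariance of the shear
    g_hat = cov @ Qsum           # posterior mean (= maximum) of the shear
    return g_hat, cov
```

In practice one would accumulate Q and R over the whole catalog and call combine_shear once at the end; the returned covariance plays the role of the shear covariance matrix discussed above.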
in the next two sections, we will consider first a model - fitting approach , in which the galaxy properties are assumed to predict all of the observed pixel data values ; and a model - free scheme , in which the pixel data are compressed to a smaller set of moments that will serve as our galaxy properties . in either case , the essential requirements are that : 1 . we have a rigorous means to assign a likelihood , and 2 . the distribution of real galaxies properties changes under application of shear , and that we can determine this dependence .e ._ there is a detectable and known signature of shear upon the galaxy population .if galaxy is assumed to be fully described by a model with a finite number of parameters , all the instrumental signatures are known , and there is a known noise model for the pixel data , then one can calculate the likelihood of the full pixel data vector . herewe divide into three subsets : * is a vector of observed parameters that are altered under the action of lensing shear , _i.e. _ the ellipticity of the galaxy model .we presume that there is an exactly known transformation under the action of shear from intrinsic source parameters to the observed parameters , .such is the case , for example , when is the 2-component ellipticity of a galaxy with self - similar elliptical isophotes and is any of the common representations of the shear linear transformation matrix .we leave the form of this transformation free at this point to accommodate any convention for the definition of the ellipticity and the shear .we do , however , require that the transformation be reversible : .* is the center of the galaxy , or more generally any parameters whose prior distribution can be taken as uniform both before and after the application of shear to the sky .we hence will not make an argument of our priors .* are other parameters of the model , which we take to be invariant under the action of shear on the image .examples would be the half - light radius , surface brightness , and sersic index of a simple elliptical sersic - profile galaxy model , _e.g. _ as used in .the shear - conditioned probability for single galaxy becomes where is the prior distribution the galaxy parameters given the shear .the shear enters the posterior only through this term .conservation of probability under shear requires that where is the _ unlensed _ distribution of galaxy ellipticities , and the last term is the jacobian of the ellipticity transformation . for an isotropic universe ,the unlensed prior should be a function only of the amplitude , not the orientation . given the unlensed prior , we can make a taylor expansion in the shear : once the transformation is specified , the functions and can be derived in terms of and its first two derivatives with respect to .the quantities needed from each galaxy for the bayesian posterior in are note that would be the bayesian evidence for in the absence of lensing shear . the operative procedure for obtaining the bayesianshear estimate is : 1 . determine the unlensed prior from a high- imaging sample , and perform the taylor expansion in knowing the ellipticity transformation equation .2 . for each observed galaxy ,compute the six distinct integrals in ( [ int1 ] ) with the likelihood over the ellipticity and structural parameters to get and .3 . sum over galaxies to obtain in . 
4 .sum over galaxies to obtain the shear estimate as in .the shear posterior has this mean ( and maximum ) and covariance matrix .note that the division by occurs after the summation over galaxies , _i.e. _ we do not generate ellipticity or shear estimators on a galaxy - by - galaxy basis .also note that for galaxies with very noisy data , is very weakly dependent on galaxy properties and hence on shear : equations [ pqr ] then show that and both tend to zero .the low- galaxy hence has no influence on the shear likelihood .it will therefore be unnecessary to make cuts on galaxy size or ratio to obtain a successful measurement .as long as the source selection is made on a quantity ( such as total flux ) that is unaffected by galaxy shape or lensing shear , we avoid selection biases . the only approximation made in this derivation was that of weak shear , namely that is accurate over the range of permitted by the data . if the unlensed ellipticity distribution is characterized by a scale of variation , this means we are assuming that .extension to multiple exposures of the same galaxy is trivial : the pixel data for galaxy may be the union of distinct exposures information , assuming that different exposures have statistically independent errors , we have the exposures are in different filter bands , then the most general formulation is that and are the unions of distinct structural parameters and for each band , and the prior must specify the joint distribution of galaxies appearances in all observed bands .an alternative formulation to the above is to integrate over the source ellipticity instead of the lensed ellipticity , which puts the derivatives with respect to shear in the likelihood instead of the prior : in this case the quantities needed for the shear posterior are p_0 ( e^s_i , { \mbox{\boldmath }}_i ) \\ { \mbox{}}_i & \equiv \int d{\mbox{\boldmath }}^s_i \ , d{\mbox{\boldmath }}_i\,d{\mbox{\boldmath }}_i \left [ \left.{\boldsymbol{\nabla_g}}{\boldsymbol{\nabla_g}}{\mbox{}}({\mbox{\boldmath }}_i |{\mbox{\boldmath }}^s_i\oplus g , { \mbox{\boldmath }}_i ) \right|_{{\mbox{\boldmath }}=0}\right ] p_0 ( e^s_i , { \mbox{\boldmath }}_i ) \notag\end{aligned}\ ] ] we would expect equations ( [ int1 ] ) to be the more computationally efficient approach , since the derivatives of the prior with respect to shear can be pre - calculated once , whereas ( [ int1b ] ) require calculating 5 shear derivatives of the likelihood for every target galaxy .furthermore the isotropy and parity symmetries of the unlensed sky simplify the shear derivatives of the prior .a numerical test with a simplified model demonstrates the accuracy and feasibility of the bayesian shear estimates in the presence of highly non - gaussian likelihoods for the ellipticities of individual galaxies .we take a minimal set of galaxy properties to be , the two components of a ( post - lensing ) true galaxy shape .we take an absolute minimal data vector for each galaxy to be a measurement of the ellipticity , ignoring centroid and any structural parameters . for each galaxywe : * draw a source ellipticity from an isotropic unlensed distribution truncated to and defined by ^ 2 \exp\left[-(e^s)^2/2\sigma_{\rm prior}^2\right].\ ] ] the additional factor of atop the gaussian ensures the prior has two continuous derivatives at the boundary . *generate a lensed ellipticity for a constant shear , using the full non - euclidean transformation for ellipticity under shear _e.g. _ as described by . 
*obtain a measurement by drawing from a gaussian distribution with a variance of per axis .this measurement error is made non - gaussian by truncation to .furthermore we model biases in the measurement process by centering the gaussian at a value for some multiplicative error and additive error . in the numerical testsbelow , we adopt and . the likelihood is a known truncated , biased function .the measurement error distribution is asymmetric with non - zero mean , and strongly dependent upon the intrinsic ellipticity characteristics which induce biases in most extant shear - estimation methods . the integrals in ( [ int1 ] )are evaluated using a grid - based approach adapted from .we define a set of sampled points , starting by evaluating the integrand of in ( [ int1 ] ) at a random point in .we construct a square grid with initial resolution centered on the initial sample .we then iterate this process : 1 .evaluate the integrand at all grid points that neighbor existing members of the sample .2 . determine the maximum integrand value among all sampled points .discard any sampled point with integrand .we iterate this process until no new samples are added on an iteration .if the number of surviving above - threshold samples is , we halt the process .otherwise we decrease the grid spacing by a factor of by adding a new grid point at the center of each previous grid square .we then repeat the above iteration .the grid is refined until we reach the desired number of above - threshold samples or until we reach a minimum resolution .we can now calculate the integrals in ( [ int1 ] ) for each galaxy by summing over the sampled points : and evaluate and by summing over the same samples , reusing the calculated , changing only the last terms as per the integrands in ( [ int1 ] ) . for all tests we fix , , and . for each parameterset we simulate at least galaxies to reduce the shape noise below our desired accuracy .the left plot in fig . [ recovery ] shows the relative error for different values of and .the shaded region is the desired relative shear error of .we recover to this desired precision for all values of as long as is below , even when the shear measurement error is as large as , as one might obtain for real galaxies with .the number of likelihood evaluations per galaxy is between 50 and 200 for all cases tested .we also use a jacknife method to measure the statistical uncertainty in the shear estimator ( [ barg ] ) from each simulation . the measured uncertainty in the shearis found to agree ( to 12% ) with the covariance matrix derived in ( [ cg ] ) from the bayesian framework .the toy model illustrates the validity of the weak - shear bayesian formalism in the face of non - euclidean shear transformations and messy ( but known ) shape measurement errors . the right - hand plot in figure [ recovery ] plots the bayesian shear measurement error vs the per - galaxy ellipticity measurement error , for the case . for comparison we plot in bluethe shear estimated for the same simulated data using the lensfit estimator described in section 2.5 of .a galaxy weighting function is allowed in lensfit : we assume equal weighting for our simulated galaxies . the fully bayesian shear estimator attains the desired while the lensfit estimator does not .note that present a different lensfit estimator which has been applied to the canada - france - hawaii lens survey. 
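A minimal sketch of this kind of adaptive, threshold-based grid sampling is given below (Python, for a two-dimensional integrand such as the ellipticity plane). The threshold, the target number of samples, and the simple halving of the grid spacing are illustrative choices, not necessarily the values or refinement rule used in the tests described above.

```python
import numpy as np

def adaptive_grid_samples(f, start, dx0=0.1, frac=1e-3, n_min=30, dx_min=1e-3):
    """Evaluate f on an expanding square grid, keep samples above frac * max,
    and refine the grid until at least n_min samples survive or the
    resolution floor dx_min is reached.  Returns points, values, cell size."""
    dx = dx0
    origin = np.asarray(start, dtype=float)
    evaluated = {(0, 0): f(*origin)}          # all points ever evaluated

    def pos(idx):
        return origin + dx * np.asarray(idx, dtype=float)

    while True:
        # grow: evaluate f at grid neighbours of every above-threshold sample
        while True:
            fmax = max(evaluated.values())
            active = [k for k, v in evaluated.items() if v >= frac * fmax]
            new = [(i + di, j + dj) for i, j in active
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (i + di, j + dj) not in evaluated]
            if not new:
                break
            for idx in set(new):
                evaluated[idx] = f(*pos(idx))
        fmax = max(evaluated.values())
        keep = {k: v for k, v in evaluated.items() if v >= frac * fmax}
        if len(keep) >= n_min or dx / 2 < dx_min:
            pts = np.array([pos(k) for k in keep])
            return pts, np.array(list(keep.values())), dx
        # refine: halve the spacing; surviving samples keep their positions
        dx /= 2
        evaluated = {(2 * i, 2 * j): v for (i, j), v in keep.items()}

# illustrative check on a 2-d Gaussian integrand: sample sum times cell area
# should approximate the analytic integral 2*pi*sigma^2
g = lambda x, y: np.exp(-0.5 * (x**2 + y**2) / 0.04)
pts, vals, dx = adaptive_grid_samples(g, start=(0.05, -0.02))
print(len(vals), vals.sum() * dx**2, 2 * np.pi * 0.04)
```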
The primary difficulty in implementing these Bayesian shear measurement methods will be the need to construct high-dimensional priors. We will be tempted to reduce the dimensionality of the galaxy parameter space. Under what circumstances can we omit a parameter from our Bayesian calculation and still obtain a rigorously correct result? Consider the simple case where a galaxy property set is supplemented by a parameter which can take discrete values with known probabilities; the posterior contribution from a single galaxy is then a sum over these cases. If we are unaware of this parameter, or choose to ignore it, we will have a prior marginalized over it and probably assign a likelihood that is also an average over the cases. Our posterior calculation will then differ from the correct result ( [ correct ] ) unless either the prior or the data likelihood is independent of the omitted parameter. In other words: _the prior must specify the distribution of all galaxy parameters that the likelihood depends upon._ We illustrate such bias by dividing the galaxies in our toy model into two populations, A and B, with distinct intrinsic ellipticity dispersions and measurement errors. Galaxies are assigned to type B at random with some probability. An observer ignorant of the existence of type A and type B galaxies would infer a prior and a measurement-error distribution found by averaging over the full population as in ( [ avg ] ). Fig. [ pop ] shows the accuracy of the Bayesian shear estimate under these (mistaken) assumptions. We fix the intrinsic ellipticity dispersions and the type A measurement error and vary the properties of the type B population. As expected, a shear bias (up to 15%) appears as the measurement-error distribution of type B becomes more distinct from that of type A. The shear bias decreases when type B galaxies become rarer and when their measurement error (hence likelihood function) becomes indistinguishable from that of type A.

[Figure [ pop ] caption: shear bias of the Bayesian estimate as a function of the properties of the B population. The intrinsic ellipticity dispersions for A and B galaxies are 0.1 and 0.3, respectively. The shear bias from the unrecognized structural parameter vanishes when the error distributions match and becomes independent of galaxy type; it is also reduced as the fraction of the population in type B is decreased.]
as noted above, we need to marginalize over any galaxy parameter that appears in the likelihood expression for the data .acknowledging that we can not construct a model of galaxy structure with which to predict all pixel values , we can instead compress the pixel data into a small number of quantities that carry most of the information on any shear applied to the source .we will call the compressed quantities `` moments '' since we propose that they be intensity - weighted moments .this method will hence resemble the venerable shear - measurement methodology , but with critical changes to eliminate the approximations inherent to ksb and a rigorous bayesian formulation to eliminate noise - induced biases .we choose to reduce the pixel data for any galaxy image to a small vector of derived quantities which we will select to be sensitive to shear .the must be chosen such that one can propagate the pixel noise model to the moment vector .if are the moments calculated from the pixel data for galaxy , we must be able to assign a likelihood of producing the measured moments given that the galaxy has true ( noiseless ) moments . in this casethe contribution to the shear posterior from galaxy is where is the prior distribution of moments of a galaxy centered at given a local shear .the taylor expansion of the prior can be written as and then the quantities needed from each galaxy for the bayesian posterior in are now the operative procedure for obtaining the bayesian shear is : 1 .determine the prior from a high- imaging sample , and perform the taylor expansion in . here is the vector from the coordinate origin of the moments to the center of the galaxy .the derivatives of this prior under shear must be obtained by simulating the action of shear on each member of the high- ensemble defining the prior .appendix [ momentcalcs ] shows that the derivatives of the moments are equal to a set of higher - order moments , which can hence be measured directly from the high- sample .there is no longer any explicit ellipticity parameter describing each galaxy to represent the shear - dependent aspects of that galaxy .2 . for each observed galaxy , measure the moments of that galaxy about some pre - selected coordinate origin and determine the likelihood function .we have no further need of the pixel data for the galaxy after this compression .3 . compute the six distinct integrals in ( [ int1c ] ) over the moment space to get and .this is the computationally intensive step .4 . sum over galaxies to obtain in . 5 .sum over galaxies to obtain the shear estimate as in .the shear posterior has this mean ( and maximum ) and covariance matrix . 
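Steps 4 and 5 amount to combining the per-galaxy quantities into a posterior mean and covariance for the shear. The sketch below assumes the usual second-order expansion in which the posterior factor for galaxy $i$ is $P_i + \mathbf{Q}_i\cdot\mathbf{g} + \tfrac{1}{2}\mathbf{g}\cdot\mathbf{R}_i\cdot\mathbf{g}$; the sign and normalisation conventions are assumptions and may differ from the equations referenced in the text.

```python
import numpy as np

def shear_from_pqr(P, Q, R):
    """Combine per-galaxy P_i (scalar), Q_i (2-vector) and R_i (2x2 matrix)
    into a shear estimate and covariance, assuming a second-order expansion
    of each galaxy's posterior factor about g = 0."""
    P = np.asarray(P, dtype=float)           # shape (N,)
    Q = np.asarray(Q, dtype=float)           # shape (N, 2)
    R = np.asarray(R, dtype=float)           # shape (N, 2, 2)
    q = Q / P[:, None]
    # Fisher-like matrix: sum_i [ (Q_i Q_i^T)/P_i^2 - R_i/P_i ]
    A = np.einsum('ni,nj->ij', q, q) - (R / P[:, None, None]).sum(axis=0)
    cov_g = np.linalg.inv(A)                 # posterior covariance of the shear
    g_hat = cov_g @ q.sum(axis=0)            # posterior mean (and maximum)
    return g_hat, cov_g
```

In this form the computationally intensive step 3 is entirely hidden inside the production of each galaxy's P, Q and R.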
to make things more concrete ,we propose that the compressed quantities be intensity - weighted moments of the galaxy image .we will evaluate these in fourier domain where , in the absence of aliasing , the exact correction for the effect of the point spread function ( psf ) on the observed galaxy image is a simple division .this yields our bayesian fourier - domain ( bfd ) algorithm .the moment vector could be here and are the fourier transforms of the observed image and the psf , respectively , and is a window function applied to the integral to bound the noise , in particular confining the integral to the finite region of in which is non - zero , as detailed in .there is a specific weight that will offer optimal for a given galaxy pair , but any properly bounded weight with finite 2nd derivatives produces a valid shear method .one can select a single weight function for a full survey and retain good as long as the psf does not vary widely in size . in real datathe integrals revert to sums over the -space values sampled by a discrete fourier transform ( dft ) of the pixel data .the first motivation for this choice is that these moments are linear in the observed pixel data and therefore meet the requirement that we be able to construct a moment noise model from the known pixel noise model .in fact if the noise in the pixels is independent of the pixel values , then the covariance matrix of the moments is independent of the value of , and also independent of any other properties of the galaxy .such is the case for the background - limited conditions typical of faint - galaxy imaging .if the noise is stationary , than all galaxies with the same psf will have the same .even if the pixel noise is not gaussian , the central limit theorem implies that the moments , which are sums over many independent pixels , will tend toward a gaussian distribution .we therefore can take furthermore if the psf is ( nearly ) isotropic , then the azimuthal symmetries of our chosen basis set guarantee that will be ( nearly ) diagonal , with the exception of a covariance between the two monopole moments and .one departure from common shear - measurement practices is that calculating from pixel data for does not involve any iterative procedures such as centroiding , since iteration usually produce non - analytic likelihoods . in section [ nulltests ]we look for feasible approximations to the propagation of errors into some iterative compressed quantities .we are also tempted to reduce the dimensionality of our prior and speed up marginalization by working with the normalized ratios , , , and that figure prominently in ksb , then dropping and from our data vector .the probability distribution for such ratios of gaussian deviates is known , but depends on the mean value and variance of the denominator , so the moment likelihood function would still depend on and . 
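As a concrete illustration of how such weighted Fourier-domain moments might be evaluated from pixel data, the following sketch uses numpy FFTs. The specific k-space weights (flux: 1, size: $|k|^2$, dipole: $k_x$, $k_y$, quadrupoles: $k_x^2-k_y^2$ and $2k_xk_y$), the Gaussian window, and the assumption that the PSF stamp is centred are illustrative choices, not the paper's exact conventions, which are given in the appendix it references.

```python
import numpy as np

def bfd_moments(image, psf, sigma_w=1.0, pixel_scale=1.0):
    """Schematic weighted Fourier-domain moments of a (square) galaxy image."""
    n = image.shape[0]
    I_k = np.fft.fft2(image)
    T_k = np.fft.fft2(np.fft.ifftshift(psf))        # PSF moved to the origin pixel
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=pixel_scale)
    ky, kx = np.meshgrid(k, k, indexing='ij')       # row -> y, column -> x
    k2 = kx ** 2 + ky ** 2
    W = np.exp(-0.5 * k2 * sigma_w ** 2)            # assumed Gaussian window
    # exact PSF correction is a division; keep only k where T_k is non-negligible
    good = np.abs(T_k) > 1e-8 * np.abs(T_k).max()
    f = np.zeros_like(I_k)
    f[good] = W[good] * I_k[good] / T_k[good]
    # hermitian symmetry of a real image's transform makes the monopole and
    # quadrupole sums real and the dipole sums purely imaginary
    return np.array([
        f.sum().real,                          # flux-like monopole
        (k2 * f).sum().real,                   # size-like monopole
        (kx * f).sum().imag,                   # dipole x
        (ky * f).sum().imag,                   # dipole y
        ((kx ** 2 - ky ** 2) * f).sum().real,  # quadrupole +
        (2.0 * kx * ky * f).sum().real,        # quadrupole x
    ])
```

Overall normalisation of the DFT sums is convention-dependent and omitted here.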
as per the discussion in section [ hidden ], we would still need to include these quantities in the prior and marginalize over them , foiling our plan to simplify the prior .we choose the straightforward path of compressing the pixel data to un - normalized moments .what is the motivation for choosing this particular set of linear moments ?any choice of well - defined compressed data vector will yield a _ valid _bayesian shear estimator under this method , but the choices will differ in the _ precision _ to which they determine the shear from a given galaxy sample .we want to include quantities that unambiguously capture most of the impact of shear upon the pixel image .the quadrupole moments and respond at first order to shear .they are also sensitive at first order to the galaxy size and flux , and at second order to translation of the galaxy image , so we add the monopole and dipole moments , , , and , sensitive to flux , size , and translation in first order , to yield less degenerate shear information for each galaxy .the inclusion of furthermore would admit generalizing the bayesian formalism to infer the weak lensing magnification along with the 2 shear components .we defer testing of a full implementation of this method for a later publication , but we do outline the necessary steps and possible economies . the biggest astrophysical challenge will be to produce the prior , which is a function of the six components which can be reduced to 5 dimensions because the isotropy of the universe requires the unlensed prior to be invariant under coordinate rotation . at each point in the 5-dimensional space ,we require 5 shear derivatives of the prior as well as the unsheared prior , so the we must construct a 6-dimensional function on a 5-dimensional space , and for each source galaxy integrate the posterior over all 6 dimensions of .note that this bfd method will be much faster than the bmf method even at equal dimensionality , because bfd requires the evaluation of only one gaussian at each point of the integration , whereas bmf requires evaluation of the full likelihood of the pixel data of a model . since galaxy - evolution theory will not be able in the foreseeable future to provide an _ a priori _ distribution of galaxy appearances , the prior will always have to be empirical .the prior can be established by obtaining high- data on a sample of the sky .note that the values depend upon the choice of weight function to suit the observational psf , and implicitly depend upon the filter passband used for the observations , so the prior is best constructed from deep integrations using the same instrument as the main survey .the number of high- galaxies necessary to adequately define the prior is an important issue for future study .the prior can be generated by some kernel density estimation over the empirical moments where indexes the members of the high- template galaxy set . in ( [ posterior2 ] ) , however , we see that the prior is already being convolved with the likelihood function for a target galaxy . hence if the we are using target galaxies of sufficiently low that ,\ ] ] where we need , the moments for the template galaxy if it were sheared by . 
the posterior contribution for galaxy becomes .\ ] ] we perform a taylor expansion on this shear - conditioned probability to obtain our required properties : {{\mbox{\boldmath }}_\mu } \cdot { \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu , \\{ \mbox{}}_i & = \sum_\mu { \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu \cdot \left [ \frac{\partial^2}{\partial { \mbox{\boldmath }}^2 } { \mbox{}}\left({\mbox{\boldmath}}_i | { \mbox{\boldmath }}\right)\right]_{{\mbox{\boldmath }}_\mu } \cdot { \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu \notag\\ & + \sum_\mu \left [ \frac{\partial}{\partial { \mbox{\boldmath } } } { \mbox{}}\left({\mbox{\boldmath}}_i | { \mbox{\boldmath }}\right)\right]_{{\mbox{\boldmath }}_\mu } \cdot { \boldsymbol{\nabla_g}}{\boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu , \notag\end{aligned}\ ] ] where all moments and derivatives are taken at .the determination of from the data for template galaxy is straightforward in fourier domain , and is given in appendix b. now we adopt the multivariate gaussian likelihood ( [ gausslikelihood ] ) for the moments .we also can define to be the moments that we would assign to template galaxy if it were translated to ( relative to the target galaxy s coordinate origin ) and rotated by . because the unlensed sky is isotropic , we can assume that the true prior contains replicas of template galaxy at all and ., essentially adopting a prior that we have indeed found a galaxy and its center is within our postage stamp .this avoids the problem of divergent position marginalization noted in . ]the parity invariance of the unlensed sky also implies that we can place a mirror image of each template galaxy in the prior as well .appendix [ momentcalcs ] shows how to calculate moments for these transformed versions of a template galaxy . for notational simplicity we will subsume the parity flip into the integration over rotation .the taylor - expanded posterior derived from the template sample now becomes : \cdot { \mbox{}}_i^{-1 } \cdot { \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \\{ \mbox{}}_i & = \sum_\mu \int d{\mbox{\boldmath }}\ , d\phi\ , { \mbox{}}_{i\mu}({\mbox{\boldmath }},\phi ) \left\ { \left [ { \mbox{\boldmath }}_i - { \mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \right ] \cdot { \mbox{}}_i^{-1 } \cdot { \boldsymbol{\nabla_g}}{\boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \right . \notag \\ & \phantom{\sum_\mu \int d{\mbox{\boldmath }}\ , d\phi\ , { \mbox{}}_{i\mu}({\mbox{\boldmath }},\phi ) } \left . 
+ { \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \cdot { \mbox{}}_i^{-1 } \cdot{ \boldsymbol{\nabla_g}}{\mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \right\ } \notag \\ { \mbox{}}_{i\mu}({\mbox{\boldmath }},\phi ) & \equiv \exp\left\ { -\frac{1}{2 } \left [ { \mbox{\boldmath }}_i - { \mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \right ] \cdot { \mbox{}}_i^{-1 } \cdot \left [ { \mbox{\boldmath }}_i - { \mbox{\boldmath }}_\mu({\mbox{\boldmath }},\phi ) \right ] \right\ } \notag\end{aligned}\ ] ] the computational challenge of this bayesian fourier - domain shear inference is the high multiplicity of this calculation : for every target galaxy , we collect 6 sums over every template galaxy , with each term of each sum being an integral of a gaussian function of the three dimensions plus parity flip of .there are obvious efficiencies to be gained in this calculation by pruning the template set to those with significant contributions to the sums .furthermore we recall that all science will come from sums over a large number of target galaxies , so we can subsample the template set when computing the posterior for individual target galaxies , if we can do so without inducing systematic biases on .a substantial speedup of the bayesian shear calculation is enabled if we can approximate as linearly dependent on , in which case two of the three dimensions of the integrals in equations ( [ final2 ] ) reduce to linear algebra .appendix [ momentcalcs ] shows how to determine these derivatives for the template galaxies .we leave the testing of a practical implementation of equations ( [ final2 ] ) and the investigation of the required size of the template galaxy sample to further work .the bfd method improves on the bmf method by eliminating the approximation that target galaxies are described by a low - dimensional model .we paid a price , however , in losing the convenience of having the action of shear be fully described by alteration of just the two components of .this allowed us to derive the lensed prior solely from derivatives with respect to of the unlensed prior in this section we ask whether null - testing methods can be used to make a model - free bayesian shear inference with the simplicity of a known shear transformation we conclude below that this is difficult .readers uninterested in the null - testing approach can safely skip this section .windowed centroiding procedures assign a center to a galaxy by translating the galaxy until the windowed first moments are nulled .the galaxy is assigned a centroid that is the inverse of the translation needed to null the moments . in the fourier domainnull test ( fdnt ) method , this is extended by shearing as well as translating the galaxy ( after correcting for seeing ) until we null the moments and in .the galaxy is assigned a shape that is the inverse of the shear that produces the null .the moment vector for galaxy is hence a function of the four - dimensional transformation and we assign the galaxy a shape and centroid such that .this approach assures a well - determined transformation of the measured shape under an applied shear , and there is no need of a galaxy model . 
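The null-test idea can be phrased as a root-finding problem. In the sketch below, `null_moments` is a hypothetical user-supplied function returning the four windowed moments (the two first moments and the two quadrupoles mentioned above) of the seeing-corrected galaxy after applying a trial shear and translation; the galaxy is then assigned the inverse of the nulling transformation.

```python
import numpy as np
from scipy.optimize import root

def fdnt_null(null_moments, x0=np.zeros(4)):
    """Find the transformation eta = (g1, g2, dx, dy) that nulls the chosen
    moments; `null_moments(eta)` must return the corresponding 4-vector.
    The assigned shape/centroid is the inverse of this transformation
    (to first order, simply the negative of the nulling shear and shift)."""
    sol = root(null_moments, x0)
    if not sol.success:
        raise RuntimeError("no null found: " + sol.message)
    return sol.x
```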
demonstrates shear inferences errors of part in on low - noise data using fdnt .the application of rigorous bayesian formalism to fdnt is foiled , however , because there is no straightforward means of propagating the pixel noise model to a likelihood of measuring a null at when the underlying galaxy has true null at .it is possible , however , to approximate in the case where the measured shape and centroid are close to the true ones . at sufficiently high of the target galaxy, the likelihood will be confined to such regions and the bayesian formulation with this approximation will become accurate .the measured moments for galaxy when transformed by can be written as where is the true underlying moment and is the variation induced by measurement noise .since is a linear function of the pixel data , the likelihood d e\bf a k k x$}}_0}. \label{momenttransform}\end{aligned}\ ] ] we define a two - component shear of a galaxy image with the flux - conserving transformation it is convenient to adopt a complex notation at this point : with this notation the action of shear becomes can now be restated in the complex notation for the case of a shear transformation : we are interested in the 2 scalar and 2 complex moments defined as the shear derivative operators can be rewritten as now the derivatives of the moments with respect to shear are obtained by applying these operators to the moment definition ( [ momentcomplex ] ) after substituting in the shear wavevector transformation ( [ shearcomplex ] ) . for each moment , the derivatives can be expressed as \\ { \boldsymbol{\nabla}}_g{\boldsymbol{\nabla}}_g m_\alpha & = \int d^2k\ , \tilde i(k ) \left [ w(k\bar k ) c_\alpha(k ) + w^\prime(k\bark ) d_\alpha(k ) + w^{\prime\prime}(k\bar k ) e_\alpha(k ) \right ] . \label{mderivs}\end{aligned}\ ] ] table [ mtable ] summarizes the results of propagating the shear derivatives into our weight and moment functions .all of the moments and their derivatives are simple weighted polynomial moments of the galaxy fourier transform . in our adopted complex notation ,the effect of rotating the galaxy by angle is to send in the argument of in .the monopole moments and are unchanged , while the dipole and quadrupole moments and acquire factors and , respectively .the rotational behavior of all the shear derivatives of the moments can also be easily assessed by applying the phase factors to all powers of and in the elements of table [ mtable ] .a parity flip sends for all of the integrands of the moments and their derivatives in table [ mtable ] .the moments and are conjugated , meaning that and change sign , while the other moments are unchanged .a translation of the galaxy by adds a factor to in the integrand of all the moments ( and their derivatives ) . 
in general the integrations of all the moments and their shear derivativeswould need to be repeated .we may , however , elect to approximate the effect of translation on the moments and their shear derivatives by linearizing about .in this case we are interested in , etc .the derivative of any moment or its derivatives with respect to can be obtained by adding a factor of to all the functions in table [ mtable ] .similarly the derivative adds a factor to the integrands .if we adopt the linearization of the moments with translation , then for each template galaxy we need a substantial number of real - valued quantities : ( 6 moments ) ( 1 moment 5 derivatives for and ) ( 1 value 2 translation derivatives ) for a total of 108 numbers per template .the actual number of integrals we need to perform is much less than this since many elements are repeated in table [ mtable ] . recall too that these template moments that define the prior only need to be evaluated once for the experiment , and the integrations are expressed as standard linear algebra routines .evaluation of these quantities will not be a significant computational burden .
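To make the rotation and parity rules above concrete, the following sketch applies them to a moment vector ordered as in the earlier Fourier-moment sketch (flux, size, dipole x, dipole y, quadrupole +, quadrupole ×); the sense of the rotation phase, and the order in which rotation and parity are applied, are assumed conventions.

```python
import numpy as np

def transform_moments(m, phi=0.0, parity=False):
    """Rotate (and optionally parity-flip) a moment vector.

    Monopoles are unchanged, the complex dipole picks up a phase e^{i phi},
    the complex quadrupole e^{2 i phi}; a parity flip conjugates the complex
    dipole and quadrupole, so the y-dipole and cross quadrupole change sign."""
    M_f, M_r, M_x, M_y, M_p, M_c = m
    dip = (M_x + 1j * M_y) * np.exp(1j * phi)
    quad = (M_p + 1j * M_c) * np.exp(2j * phi)
    if parity:
        dip, quad = np.conj(dip), np.conj(quad)
    return np.array([M_f, M_r, dip.real, dip.imag, quad.real, quad.imag])
```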
we derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise - induced biases through a rigorous bayesian treatment of the measurement . the derived shear estimator dispenses with the assignment of ellipticities to individual galaxies that is typical of previous approaches to galaxy lensing . shear estimates from the mean of the bayesian posterior are unbiased in the limit of a large number of background galaxies , regardless of the noise level on individual galaxies . the bayesian formalism requires a prior describing the ( noiseless ) distribution of the target galaxy population over some parameter space ; this prior can be constructed from low - noise images of a subsample of the target population , attainable from long integrations of a fraction of the survey field . we find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point - spread function and sampling . the bayesian model fitting ( bmf ) method assigns a likelihood of the pixel data to galaxy models ( e.g. sersic ellipses ) , and requires the unlensed distribution of galaxies over the model parameters as a prior . the bayesian fourier domain ( bfd ) method compresses the pixel data to a small set of weighted moments calculated after psf correction in fourier space . it requires the unlensed distribution of galaxy moments as a prior , plus derivatives of this prior under applied shear . a numerical test using a simplified model of a biased galaxy measurement process demonstrates that the bayesian formalism recovers applied shears to part in accuracy as well as providing accurate uncertainty estimates . bfd is the first shear measurement algorithm that is model - free and requires no approximations or _ ad hoc _ assumptions in correcting for the effects of psf , noise , or sampling on the galaxy images . these algorithms are good candidates for attaining the part - per - thousand shear inference required for hemisphere - scale weak gravitational lensing surveys . bmf has the drawback that shear biases will occur since galaxies do not fit any finite - parameter model , but has the advantage of being robust to missing data or non - stationary noise . both bmf and bfd methods are readily extended to use data from multiple exposures and to the inference of lensing magnification .
one of the most simple and interesting quantum models that studies the interaction between radiation and matter is the jaynes - cummings model ( jcm ) .the model considers the interaction between a single two - level atom with a single mode of the electromagnetic field .the coupling between the atom and the field is characterized by a rabi frequency , and a loss of excitation in the atom appears as a gain in excitation of the field oscillator . the collapse andthe eventual revival of the rabi oscillation , described by the analytical solution of the jcm , shows a direct evidence of the quantum nature of radiation .the use of jcm has permitted to elucidate basic properties of quantum entanglement as well as some aspects of the relationship between classical and quantum physics .since it was proposed , the pattern has been of permanent interest in the quantum theory of interactions . in the decade of the eightiesit was discovered that the model exhibited highly non classic behavior , and the possibility of experimental realization appeared .the relative simplicity of the jcm and its extensions has drawn much attention in the physics community and recently in the field of the quantum computing . in this workwe use a generalization of the jcm to an state atom interacting with a single field mode . in ,shor described a quantum algorithm to decompose a number in its prime factors more efficiently than any classic algorithm .it was exponentially faster than the best known classical counterpart . in 2001 the experimental development of this algorithm has had a very interesting advance : vandersypen et _ al ._ using a seven - qubit molecule manipulated with nuclear magnetic resonance techniques has reported the factorization of the number 15 into its prime factors 3 and 5 .this algorithm illustrates a part of the theoretical challenge of quantum computation , _i.e. _ to learn how to work with quantum properties to obtain more efficient algorithms. tools like quantum parallelism , unitary transformations , amplification techniques , interference phenomena , quantum measurements , resonances , etc , must be used by the new computation science .grover , in , devised an algorithm which can locate a marked item from an unsorted list of items , in a number of steps proportional to , that is quadratically faster than any classical algorithm .continuous time search algorithms have been investigated by a number of researchers .the essential content of these proposals is to built a hamiltonian ( or alternatively a unitary operator ) with the aim to change the initial average state into another state , that also belongs to the same set of vector in the base of the hilbert space chuang .this last state is recognized by an unitary operator called oracle , that is part of the global unitary evolution , where the hamiltonian is expressed as probability to obtain the searched state is , and it equals after a time . 
in this frame , the search algorithm is seen as a rotation in the bloch sphere from the initial average state to the searched state .recently an alternative search algorithm was developed alejo , alejo1,alejo2,alejo3 that uses a hamiltonian to produce a resonance between the initial and the searched states , having the same efficiency than the grover algorithm .it can be implemented using any hamiltonian with a discrete energy spectrum , and it was shown to be robust when the energy of the searched state has some imprecision .the responses of this algorithm to an external monochromatic field and to the decoherences introduced through measurement processes was also analyzed in .however we do not know of any experimental implementation , not even for a small search set .in this paper we present a resonant quantum search algorithm implemented with a generalization of the two - level jcm .the paper is organized as follows . in section[ sec : generalizado ] we consider a generalization of the jcm , in section [ sec : resonancia ] we develop the search model . in section [ sec : numerical ] we present numerical results for our model . finally in section[ sec : conclution ] we draw some conclusions .we shall consider the generalization of the jcm to an state atom interacting with a single field mode with frequency synthesized by the following hamiltonian photon creation and annihilation operators act on the photon number state verifying & = & 1 , \label{commutador}\\ a^{\dagger } a|n\rangle & = & n|n\rangle , \text{\ } \label{prop1 } \\a^{\dagger } |n\rangle & = & \sqrt{n+1}|n+1\rangle , \label{prop2 } \\a|n\rangle & = & \sqrt{n}|n-1\rangle .\label{prop3}\end{aligned}\]] is the energy of atomic state , is the atom - field coupling constant and it is fixed by physical considerations such as the cavity volume and the atomic dipole moment , is a transition operator acting on atomic states defined by is the kronecker delta . inwhat follows the subindex shall indicate the initial state of the atom .this state will be the starting state for the search algorithm and it can be chosen as the ground state for experimental purposes .let us call the unknown searched state whose energy is known .this knowledge is equivalent to mark " the searched state in the grover algorithm .our task is to find the eigenvector with transition energy from the given initial state .then it is necessary to tune the frequency of the photon field with the frequency of the transition .this means that the frequency of the cavity mode is selected as .the transition between the atomic states is governed by the interaction term of the hamiltonian eq.([hamilton ] ) transition probability to pass from the initial atomic state with photons to any final state with photons is proportional to .after some steps we get calculate this transition probability independently of the initial and final numbers of photons the statistical weight of the photons must be incorporated , is the normalized photon number distribution and is an unknown constant . 
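The photon-operator relations listed above are easy to reproduce numerically in a truncated Fock basis, for instance as a building block for assembling the Hamiltonian matrix; this is a generic sketch, not the implementation used in the paper, and the truncation size `n_max` is an arbitrary choice.

```python
import numpy as np

def ladder_operators(n_max):
    """Truncated Fock-space matrices for the photon annihilation and creation
    operators, satisfying a|n> = sqrt(n)|n-1>, a^dag|n> = sqrt(n+1)|n+1>
    and a^dag a|n> = n|n> (up to the truncation at n_max)."""
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # annihilation
    adag = a.T                                           # creation
    return a, adag

a, adag = ladder_operators(20)
number = adag @ a
assert np.allclose(np.diag(number), np.arange(21))       # a^dag a |n> = n |n>
comm = a @ adag - adag @ a
assert np.allclose(np.diag(comm)[:-1], 1.0)              # [a, a^dag] = 1 away from the cutoff
```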
using eq.([transition1 ] ) in eq.([conjunta ] ) we obtain the dependence of with the average number of photons and the parameter , .taking into account the normalization condition , the dependence with the number of atom levels is obtained , last step is valid for large and .note that depends on the photon distribution function only through the mean value of .in the previous section we have determined the dependence of the atom - field coupling constant with the number of atomic states and the mean number of photons .now we want to study how this coupling constant determines the characteristic period of the dynamics and subsequently the waiting time for the search algorithm .the dynamics of the system is given by the the schrdinger equation for the wavefunction , is given by eq.([hamilton ] ) .the global hilbert space of the system is built as the tensor product of the spaces for the photons and the atom .therefore atom - field wavefunction is expressed as a linear combination of the basis \left\vert \varphi _ { m}\right\rangle \left\vert n\right\rangle , \label{psi1}\]]where is the basis for the photons and is the eigenvector basis for the atomic hamiltonian without electromagnetic field .the phase factor is introduced to simplify the final differential equations . substituting eq.([psi1 ] ) into eq.([schrodinger ] ) and projecting on the state the following set of differential equations for the time depended amplitudes are obtained b_{jk-1 } \notag\]] .\label{dife}\]]this set of equations shall be solved numerically in the next section . herewe proceed to study their qualitative behavior .these equations have two time scales involved , a fast scale associated to the bohr frequencies , and a slow scale associated to the amplitudes . if we are interested in the slow scale all terms that have fast phase in the previous equations can be ignored ; the most important terms are the ones with zero phase . in this approximation the previous set of equations eq.([dife ] ) becomes equations represent two oscillators that are coupled so that their population probabilities alternate in time .as we notice the coupling is established between the initial and the searched for states .to uncouple the previous equations we combine them to obtain these equations , for a given number of photons and with the initial conditions and , the following results for the amplitudes are obtained the probabilities to obtain the initial and the searched states , independently of the initial number of photons , should be calculated as , , which satisfy the conditions , .averaging over the number of photons and using eq.([omega ] ) these probabilities are where the new angular frequency is we see that the probabilities of the initial state and the searched state oscillate harmonically with a frequency and period , while the probability of the other elements of the search set are neglected .if we let the system evolve during a time at this precise moment we make a measurement , we have probability to obtain the searched state .it is important to indicate that this approach is valid only in the adiabatic approximation , this means that all the frequencies are much larger than .therefore the efficiency of our search algorithm is the same as that of the grover algorithm and additionally it is independent of the number of photons . 
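In the two-mode (resonant) approximation the populations of the initial and searched states simply exchange harmonically, so the qualitative behaviour can be sketched directly. The $1/\sqrt{N}$ scaling of the effective frequency assumed below follows the coupling-constant argument of the previous section, with the overall constant left arbitrary; it is an illustration of the scaling, not a calibrated prediction.

```python
import numpy as np

def resonance_probabilities(t, omega_bar):
    """Two-mode reduction: the initial and searched states exchange
    population harmonically with effective frequency omega_bar."""
    p_searched = np.sin(omega_bar * t) ** 2
    p_initial = np.cos(omega_bar * t) ** 2
    return p_initial, p_searched

# Grover-like scaling: if the effective coupling falls as Omega / sqrt(N),
# the optimal measurement time pi / (2 * omega_bar) grows as sqrt(N).
Omega = 1.0
for N in (16, 64, 256, 1024):
    omega_bar = Omega / np.sqrt(N)
    print(N, np.pi / (2 * omega_bar))
```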
in the next section ,we implement numerically eq.([dife ] ) and we show that this agrees with the above theoretical developments of the system in resonance .the jcm has been used to understand the behavior of circular rydeberg atoms , where the valence electron is confined near the classical bohr orbit .this suggests to choose for our purpose an atomic model with an attractive potential whose quantum energy eigenvalues are , where is the principal quantum number and is a parameter . in this frame the bohr transition frequency can be expressed as function of the parameter as ., with , has elements ( levels ) , and is the optimal search time proportional to .the initial state was taken to be and the searched state .note that the distribution probability is essentially shared between the initial and the searched states . ] takes the values , from top to bottom , , and .the size of the search set is . ]we shall choose the numerical values of the parameters taking into account some previous experimental data . in ref . the rabi oscillation of circular rydberg atoms was observed .the frequency of the single electromagnetic mode was tuned with the transition between adjacent circular rydberg states with principal quantum numbers and , where the fast scale associated to this bohr frequency was ghz , that is very large compared with the fundamental rabi frequency khz . in the micromaser configuration of ref . the field frequency of ghz produced the rydberg transitions used in this experiment , where the principal quantum number was about and the rabi frequency khz . from the above weconclude that the ratio between the bohr transition energy and the vacuum rabi frequency to be used in our search algorithm should be taken as .we have integrated numerically eq.([dife ] ) varying the parameter in a range between and .the initial conditions are ( i ) a uniform distribution for the photons and ( ii ) is chosen in such a way that .the calculations were performed using a standard fourth order runge - kutta algorithm . the procedure consisted in choosing at random the energy of the searched state and then to follow the dynamics of the set and of the initial state .we verified for several values of that the most important coupling is between the initial and the searched states ; the other couplings may be totally neglected . in fig.[t2 ]we show the probabilities for all levels at five different fractions of the time .we see that the dispersion among the states neighboring the searched state is relatively small for . for higher values of dispersion is even smaller , this confirms that our theoretical approximation of two coupled modes is correct .furthermore we conclude that the flux transfer process is essentially an interchange between the initial and the searched states and that the optimal time to measure the searched state is . at other timeswe have less chance of perform a measurement of the searched state .fig.[t1 ] shows the oscillation of the probability flux between the initial and the searched states as a function of time for three values of .the time is normalized for the theoretical characteristic time .the evolution shows for the lower values of an almost periodic behavior , however for the highest value the behavior is completely harmonic . 
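The paragraph above notes that eq. ([dife]) was integrated with a standard fourth-order Runge-Kutta scheme. A generic step for the amplitude equations, written abstractly as $i\,d\mathbf{c}/dt = H(t)\,\mathbf{c}$ with $\hbar = 1$, is sketched below; the explicit matrix $H(t)$ of eq. ([dife]) is not reproduced here.

```python
import numpy as np

def rk4_step(c, t, dt, H):
    """One fourth-order Runge-Kutta step for i dc/dt = H(t) c (hbar = 1).
    `c` is the complex amplitude vector, `H` a callable returning the matrix."""
    f = lambda tt, cc: -1j * (H(tt) @ cc)
    k1 = f(t, c)
    k2 = f(t + 0.5 * dt, c + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, c + 0.5 * dt * k2)
    k4 = f(t + dt, c + dt * k3)
    return c + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```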
in this last case ( the highest value shown in fig . [ t1 ] ) there is clearly a characteristic time when the probability of the searched state is maximum and very near and the initial state probability is near . the optimal time agrees with our theoretical prediction . this periodic behavior and the proportionality between and are also found in the grover algorithm . in this work we show how a generalized jcm to an state atom interacting with a single field mode can be thought of as a quantum search algorithm that behaves like the grover algorithm ; in particular the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time . in the past , the biggest difficulty in building a jcm has been to obtain a single electromagnetic mode that interacts with the atomic transition . this difficulty has been overcome in the last decades thanks to the experimental advances in the handling of rydberg atoms and to the building of micro - cavities for microwaves . in the frame of this work we can interpret these devices as experimental realizations of the "analog" grover algorithm in the trivial case of the search of a marked item in an unsorted list of 2 elements . however this new way of looking at the problem is different from the usual point of view and opens new possibilities for the jcm . in summary , in this paper we reinterpret the jcm as the first step to build a more generic search algorithm with a large number of elements .
we propose a continuous time quantum search algorithm using a generalization of the jaynes - cummings model . in this model the states of the atom are the elements among which the algorithm realizes the search , exciting resonances between the initial and the searched states . this algorithm behaves like grover's algorithm ; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time . in this frame , it is possible to reinterpret the usual jaynes - cummings model as a trivial case of the quantum search algorithm .
graph theory is an important tool in functional connectivity research for understanding the interdependent activity occurring over multivariate brain signals . in this setting , complete weighted networks ( cwns ) are produced from all common recording platforms including the electroencephalogram ( eeg ) , the magnetoencephalogram ( meg ) and functional magnetic resonance imaging ( fmri ) , where every pair of nodes in the network share a connection whose weight is the output of some connectivity measure .complex hierarchical structures are known to exist in real networks , including brain networks , for this reason it is important to find methods to specifically evaluate hierarchical complexity of network topology . herewe introduce methods specific to this end .complexity is understood neither to mean regularity , where obvious patterns and repetition are evident , nor randomness , where no pattern or repetition can be established , but attributed to systems in which patterns are irregular and unpredictable such as in many real world phenomena .particularly , the brain is noted to be such a complex system and this is partly attributed to its hierarchical structure .hierarchical complexity is thus concerned with understanding how the hierarchy of the system contributes to its complexity .here we introduce a new metric aptly named hierarchical complexity , , which is based on targeting the structural consistency at each hierarchical level of network topology .we compare our metric with network entropy and find that we can offer a greater magnitude and density range for establishing differences in complexity of different graph topologies . alongside this , we introduce the weighted complex hierarchy ( wch ) model which simulates hierarchical structures of weighted networks .this model works by modifying uniform random weights by addition of multiples of a constant , which is essentially a weighted preferential selection method with a highly unpredictable component provided by the original random weights .we show that it follows very similar topological characteristics of networks formed from eeg phase - lag connectivity .intrinsic to our model is a strict control of weight ranges for hierarchical levels which offers unprecedented ease , flexibility and rigour for topological comparisons in applied settings and for simulations in technical exploration for brain network analysis .this also provides an unconvoluted alternative to methods which randomise connections or weights of the original network .any rigorous evaluation of brain networks should address their inherent complete weighted formulation .however , the current field has largely lacked any concerted effort to build an analytical framework specifically targeted at cwns , preferring instead to manipulate the functional connectivity cwns into sparse binary form ( e.g. as well as wide - spread use of the watts - strogatz and albert - barabasi models ) and using the pre - existing framework built around other research areas which have different aims and strategies in mind . in our methodological approachwe propose novel generalisations of pre - existing sparse binary models to cwn form and thus allow a full density range comparison of our techniques . 
due to the intrinsic properties of these graph types we find minimal and maximal topologies which can help to shed light on a wide variety of topological forms and their possible limitations in a dense weighted framework .further , as part of our study we seek after straightforward metrics to evaluate other main aspects of network topology for comparisons and , in this search , found it necessary to revise key network concepts of integration - segregation and scale - freeness .we provide here these revisions : i ) that the clustering coefficient , , is enough to analyse the scale of integration and segregation , finding it unnecessary and convoluted to use the characteristic path length , , as a measure of its opposite , as generally accepted .ii ) we provide mathematical justification that the degree variance , , and thus network irregularity is a strong indicator of the scale - free factor of a topology .our study of hierarchical complexity , using a comprehensive methodological approach , provides mathematical quantification of the hierarchical complexity of eeg functional connectivity networks and reveals new insights into key aspects of network topology in general .our model provides improved comparative abilities for future clinical and technical research .we adopt the notation in so that a graph , , is a set of nodes , , connected according to an weighted adjacency matrix , .entry of corresponds to the weight of the connection from node to node and can be zero .an unweighted graph is one in which connections are distinguished only by their existence or non - existence , so that , without loss of generality , all existing connections have weight and non - existent connections have weight .the graph is undirected if connections are symmetric , which gives symmetric .a simple graph is unweighted , undirected , with no connections from a node to itself and with no more than one connection between any pair of nodes .this corresponds to a graph with a symmetric binary adjacency matrix with zero diagonal .such graphs are easy to study and measure .the degree , , of node is defined as the number of its adjacent connections , which is the number of non zero entries of the column of .then , for a simple graph , . for a graph with edges ,the connection density , , of a graph is .a cwn is represented by a symmetric adjacency matrix with zero diagonal ( no self - loops ) and weights , ] elsewhere .if we threshold the cwn at weight , we recover a binary erds - rnyi random graph from the random graph ensemble . 
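A minimal sketch of this construction, together with the thresholding used throughout the analysis (keeping the strongest fraction of connections as a binary graph), is given below; the function and parameter names are illustrative.

```python
import numpy as np

def er_cwn(n, rng=None):
    """Complete weighted network with i.i.d. uniform weights,
    symmetric and with zero diagonal (no self-loops)."""
    rng = np.random.default_rng(rng)
    upper = np.triu(rng.random((n, n)), k=1)
    return upper + upper.T

def threshold(W, density):
    """Keep the strongest `density` fraction of connections as a binary graph."""
    n = W.shape[0]
    m = int(round(density * n * (n - 1) / 2))          # number of edges to keep
    iu = np.triu_indices(n, k=1)
    cutoff = np.sort(W[iu])[::-1][m - 1] if m > 0 else np.inf
    A = (W >= cutoff).astype(int)
    np.fill_diagonal(A, 0)
    return A
```

Thresholding the same CWN at each integer density from 1% to 100% and evaluating a metric on each resulting binary graph yields the metric-versus-density curves used later in the analysis.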
starting from an erds - rnyi cwn we randomly distribute the nodes into hierarchy levels based on some discrete cumulative distribution function , , by generating a random number , , between 0 and 1 for each node and putting the node in the level for which is first less than .we then distribute additional weight to all connections of adjacent nodes in the level , for some suitably chosen .the parameters of this model are then .the parameter is the number of nodes in the network .the parameter is the strength parameter , which is constant since the random generation of the initial weights is enough to contribute to weight randomness .the parameter is the number of levels of the hierarchy , with a default setting of a random integer between and .the vector is the cumulative probability distribution vector denoting the probabilities that a given node will belong to a given level where the default , which we use here , is a geometric distribution with in hierarchical levels ( ) where the nodes with highest connectivity ( top hierarchical level ) are at the tail end of the distribution .[ fig1].c plots an example of the geometric distribution for a three level hierarchy .the text inside the box plots , above , indicates the additional weights given to connections adjacent to nodes inside the given level .the graphic below explains the additional weights provided by the strength parameter of connections between nodes in different levels as well as in the same level .for example , a connection between a level 1 node and a level 2 node has additional strength which consists of one provided by the node in level 1 and provided by the node in level 2 . at , we have the e - r random network and at the weights of the network are linearly separable by the hierarchical structure producing a strict class - based topology . between these values a spontaneous class - influenced topology emerges . herewe present justifications for metrics as measures of key topological factors- the global clustering coefficient , , for degree of segregation and the degree variance , , for irregularity , linked to scale - freeness .the concept of integration in brain networks is closely tied in to the small world phenomenon , where real world networks are found to have an efficient trade off between segregative and integrative behaviours .the most widely used topological metrics in network science- and the characteristic path length , - are commonly noted as measures of these quantities , respectively .here , is defined as the average of the shortest paths between each pair of nodes and is defined as the probability that a path of length , or triple , in the graph has a shortest path of length . 
that is , where a closed triples is such that , for triple , , for distinct .since integration implies a non - discriminative behaviour in choice , we argue that the random graph ensemble , defined by its equal probability of existent connections between all pairs of nodes , is the most exemplary model of an integrated network .anything which deviates from equal probability is a discriminative factor which favours certain connections or nodes over others , likely leading to more segregated activity .further , it is clear that integration and segregation are opposite ends of the same spectrum- something which is not integrated must be segregated and vice versa .having one metric to inform on where a network lies on that spectrum is therefore sufficient .thus , here we propose as the topological measure to evaluate levels of integration ( and so segregation ) of a given network .firstly , we note that values of for random graphs and small - world graphs are often much more distinguishable than those of and it is certainly assumed that these graphs have very different levels of integration .secondly , since the random network is optimally integrated and = e[p_{\text{ran}}] ] , the more segregated is the network .we will include both and in our analysis in order to provide evidence to back the above proposal .another topological factor of small world networks is noted as a scale - free nature characterised by a power law degree distribution . to understand this aspect of network topology another factor of network behaviouris formulated distinguishing between line like and star like graphs . here , we show that characterisation of scale - freeness is closely connected to the regularity of a network .regular graphs have been studied for over a century .they are defined as graphs for which every node has the same degree .an almost regular graph is a graph for which the highest and lowest degree differs by only .thus a highly irregular graph can be thought of as any graph whose vertices have a high variability .such behaviour can be captured simply by the variance of the degrees present in the graph , that is where , is the set of node degrees on a given graph . for regular graphs by definition , but more probing is necessary to distinguish high topology .for a graph with degrees , and , on multiplying out the brackets simplifies to where is the connection density and , is the squared norm of .this tells us that is proportional to the sum of the squares of the degrees of the graph , , and , for fixed number of connections , , in fact depends only on .now , it is known that is maximal in quasi - star graphs and quasi - complete graphs .essentially , the quasi - star graph has a maximal number of maximum degree nodes in the graph for the given connection density and the quasi - complete graph has a maximal number of isolated , or zero - degree , nodes in the graph .this tells us that , for low , high denotes the presence of a few high degree nodes and a majority of relatively low degree nodes , i.e. scale - free - like graphs .thus , due to the restriction placed on possible degree distributions by the number of edges ( the small number of edges in sparse networks means the number of high degree nodes is very limited ) , the irregularity of degrees is a strong indicator of the strength of decay of the given distribution , relating to how scale - free the graph is . 
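Both quantities discussed above are cheap to compute from a binary adjacency matrix; the following sketch uses the standard transitivity form of the global clustering coefficient (closed triples over all connected triples) and the plain variance of the node degrees.

```python
import numpy as np

def clustering_and_degree_variance(A):
    """Global clustering coefficient C and degree variance V of a binary
    undirected graph given by adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=0)
    triangles = np.trace(A @ A @ A) / 6.0          # each closed triangle counted 6 times
    triples = (k * (k - 1)).sum() / 2.0            # connected triples (paths of length 2)
    C = 3.0 * triangles / triples if triples > 0 else 0.0
    V = k.var()
    return C, V
```

Returning to the weighted complex hierarchy construction described earlier in this section, a schematic generator might look as follows. The per-level weight increments (each endpoint of an edge contributes a multiple of the strength parameter given by its level index) and the geometric level assignment are assumptions consistent with the description, not the exact published recipe.

```python
import numpy as np

def wch_cwn(n, strength, levels=3, p=0.5, rng=None):
    """Sketch of the WCH model: start from an Erdos-Renyi CWN and add extra
    weight to every connection touching a node in a higher hierarchy level.
    A node in level k (k = 0 .. levels-1, level 0 the most common under a
    geometric-like assignment with parameter p) contributes k*strength to each
    of its connections, so an edge gains (k_i + k_j)*strength in total."""
    rng = np.random.default_rng(rng)
    upper = np.triu(rng.random((n, n)), k=1)
    W = upper + upper.T                              # E-R CWN starting point
    probs = p ** np.arange(1, levels + 1)
    probs /= probs.sum()                             # rarer nodes at higher levels
    level = rng.choice(levels, size=n, p=probs)
    W = W + strength * (level[:, None] + level[None, :])
    np.fill_diagonal(W, 0.0)
    return W
```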
in the supplementary materialwe detail the method to generalise sparse binary network archetypes to cwn form .the pre - requisit of such a generalisation is that we require obvious higher density versions of lower density forms which can be arranged in adjacency matrix form such that each non - zero entry , , of the lower density adjacency matrix exists as a non - zero entry , in the higher density adjacency matrix .this is indeed the case for the regular ring lattice , star , grid lattice and fractal modular cwns ( see fig .[ ordered ] a , b , c , d respectively ) .we explain these higher and lower density forms of the binarised cwn in terms of weight categories where , if we choose an appropriate threshold , , we can recover all edges in the same and all higher weight categories and none of the edges existing in all lower categories .here we apply methods to graphs of nodes , typical of medium density eeg . for analysiswe employ connection density thresholds at integer percentages of strongest weighted connections , rounded to the nearest whole number of connections .we then implement metric algorithms on each of these binary networks and plot the obtained values on a curve against connection density , similar as in e.g. .this generates metric curves plotted against connection density which provides a detailed analysis of the cwn topology .other methods exist to analyse cwns such as weighted metrics or density integrated metrics , but these metrics still give only singular values for a given network which belies little of topological behaviour at different scales of connectivity strength .for random and wch cwns we use sample sizes of for each network and for the eeg functional connectivity cwns we have a sample size of . on the metric curves for thesewe plot the median with the interquartile range shaded in . for ordered networksthere is only one network per type by definition .our analytical framework is composed of a mixture of entirely new concepts and novel generalisations of existing concepts to cwn form .it is constituted of the following elements : four metrics , , , , characterising four important and distinct topological features ; five cwn archetypal models- random , star , regular lattice , fractal modular , grid lattice ; the wch model . in an architecture of network topologyis proposed involving the three most widely studied properties of brain networks- integration ( and segregation ) , scale - freeness and modularity . for our analysis in comparison with hierarchical complexity , , we choose a straightforward metric for each of these topological factors- for integration , for scale - freeness and for modularity where where is the module containing node and is the kronecker delta function .highly efficient algorithms have been created aiming to maximise the value of for a given network . 
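For a given assignment of nodes to modules, the quantity being maximised can be evaluated directly. This sketch computes Newman's Q for an unweighted, undirected graph and a supplied partition; the maximisation itself is left to dedicated algorithms such as the toolbox routine mentioned next.

```python
import numpy as np

def modularity(A, communities):
    """Newman modularity Q of a binary undirected graph for a given partition.
    `communities` is an integer module label per node."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=0)
    two_m = k.sum()                                   # 2 * number of edges
    labels = np.asarray(communities)
    delta = labels[:, None] == labels[None, :]        # Kronecker delta on module labels
    B = A - np.outer(k, k) / two_m                    # modularity matrix
    return (B * delta).sum() / two_m
```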
to compute the modularity of our networks, we use the undirected modularity function in the brain connectivity toolbox .we compare our hierarchical complexity metric with a commonly used metric for analysing the entropy of the network degrees .this is defined using the normalised degree distribution , where is the degree of node and is the proportion of nodes in the graph with the same degree as node which relates to probabilities of ` going to'/ ` coming from ' neighbouring nodes in directed graph problems .then the entropy of graph is a straightforward derivation of shannon s entropy equation for the degrees of the graph : thus , network entropy encodes the eccentricity of the graph degrees .we implement comparisons with the watts - strogatz small - world model which randomly rewires a set proportion of edges starting from a regular lattice .we use the full range of parameters for initial degree specification ( 2 up to 62 ) and random rewiring parameters from 0.05 in steps of 0.05 up to 0.95 . for each combination of parameters ,100 realisations of the model were computed and , , , and were measured .we further compare with albert - barabasi s scale - free model which begins with a graph consisting of core of highly connected nodes to which the rest of the nodes are added one by one with a set degree but paired by edges to randomly selected nodes .we use an initial number of nodes of 15 and the additional node s degree from 3 up to 14 in order to reach larger densities .we use an eyes open , resting eeg data set with nodes .we report on networks created from the beta ( - ) band using coherence and the debiased weighted phase - lag index ( dwpli ) in order to account for different possible types of eeg networks while reducing redundancy of similar topological forms found between the frequency bands ( see supplementary material ) .the dataset , recorded using the bci2000 instrumentation system , was freely acquired from physionet .the signals were recorded from 64 electrodes placed in the main in accordance with the international 10 - 10 system .we took the eyes open resting state condition data , consisting of minute of continuously streamed data which were partitioned into epochs and averaged for each of volunteers .fieldtrip was used for pre - processing , frequency analysis and connectivity analysis to obtain the adjacency matrices of complete weighted networks .the channels were re - referenced using an average reference , the multi - taper method was implemented from seconds onwards using slepian sequences and spectral smoothing . a resolution was obtained using one second of zero padding .we chose to analyse the matrices obtained from both the coherence and the debiased weighted phase - lag index ( dwpli ) to look for differences between network topologies of zero and non - zero phase lag dependencies in the channels .we treat the data of all tasks as a single dataset to allow for the variability of the eeg network topologies since we are not interested here in the tasks themselves but on the behaviour of general eeg networks obtained from dwpli and coherence . due to the polynomial formulation of the complexity measure , producing a non - normal distribution ,we compare metric distributions using the wilcoxon rank sum test .the z - score is used to ascertain the magnitude and direction of the relationship of the distributions .fig . [ fig3 ] shows the metric curves ( i.e. 
metric plotted against network density ) for , , , , and for all archetypes as well as for the eeg dwpli ( red shade ) and coherence ( blue shade ) networks . from these plots we see experimental evidence of maximal and minimal topologies for the given topological characteristics .these maximal and minimal topologies are explained as the curves whose lines are consistently lowest or highest over all densities .fractal modular networks ( purple lines ) are maximal for both and ( top left and centre left , respectively ) .this is to be expected since the modules are complete sub - networks with very few connections between modules , maximising .further this minimises the number of open triples in the graph , maximising , by restricting open triples to relating only to those few connections which do extend between modules .the star cwn ( orange lines ) acts as a maximal topology for , as expected from the theory explained in section 2 , while being a minimal topology for ( bottom left ) .regular graphs , such as the ring lattice network ( blue lines ) , give degree variance and hierarchical complexity , thus are minimal topologies of these features .the results of fig .[ fig3 ] for 30 node networks , found in the supplementary material , follow the same relationships , providing evidence that these features are independent of network size . comparing the plots in fig .[ fig3 ] of ( top left ) with ( bottom left ) and ( centre right ) with ( bottom right ) , it is immediately clear that and show extreme behaviour at low densities while remaining consistent at higher densities .this exemplifies how these metrics are aimed at analysis of sparse networks , where it appears that values can take a much greater range than for higher density networks . to explore these comparisonsfurther we perform statistical analysis with wilcoxon rank sum tests on the differences of distributions of metric values of eeg dwpli and e - r random networks as well as of eeg dwpli and eeg coherence networks ( fig [ comparemetrics ] ) .the results show that ( right ) and ( left ) attain a greater range over edge density , , of significant differences than their counterparts , and .particularly , distinguishes differences from 1% up to 44% densities in the eeg dwpli and coherence comparison ( solid blue line ) , whilst entropy only can distinguish differences from 1% up to 27% ( solid yellow line ) .further , the z - scores indicate that in the range 1 - 27% , the differences found in are greater than those found using . comparing the eeg dwpli networks with e - r random networks ( fig [ comparemetrics ] , left , dashed lines ) ,both metrics find differences at all levels , but the magnitude of difference found by ( blue ) is consistently greater than those found by ( yellow ) .thus , our metric outperforms entropy in both magnitude and range of differences found .similarly , finds a greater range and magnitude of differences than , fig [ comparemetrics ] right .in fact , discerns differences at all connection densities for the two comparisons , while fails to find differences after 62% in comparing dwpli and coherence networks ( solid yellow line ) and after 73% in comparing dwpli and random networks dashed yellow line ) . furthermore , displays inverse differences at low densities ( 1 - 12% ) compared to higher densities in the dwpli vs random comparison ( dashed yellow line ) .this inconsistency is undesirable for translatability of integrative behaviour of network types from sparse networks to more dense networks . 
does not suffer from such behaviour , displaying a constant relationship of metric values through the full range of densities ( solid and dashed blue lines ) .given these results , for the rest of our analysis , we will drop and and focus on the four proposed metric , , , and .we must emphasise that this is taken purely in terms of the simplicity of explaining a general topological factor and does not mean that and are not useful for other purposes .[ wchmodel ] shows the mean results of ( top left ) , ( top right ) , ( bottom left ) and ( bottom right ) over 100 realisations of each of the wch models .we include a reduced number of strength parameters in the figure ( ) than those computed ( ) for greater clarity . above 0.75the parameter begins to saturate as the weights of the hierarchy levels tend to linear separability ( linear separability occurs when since then places the edge weights , originally in ] ) .we see that wch networks ( grey shaded lines ) exhibit curve behaviour similar to the eeg networks and e - r random graphs ( as in fig .[ fig3 ] ) . the scale - free model ( red error bars ) also exhibits a similar behaviour , however in stark contrast , the small - world model ( blue error bars ) exhibits very different behaviours than those of the eeg or wch networks , exhibiting a strong unsuitability for comparisons with eeg networks with much higher modularity and highly right skewed curve ( fig .[ wchmodel ] , top left ) towards high densities as well as a similar right skew in ( bottom left ) which is opposite to the left skew found for wch and eeg network types .although the scale - free model exhibits similar tendencies in topological metrics to the wch and eeg networks , its range of values and densities is clearly very limited and so , therefore , its ability for topological refinement . by increasing the strength parameter of the wch modelwe change the topology in a smooth fashion with decreasing integration , regularity and modularity ( fig .[ wchmodel ] , top left , top right and bottom left , respectively ) .interestingly , ( bottom left ) rises with increasing strength parameter from up to where it takes its maximum values at densities ranging from 1 - 30% before falling again from until .further , above , the curves begin to deviate significantly from those of the eeg dwpli networks , exhibiting greater plateaus of high complexity ( lighter grey lines ) which are more comparable with the eeg coherence networks .interestingly , the complexity of the eeg dwpli networks appears to attain maximal values of of all the networks studied here ( fig . [ fig3 ] ) .the only model which comes close is the wch model ( fig .[ mimic ] , bottom right ) . to clarify this observation we perform wilcoxon rank sum tests on values of the eeg dwpli networks against that of the wch model with strength parameters ranging from up to , i.e. two steps before and after the maximal complexity setting of .the results are displayed in fig .[ comparemodel ] . 
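to give a feel for how a strength parameter can trade off random weight variability against a hierarchy - driven ordering of the weights , a toy construction is sketched below . it is a hypothetical illustration written for this discussion only and is not the authors exact wch generative rule ; it merely captures the qualitative feature that connections between nodes of higher hierarchy levels receive larger weights and that strength 1 pushes the levels towards linear separability .

```python
import numpy as np

def wch_like_network(n=64, levels=4, strength=0.5, seed=0):
    """Illustrative sketch only: a complete weighted network in which edges
    between nodes of higher hierarchy levels receive larger weights, with a
    strength parameter trading off random variability against the level
    ordering. This is a hypothetical construction written for this discussion,
    NOT the authors' exact WCH generative rule."""
    rng = np.random.default_rng(seed)
    level = rng.integers(0, levels, size=n)             # hierarchy level per node
    den = 2 * max(levels - 1, 1)
    bias = (level[:, None] + level[None, :]) / den      # hierarchy term in [0, 1]
    noise = rng.random((n, n))                          # random variability
    W = (1 - strength) * noise + strength * bias        # strength = 1 leaves only
    W = (W + W.T) / 2                                   # the level ordering
    np.fill_diagonal(W, 0)
    return W

W = wch_like_network(strength=0.55)
print(W.shape, round(float(W.min()), 3), round(float(W.max()), 3))
```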
in the vast majority of instances of strength parameter and density ,the eeg dwpli networks do indeed exhibit greater complexity than the wch model .the strong exception to this is an inability to distinguish significant differences between the maximal complexity wch model and dwpli networks within 7 - 23% densities ( bold yellow line ) .also , as the weight parameter increases , the high plateaus previously mentioned begin to take effect as in the medium ranges of density the values of the dwpli networks and wch model becomes more indistinguishable , with greater complexity found in the range 55 - 57% in the wch model with ( green line ) .[ mimic ] shows the values of the four topological features- complexity , integration , regularity and modularity for eeg dwpli networks and the wch network with strength parameter .we see clearly that these networks behave very similarly with respect to the given metrics .the most obvious difference is that the modularity , , of dwpli eeg networks is higher ( bottom left ) .also , as previously discussed , the dwpli network complexity is greater than the wch model , but it is still by far the most comparable model for complexity of those presented here .the behaviour demonstrated by the wch model with respect to indicates that high complexity arises from a hierarchical structure in which a greater degree of variability is present in the rankings of weights with respect to hierarchy level .too little difference between levels and the hierarchy is too weak to maintain complex interactions , too much difference between levels and the complexity of the hierarchy is dampened by a more ordered structure produced from the tendency towards linear separability of the edge weights enforced by the strength parameter .thus , we provide evidence that topological complexity is not driven by integration , arising as a middle ground between regular and random systems as previously conjectured , but , driven by hierarchical complexity , arising in the middle ground between weak hierarchical topology or all nodes are equal systems , such as random or regular networks , and strong hierarchical topology , such as star or strict class - based systems including grid lattice and fractal modular networks ( see fig .[ wchmodel ] ) .thus the hierarchical structure can be seen as a key aspect of the complexity inherent in complex systems .impressively , the dwpli eeg networks display a generally greater hierarchical complexity than that expressed by our model which is specifically designed to probe complex interactions in hierarchical structures . 
thus we pose such complexity as a key aspect of brain function as modelled by phase - based connectivity .there are two clear reasons why the wch model is a good fit for functional connectivity networks from eeg recordings .not only does it create several hub like nodes giving a high degree variability , but furthermore it simulates the rich club phenomena found in complex brain networks , as the higher the hierarchy levels of two nodes , the stronger the weight of the connection will be between them , see fig .[ fig1].c .one of the greatest benefits of this model over others is that it simulates brain networks previous to network processing steps because it creates cwns rather than sparse networks .this means that any and all techniques one wants to use on the brain networks can be applied elegantly and in parallel with this single null model free from any complications .particularly , methods which create sparse binary networks directly , whether these models are built independently from the brain networks or are constructed by the randomisation of connections of the networks being compared , run into problems with density specification ( in the case of independent models ) and reproducibility ( in both types of model ) . with the wch model , we can simply create a bank of simulated cwns which can be used throughout the study in exactly the same way as we use the functional connectivity cwns .as an example of the power and elegance of the proposed model , say we want to find maximum spanning trees of our brain networks and compare with a null model , then we simply take the maximum spanning trees of our null model . in contrast ,in they use a convoluted reverse engineering process by assigning random weights to the connections of watts - strogatz small world networks ( which are themselves of limited comparability to brain networks ) and computing the mst from these resulting sparse weighted networks .further , as seen in fig .[ fig1].c , for technical studies which rely on network simulations , the wch model is built on parameters which can be altered to subtly change the resulting topology .this allows for sensitive analysis of a new techniques ability to distinguish subtle topological differences .such paradigms are evident in clinical studies where , for example , one may try to distinguish between healthy and ill patients or between different cognitive tasks , so that this null model offers simulations which are directly relatable to clinical settings .we see there is a large difference in the integration , modularity and complexity of the eeg coherence and dwpli networks ( fig .[ fig3 ] , top left , centre left and centre right , respectively ) .the eeg coherence networks ( blue shade ) behave similarly to the ring ( blue lines ) and grid lattice ( yellow lines ) networks , agreeing with the volume conduction effects that dominate zero - lag dependency measures , i.e. the closer the nodes are the stronger the weights are .the dwpli networks ( red shade ) on the other hand have a more integrated and less modular nature , which reflects the notion that phase - based functionality mitigates volume conduction effects and is thus less confined by anatomical structure .the very high complexity of the dwpli networks ( and very possibly phase - lag measures in general ) provides evidence to support that phase - based connectivity does indeed largely overcome the volume conduction effect and therefore maintains a richer complexity echoing the complex interactions of brain functionality . 
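as a sketch of the maximum - spanning - tree comparison mentioned earlier in this section ( take the mst of the observed cwn and the msts of a bank of simulated null cwns ) , a self - contained kruskal implementation is given below ; a random weight matrix stands in for a real connectivity matrix , and only the tree - extraction step is shown .

```python
import numpy as np

def maximum_spanning_tree(W):
    """Kruskal's algorithm on a symmetric weight matrix W (zero diagonal),
    adding the heaviest edges first. Returns the (i, j, weight) edges of the
    maximum spanning tree; a minimal sketch of the MST-based comparison
    mentioned in the text."""
    n = W.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    iu = np.triu_indices(n, k=1)
    order = np.argsort(W[iu])[::-1]         # heaviest edges first
    tree = []
    for k in order:
        i, j = iu[0][k], iu[1][k]
        ri, rj = find(i), find(j)
        if ri != rj:                        # edge does not create a cycle
            parent[ri] = rj
            tree.append((int(i), int(j), float(W[i, j])))
        if len(tree) == n - 1:
            break
    return tree

# the same routine applied to a bank of simulated CWNs gives the null trees
rng = np.random.default_rng(2)
W_obs = rng.random((32, 32))
W_obs = (W_obs + W_obs.T) / 2
np.fill_diagonal(W_obs, 0)
print(len(maximum_spanning_tree(W_obs)))    # 31 edges for 32 nodes
```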
with regards to how the wch model advances our understanding of dwpli and coherence network differences, we note that the high segregation of the coherence networks ( fig .[ fig3 ] top left ) is approached by the wch model with high values of strength parameter ( fig .[ wchmodel ] , top left ) and is comparable with regular lattice and grid lattice cwn curves ( fig .[ fig3 ] , top left , blue and yellow lines , respectively ) , denoting a move to a more strict class - based topology .this is also reflected in the hierarchical complexity ( bottom right of corresponding figures ) , where the lower complexity peaking at a later density to dwpli ( fig .[ fig3 ] , centre right ) is mimicked in the behaviour of increasing strength parameter in the wch model ( fig .[ wchmodel ] , bottom right ) .this provides further evidence of the relevance and flexibility of the wch model .in contrast there is an evident lack of ability to make similar comments with respect to the popular small world and scale - free models .this criticism can be extended towards network models which randomise connections while maintaining degree distributions , since such an enforced topological attribute does not allow one to analyse how that very important attribute is actually constructed .future work will provide extension to modular structures in our model to focus on what roles modularity plays on these aspects , since and behave contrastingly to this extrapolation .a striking feature seen is in the degree variance curves where a highly symmetric parabolic curve is noted with a central maximum value for random graphs , wch networks and eeg networks .this feature reveals to us a scale - free paradigm at all density levels and not just the classic sparse network scale - free at low densities .in other words , the scale - free nature found in brain networks is first and foremost encoded in the connectivity weights , which , through selective binarisation , therefore can reveal to us the scale - free property as expressed at different density ranges .as the density of the network increases one obtains more even distributions of high and low density nodes , indicated by the high values of , and , eventually , towards high densities the symmetry of values with low densities tells us that the scale - free network is characterised by a small number of low degree nodes and a majority of high degree nodes , i.e. the inverse ( or complement ) of the low density behaviour . if we define a uniformly random topology as that which exhibits a uniform distribution of topological values over the space enveloped by the minimal and maximal topologies , it is very apparent that e - r random networks do not satisfy this criteria , but , instead , have a restricted topology at all density levels where the interquartile range is much smaller in comparison with that of the eeg networks and the proposed null model .we thus see that uniformly distributed random weights do not lead to a uniformly random topology in this sense , but instead to a very particular optimally integrated , moderately regular , lowly modular and low complexity topology at all densities . 
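the parabolic degree - variance curve noted above for random graphs can be checked quickly : in a g( n , p ) random graph the degree of a node is approximately binomial( n - 1 , p ) , so its variance ( n - 1 ) p ( 1 - p ) is symmetric in p about 0.5 and peaks at mid density . the snippet below compares this approximation with the empirical variance ; exact agreement with the binarised weighted networks discussed in the text is not implied .

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

def er_degree_variance(p, reps=50):
    """Empirical variance of node degrees in G(n, p), averaged over several
    realisations."""
    out = []
    for _ in range(reps):
        A = (rng.random((n, n)) < p).astype(int)
        A = np.triu(A, 1)
        A = A + A.T
        out.append(np.var(A.sum(axis=0)))
    return float(np.mean(out))

for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # binomial approximation (n - 1) p (1 - p): symmetric about p = 0.5 and
    # peaking at mid density, consistent with the parabolic curves above
    print(p, round(er_degree_variance(p), 2), round((n - 1) * p * (1 - p), 2))
```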
based on this evidence and previous discussion of random networks in the methods section, we suggest that e - r random networks should be re - understood as optimally integrated networks .following from this the randomisation of connections used widely in null models is not a topologically randomising process but , more accurately , a topologically integrative process .such a feature is then not necessarily typical of network topology and thus one must be cautious to use this as a null model unless one wants to specifically target integrative behaviour .further , the practice of normalisation of graph values by e - r random graph values must also be used with due caution .the basis of such a normalisation is to contrast a networks values with those of the average network topology , rather than contrasting with a highly specific topology which behaves very differently to real world networks .this evidence provides further justification for the adoption of our wch model as a relevant and powerful replacement to these models .we introduced a metric for measuring the hierarchical complexity of a network and a highly flexible and elegant wch model .these provided key insights into what distinguishes functional brain networks from both ordered and spontaneous forms as generally the most complex kind of topology and the important role that hierarchical structure plays in this .further , we showed that phase - based connectivity topology was more complex than amplitude influenced connectivity topology , which we extrapolated as due to the more ordered structure enforced by volume conduction effects . in our analysiswe constructed a framework for cwns for brain functional connectivity to replace the framework for sparse networks adopted from other network science research areas .this included the synthesis of concepts from the literature in a succinct manner and the generalisation of sparse binary archetypes to cwn form .the perspective allowed by this comprehensive analysis provided new evidence regarding key factors of network topology in general .importantly we provided evidence of the non - topologically random nature of uniformly random weighted networks . from thisit follows that our model is more relevant and appropriate than prevalent connection randomisation processes .also , a scale - free paradigm was extended to all network densities .particularly , these insights help towards a comprehensive understanding of the framework within which functional connectivity networks are set and thus provide invaluable information and tools for future clinical and technical research in neuroscience .matlab codes for all synthesis and analysis of the networks as introduced in this paper are publicly available on publication at http://dx.doi.org/10.7488/ds/1520 .keith smith is funded by the engineering and physical sciences research council ( epsrc ) .99 e. bullmore , o. sporns , `` complex brain networks : graph theoretical analysis of structural and functional systems '' , _ nature neuroscience review _ , 10 : 186 - 198 , 2009 . c.j .stam , `` modern network science of neurological disorders '' , _ nature neuroscience review _, 15 : 683 - 695 , 2014 .d. papo , j. m. buldu , s. boccaletti , e. t. bullmore , `` complex network theory and the brain '' , _ phil .b _ , 369 : 20130520 , 2014 .e. ravasz and a.l .barabsi , `` hierarchical organization in complex networks '' , _ phys ., 67 : 026112 , 2003 .d. meunier , r. lambiotte & e. t. 
bullmore , `` modular and hierarchically modular organisation of brain networks '' , _ frontiers in neurscience _, doi : 10.3389/fnins.2010.00200 , 2010 .m. costa , a.l .goldberger , c .- k .peng , `` multiscale entropy analysis of biological signals '' , _ phys ., 71 : 021906 , 2005 .g. tononi , o. sporns , g.m .edelman , `` a measure for brain complexity : relating functional segregation and integration in the nervous system '' , _ proc natl acad sci _, 91(11):5033 - 7 , 1994 .r.sol & s. valverde , `` information theory of complex networks : on evolution and architectural constraints '' , _ lect .notes phys ._ , 650 : 189 - 207 , 2004 .watts & s.h .strogatz , `` collective dynamics of small - world networks'',_letters to nature _ , 393 : 440 - 442 , 1998 .o. sporns , `` small - world connectivity , motif composition , and complexity of fractal neuronal connections '' , _biosystems _ , 85 : 5564 , 2006 .m. rubinov , o. sporns , `` weight - conserving characterization of complex functional brain networks '' , _ neuroimage _ , 56(4):2068 - 2079 , 2011 . f. d.v .fallani , j. richiardi , m. chavez , s. achard , `` graph analysis of functional brain networks : practical issues in translational neuroscience '' , _ phil .b : biological sciences _ , 369 ( 1653 ) : 20130521 , 2014 .c. li , h. wang , w. de haan , c.j .stam , p. van mieghem , `` the correlation of metrics in complex networks with applications in functional brain networks '' , __ , doi:10.1088/1742 - 5468/ 2011/11/p11018 , 2011 .p. tewarie , e. van dellen , a. hillebrand , c.j .stam , `` the minimum spanning tree : an unbiased method for brain network analysis '' , _ neuroimage _ , 104 : 177 - 188 , 2015 .barabasi & r. albert , `` emergence of scaling in random networks '' , _ science _ , 286 : 509 - 512,1999 .newman , `` networks '' , _ oxford university press _ , oxford , 2010 .o. sporns , `` networks of the brain '' , _ mit press _ , ma , 2010 .m. rubinov , o. sporns , `` complex network measures of brain connectivity : uses and interpretations '' , _ neuroimage _ , 52:1059 - 1069 , 2010 .eguiluz , d.r .chialvo , g.a .cecchi , m. baliki , a. v. apkarian , `` scale free brain functional networks '' , _ physical review letters _ , 94: 018102 , 2005 .. snijders , `` the degree variance : an index of graph heterogeneity '' , _ social networks _ ,3(3 ) : 163 - 174 , 1981 .a. sandryhaila , j.m.f .moura , `` discrete signal processing on graphs '' , _ ieee transactions on signal processing _ , 61(7):1644 - 1656 , 2013 . j.j .mcauley , l.f .costa , t.s .caetano , `` the rich - club phenomena across complex network hierarchies '' , _ appl ._ , 91 , doi : 10.1063/1.2773951 , 2007 .p. erds & a. rnyi , `` on random graphs '' , _ publicationes mathematicae debrecen _ , 6:290 - 297 , 1959 .p. bonacich , p. lloyd , `` eigenvector - like measures of centrality for asymmetric relations '' , _ social networks _ , 23(3 ) : 191 - 201 , 2001 . m. molloy & b. reed , `` a critical point for random graphs with a given degree sequence '' , _random structures & algorithms _ , 6 ( 2 - 3 ) : 161 - 180 , 1995. s. milgram , `` the small world problem '' , _psychology today_,1(1 ) : 61 - 67 , 1967 .e. bullmore , o. sporns , `` the economy of brain network organisation '' , _ nature _, 13:336 - 349 , 2012 .barabasi , r. albert & h. jeong , `` mean - field theory for scale - free random networks '' , _ physica a _ , 272(1 - 2 ) : 173 - 187 , 1999 .tijms , a.m. wink , w. de haan , w.m .van der flier , c.j .stam , p. scheltens , f. 
barkhof , `` alzheimer s disease : connecting findings from graph theoretical studies of brain networks '' , _ neurobiology of ageing _ , 34 : 2023 - 2036 , 2013 .j. petersen , die theorie der regulren graphs `` , _ acta math ._ , 15 : 193 - 220 , 1891 .brego , s. fernndez - merchant , m.g .neubauer , w. watkins , `` sum of squares of degrees in a graph '' , _ journal of inequalities in pure and applied mathematics _ , 10(3 ) : 64 , 2009 .stam , b.f .jones , g. nolte , m. breakspear , p. scheltens , `` small - world networks and functional connectivity in alzheimer s disease '' , _ cerebral cortex _, 17:92 - 99 , 2007 . m. lynall , d. s. bassett , r. kerwin , p.j .mckenna , m. kitzbichler , u. muller & e. bullmore , `` functional connectivity and brain networks in schizophrenia '' , _j neurosci _ ,30(28 ) : 9477 - 9487 , 2010 .ginestet , t.e .nichols , e.t .bullmore , a. simmons , ' ' brain network analysis : separating cost from topology using cost - integration `` , _ plos one _ , 6 , e21570 , doi : 10.1371/journal.pone.0021570 , 2011 .goldberger , l.a.n .amaral , l. glass , j.m .hausdorff , p.c .ivanov , r.g .mark , j.e .mietus , g.b .moody , c.k .peng , h.e .stanley , `` physiobank , physiotoolkit , and physionet : components of a new research resource for complex physiologic signals '' , _ circulation _ , 101(23 ) : e215-e220 , dataset : doi:10.13026/c28g6p , 2000 .newman , m. girvan , `` finding and evaluating community structure in networks '' , _ phys .e _ , 69(2 ) : 026113 , 2004 .newman , `` modularity and community structure in networks '' , _ phys rev e _, 23 : 8577 - 8582 , 2006 .blondel , j - l .guillaume , r. lambiotte , e. lefebvre , `` fast unfolding of communities in large networks '' , _ j. stat_ , doi : 10.1088/1742 - 5468/2008/10/p10008 , 2008 .shannon , `` a mathematical theory of communication '' , _ bell system technical journal _ , 27 : 623656 , 1948 .g. schalk , d.j .mcfarland , t. hinterberger , n. birbaumer , j.r .wolpaw , `` bci2000 : a general - purpose brain - computer interface ( bci ) system '' , _ ieee transactions on biomedical engineering _ , 51(6):1034 - 1043 , 2004 .r. oostenveld , p. fries , e. maris and j - m .schoffelen , `` fieldtrip : open source software for advanced analysis of meg , eeg , and invasive electrophysiological data '' , _ computational intelligence and neuroscience _ ,volume 2011 , 156869 , 9 pages , 2011 .m. vinck , r. oostenveld , m. van wingerden , f. battaglia , c.m.a .pennartz , `` an improved index of phase - synchronization for electrophysiological data in the presence of volume - conduction , noise and sample - size bias '' , _ neuroimage _ , 55:1548 - 1565 , 2011 .e. van diessen , t. numan , e. van dellen , a.w .van der kooi , m. boersma , d. hofman , r. van lutterveld , b.w . van dijk , e.c.w .van straaten , a. hillebrand , c.j .stam , `` opportunities and methodological challenges in eeg and meg resting state functional brain network research '' , _ clinical neurophysiology _ , doi:10.1016/j.clinph.2014.11.018 , 2014 .van den heuvel , o. sporns , `` rich - club organisation of the human connectome '' , _ journal of neuroscience _ ,31(44 ) : 15775 - 15786 , 2011 . c.j .stam , p. tewarie , e. van dellen , e.c.w .van straaten , a. hillebrand , p. van mieghem , `` the trees and the forest : characterization of complex brain networks with minimum spanning trees '' , _ international journal of psychophysiology _ , 92 : 129 - 138 , 2014 .k. smith , h. azami , j. 
escudero , m.a .parra , j.m .starr , ' ' comparison of network analysis approaches on eeg connectivity in beta during visual short - term memory tasks " , _ proceedings of the ieee embc15 _ , 2207 - 2210 , 2015 . j. dauwels , f. viallate , t. musha , a. cichocki , `` a comparative study of synchrony measures for the early diagnosis of alzheimer s disease based on eeg '' , _ neuroimage _ , 49(1 ) : 668 - 693 , 2010 .humphries , k. gurney , `` network small - world - ness : a quantitative method for determining canonical network equivalence '' , _plos one _ ,3(4 ) : e0002051 , 2008 .b. bollobs , `` random graphs '' , ch.8 of `` modern graph theory '' , graduate texts in mathematics , _ springer new york _ ,184 : 215 - 252 , doi:0.1007/978 - 1 - 4612 - 0619 - 4_7 , 1998 .newman , `` random graphs as models of networks '' , ch.2 from s. bornholdt , h.g .schster,``handbook of graphs and networks : from the genome to the internet '' , _ wiley _ , uk ,doi : 10.1002/3527602755 , 2006 .
_ background _ : understanding the complex hierarchical topology of functional brain networks is a key aspect of functional connectivity research . such topics are obscured by the widespread use of sparse binary network models which are fundamentally different to the complete weighted networks derived from functional connectivity . _ new methods _ : we introduce two techniques to probe the hierarchical complexity of topologies . firstly , a new metric to measure hierarchical complexity ; secondly , a weighted complex hierarchy ( wch ) model . to thoroughly evaluate our techniques , we generalise sparse binary network archetypes to weighted forms and explore the main topological features of brain networks- integration , regularity and modularity- using curves over density . _ results _ : by controlling the parameters of our model , the highest complexity is found to arise between a random topology and a strict class - based topology . further , the model has equivalent complexity to eeg phase - lag networks at peak performance . _ comparison to existing methods _ : hierarchical complexity attains greater magnitude and range of differences between different networks than the previous commonly used complexity metric and our wch model offers a much broader range of network topology than the standard scale - free and small - world models at a full range of densities . _ conclusions _ : our metric and model provide a rigorous characterisation of hierarchical complexity . importantly , our framework shows a scale of complexity arising between all nodes are equal topologies at one extreme and strict class - based topologies at the other . functional connectivity , hierarchical complexity , brain networks , electroencephalogram , network simulation
from the famous einstein - podolsky - rosen ( epr ) paradox to bell s seminal discovery , quantum theory has never failed to surprise us with its plethora of intriguing phenomena and mind - boggling applications . among those who made the bizarre nature of quantum theory evident was schrdinger , who not only coined the term entanglement " , but also pointed out that quantum theory allows for _ steering _ : through the act of local measurements on one - half of an entangled state , a party can _ remotely _ steer the set of ( conditional ) quantum states accessible by the other party . taking a quantum information perspective , the demonstration of steering can be viewed as the verification of entanglement involving an untrusted party .imagine that two parties alice and bob share some quantum state and alice wants to convince bob that the shared state is entangled , but bob does not trust her . if alice can convince bob that the shared state indeed exhibits epr steering , then bob would be convinced that they share entanglement , as the latter is a prerequisite for steering .note , however , that shared entanglement is generally insufficient to guarantee steerability .interestingly , steerability is actually a necessary but generally insufficient condition for the demonstration of bell - nonlocality .hence , steering represents a form of quantum inseparability in between entanglement and bell - nonlocality .apart from entanglement verification in a partially - trusted scenario , steering has also found applications in the distribution of secret keys in partially trusted scenario . from a resource perspective , the steerability of a quantum state , i.e. , whether is steerable and the extent to which it can exhibit steering turns out to provide also an indication for the usefulness of in other quantum information processing tasks .for instance , steerability as quantified by steering robustness is monotonously related to the probability of success in the problem of subchannel discrimination when one is restricted to local measurements aided by one - way communications. the characterization of quantum states that are capable of exhibiting steering and the quantification of steerability are thus of relevance not just from a fundamental viewpoint , but also in quantum information .surprisingly , very little is known in terms of which quantum state is ( un)steerable ( see , however , ) . 
here , we derive some generic sufficient conditions for steerability that can be applied to quantum state of arbitrary hilbert space dimensions .importantly , in contrast to conventional approach of steering inequalities where an optimization over the many measurements that can be performed by each party is needed , our criteria only requires the relatively straightforward computation of the fully entangled fraction .given that some entangled quantum state can not exhibit steering , a natural question that arises is whether the steerability of such a state can be _ superactivated _ by allowing joint measurements on multiple copies of .in other words , is it possible that some that is not steerable becomes steerable if local measurements are performed instead on for some large enough ?building on some recent results established for bell - nonlocality , we provide here an affirmative answer to the above question .note that even for a quantum state that is steerable , it is interesting to investigate how their steerability scales with the number of copies .for instance , is it possible to amplify the amount of steering - inequality violation by an _ arbitrarily large _amount if only a small number of copies is available ( see for analogous works in the context of bell - nonlocality ) ?again , we provide a positive answer to this question , showing that an unbounded amount of amplification can be obtained by allowing joint measurements on as few as three copies of a quantum state that is barely steerable , or even unsteerable under projective measurements .the rest of this paper is structured as follows . in section [ sec : prelim ] , we give a brief overview on some of the basic notions in bell - nonlocality and epr steering that we will need in subsequent discussions .there , inspired by the work of cavalcanti _ et al . _ , we also introduce the notion of _ steering fraction _ and _ largest ( steering - inequality ) violation _ , which are crucial quantities that lead to many of the findings mentioned above .for instance , in section [ sec : characterization ] , we use these quantities to derive ( 1 ) a general sufficient condition for an arbitrary quantum state to be steerable and ( 2 ) upper bounds on the largest steering - inequality violation of an arbitrary finite - dimensional maximally entangled state as a function of its hilbert space dimension . quantification of steerability using a strengthened version of steering fraction is discussed in section [ sec : quantifysteering ] there , we also demonstrate how this novel steering monotone is related to the others , such as the steerable weight and steering robustness . in section [ sec : superampli ], we show the superactivation of steerablity , provide a procedure to construct a steering inequality for this purpose , and demonstrate unbounded amplification of steerability .we conclude in section [ sec : conclude ] with a discussion and some opened problems for future research .consider a bell - type experiment between two parties alice and bob .the correlation between measurement outcomes can be succinctly summarized by a vector of joint conditional distributions where , and ( and ) are , respectively , the label of alice s ( bob s ) measurement settings and outcomes .the correlation admits a _ local hidden - variable _ ( lhv ) model if is bell - local , i.e. 
, can be decomposed for all as for some _ fixed _ choice of satisfying , single - partite distributions , and .any correlation that is not bell - local ( henceforth _ nonlocal _ ) can be witnessed by the violation of some ( linear ) bell inequality , , or which involves _ complex _ combinations of have also been considered in the literature , but we will not consider them in this paper . ][ eq : bi ] specified by a vector of real numbers ( known as the bell coefficients ) and the _ local bound _ in the literature , the left - hand - side of eq .is also known as a _ bell polynomial _ or a _ bell functional _ , as it maps any given correlation into a real number .to determine if a quantum state ( and more generally if a given correlation ) is nonlocal , one can , without loss of generality consider bell coefficients that are non - negative , i.e. , for all . to see this , it suffices to note that any bell inequality , eq ., can be cast in a form that involves only non - negative bell coefficients , e.g. , by using the identity , which holds for all . specifically , in terms of the _ nonlocality fraction _ , violates the bell inequality corresponding to ( and hence being nonlocal ) if and only if .importantly , nonlocal quantum correlation \ ] ] can be obtained by performing appropriately local measurements on certain entangled quantum state , where ( ) are the sets of positive - operator - valued measures ( povms ) acting on alice s ( bob s ) hilbert space . from nowonwards , we will use ( ) to denote a set of povms on alice s ( bob s ) hilbert space , and to their union , i.e. , . whenever the measurement outcome corresponding to is observed on alice s side, quantum theory dictates that the ( unnormalized ) quantum state \ ] ] is prepared on bob s side , where denotes the partial trace over alice s subsystem and is the identity operator acting on bob s hilbert space .an _ assemblage _ of conditional quantum states is said to admit a _ local - hidden - state _( lhs ) model if it is _ unsteerable _ , i.e. , if it can be decomposed for all as for some _ fixed _ choice of satisfying , single - partite density matrices , and single - partite distribution .equivalently , a correlation admits a lhs model if it can be decomposed as : conversely , an assemblage that is steerable can be witnessed by the violation of a steering inequality , [ eq : steeringineq ] specified by a set of hermitian matrices and the _ steering bound _ in the literature , the left - hand - side of eq .is also known as a _ steering functional _ , as it maps any given assemblage to a real number . as with bell - nonlocality , in order to determineif a given assemblage is steerable , one can consider , without loss of generality , steering functionals defined only by non - negative , or equivalently positive semidefinite , i.e. , for all , . means that the matrix is positive semidefinite , i.e. , having only non - negative eigenvalues . ] to see this , it is sufficient to note that any steering inequality , eq . , can be rewritten in a form that involves only non - negative , e.g. , by using the identity , which holds for all .hereafter , we thus restrict our attention to ( ) having only non - negative ( ) . 
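since most of the inline symbols did not survive extraction , a standard rendering of the two decompositions described above may help ; the notation below is chosen here and restates the textbook definitions rather than reproducing the paper s exact typesetting .

```latex
% A correlation admits a local-hidden-variable (LHV) model if
\[
  P(a,b\,|\,x,y) \;=\; \sum_{\lambda} q(\lambda)\, P(a\,|\,x,\lambda)\, P(b\,|\,y,\lambda),
  \qquad q(\lambda)\ge 0,\quad \sum_{\lambda} q(\lambda)=1 .
\]
% An assemblage admits a local-hidden-state (LHS) model, i.e. is unsteerable, if
\[
  \sigma_{a|x} \;=\; \sum_{\lambda} q(\lambda)\, P(a\,|\,x,\lambda)\, \rho_{\lambda},
\]
% for some fixed distribution q(\lambda), response functions P(a|x,\lambda),
% and single-partite density matrices \rho_\lambda.
```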
in analogy with the nonlocality fraction , we now introduce the _ steering fraction _ to capture the steerability of an assemblage an assemblage violates the steering inequality corresponding to if and only if .whenever we want to emphasize the steerability of the underlying state giving rise to the assemblage , we will write instead of where , , and are understood to satisfy eq .. in particular , is steerable with if and only if the largest violation of the steering inequality corresponding to is greater than 1 . as mentioned in section [ sec : intro ], bell - nonlocality is a stronger form of quantum nonlocality than quantum steering .let us now illustrate this fact by using the quantities that we have introduced in this section .for any given and bob s measurements specified by the povms , one obtains an _ induced _ steering inequality specified by using this equation , the definition of the steering bound }} ] and is the identity operator acting on the composite hilbert space . in this case, it thus follows from eq .that where is the contribution of towards the steering fraction . maximizing both sides over and dropping contribution from the second term gives .recall from the definition of that a steering inequality is violated if , thus rearranging the term gives the following sufficient condition for to be steerable .[ thm : sufficentsteerability ] given a state and acting on .a sufficient condition for to be steerable from alice to bob is since ^k ] , it is easy to see that =p+\tfrac{1-p}{d^2},\ ] ] thus the critical value of fef beyond which becomes steerable with projective measurements is >{\mathcal{f}}_{\text{iso , } d}^{\text{steer}}:=\tfrac{h_{d}+h_{d } d - d}{d^2} ] that is steerable according to theorem [ thm : sufficentsteerability ] , which is a contradiction .thus , the above necessary and sufficient condition for steerability of with projective measurements implies the following upper bound on the largest steering - inequality violation of .the largest steering - inequality violation of for all is upper bounded as : for projective measurements . to understand the asymptotic behavior of this upper bound , note that when , we have this means that scales as for sufficiently large . in particular , it can be shownas increases from 1 , the function increases monotonically until a maximum value at and decreases monotonically after that . ] that .thus our upper bound on has an asymptotic scaling that improves over the result of yin _ at al . _ by a factor of .. ] in addition , by using the sufficient condition of non - steerability of isotropic states under general povms [ i.e. , is unsteerable under general povms if , one can use eq . and the same arguments to arrive at the following upper bound under general povms : when , it can be shown that this upper bound scales as .let us also remark that since the upper bound of eq .. ] holds for _ all _ linear steering inequalities specified by non - negative * f * , it also serves as a legitimate upper bound on the largest bell - inequality violation of with projective measurements ( general povms ) for all _ linear bell inequalities _ with _ non - negative _ bell coefficients .a proof of this can be found in appendix [ app : bell ] . 
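the threshold quoted above for steerability of isotropic states under projective measurements can be checked numerically . the sketch below uses the fully entangled fraction formula and the critical value ( h_d + h_d d - d ) / d^2 stated in the text , together with the commonly cited visibility threshold p > ( h_d - 1 ) / ( d - 1 ) , where h_d is the d - th harmonic number ; identifying the two thresholds is an algebraic consistency check made here , not a statement taken from the text .

```python
def harmonic(d):
    """d-th harmonic number H_d = 1 + 1/2 + ... + 1/d."""
    return sum(1.0 / k for k in range(1, d + 1))

def fef_isotropic(p, d):
    """Fully entangled fraction of the d-dimensional isotropic state with
    visibility p, as given in the text: p + (1 - p)/d^2."""
    return p + (1 - p) / d**2

for d in [2, 3, 4, 10, 100]:
    Hd = harmonic(d)
    # critical FEF quoted in the text
    F_crit = (Hd + Hd * d - d) / d**2
    # visibility threshold p > (H_d - 1)/(d - 1) commonly cited for steerability
    # of isotropic states under projective measurements (assumption made here)
    p_crit = (Hd - 1) / (d - 1)
    print(d, round(F_crit, 4), round(fef_isotropic(p_crit, d), 4))
```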
for linear bell inequalities specified by non - negative* b * , our upper bound on the largest bell - inequality violation of with projective measurements thus has the same scaling as the upper bound due to palazuelos ( see the last equation at page 1971 of ) , but strengthens his by more than a factor of 2 . for , such an upper bound on the largest bell - inequality violation of can be improved further using results from , see appendix [ app : bell ] for details .evidently , as we demonstrate in section [ sec : characterization ] , steering fraction is a very powerful tool for characterizing the steerability of quantum states .a natural question that arises is whether a maximization of steering fraction over all ( non - negative ) leads to a proper _ steering quantifier _ ,i.e. , a _ convex steering monotone _ . to this end , let us define , for any given assemblage , the _ optimal steering fraction _ as : where the supremum is taken over all non - negative * f*. from here , one can further define the optimal steering fraction of a quantum state by optimizing over all assemblages that arise from local measurements on one of the parties .superficially , such a quantifier for steerability bears some similarity with that defined in but in the steering measure defined therein , there is a further optimization over all possible steering - inequality violations by _ all possible quantum states _ , which is not present in our definition .in appendix [ app : so_proof ] , we prove that is indeed a convex steering monotone , i.e. , it satisfies : 1 . for all unsteerable assemblages ; 2 . does not increase , on average , under deterministic _one - way local operations and classical communications _ ( 1w - loccs ) ; 3 . for all convex decomposition of in terms of other assemblages and with , . moreover , quantitative relations between and two other convex steering monotones , namely , steerable weight ( ) and steering robustness ( ) can be established , as we now demonstrate . to begin with ,we recall from that for any assemblage , is defined as the minimum non - negative real value satisfying for all and , where and is a steerable assemblage .in other words , is the minimum weight assigned to a steerable assemblage when optimized over all possible convex decompositions of into a steerable assemblage and an unsteerable assemblage . in appendix [ app :sw ] , we establish the following quantitative relations between and . [prop : so - sw ] given an assemblage with the decomposition \sigma_{a|x}^\text{\rmus}+{\mathcal{s}_{\rm w}}({{\bm \sigma}})\sigma_{a|x}^\text{\rm s} ] .let us denote by the group of bit strings with , bitwise addition modulo 2 being the group operation .consider the ( normal ) hadamard subgroup of which contains elements .the cosets of in give rise to the quotient group with elements .the kv game can then be written in the form of a bell inequality , cf .eq . , with settings and outcomes : can be an arbitrary positive integer has been considered , for example , in . ] where is the hamming weight of and is the kronecker delta between and , . and , if and only if and are associated with the same coset in the quotient group . ] and is the set of bell coefficients defining the kv game. an important feature of given in eq .is that .for the specific choice of which makes sense only for this gives with . 
in this case, performing judiciously chosen rank-1 projective measurements specified by , where and , on ( with ) gives rise to a correlation with the following lower bound on the nonlocality fraction where .consider now the collection of non - negative matrices induced by the kv game and bob s optimal povms leading to the lower bound given in eq . .an application of inequality to eq. immediately leads to for any given state , lemma [ lemma : twirling - sf ] , eq . andeq . together imply the existence of such that where again is non - negative .note that if , we have both and }}^k ] , we obtain ^k,\ ] ] for thus , for , we see that with defined in eq .can become arbitrarily large if we make arbitrarily large .in particular , for any given , must be large enough so that the defined in eq .is larger than , the critical value of below which becomes separable .now , a direct comparison between eq .and the threshold value of [ where the isotropic state becomes ( un)steerable with projective measurements ] shows that if and only if the isotropic state with given in eq .is unsteerable under projective measurements .it is easy to verify that the quantity satisfies ( 1 ) for all , and ( 2 ) rapidly approaches 1 when .hence , for every , there exists an isotropic state ( with sufficiently large ) that is entangled but unsteerable with projective measurements , but which nevertheless attains arbitrarily large steering - inequality violation with .remarks on the implication of theorem [ thm : amplification ] are now in order .firstly , a direct observation shows that with given in eq is always unsteerable under projective measurements if , where one can verify that achieves its minimal value at .this , however , is still not enough to guarantee that the given isotropic state is unsteerable under general povms due to the lack of exact characterization of steerability under general povms .secondly , it is worth noting that the above results also hold if we replace steerability by bell - nonlocality . to see this ,let us first remind that the largest bell - inequality violation under projective measurements is upper bounded by the upper bound given in eq .[ see eq . ] .next , note that the lower bound on steering fraction that we have presented in eq. actually inherits from a lower bound on the corresponding nonlocality fraction using the kv bell inequality .therefore , exactly the same calculation goes through if is replaced by with derived from eq . assuming local povms that lead to eq . .in other words , for sufficiently large , one can always find entangled isotropic states that do not violate any bell inequality with projective measurements , but which nevertheless attain arbitrarily large bell - inequality violation with this improves over the result of palazuelos which requires five copies for unbounded amplification .in this paper , we have introduced the tool of steering fraction and used it to establish novel results spanning across various dimensions of the phenomenon of quantum steering .below , we briefly summarize these results and comment on some possibilities for future research .firstly , we have derived a general sufficient condition for _ any _ bipartite quantum state to be steerable ( bell - nonlocal ) in terms of its fully entangled fraction , a quantity closely related to the usefulness of for teleportation . 
as we briefly discussed in section[ sec : characterization ] , we do not expect these sufficient conditions to detect all steerable ( bell - nonlocal ) states .nonetheless , let us stress that to determine if a quantum state is steerable ( as with determining if a quantum state can exhibit bell - nonlocality , see , e.g. , ) is a notoriously difficult problem , which often requires the optimization over the many parameters used to describe the measurements involved in a steering experiment ( and/or the consideration of potentially infinitely many different steering inequalities ) .in contrast , the general criterion that we have presented in theorem [ thm : sufficentsteerability ] for steerability ( and theorem [ theorem : suff_condi_nonlocality ] for bell - nonlocality ) only requires a relatively straightforward computation of the fully entangled fraction of the state of interest .given that these sufficient conditions are likely to be suboptimal , an obvious question that follows is whether one can find an explicit threshold that is smaller than that given by theorem [ thm : sufficentsteerability ] ( theorem [ theorem : suff_condi_nonlocality ] ) such that still guarantees steerability ( bell - nonlocality ) .while this may seem like a difficult problem , recent advances in the algorithmic construction of local hidden - variable ( -state ) models may shed some light on this .more generally , it will be interesting to know if our sufficient condition can be strengthened while preserving its computability. in particular , it will be very useful to find analogous steerability ( bell - nonlocality ) criteria that are tight .on the other hand , the aforementioned sufficient condition has also enabled us to derive upper bounds as functions of the largest steering - inequality violation ( ) achievable by the maximally entangled state under general povms ( projective measurements ) . in particular , using the general connection between and the largest bell - inequality violation , , established in appendix [ app : bell ] , our upper bounds on and imply upper bounds on and by ( for non - negative * b * ) , respectively .notably , our upper bound on is somewhat tighter than that due to palazuelos .if any strengthened sufficient conditions for steerability , as discussed above , are found , it would also be interesting to see if they could lead to tighter ( non - asymptotic ) upper bound(s ) on the largest steering - inequality ( and/or bell - inequality ) violation attainable by . the tool of steering fraction , in addition , can be used to quantify steerability . in particular, we showed that if is optimized over all ( non - negative ) * f * , the resulting quantity can be cast as a _ convex steering monotone _ which we referred to as the _ optimal steering fraction _ .we further demonstrated how this monotone is quantitatively related to two other convex steering monotones steerable weight and steering robustness . in the light of quantum information , it would be desirable to determine an operational meaning of , e.g. , in the context of some quantum information tasks ( cf .steering robustness ) .establishment of quantitative relations between and other convex steering monotones , such as the relative entropy of steering , would certainly be very welcome .in particular , it would be highly desirable to establish quantitative relations that allow one to estimate from other easily - computable steering monotones , such as the steerable weight , or the steering robustness . 
using the established sufficient condition for steerability, we have also demonstrated the superactivation of steerability , i.e. , the phenomenon that certain unsteerable quantum state becomes , for sufficiently large , steerable when joint _local _ measurements on are allowed .a drawback of the examples that we have presented here is that they inherit directly from the superactivation of bell - nonlocality due to palazuelos and cavalcanti _ et al ._ .an obvious question that follows is whether one can construct explicit examples for the superactivation of steerability using quantum states whose bell - nonlocality _ can not _ be superactivated via joint measurements .one the other hand , with joint local measurements , we showed that the steering - inequality ( bell - inequality ) violation of certain barely steerable ( bell - nonlocal ) [ or even unsteerable ( bell - local ) with projective measurements ] can be arbitrarily amplified , in particular , giving an arbitrarily large steering - inequality ( bell - inequality ) violation with .could such unbounded amplification be achieved using joint measurements on two copies of the same state ?our proof technique , see eq ., clearly requires a minimum of three copies for unbounded amplification to take place but it is conceivable that a smaller number of copies suffices if some other steering ( bell ) inequality is invoked , a problem that we shall leave for future research .the authors acknowledge useful discussions with nicolas brunner , daniel cavalcanti , flavien hirsch , marco tlio quintino and helpful suggestions from an anonymous referee of aqis2016 .this work is supported by the ministry of education , taiwan , r.o.c ., through aiming for the top university project " granted to the national cheng kung university ( ncku ) , and the ministry of science and technology , taiwan ( grant no.104 - 2112-m-006 - 021-my3 ) ._ note added. _ while completing this manuscript , we became aware of the work of who independently ( 1 ) derived a sufficient condition of steerability in terms of the fully entangled fraction and ( 2 ) demonstrated the superactivation of steering of the isotropic states .here , we provide a proof of lemma [ lemma : twirling - sf ] . forany given state , local povms and non - negative , let us note that } { { \omega_s } ( { { { \bf f}}})}\nonumber\\ & = \int_{u(d ) } \frac{\sum_{a , x } \text{tr}\left[(u^\dag\,{e}_{a|x}\,u \otimes u^{*\dag}\,{f}_{a|x}\,u^ { * } ) \rho'\right ] } { { \omega_s } ( { { { \bf f}}})}\,du\nonumber\\ & = \int_{u(d)}\gamma_s\left(\left\{\rho ' , { { \mathbb{e}}}_u\right\ } , { { { \bf f}}}_u\right)du\nonumber\\ & \le \max_{u\in u(d)}\gamma_s\left(\left\{\rho ' , { { \mathbb{e}}}_u\right\},{{{\bf f}}}_u\right ) \label{eq : maxgammas : u : twirl}\end{aligned}\ ] ] where and . denoting by the unitary operator achieving the maximum in eq ., then the above inequality implies that for any given state , let us further denote by , the unitary operator that maximizes the fef of in eq ., i.e. , defining and , we then have [ gammas:2 ] with ,\ ] ] where the last equality follows from the fact that if is attained with , then ], a direct corollary of theorem [ thm : sufficentsteerability ] is the following sufficient condition for to be bell - nonlocal : ^{\frac{1}{k}}.\end{aligned}\ ] ] as an explicit example , let us consider the family of 2-setting , -outcome inequality due to collins - gisin - linden - massar - popescu . 
this inequality can be re - written in a form that involves only the following non - negative coefficients : such that a ( tight ) lower bound on the largest bell - inequality violation of this inequality can be inferred from the result presented in as : \right\},\end{aligned}\ ] ] where } ] as the largest steering - inequality violation arising from a given .in particular , the largest bell - inequality violation achievable by maximally entangled state for _ any _ non - negative under projective measurements must also be _upper bounded _ by eq .: note that eq .with the fact given in footnote [ fn : upperbound]palazuelos upper bound of the largest bell - inequality violation of maximally entangled states under projective measurements ( for non - negative ) .also , eq . implies the following upper bound : which scales as when .it is worth noting that in the case of general povms , palazuelos upper bound ( theorem 0.3 in ) is better than ours by a scaling factor , but we have used a much simpler approach in our derivation ( than the operator space theory approach of ) .a nice feature of the upper bounds on and presented above is that they apply to all dimensions and all non - negative .the drawback , however , is that they are generally not tight .for instance , for the two - qubit maximally entangled state , the inequality above gives , but if we make explicit use of the nonlocal properties of , then this bound can be tightened .firstly , let us recall from that the threshold value of above which violates some bell inequality by projective measurements is given by , where is grothendieck s constant of order 3 .although the exact value of the constant is not known , it is known to satisfy the following bounds : where the lower bound is due to vrtesi and the upper bound is due to krivine . note that for , corresponds to a fef of .then , in order for eq . to be consistent with this observation , we must have where both bounds in eq .have been used to arrive at the last inequality .the proof proceeds in two parts .we first show that is a convex function that vanishes for unsteerable assemblages .we then show that it is a steering monotone , that is , non - increasing , on average , under one - way local operations and classical communications ( 1w - loccs ) .the first part of the proof follows from the following lemma . from the definition of and , it follows immediately that if .note that for any _ convex _ decomposition of the assemblage with ] and is the subchannel of the completely positive map labeled by , i.e. , and is a _ deterministic wiring map _ that transforms a given assemblage to another assemblage with setting and outcome : {x'}\coloneqq \tilde{\sigma}_{a'|x'}=\sum_{a , x}p(x|x',\omega)p(a'|x',a , x,\omega)\sigma_{a|x}.\ ] ] to appreciate the motivation of formulating 1w - loccs in the above manner and the definition of the trace of the assemblage , we refer the readers to .moreover , to ease notation , henceforth , we denote by and define }}}\end{aligned}\ ] ] to prove theorem [ thm : monotonic ] , we shall make use of the following lemma . from the definitions given in eq . and eq . 
, we get }}\nonumber\\ & & \coloneqq\sup_{{{\bf f}}\succeq 0 } \sum_{a',x'}\frac{\text{tr}(f_{a'|x'}k_\omega \sum_{a , x}q(a',x',a , x,\omega)\sigma_{a|x } k_\omega^\dagger ) } { { \omega_s } ( { { \bf f } } ) { \text{tr}}{\ensuremath{\left [ { \mathcal{m}}_\omega ( { { \bm \sigma } } ) \right]}}}\nonumber\\ & & = \sup_{{{\bf f}}\succeq 0}\frac{1 } { { \omega_s } ( { { \bf f}})}\sum_{a , x}\text{tr}(\check{f}_{a|x}\sigma_{a|x}),\end{aligned}\ ] ] where is defined by }}}.\end{aligned}\ ] ] now we have {a'|x'}\right)\nonumber\\ & & \le\sup_{{{\bm \sigma}}\in\text{lhs } } \sum_{a',x'}\text{tr}(f_{a'|x'}\sigma_{a'|x'})= { \omega_s } ( { { \bf f}}),\end{aligned}\ ] ] where the last inequality follows from the fact that implies , and thus is a subset of lhs . combining the above results, we have }}\le\sup_{{{\bf f}}\succeq 0 } \gamma_s({{\bm \sigma}},\check{{{\bf f}}}) ] , the inequality }}\le{\mathcal{s}_{\rm o}}({{\bm \sigma}}) ] , lemma [ lemma : nonincreasing_lemma ] implies }}\le\sup_{{{\bf f}}\succeq 0 } \gamma_s({{\bm \sigma}},{{\bf f}})-1\le{\mathcal{s}_{\rm o}}({{\bm \sigma}}).\end{aligned}\ ] ] this means that }}\le{\mathcal{s}_{\rm o}}({{\bm \sigma}}) ] for all and }}\le1 ] holds by definition .[ [ app : sw ] ] proof of quantitative relations between optimal steering fraction and steerable weight ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ firstly , note that the chain of inequalities holds trivially if is unsteerable , since in this case . to prove that holds in general, we thus assume that and recall from the condition of proposition [ prop : so - sw ] that \sigma_{a|x}^\text{\rm us}+{\mathcal{s}_{\rm w}}({{\bm \sigma}})\sigma_{a|x}^\text{\rm s} ] , we note from the condition of the theorem , the definition of , , and the triangle inequality that \sup_{{{\bf f}}\succeq 0}\gamma_s({{\bm \sigma}}^\text{us},{{\bf f}})\\ & + { \mathcal{s}_{\rm r}}({{\bm \sigma}})\sup_{{{\bf f}}\succeq 0}\gamma_s(\tilde{{{\bm \sigma}}},{{\bf f}})-1\\\le & \ , \left[1+{\mathcal{s}_{\rm r}}({{\bm \sigma}})\right]{\mathcal{s}_{\rm o}}({{\bm \sigma}}^\text{us})+{\mathcal{s}_{\rm r}}({{\bm \sigma}}){\mathcal{s}_{\rm o}}(\tilde{{{\bm \sigma}}})+2{\mathcal{s}_{\rm r}}({{\bm \sigma}})\\ = & \,{\mathcal{s}_{\rm r}}({{\bm \sigma}})\left[{\mathcal{s}_{\rm o}}(\tilde{{{\bm \sigma}}})+2\right],\end{aligned}\ ] ] where the last equality follows from . to show the other inequality , we note that : \gamma_s({{\bm \sigma}}^\text{us},{{\bf f}})={\mathcal{s}_{\rm r}}({{\bm \sigma}})\gamma_s(\tilde{{{\bm \sigma}}},{{\bf f}})\end{aligned}\ ] ] rearranging the term , taking supremum over on both sides , and noting that , we thus obtain the desired inequality e. schrdinger , proc .cambridge philos .https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/discussion-of-probability-relations-between-separated-systems/c1c71e1aa5ba56ebe6588aaacb9a222d [ , 555 ( 1935 ) ] ; https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/probability-relations-between-separated-systems/641ddded6fb033a1b190b458e0d02f22 [ , 446 ( 1936 ) ] .h. m. wiseman , s. j. jones , and a. c. doherty , http://journals.aps.org/prl/abstract/10.1103/physrevlett.98.140402[phys .lett . * 98 * , 140402 ( 2007 ) ] ; http://journals.aps.org/pra/abstract/10.1103/physreva.76.052116[phys . rev . a * 76 * , 052116 ( 2007 ) ] .f. hirsch , m. t. 
quintino , t. vértesi , m. f. pusey , and n. brunner , https://arxiv.org/abs/1512.00262[arxiv:1512.00262 ( 2015 ) ] ; d. cavalcanti , l. guerini , r. rabelo , and p. skrzypczyk , https://arxiv.org/abs/1512.00277[arxiv:1512.00277 ( 2015 ) ] .
Quantum steering, also called Einstein-Podolsky-Rosen steering, is the intriguing phenomenon associated with the ability of spatially separated observers to steer, by means of local measurements, the set of conditional quantum states accessible by a distant party. In the light of quantum information, _all_ steerable quantum states are known to be resources for quantum information processing tasks. Here, via a quantity dubbed _steering fraction_, we derive a simple but general criterion that allows one to identify quantum states that can exhibit quantum steering (without having to optimize over the measurements performed by each party), thus making an important step towards the characterization of steerable quantum states. The criterion, in turn, also provides upper bounds on the largest steering-inequality violation achievable by arbitrary finite-dimensional maximally entangled states. For the quantification of steerability, we prove that a strengthened version of the steering fraction is a _convex steering monotone_ and demonstrate how it is related to two other steering monotones, namely, the steerable weight and the steering robustness. Using these tools, we further demonstrate the _superactivation_ of steerability for a well-known family of entangled quantum states, i.e., we show how the steerability of certain entangled but unsteerable quantum states can be recovered by allowing joint measurements on multiple copies of the same state. In particular, our approach allows one to explicitly construct a steering inequality that manifests this phenomenon. Finally, we prove that there exist quantum states (including some which are unsteerable under projective measurements) whose steering-inequality violation can be arbitrarily amplified by allowing joint measurements on as few as three copies of the same state. For completeness, we also demonstrate how the largest steering-inequality violation can be used to bound the largest Bell-inequality violation and derive, analogously, a simple sufficient condition for Bell nonlocality from the latter.
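The steerable weight mentioned above can be computed as a semidefinite program. The sketch below (Python with `numpy` and `cvxpy`, neither of which is used in the paper) follows the standard formulation of Skrzypczyk, Navascués and Cavalcanti (2014): maximize the total weight of an unsteerable, deterministic-response part that can be subtracted from the assemblage while keeping the remainder positive semidefinite; the steerable weight is one minus that optimum. The two-qubit state, the pair of Pauli measurements used to build the test assemblage, and the choice of the SCS solver are illustrative assumptions, not taken from the paper.

```python
# Steerable weight of a two-qubit assemblage via a semidefinite program.
# Illustrative assumptions: state |Phi+>, Alice measures Pauli Z (x=0) and X (x=1).
import numpy as np
import cvxpy as cp

ket = lambda v: np.array(v, dtype=complex).reshape(-1, 1)
phi = (np.kron(ket([1, 0]), ket([1, 0])) + np.kron(ket([0, 1]), ket([0, 1]))) / np.sqrt(2)
rho = phi @ phi.conj().T

z_eig = [ket([1, 0]), ket([0, 1])]
x_eig = [ket([1, 1]) / np.sqrt(2), ket([1, -1]) / np.sqrt(2)]
projectors = {0: [v @ v.conj().T for v in z_eig],
              1: [v @ v.conj().T for v in x_eig]}

def partial_trace_A(M):
    """Trace out Alice's qubit of a 4x4 matrix (A tensor B ordering)."""
    return M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

assemblage = {}
for x in (0, 1):
    for a in (0, 1):
        S = partial_trace_A(np.kron(projectors[x][a], np.eye(2)) @ rho)
        assemblage[(a, x)] = 0.5 * (S + S.conj().T)   # remove numerical asymmetry

# Deterministic response functions: lambda = (output for x=0, output for x=1).
strategies = [(l0, l1) for l0 in (0, 1) for l1 in (0, 1)]
sig_lam = [cp.Variable((2, 2), hermitian=True) for _ in strategies]

constraints = [S >> 0 for S in sig_lam]
for x in (0, 1):
    for a in (0, 1):
        unsteerable_part = sum(sig_lam[k] for k, lam in enumerate(strategies) if lam[x] == a)
        constraints.append(cp.Constant(assemblage[(a, x)]) - unsteerable_part >> 0)

objective = cp.Maximize(cp.real(sum(cp.trace(S) for S in sig_lam)))
problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.SCS)
print("steerable weight:", 1 - problem.value)
```

For the maximally entangled state probed with two mutually unbiased qubit measurements, the program should return a strictly positive steerable weight, certifying that the assemblage is steerable.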
we consider now a further extension of the libs model by including the second nearest neighbours interactions in the simple one dimensional topology . the global - fitness of eq .[ global ] becomes where we the second order coefficients are not independent random numbers but .the reason behind this choice , that can be easily extended to the order neighbours , is motivated by the assumption that the higher order interactions are damped by the `` distance '' between the two species and therefore . by using this formulation , we attempt to mimic a hierarchical dependence of the global fitness in the ecology : species become explicitly related to their second nearest neighbours via the mediation of their first neighbours and so on . using these constrains the mutation rules remain the same as in eq .[ rules ] since a change in the first order coefficients triggers automatically a change also in the higher order ones .the dynamics that results from the numerical simulations is similar to the first neighbours libs model : after an extensive transient we reach a critical stationary state characterized by avalanches of mutations , which size , , is power law distributed .the distributions for and , in the stable regime , are shown in fig .[ fig6 ] along with the distribution obtained by considering just the first neighbour interaction . in the case of we can notice that by enlarging the neighborhood the distribution show a slower decaying rate and they appear to be smoother .in this case we have an exponential decay all the way down to zero , without any clear cut - off for low . despite their fitness ,all the species have a chance of survival if sustained by an healthy environment . regarding the global fitness , instead , a polynomial decay is still evident , although the order is higher compared to the first neighbours case .in is also important to notice that a relative large change in the theoretical range for , which bounds are now , does not lead to a consequent rise in the threshold value that moves just slightly from to .this is equivalent to say that , in the previous case , in order to be considered `` fit '' , a species has to exceed the 74% of the possible range for , now the 48% is sufficient ! in conclusion ,a hierarchical extension of the cooperation between species in the libs model leads to an easier adaptation and survival probability : the more compact the ecosystem is the higher will be the chances of survival of each single species as long as they cooperate for their mutual interest .this work was supported by the australian research council .bak , p. , tang , c. & wiesenfeld , k. self - organized criticality : an explanation of 1/f noise .lett . _ * 59 * , 381 ( 1987 ) .bak , p. , tang , c. & wiesenfeld , k. self - organized criticality . _ phys .a _ * 38 * , 364 ( 1988 ) .jensen , h.j ., _ self - organized criticality : emergent complex behavior in physical and biological systems _ , ( cambridge university press , cambridge , 1998 ) .turcotte , d. l. self - organized criticality ._ * 62 * , 1377 ( 1999 ) .lu , e.t . &hamilton , r.j .avalanches and the distribution of solar flares ._ astrophys.j ._ * 380 * , l89 ( 1991 ) .lu , e.t . ,hamilton , r.j . ,mctieran , j.m . ,bromund , k.r . solar flares and avalanches in driven dissipative systems . _ astrophys.j . _ * 412 * , 841 ( 1993 ) .chang , t. , _ et al ._ , _ advances in space environmental research , vol.i _ , ( kluwer academic publisher , ah dordrecht , the netherlands , 2003 ) .valdiva , a. 
, _ et al ._ , _ advances in space environmental research , vol.i _ , ( kluwer academic publisher , ah dordrecht , the netherlands , 2003 ) .bak , p. & tang , c. earthquakes as a self - organized critical phenomenon ._ j. geophys .res . _ * 94 * , 15 635 ( 1989 ) .sornette a. & sornette , d. self - organized criticality and earthquakes _ europhys .* 9 * , 197 ( 1989 ) .sornette , d. , davy , p. , sornette , a. structuration of the lithosphere in plate tectonics as a self - organized critical phenomenon ._ j. geophys .res . _ * 95 * , 17353 ( 1990 ) .gould , s.j . & eldredge , n. punctuated equilibrium comes of age ._ nature _ * 366 * , 223 ( 1993 ) .huang , j. , sauler , h. , sammis , c. & sornette d. precursors , aftershocks , criticality and self - organized criticality ._ europhys .lett . _ * 41 * , 43 ( 1998 ) .bak , p. & sneppen , k. punctuated equilibrium and criticality in a simple model of evolution _ phys .lett . _ * 71 * , 4083 ( 1993 ) .nagel , k. & herrmann , h.j .deterministic models for traffic jams . _physica a _ * 199 * , 254 ( 1993 ) .nagel , k. , & paczuski , m. emergent traffic jams .e _ * 51 * , 2909 ( 1995 ) .nagatani , t. self - organized criticality in 1d traffic flow model with inflow or outflow .a : math.gen ._ * 28 * , l119 ( 1995 ) .nagatani , t. self - organized criticality in 1d traffic flow ._ fractals _ * 4 * , 279 ( 1996 ) .roberts , d.c . ,& turcotte , d.l .fractality and self - organised criticality of wars _ fractals _ * 6 * , 351 ( 1998 ) .bak , p. , chen , k. , scheinkman , j. & woodford , m. aggregate fluctuations from independent sectoral shocks : self - organized criticality in a model of production and inventory dynamics . _econ . _ * 47 * , 3 ( 1993 ) .bak , p. , paczuski , m. & shubik , m. price variations in a stock market with many agents ._ physica a _ * 246 * , 430 ( 1997 ) .feigenbaum , j. financial physics .phys . _ * 66 * , 1611 ( 2003 ) .flyvbjerg , h. , bak , p. & sneppen , k. mean field theory for a simple model of evolution . _lett . _ * 71 * , 4087 ( 1993 ) .paczuski , m. , maslov , s. , & bak , p. avalanche dynamics in evolution , growth and depinning models .e _ * 53 * , 414 ( 1996 ) .vanderwalle , n. , brisbois , f. & tordoir , x. non - random topology of stock markets .. finance _ * 1 * , 372 ( 2001 ) .bonanno , g. , caldarelli , g. , lillo , f. , mantegna , r. n. topology of correlation - based minimal spanning trees in real and model markets .e _ * 68 * , 46130 ( 2003 ) .souma , w. , fujiwara , y. & aoyama , h. 9th workshop on economics and heterogeneous interacting agents ( wehia2004 ) may 27 - 29 , 2004 at kyoto university , kyoto , japan .preprint : physics/0502005 .tabelow , k. gap function in finite bak - sneppen model _ phys .* 63 * , 047101 ( 2001 ) .head , d.a .universal persistence exponents in an extremally driven system _ phys .e _ * 65 * , 027104 ( 2002 ) .axtell , r.l .zipf distribution of u.s .firm size _ nature _ * 293 * , 1818 ( 2001 ) .gaffeo , e. , gallegati , m. & palestrini , a. on the size distribution of firms : additional evidence from the g7 countries ._ physica a _ * 324 * , 117 ( 2003 ) .albert , r. & barab , a .-statistical mechanics of complex networks . _ rev .phys . _ * 74 * , 47 ( 2002 ) .dorogovtsev , s.n . and mendes , j.f.f . , evolution of networks . _phys . _ * 51 * , 1079 ( 2001 ) .mezard m. , g. parisi and virasoro m. a. , _ spin glass theory and beyond _ , ( world scientific , singapore , 1987 ) .ponzi , a. & aizawa , y. 
criticality and punctuated equilibrium in spin system model of financial market ._ chaos , solitons & fractals _ * 11 * , 1739 ( 2000 ) .bartolozzi , m. , leinweber , d.b . &thomas , a.w .self - organized criticality and stock market dynamics : an empirical study . _ physica a _ * 350 * , 451 ( 2005 ) .cuniberti , g. , valleriani , a. & vega , j.l .effects of regulation on a self - organized market .. finance _ * 1 * , 332 ( 2001 ) .yamano , t. regulation effects on market with bak - sneppen model in high dimensions .c _ * 9 * , 1329 ( 2001 ) .ausloos , m. , clippe , m. & pekalski , a. evolution of economic entities under heterogeneous political / environmental conditions within a bak - sneppen - like dynamics ._ physica a _ * 332 * , 394 ( 2004 ) .day , r.h ._ the divergent dynamics of economic growth _ , ( cambridge university press , cambridge , 2004 ) .grassberger , p. the bak - sneppen model for punctuated equilibrium .a _ * 200 * , 277 ( 1995 ) .
in the present work we extend the bak - sneppen model for biological evolution by introducing local interactions between species . this `` environmental '' perturbation modifies the intrinsic fitness of each element of the ecology , leading to higher survival probability , even for the less fit . while the system still self - organizes toward a critical state , the distribution of fitness broadens , losing the classical step - function shape . a possible application in economics is discussed , where firms are represented like evolving species linked by mutual interests . complex systems , evolution / extinction , self - organized criticality , econophysics 05.65.+b , 89.75.fb , 45.70.ht , 89.65.gh in the past two decades several studies have been devoted to the investigation of the ubiquitous presence of power laws in natural and social systems . an important contribution to this field of research has been given by bak , tang and wiesenfeld ( btw ) , who developed the concept of self - organized criticality ( soc ) . the key idea behind soc is that complex systems , that is systems constituted of many interacting elements , although obeying different microscopic physics , may exhibit similar dynamical behaviour , statistically described by the appearance of power laws in the distributions of their characteristic features . the lack of a characteristic scale , indicated by the power laws , is equivalent to those of physical systems during a phase transition that is at the critical point . it is worth emphasizing that the original idea was that the critical state is reached `` naturally '' , without any external tuning . this is the origin of the adjective _ self-_organized . in reality a certain degree of tuning is necessary : implicit tunings like local conservation laws and specific boundary conditions seem to be important ingredients for the appearance of power laws . the classical example of a system exhibiting soc behaviour is the 2d sandpile model . here the cells of a grid are randomly filled , by an external driver , with `` sand '' . when the gradient between two adjacent cells exceeds a certain threshold a redistribution of the sand occurs , leading to more instabilities and further redistributions . the avalanche dynamics that drives the system from one metastable state to another is the benchmark of all systems exhibiting soc . in particular , the distribution of the avalanche sizes , their duration and the energy released , all obey power laws . the framework of self - organized criticality has been claimed to play an important role in solar flaring , space plasmas and earthquakes in the context of both astrophysics and geophysics . in biology soc has been linked to the punctuated equilibrium in species evolution . some work has also been carried out in the social sciences . in particular , traffic flow and traffic jams , wars , as well as stock - market dynamics , have been studied . a more detailed list of subjects and references related to soc can be found in the review paper of turcotte . in the present work we extend the bak - sneppen ( bs ) model for evolution by introducing explicit coupling terms in the fitness of each species of the ecology . we find that the equilibrium configuration of the model can be deeply influenced by the environmental forces , leading to a wider survival probability also for species with a lower degree of adaptation . 
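As a concrete illustration of the avalanche mechanism invoked above, the following sketch implements the two-dimensional BTW sandpile in its textbook form: grains are added at random sites, any site holding four or more grains topples by sending one grain to each neighbour (grains leaving the open boundary are lost), and the number of topplings triggered by a single addition is recorded as the avalanche size. The grid size, number of driving steps and threshold of four are the conventional choices, not parameters taken from this paper.

```python
# Minimal 2D BTW sandpile: random driving, threshold toppling, open boundaries,
# with the avalanche size defined as the number of topplings per added grain.
import numpy as np

rng = np.random.default_rng(0)
L, n_grains, z_c = 50, 100_000, 4
h = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(n_grains):
    i, j = rng.integers(0, L, size=2)
    h[i, j] += 1
    size = 0
    unstable = [(i, j)] if h[i, j] >= z_c else []
    while unstable:
        x, y = unstable.pop()
        if h[x, y] < z_c:
            continue
        h[x, y] -= 4
        size += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:      # grains fall off the open boundary
                h[nx, ny] += 1
                if h[nx, ny] >= z_c:
                    unstable.append((nx, ny))
        if h[x, y] >= z_c:                        # tall columns may topple repeatedly
            unstable.append((x, y))
    if size:
        avalanche_sizes.append(size)

# In the stationary state the avalanche-size histogram should follow a power law.
tail = np.array(avalanche_sizes[len(avalanche_sizes) // 2:])
print("avalanches:", tail.size, " mean size:", tail.mean(), " max size:", tail.max())
```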
a possible application of our extension of the bs model to the economic world is that the distribution of global - fitness can be related to the size distribution of firms in the most developed markets . in this respect the evolution of firms is seen as a punctuated equilibrium process in which the convolution of mutual interest can justify the spreading in size of the firms themselves . the toy model proposed in 1993 by bak and sneppen is one of the most popular models for biological evolution . the main idea behind this model is that each species can be uniquely characterized by a single parameter called _ fitness_. the fitness of a species represents its degree of adaptation with respect to the external environment . highly adapted species will hardly undergo any successful , spontaneous mutations . at the opposite end of the scale , if a species has a very low degree of fitness it needs to mutate in order to survive and its mutation automatically influences the other species belonging to the same environment . these concepts can be easily formulated as a simple 1d model . suppose that the ecology can be represented by a periodic array of cells and each cell , , is assigned a fitness , , taken from a uniform distribution between 0 and 1 . once we have fixed the initial condition , for each discrete time - step , the dynamical evolution of the system works as follows : \a ) locate the species with minimum fitness that is , the one most likely to mutate , , \b ) change the fitness of and that of its neighbours ( species related ) according to where the new fitness value , , is a random number taken from a uniform distribution bounded between 0 and 1 . from numerical and analytical studies it has been shown that the values of the fitness evolve to a step function , in the thermodynamic limit ( ) , characterized by a single value , . for the distribution of fitness , , is uniformly equal to zero while for we have , determined by the normalization condition . an example is shown in fig . [ fig1 ] ( a ) and ( b ) . in this model it is also possible to define a bust - like avalanche dynamics . suppose we fix a threshold for the fitness , and consider as the minimal fitness at time step . if at a certain time step , , it happens that then we can measure the interval of time , , needed for having again . in this case an avalanche of duration , or size , has taken place in order to restore a minimal fitness in the system . if then we have : the system is critical , see fig . [ fig1](c ) . according to this model the great mass extinctions of species , like dinosaurs for example , can be explained in terms of burst - like dynamics . a small perturbation in an critical self - connected system can trigger a chain reaction that may influence a great part of the species in the ecosystem . time series of fossils samples seem to be in agreement with this avalanche dynamics in the extinction / evolution of species . a more detailed discussion of the bs model goes beyond the scope of the present work . for a general review see ref . . despite its simplicity , the bs model evolves according to a complex dynamics and it is able to explain some empirical features of the biological evolution . an implicit assumption in the model is that every species is deeply connected to its environment . a mutation on a single element automatically triggers a mutation in its neighbors . but is this approximation always appropriate ? consider for example three species in a one dimensional array and suppose that while . 
in the standard bs model the cell undergoes to mutation that also triggers a change in and . from a biological point of view it means that two extremely well adapted species have to mutate in order to cope with the mutation of the species . this can be interpreted as a very particular ( pessimistic ) case such as , for example , the case where the species is the main source of food for both the other species . in order to stress this idea we use some examples from different areas in which a similar evolutionary dynamics can be applied . suppose that a new unfit or unskilled player joins a strong team . will this player trigger a regression in the team performance or will the team compensate for this lack of skills ? this is a small perturbation after all . another example comes from economics . in this case , it has been shown , that the dynamics of different firms is correlated . in fact , it is not unusual for a company to own large amounts of stock of other companies and so on . the result is an entangled environment , where the evolution of a firm is , in a way , linked to the evolution of its network of interaction . is it then possible , in this case , for an wealthy environment to sustain an unfit element , or will its lack of `` fitness '' bring to the brink of the financial collapse all the other partners , as the bs model would suggest ? we provide an answer to these questions using a modified version of the bs model that takes into account the feedback of the environment on the single element . we refer to this model incorporating local interactions in the bs model as the libs model . for the sake of simplicity we do not consider the topology of the interaction , that may be very complex ; rather , we use a simple 1d array . the influence of the network structure on the dynamics of the model will be discussed in our future work . as a first approximation we consider our species to be arranged on a one dimensional array with nearest neighbor interactions . this means that the micro - environment is composed of three cells . the value of the fitness , , for each cell is taken , according to the bs model , from a uniform distribution between zero and one . the fitness parameter , , of the cell represents the _ self - fitness _ of the species . motivated by the aforementioned examples , we add an environmental contribution to the self - fitness that leads to a _ global - fitness _ , , according to where and are the fractions of fitness that the cell shares with its neighbours . the matrix of is not symmetric , reflecting the fact that the contribution in one direction can be very different that the contribution in the other . this is equivalent to considering a directed _ weighted graph _ with a trivial necklace topology . in the sport example , the global fitness corresponds to the fit players that contribute to sustaining the unskilled team - mate . from the economic point of view it represents the capability of a firm to gain benefits from its partnerships with other firms . in this particular case , represents the wealth generated by the firm itself , while the other two terms represent the contribution , in different forms , from the linked firms . in general , we can consider the new terms in the definition of as short ranged random forces acting on the cell . at the beginning of the simulation the self - fitness is drawn from a uniform distribution between zero and one . the same is done for the link weights , . 
it is worth emphasizing that , in general , for two cells and , . assuming that the neighbours can cooperate in defining the fitness of a species ( optimistic view ) , the extremal dynamics is moved from to . once the site with minimum global fitness , , is located , then the self - fitness and the interactions of this species are redrawn according to the following rules : where the new values for the changed quantities are taken from a uniform distribution between zero and one , as in the bs model . however , in contrast to the bs model , a change in the fitness of the species does not automatically trigger a change in the neighbours . only the interactions are changed . in order to test the stability of the model we monitor the _ average fitness _ and the _ gap function _ , , for both and . the gap function is nothing but the tracking function of the minimum of ( or ) . at we have ( or ) . as the evolution proceeds eventually for a certain we will have ( or ) as the minimum barriers are converging toward the critical value . the gap function is then updated as ( or ) and so on . it is easy to see that in the stationary state the gap function converges toward the critical value . gets closer to the critical value and , therefore , their average duration is suppose to diverge . as soon as we get very close to this point , an artifact regime sets in and the gap function start to saturate toward . the phase in which can be regarded as a transition point for the physically meaningful state : the larger the system is , that is closer to the thermodynamics limit , the slower is the drift from this point and the system can be regarded , in good approximation , as stable . an accurate study of this phenomenon in relation to the libs model , although very interesting , is not of fundamental importance in the contest of the present work , therefore we will consider the system to be stable as soon as the gap function and the average reach a plateau . ] in fig . [ fig2 ] the time series of average values and the gap function of are plotted for different number of species in the ecology . the time to reach the stable state depends strongly on the size : for , the largest system in our simulations , we need approximately mutations to achieve the equilibrium . note also that a simple rescaling , , leads to a collapse of these curves . the relaxation times in the bs model are , approximately , one order of magnitude lower compared to the libs model of the same size ( or in rescaled time ) . a snapshot of the grid in the stable configuration is shown in fig . [ fig3 ] . we notice immediately that that the local fitness is no longer distributed like a step function ( as for bs ) . rather a long , exponential , tail of low fitness is evident , as shown in fig . [ fig4 ] ( left ) . the cells with a higher local fitness still have a greater probability to survive but the global - fitness , or the presence of environmental partnerships , widens the possibility of survival , even for some species with a lower degree of self - fitness . if we examine the global - fitness , a single avalanche is present as in the classical bs model . moreover the probability distribution function for the avalanche duration , shown in fig . [ fig5 ] and computed with respect to , is power law distributed , in relation with the criticality of the model . the index of the distribution turns out to be different from that of bs : the change in the dynamics has also led to a change in the universality class of the model . 
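A minimal numerical sketch of the LIBS dynamics described above is given below. The ring topology, the uniform (0,1) draws and the extremal update on the global fitness follow the text; exactly which link weights are refreshed at each mutation is stated only loosely in the paper, so redrawing all four weights attached to the extremal site (both directions to each neighbour) is an assumption of this sketch, as are the system size and the number of updates.

```python
# LIBS model on a 1D ring: extremal dynamics on the global fitness
# F_i = f_i + a[i-1, i] * f_{i-1} + a[i+1, i] * f_{i+1}.
import numpy as np

rng = np.random.default_rng(1)
N, n_steps = 256, 100_000

f = rng.random(N)                       # self-fitness of each species
# a[j, i]: fraction of species j's fitness contributed to the global fitness of i
a = rng.random((N, N))                  # only nearest-neighbour entries are used

def global_fitness(f, a):
    left, right = np.roll(f, 1), np.roll(f, -1)
    a_left = a[np.roll(np.arange(N), 1), np.arange(N)]     # a[i-1, i]
    a_right = a[np.roll(np.arange(N), -1), np.arange(N)]   # a[i+1, i]
    return f + a_left * left + a_right * right

min_trace = []
for _ in range(n_steps):
    F = global_fitness(f, a)
    m = int(np.argmin(F))               # extremal site: minimal *global* fitness
    min_trace.append(F[m])
    f[m] = rng.random()                 # redraw self-fitness of the extremal species
    for nb in ((m - 1) % N, (m + 1) % N):
        a[nb, m] = rng.random()         # redraw link weights attached to site m
        a[m, nb] = rng.random()         # (assumption: both directions are refreshed)

# Gap function: running maximum of the minimal global fitness (cf. the text).
gap = np.maximum.accumulate(min_trace)
print("final gap value:", gap[-1], "(theoretical range of F is (0, 3))")
```

Since the neighbours' self-fitnesses are never overwritten directly, only the coupling coefficients, the low-fitness tail discussed above survives much longer than in the original BS dynamics, which is the qualitative behaviour the paper reports.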
The distribution of global fitness, shown in fig. [fig4] (right), has a polynomial decay (4th-order fit in the plot) above a critical threshold, as a result of the convolution of stochastic variables. A similar behaviour can also be found in the size distribution of firms, suggesting a possible practical application of the LIBS model. Axtell analyzed the size distribution of U.S. companies, defined as the total number of employees, during 1997. He found that it could be well represented by a Zipf distribution in the size of the firm. This seemed to confirm the validity of "Gibrat's principle", according to which the growth rate of a firm is independent of its size. Further investigation of this issue has been carried out by Gaffeo et al., who analyzed a database of companies for the G7 countries from 1987 to 2000. Their analysis could confirm the findings of Axtell, a power-law distribution, only for some particular definitions of firm size, which is not an unambiguous quantity, and for some particular business periods. What they found, in general, is a robust power-law behaviour, although the index could change with the time window analyzed and with the definition of firm size, in contrast with the standard theory of Gibrat. The qualitative discrepancies between the global-fitness distribution and the distribution empirically found for the size of firms can be explained if we take into account more complex topologies for the interactions between species, or economic entities in this case. Different kinds of convolution can generate a different shape in the distribution of the global fitness, as can easily be deduced by writing eq. [global] in a general form in which the sum is extended over all the neighbours of the species. In eq. [global_gen] no particular topology is specified. For an isotropic model on a d-dimensional lattice, the number of neighbours is equal for all the species and depends only on the dimension and on the definition of neighbourhood: the theoretical boundaries for the global fitness are the same for all species and we can talk about a "democratic" model. However, recent studies have shown that, in real biological and social systems, the number of links per elementary unit is not constant, but is characterized by a non-trivial probability distribution function, as a result of the complex nature of the interactions between species or individuals. From eq. [global_gen] we can immediately see that, by adopting a complex network as the underlying structure for the interactions between species, we move to a model in which each species has a different upper bound for its global fitness, determined by its number of connections. This has a straightforward interpretation: species with a large number of connections have a higher barrier against environmental changes, because they can rely on numerous resources. A simple way to obtain a complex network structure is to consider an open system, where the number of species is not fixed but grows in time, as, for example, the firms in a dynamic economy. In this case, new economic entities are more likely to connect with a well-established one that already has a large number of connections (growth and preferential attachment are in fact the two main ingredients of the Albert-Barabási model for scale-free networks), as illustrated by the sketch below.
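The generalised global fitness of eq. [global_gen] is easy to explore numerically on a complex network. The sketch below uses `networkx` (an added dependency, not used in the paper) to draw a Barabási-Albert graph, assigns uniform self-fitnesses and asymmetric link weights, and compares the global fitness of hubs against low-degree nodes; the graph parameters are illustrative assumptions.

```python
# Global fitness F_i = f_i + sum_j a[j, i] * f_j over the neighbours j of i,
# evaluated on a scale-free (Barabasi-Albert) interaction network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.barabasi_albert_graph(n=1000, m=2, seed=2)

f = rng.random(G.number_of_nodes())                      # self-fitness
w = {(u, v): rng.random() for u, v in G.edges()}         # directed weight u -> v
w.update({(v, u): rng.random() for u, v in G.edges()})   # and v -> u (asymmetric)

def global_fitness(i):
    return f[i] + sum(w[(j, i)] * f[j] for j in G.neighbors(i))

F = np.array([global_fitness(i) for i in G.nodes()])
deg = np.array([G.degree(i) for i in G.nodes()])

hubs = deg >= np.percentile(deg, 95)
print("mean F of hubs:        ", F[hubs].mean())
print("mean F of other nodes: ", F[~hubs].mean())
```

Because the upper bound of F_i grows with the degree (each neighbour contributes at most one unit), highly connected nodes enjoy a larger buffer against the extremal selection, which is the "hub" effect discussed next.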
according to our model these `` hubs '' , that is companies such as general motors or coca cola , have an higher chance of surviving a turbulent period compared to isolated nodes : they have a larger influence in the dynamics of the model . this simple consideration , although not exhaustive , show how the underling topology of the interactions can play an important role in the final distribution of the global fitness . further analytical and numerical test would be of great importance in order to understand the dynamics of the libs model and to which extend it can be applied to real systems . it is also worth pointing out that another parameter related to is the domain of itself , that , in the present , case we assume to be uniform in the interval ( 0,1 ) . in fact , a change in this distribution , while preserving the dynamics of the model , would lead to a different shape in the final distribution of . while these important issues will be addressed in our future work , in appendix we report a further extension of the model where also the second - nearest neighbour interaction is considered . the results obtained with the libs model confirm the relevance of self - organized criticality in complex systems and , in particular , economics . the concept of mutual cooperation , introduced via the global - fitness , can explain the ubiquity of broad tails in the distribution of characteristic quantities of physical and social systems in terms of a convolution of variables between elements of the network of interaction . in the economic context , this asymptotic behaviour can be related to the empirical findings concerning the distribution of the size of firms . the possible relevance of self - organized criticality in economics has already been suggested by recent theoretical and empirical studies , while possible applications of the bs model in this field can be found in refs . . the application of the soc concept to social sciences can , in general , be motivated by empirical observations of the `` intermittent '' activity in the human dynamics at every level , from wars to revolutions and , in particular , intellectual production where moments of frenetic activity can be alternated by long breaks , which length can not be predicted ( this holds , indeed , for one of the authors , m.b . ) . this process is , in a way , similar to the discharge , via avalanches , needed in the classical sandpile model , to restore the critical slope . in a real economic world a wide series of changes , similar to avalanches , can be triggered by exogenous or endogenous shocks related to structural changes at macroeconomic level , for example the creation and successive enlargement of the european community or the fall of the soviet empire , or at microeconomic level , as the invention of a new technology . since the shocks leading to avalanches are of different nature , we also expect the existence of different time scales involved in the self - organization process . in soc systems , in general , the existence of a sharp separation between time scales , energy storage and relaxation , appears to be a strict prerequisite . in the bs model , as in the libs model , by mutating one unit at the time , we implicitly assume that the time to extinction , , of a species depends exponentially on its global fitness , that is . the exponential separation of the extinction times is at the core of the `` punctuation '' . 
In economic terms we can still assume this behaviour: changes of poorly fitted firms can simply be related to small microeconomic fluctuations that can happen on time scales of the order of weeks or months, while much longer times are needed to change the fitness of a highly adapted company. In the latter case radical changes are needed, for example a switch from one political regime to another, which may take centuries to happen. In conclusion, we have extended the Bak-Sneppen model for biological evolution by explicitly introducing local interactions between elements of the ecology. Numerical simulations have shown how the dynamics of the model, while still leading to a self-organized critical state, can be largely affected by the environmental forces, leading to smoother distributions in both the intrinsic fitness and the global fitness. As already pointed out by Grassberger, the BS model cannot be taken too seriously as a description of the punctuated equilibrium of biological evolution. Nevertheless, because of its simplicity, it can easily be used as a paradigm for other complex systems. In the present work we suggest a possible application of our extension of the BS model to the economic world. In particular, the distribution of global fitness can be related to the size distribution of firms in the most developed markets. In this respect the evolution of firms is seen as a punctuated-equilibrium process in which the convolution of mutual interests can justify the spreading in size of the firms themselves. It is worth pointing out that the actual shape of the distribution of global fitness is related to the topology of the interaction. A simple 1D model cannot be expected to account completely for the power-law distribution observed in the size of firms. Future work will be devoted to the application of the model proposed in this paper to more complex topologies, for example scale-free networks, which are more likely to reproduce the real interactions between economic entities.
in the analysis of infections with multiple strains simultaneously co - circulating in a population , an important role is played by antigenic diversity , where hosts can be infected multiple times with antigenically different strains of the same parasite , which allows the parasite to maintain its presence in the host population ( craig and scherf 2003 , lipsitch and ohagan 2007 ) .major examples of pathogens employing antigenic diversity as a strategy of immune escape include malaria ( gupta et al 1994 , recker et al 2004 ) , meningitis ( gupta and anderson 1998 , gupta et al .1996 ) , dengue ( gog and grenfell 2002 , recker et al 2009 ) , and influenza ( earn et al 2002 , ferguson et al 2003 , smith et al 1999 ) . from the perspective of interactions between different strains, one can distinguish between two major types of strain interactions : _ ecological interference _ where a host infected with one strain is removed from the population susceptible to other strains ( levin et al 2004 , rohani et al 2003 ) , and _ immunological interference _, where infection with one strain may confer partial or full immunity to other strains ( gupta and anderson 1998 ) or lead to enhancement of susceptibility or transmissibility of other strains , as is the case for dengue ( recker et al 2009 ) and hpv ( elbasha and galvani 2005 ) .the underlying mechanism of cross - immunity is generic for all pathogens : an infection with one strain of a pathogen elicits a lasting immune memory protecting the host against infections with other immunologically related strains . in terms of analysis of the dynamics of multi - strain diseases , in the last twenty years a significant number of mathematical models have been put forward that aim to explore and explain different aspect of interactions between multiple strains . in terms of implementation , one can divide these models into agent- or individual - based models and equation - based models . for the first class of models , pathogen strains are treated as individuals interacting according to some prescribed rules ( buckee et al .2004 , buckee and gupta 2010 , cisternas et al 2004 , ferguson et al 2003 , sasaki and haraguchi 2000 , tria et al 2005 ) , which allows for efficient stochastic representation of immunological interactions but does provide an intuition arising from analytical tractability .the second class of models provides two alternative treatments of cross - immunity between strains , known as history - based and status - based approaches . in history - based models ,the hosts are grouped according to what strains of a pathogen they have already been infected with , and transitions between different compartments , which corresponds to infection with other strains , occur at rates depending on the strength of cross - protection between strains ( andreasen et al 1996 , andreasen et al 1997 , castillo - chavez et al 1989 , gomes et al 2002 , gupta et al 1998 , gupta et al 1996 , lin et al 1999 ) . on the other hand , in status - based modelsthe hosts are classified not based on their previous exposures to individual strains but rather by their immune status , i.e. the set of strains to which a given host is immune ( gog and grenfell 2002 , gog and swinton 2002 , kryazhimsky et al 2007 ) .once in a particular immune compartment , upon infection with a new strain individuals move to other immune compartments at rates determined by the probabilities of acquiring cross - immunity against other strains . 
in this approach ,partial cross - immunity can make some hosts become completely immune whilst other hosts will not gain immunity from the same exposure - this is known as _ polarized immunity _( gog and grenfell 2002 ) and is equivalent to an alternative formulation used in the analysis of effects of vaccination ( smith et a 1984 ) .since different strains of a pathogen form as a result of some common genetic process , they inherit immunological characteristics associated with this process .a convenient tool quantifying the degree of immunological relatedness between different strains arising from their antigenic structure is the _antigenic distance _ between strains , which can take into account antigenic structure as determined by the configuration of surface proteins , as well as the difference in antibodies elicited in response to infection with another genotype ( gupta et al 2006 , smith et al 1999 , smith et al 2004 ) .conventionally , one assumes that the larger is the antigenic distance between two strains , the smaller is the level of cross - immunity between them . in mathematical models of multi - strain diseases , one of the effective ways to include antigenic distance is to use a multi - locus system ( gupta et al 1998 , gupta et al 1996 ) , where each strain is represented by a sequence of loci with alleles in each locus , thus resulting in a discrete antigenic space (some authors have considered similar set - up in a continuous one - dimensional antigenic space ( adams and sasaki 2007 , andreasen et al 1997 , gog and grenfell 2002 , gomes et al 2002 ) . in this approach , for any two given strains, the number of locations at which their sequences are identical determines their immunological relatedness , which is taken as a proxy measure of cross - immunity ( calvez et al 2005 , cobey and pascual 2011 , ferguson and andreasen 2002 , gupta et al 1998 , minaev and ferguson 2009 , tria et al 2005 ) .alternatively , it is possible to map each genotype to a point in antigenic space ( koelle et al 2006 , recker et al 2007 ) and then separately introduce a function that determines the strength of cross - immunity between strains based on their antigenic distance ( adams and sasaki 2007 , andreasen 1997 , gog and grenfell 2002 , gomes et al 2002 ) .whilst significant progress has been made in the analysis of generic features of multi - strain models and possible types of dynamics they are able to exhibit , the effects of symmetry , which is present in many of the models , have remained largely unexplored .andreasen et al ( 1997 ) have considered a multi - strain epidemic model with partial cross - immunity between strains .they analysed stability of the boundary equilibria representing symmetric steady states with only immunologically unrelated strains present , and also showed that the internal endemic equilibrium can undergo hopf bifurcation giving rise to stable periodic oscillations .furthermore , these authors also demonstrated how this periodic orbit can disappear in a global bifurcation involving a homoclinic orbit through a two - strain equilibrium .this work was later extended to a system of three linear - chain strains ( lin et al 1999 ) , and again the existence of sustained oscillations arising from a hopf bifurcation of internal endemic equilibrium was shown . 
dawes and gog ( 2002 ) have considered a generalised model of an sir dynamics with four co - circulating strains and studied possible bifurcations leading to the appearance of periodic behaviour by performing bifurcation unfolding in the regime when the basic reproductive number very slightly exceeds unity .more recently , chan and yu ( 2013a , b ) have used groupoid formalism to analyse symmetric dynamics in models of antigenic variation and multi - strain dynamics , and they have also demonstrated the emergence of steady state clustering as a result of symmetry properties of the system .blyuss ( 2013 ) has investigated symmetry properties in a model of antigenic variation in malaria ( see also blyuss and gupta ( 2009 ) for analysis of other related dynamical features ) , and blyuss and kyrychko ( 2012 ) have extended this analysis to study the effects of immune delay on symmetric dynamics .in this paper we use the techniques of equivariant bifurcation theory to systematically study stability of steady states and classification of different types of periodic behaviour in a multi - strain model . using a classical multi - locus model of gupta et al ( 1998 ) as an example, we will illustrate how the symmetry in the interactions between strains can provide a handle on understanding steady states and their stability , as well as the emergence of symmetry - breaking periodic solutions .the outline of this paper is as follows . in the next sectionwe introduce the specific model to be used for analysis of symmetries in models of multi - strain diseases and discuss its basic properties .section 3 contains the analysis of steady states and their stability with account for underlying symmetry of the model . in sect .4 different types of dynamical behaviours in the model are investigated and classified in terms of their symmetries .section 5 illustrates how a similar methodology can be used for studying other types of multi - strain models .the paper concludes in sect . 6 with discussion of results and future outlookin order to study the effects of symmetry on dynamics in multi - strain models , we consider a multi - locus model proposed by gupta et al ( 1998 ) . in this model , denotes a proportion of population who are immune to strain , i.e. those who have been or are currently infected with the strain , is the fraction of population who are currently infectious with the strain , and is the proportion of individuals who have been infected ( or are currently infected ) by a strain antigenically related to the strain , including itself ( with ) .the model equations can then be written as -(\mu+\sigma)y_i,}\\\\ \displaystyle{\frac{dz_i}{dt}=\lambda_i(1-z_i)-\mu z_i,}\\\\ \displaystyle{\frac{dw_i}{dt}=(1-w_i)\sum_{j\sim i}\lambda_j-\mu w_i , } \end{array}\ ] ] where is the force of infection with strain defined as , where is the transmission rate assumed to be the same for all strains , and are the average host life expectancy and the average period of infectiousness , respectively , and is the cross - immunity , giving the reduction in transmission probability conferred by previous infection with one strain . in terms of disease transmission ,the population is assumed to be randomly mixed , and upon recovery from infection with a particular strain , the immunity to that strain is lifelong . to characterize strains and their immunological interactions ,each strain is described by a sequence of antigens consisting of loci , with , , alleles at each locus , so that . 
in system ( [ gfasys ] ) , expression refers to all strains sharing alleles with strain . in the simplest non - trivial case of a two locus - two allele system represented by alleles and at one locus , and and at the other, we have a system of four antigenically distinct strains as shown in fig .[ strains ] .a simple but justifiable assumption about such system is that as a consequence of immune selection , infection , for instance , with strain will have a negative impact on transmission of strains and but will have no impact on transmission of the strain , as they are completely immunologically distinct ( gupta et al 1996 ) .hence , when considering the equation for the strain , the sum in the right - hand side will include contributions from strains , and but will exclude strain . in order to quantify interactions between different strains , it is convenient to introduce an connectivity matrix , whose entries indicate whether or not two strains are antigenically related .if the antigenic distance between strains is not taken into account , the entries of the matrix would be zeros if the two variants are immunologically completely distinct , and ones if they are related .several papers have considered how one can make such a description more realistic by including antigenic distance between different strains , which can be done by using , for instance , the hamming between two strings representing alleles in the locus of each strain ( adams and sasaki 2009 , cobey and pascual 2011 , gog and grenfell 2002 , gomes et al 2002 , recker and gupta 2005 ) .since we are primarily interested in the symmetry properties of the interactions between different strains , we will not consider the effects of antigenic distance on the dynamics . before proceeding with the analysis of this system , one can reduce the number of free parameters by scaling time with the average infectious period , and we also introduce the basic reproductive ratio and the ratio of a typical infectious period to a typical host lifetime . using the connectivity matrix and the new parameters , the system ( [ gfasys ] ) can be rewritten as follows -y_i,}\\\\ \displaystyle{\frac{dz_i}{dt}={\lambda}_i(1-z_i)-ez_i,}\\\\\displaystyle{\frac{dw_i}{dt}=(1-w_i)(a{\bf \lambda})_i - ew_i , } \end{array}\ ] ] where and .for the particular antigenic system shown in fig .[ strains ] , if one enumerates the strains as follows , the corresponding connectivity matrix is given by the construction of the connectivity matrix can be generalized to an arbitrary number of loci and alleles .the above system has to be augmented by appropriate initial conditions , which are taken to be it is straightforward to show that with these initial conditions , the system ( [ sys ] ) is well - posed in that its solutions remain non - negative for all time .system ( [ sys ] ) has a large number of biologically realistic steady states .as expected , the trivial steady state is unstable when the basic reproductive ratio exceeds unity . in order to systematically study other steady states and their stability , as well as to illustrate howthe methods of equivariant bifurcation theory can be employed to obtain useful insights into stability and dynamics of the system , we concentrate on a specific connectivity matrix given in ( [ amat ] ) that corresponds to a two locus - two allele system ( [ var4 ] ) . 
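A compact numerical sketch of the rescaled multi-locus system ([sys]) is given below (Python with `numpy`/`scipy`). The connectivity matrix is built exactly as described: two strains are related whenever they share at least one allele, and every strain is related to itself. The right-hand side uses the force of infection lambda_i = R0 * y_i in rescaled time and the standard Gupta-Ferguson-Anderson (1998) structure for the y, z, w compartments; because parts of the displayed equations are garbled in this copy, treat the precise form of the y-equation as a reconstruction consistent with the steady-state relations quoted below, and the parameter values and initial conditions as illustrative.

```python
# Multi-locus multi-strain model (Gupta et al. 1998 structure), rescaled time.
# Per strain i: y_i (infectious), z_i (exposed to strain i), w_i (exposed to a related strain).
import numpy as np
from itertools import product
from scipy.integrate import solve_ivp

def build_strains_and_connectivity(alleles_per_locus=(2, 2)):
    strains = list(product(*[range(k) for k in alleles_per_locus]))
    n = len(strains)
    A = np.zeros((n, n), dtype=int)
    for i, si in enumerate(strains):
        for j, sj in enumerate(strains):
            # related if they share at least one allele (a strain is related to itself)
            A[i, j] = int(any(a == b for a, b in zip(si, sj)))
    return strains, A

strains, A = build_strains_and_connectivity((2, 2))
n = len(strains)
R0, gamma, e = 4.0, 0.7, 0.02   # illustrative: reproduction number, cross-immunity,
                                 # ratio of infectious period to host lifetime

def rhs(t, u):
    y, z, w = u[:n], u[n:2 * n], u[2 * n:]
    lam = R0 * y                 # force of infection per strain (rescaled)
    dy = lam * ((1 - w) + (1 - gamma) * (w - z)) - y
    dz = lam * (1 - z) - e * z
    dw = (A @ lam) * (1 - w) - e * w
    return np.concatenate([dy, dz, dw])

u0 = np.concatenate([1e-4 * (1 + 0.1 * np.arange(n)), np.zeros(n), np.zeros(n)])
sol = solve_ivp(rhs, (0, 2000), u0, rtol=1e-8, atol=1e-10)
print("connectivity matrix:\n", A)
print("final prevalences y_i:", sol.y[:n, -1])
```

Depending on the cross-immunity gamma, trajectories either settle onto the fully symmetric equilibrium or develop sustained oscillations, in line with the bifurcation analysis that follows.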
in this case , and the system ( [ sys ] ) is equivariant under the action of a dihedral group , which is an 8-dimensional symmetry group of a square .this group can be written as , and it is generated by a four - cycle corresponding to counterclockwise rotation by , and a flip , whose line of reflection connects diagonally opposite corners of the square , see fig .[ d4sym](a ) .the group has eight different subgroups ( up to conjugacy ) : , , and , as well as generated by a reflection across a diagonal , generated by a reflection across a vertical , generated by reflections across both diagonals , and generated by the horizontal and vertical reflections .finally , the group is generated by rotation by .the lattice of these subgroups is shown in fig .[ d4sym](b ) .the group has two other subgroups and , which will be omitted as they are conjugate to and , respectively .there is a certain variation in the literature regarding the notation for subgroups of , and we are using the convention adopted in golubitsky and stewart ( 2002 ) , c.f .( buono and golubitsky 2001 , golubitsky et al 1988 ) .the group has four one - dimensional irreducible representations ( fssler and stiefel 1992 , golubitsky and stewart 1986 ) .equivariant hopf theorem ( golubitsky et al 1988 , golubitsky and stewart 2002 ) states that under certain genericity hypotheses , there exists a branch of small - amplitude periodic solutions corresponding to each -_axial _ subgroup acting on the centre subspace of the equilibrium . to find out what type of periodic solution the fully symmetric steady state will actually bifurcate to , we can use the subspaces associated with the above - mentioned one - dimensional irreducible representations to perform an isotypic decomposition of the full phase space ( blyuss 2013 , swift 1988 ) .the find the fully symmetric steady state ( i.e. when all strains are exactly the same ) we can look for it in the form substituting this into ( [ sys ] ) gives the system of coupled equations =1,\\\\ry(1-z)-ez=0,\\\\ 3ry(1-w)-ew=0 .\end{array}\ ] ] the last two equations can be solved to yield and substituting these expressions into the first equation of ( [ fsss ] ) gives the quadratic equation for : -(r-1)e^2=0.\ ] ] this equation can only have a positive root for : ^ 2 + 12(r-1)}\right].}\ ] ] hence , the fully symmetric steady state is given by and it only exists for , which , expectedly , is exactly the condition of instability of the trivial steady state . for the fully symmetric steady state , the jacobian of linearization takes the block form where and are zero and unit matrices , and is the connectivity matrix ( [ amat ] ) . 
rather than compute stability eigenvalues directly from this matrix, we can use isotopic decomposition of the phase space to block - diagonalize this jacobian .we note that acts to permute indices of different strains , hence our phase space consists of three copies of the irreducible representations of .dellnitz and melbourne ( 1994 ) have shown earlier that the sub - spaces are -irreducible and give isotypic components of .using such decomposition on , and , the jacobian ( [ jace ] ) can be block - diagonalized in the following way ( golubitsky and stewart 1986 , swift 1988 ) : where the matrix is the transformation matrix based on the isotopic decomposition ( [ isd ] ) , and here , matrix is associated with self - coupling , and is associated with nearest - neighbour coupling .isotypic decomposition of the phase space results in representation of this space as a direct sum of three linear subspaces ( swift 1988 ) where , called ` even ' subspace , is the invariant subspace where all strains behave identically the same ; , known as ` odd ' subspace , has each strain being in anti - phase with its neighbours , and in the subspace , each strain is in anti - phase with its diagonal neighbour .-invariance of these subspaces implies that stability changes in the , and matrices describe a bifurcation of the fully symmetric steady state in the even , odd , and subspaces , respectively ( swift 1988 ) .prior to performing stability analysis , we recall the routh - hurwitz criterion , which states that all roots of the equation are contained in the left complex half - plane ( i.e. have negative real part ) , provided the following conditions hold ( murray 2002 ) the above cubic equation has a pair of purely imaginary complex conjugate eigenvalues when as discussed by farkas and simon ( 1992 ) .+ * theorem 1 .* _ the fully symmetric steady state is stable when _ _ where _,}\\ \displaystyle{k_2=yr\gamma(w-4)+3y(1+r^2\gamma w)+2\gammae(w-1)-e / r,}\\ \displaystyle{k_3=12ry(e^2+r^2y^2)+r^2 ye(1 - 2\gamma)+22r^2y^2e+2e^3}. \end{array}\ ] ] _ the steady state is unstable whenever any of the above conditions are violated ; it undergoes a hopf bifurcation in the odd subspace at _ , _ and a steady - state bifurcation at _ _ and _ . +the proof of this theorem is given in the appendix .+ the implication of the fact that the hopf bifurcation can only occur in the odd subspace of the phase space ( swift 1988 ) is that in the system ( [ sys ] ) the fully symmetric state can only bifurcate to an odd periodic orbit , for which strains and are synchronized and half a period out - of - phase with strains and , i.e. each strain is in anti - phase with its nearest antigenic neighbours . besides the origin and the fully symmetric equilibrium , the system ( [ sys ] ) possesses 14 more steady states characterized by a different number of non - zero strains .there are four distinct steady states with a single non - zero strain , which all have the isotropy subgroup or its conjugate .a representative steady state of this kind is with the other steady states , and being related to through elements of a subgroup of rotations .the values of , and are determined by the system of equations =1,\\\\ry_1(1-z_1)-ez_1=0,\\\\ r(1-w_1)y_1-ew_1=0 , \end{array}\ ] ] which can be solved to yield similarly to the fully symmetric steady state , the steady states with a single non - zero variant are only biologically feasible for .+ * theorem 2 . * _ all steady states _ , , , _ with one non - zero strain are unstable . 
_ + the proof of this theorem is given in the appendix .+ before moving to the case of two non - zero strains , it is worth noting that elements of the symmetry group representing reflections split into two distinct conjugacy classes : reflections along the diagonals of the square , and reflections along horizontal / vertical axes .these two conjugacy classes are related by an outer automorphism , which can be represented as a rotation through , which is a half of the minimal rotation in the dihedral group ( golubitsky et al 1988 ) .now we consider the case of two non - zero strains , for which there are exactly six different steady states . the steady states with non - zero strains being nearest neighbours in fig .( [ strains ] ) , i.e. ( 1,2 ) , ( 2,3 ) , ( 3,4 ) and ( 1,4 ) , form one cluster : while the steady states with non - zero strains lying across each other on the diagonals , i.e. ( 1,3 ) and ( 2,4 ) , are in another cluster the difference between these two clusters of steady states is in the above - mentioned conjugacy classes of their isotropy subgroups : the isotropy subgroup of the first cluster belongs to a conjugacy class of reflections along the horizontal / vertical axes , with a centralizer given by , and the isotropy subgroup of the second cluster belongs to a conjugacy class of reflections along the diagonals , with a centralizer given by . substituting the general expression for the steady state into the system ( [ sys ] )shows that the values of , , and are determined by the following system of equations =1,\\\\ry_2(1-z_2)=ez_2,\\\\ ry_2(1-w_{21})=ew_{21},\\\\ 2ry_2(1-w_{22})=ew_{22}. \end{array}\ ] ] the last three equations of this system can be solved in a straightforward way to give and substituting this into the first equation of the system gives the quadratic equation for - e^2(r-1)=0,\ ] ] with the solution ^ 2 + 8(r-1)}\right],\ ] ] and this solution is biologically feasible only for . in a very similar way ,substituting the expected form of the steady state into the system ( [ sys ] ) gives the following system of equations for , , , =1,\\\\ ry_3(1-z_3)=ez_3,\\\\ ry_3(1-w_{31})=ew_{31},\\\\ 2ry_3(1-w_{32})=ew_{32}. \end{array}\ ] ] once again , we first solve the last three equations to find and substituting them into the first equation of the above systems yields the value of as and one can note that this steady state is again only biologically feasible when . + * theorem 3 . * _ all steady states _ , , , , _ are unstable .steady states _ _ and _ , _ are stable for _ + _ and unstable otherwise_. + the proof of this theorem is given in the appendix .+ for three non - zero variants , we again have four different steady states having an isotropy subgroup or its conjugate , with a representative steady state being and the other steady states , and being related to through elements of a subgroup of rotations . 
substituting this form of the steady state into the system ( [ sys ] )shows that the different components of satisfy =1,\\\\ r[1-\gamma w_{42}-(1-\gamma)z_{41}]=1,\\\\ ry_{41}(1-z_{41})-ez_{41}=0,\\\\ ry_{42}(1-z_{42})-ez_{42}=0,\\\\ r(y_{41}+2y_{42})(1-w_{41})-ew_{41}=0,\\\\ r(y_{41}+y_{42})(1-w_{42})-ew_{42}=0,\\\\ 2ry_{42}(1-w_{43})-ew_{43}=0 .\end{array}\ ] ] solving this system in a manner similar to that for other steady states considered earlier yields }{r[r(1-\gamma)-1-ry_{42}]},\hspace{0.5cm}z_{41}=\frac{ry_{41}}{ry_{41}+e},\hspace{0.5cm}z_{42}=\frac{ry_{42}}{ry_{42}+e},}\\\\ \displaystyle{w_{41}=\frac{r(y_{41}+2y_{42})}{r(y_{41}+2y_{42})+e},\hspace{0.5cm}w_{42}=\frac{r(y_{41}+y_{42})}{r(y_{41}+y_{42})+e},\hspace{0.5 cm } w_{43}=\frac{2ry_{42}}{2ry_{42}+e } } , \end{array}\ ] ] and is a positive root of the quartic equation ^2+\gamma(r-\gamma)(\gamma-1)}\\\\ \displaystyle{+\left[1+r^2 + 8r\gamma-2(r+\gamma)-r\gamma\left(6\gamma(1-r)+r(5 + 2\gamma^2)\right)\right]z=0 . }\end{array}\ ] ] it does not prove possible to find a closed form expression for the eigenvalues of linearization near , hence these eigenvalues have to be computed numerically .for all biologically realistic values of parameters we have studied , one of these eigenvalues is always positive , suggesting that a steady state ( and also , , ) is unstable .figure [ bifdia ] shows the bifurcation diagram for different steady states depending on the disease transmission rate and the cross - immunity . if , the only biologically feasible steady state is the disease - free equilibrium , and it is stable .when , the other steady states with different numbers of non - zero strains are also biologically feasible . for sufficiently small values of cross - immunity ,the fully symmetric steady state is the only stable steady state , and as increases , this steady state loses its stability either via hopf bifurcation or via a steady - state bifurcation .when the fully symmetric steady state undergoes hopf bifurcation , it gives rise to a stable anti - phase periodic orbit , however as is increased , this periodic orbit disappears via a global bifurcation upon collision with two steady states and ; such behaviour has been observed by dawes and gog ( 2002 ) who performed a very detailed bifurcation analysis of the case .figure [ bifdia ] also shows that although a large number of different non - trivial steady states may exist for , when the cross - immunity between strains is close to one , this will make it impossible for the immunologically closest strains to simultaneously survive , thus resulting in the fact that the only stable steady states in this regime are `` edge '' equilibria and with antigenically unrelated strains present ( dawes and gog 2002 ) .in the previous section we studied stability of different steady states of the system ( [ sys ] ) and found conditions under which a fully symmetric steady state can undergo hopf bifurcation , giving rise to a stable anti - phase periodic solution .now we look at the evolution of this solution and its symmetries under changes in system parameters . for convenience , we fix all parameters except for the cross - immunity , which is taken to be a control parameter . the results of numerical simulations are presented in fig . [ dyn_fig ] . when is sufficiently small , the fully symmetric steady state is stable , as shown in fig .[ dyn_fig](a ) . 
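The bifurcation picture just described can also be probed numerically without the closed-form eigenvalues: locate the fully symmetric steady state from the reduced (y, z, w) equations, build a finite-difference Jacobian of the full 12-dimensional vector field, and track the largest real part of its spectrum as the cross-immunity gamma is varied. The sketch below reuses the model structure of the earlier code block and therefore inherits the same assumptions about the exact form of the equations, the parameter values and the root-finder initial guess; a Hopf bifurcation is signalled by a complex-conjugate pair crossing the imaginary axis.

```python
# Stability of the fully symmetric steady state of the 4-strain system versus gamma.
import numpy as np
from scipy.optimize import fsolve

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]])           # two-locus / two-allele connectivity matrix
n, R0, e = 4, 4.0, 0.02                # illustrative parameters, as in the sketch above

def rhs(u, gamma):
    y, z, w = u[:n], u[n:2 * n], u[2 * n:]
    lam = R0 * y
    dy = lam * ((1 - w) + (1 - gamma) * (w - z)) - y
    dz = lam * (1 - z) - e * z
    dw = (A @ lam) * (1 - w) - e * w
    return np.concatenate([dy, dz, dw])

def symmetric_steady_state(gamma):
    def reduced(v):
        y, z, w = v
        lam = R0 * y
        return [lam * ((1 - w) + (1 - gamma) * (w - z)) - y,
                lam * (1 - z) - e * z,
                3 * lam * (1 - w) - e * w]     # each strain has 3 antigenic relatives
    y, z, w = fsolve(reduced, [0.01, 0.5, 0.8])
    return np.concatenate([np.full(n, y), np.full(n, z), np.full(n, w)])

def numerical_jacobian(u, gamma, h=1e-7):
    J = np.zeros((3 * n, 3 * n))
    f0 = rhs(u, gamma)
    for k in range(3 * n):
        du = np.zeros(3 * n); du[k] = h
        J[:, k] = (rhs(u + du, gamma) - f0) / h
    return J

for gamma in np.linspace(0.3, 0.9, 13):    # illustrative scan range
    u_star = symmetric_steady_state(gamma)
    eigs = np.linalg.eigvals(numerical_jacobian(u_star, gamma))
    lead = eigs[np.argmax(eigs.real)]
    print(f"gamma={gamma:.2f}  max Re(eig)={lead.real:+.5f}  |Im|={abs(lead.imag):.5f}")
```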
as crosses the threshold of hopf bifurcation as determined by * theorem 1 * , the fully symmetric steady state loses stability , giving rise to an odd periodic solution illustrated in fig .[ dyn_fig](b ) , where strains 1 and 3 are oscillating in complete synchrony and exactly half a period out of phase with strains 2 and 4 which also oscillate in synchrony .figures [ dyn_fig](c)-(e ) show that for higher values of , the periodic solution remains stable and retains its symmetry but changes the temporal profile .for very large values of , this periodic orbit becomes unstable , and the system tends to a steady state with isotropy subgroup , which is stable in the light of * theorem 3*. in this case , we conclude that the cross - immunity between any two strains which are immunologically closest to each other is so strong that it actually leads to elimination of one of these strains , thus creating a situation where two strains that are most immunologically distant survive , and the other two strains are eradicated .it is worth mentioning that due to the symmetry between the strains , there is no inherent preference for survival of the or pair of strains . to classify the symmetry of other possible types of periodic solutions ,it is convenient to refer to the theorem , which uses information about individual spatial and spatio - temporal symmetries of periodic solutions ( buono and golubitsky 2001 , golubitsky and stewart 2002 ) . to use this method, we note that due to -equivariance of the system ( [ sys ] ) and uniqueness of its solutions , it follows that for any -periodic solution and any element of the group , one can write for some phase shift . the pair is called a _ spatio - temporal symmetry _ of the solution , and the collection of all spatio - temporal symmetries of forms a subgroup .one can identify with a pair of subgroups , and , such that .we also define here , consists of the symmetries that fix at each point in time , while consists of the symmetries that fix the entire trajectory . under some generic assumptions on and ,the theorem states that periodic states have spatio - temporal symmetry group pairs only if is cyclic , and is an isotropy subgroup ( buono and golubitsky 2001 , golubitsky and stewart 2002 ) .the theorem was originally derived in the context of equivariant dynamical systems by buono and golubitsky ( 2001 ) , and it has subsequently been used to classify various types of periodic behaviours in systems with symmetry that arise in a number of contexts , from speciation ( stewart 2003 ) to animal gaits ( pinto and golubitsky 2006 ) and vestibular system of vertebrates ( golubitsky et al 2007 ) . from epidemiological perspective , the spectrum of behaviours that can be exhibited in the case of symmetry is quite limited , as it only includes a fully symmetric steady state , a steady state with two non - zero strains , and an anti - phase periodic orbit having a spatio - temporal symmetry with spatio - temporal symmetry . in order to explore other possible dynamical scenarios ,we extend the strain space by assuming that the system ( [ sys ] ) has three alleles in the first locus and two alleles in the second locus .this gives the symmetry group , which is isomorphic to a group - dihedral symmetry group of a triangular prism .the results of numerical simulations for such system of strains are shown in fig .[ gfa6_fig ] . 
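the spatio-temporal symmetry of a computed periodic orbit, such as the anti-phase solution described above, can be checked directly from simulated time series; the same check applies to the six-strain solutions discussed next. the following sketch assumes the infectious compartments have been sampled over exactly one period on a uniform grid; the permutation, tolerance and variable names are illustrative.

```python
import numpy as np

def check_spatiotemporal_symmetry(y, perm, shift_fraction, tol=1e-2):
    """Check whether permuting strains and shifting time by a fraction of the
    period leaves the sampled orbit unchanged.

    y              : array of shape (n_samples, n_strains), one period (endpoint not repeated)
    perm           : candidate spatial permutation of the strain indices
    shift_fraction : temporal shift as a fraction of the period
    """
    shift = int(round(shift_fraction * y.shape[0]))
    y_shifted = np.roll(y, -shift, axis=0)          # y evaluated at t + shift
    residual = np.linalg.norm(y[:, perm] - y_shifted) / np.linalg.norm(y)
    return residual < tol, residual

# anti-phase orbit of the four-strain model: strains 1 and 3 synchronous and half a
# period out of phase with strains 2 and 4, so the cyclic relabelling (1 2 3 4)
# combined with a half-period shift should be a spatio-temporal symmetry:
# ok, res = check_spatiotemporal_symmetry(y, perm=[1, 2, 3, 0], shift_fraction=0.5)
```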
for sufficiently small value of , the system again supports a stable fully symmetric steady state in a manner similar to the case of symmetry .however , when exceeds certain threshold , this steady state undergoes hopf bifurcation , giving rise to a periodic solution , which is a discrete travelling wave with the symmetry , as shown in fig .[ gfa6_fig](c ) . in this dynamical regimeall variants appear sequentially one after another with one sixth of a period difference between two neighbouring variants . from the perspective of equvariant bifurcation theory ,this solution is generic since the group is always one of the subgroups of the group for the ring coupling , or the group for an all - to - all coupling , and its existence has already been extensively studied ( aronson et al 1991 , golubitsky and stewart 1986 , golubitsky et al 1988 ) . from the epidemiological point of view, this is an extremely important observation that effectively such solution , which represents sequential appearance of antigenically related strains of infection , owes its existence not so much to the individual dynamics of the strains , but rather to the particular symmetric nature of cross - reactive interactions between them . as the value of increases , the discrete travelling wave transforms into a quasi - periodic solution , and then to a chaotic solution , where different strains appear in no particular order , and the temporal dynamics of each of them is chaotic , as illustrated in fig .[ gfa6_fig](d ) . for higher values of ,the dynamics becomes periodic again , albeit with a different type of spatio - temporal symmetry , given by , where is a reflection symmetry with respect to a plane going through the edges 2 and 5 , as well as mid - points of the sides 1 - 3 and 4 - 6 .as increases further still , the system tends to a stable steady state having the symmetry .this steady state is similar to the case of symmetry considered earlier in that it contains three non - zero strains , with maximal possible antigenic distance between them .the approach developed in the previous section is sufficiently generic and can be applied to the analysis of a variety of different models for multi - strain diseases , where the existence of a degree of cross - protection ( or cross - enhancement ) between antigenically distinct strains results in a certain symmetry of strain interactions , which then translates into different types of periodic dynamics .epidemiological data and mathematical models suggest that such systems may exhibit a wide range of behaviours , from no strain structure ( nss ) , which represents a system approaching a stable steady state , through the discrete or cyclic strain structure ( css ) , where the systems demonstrates single strain dominance and sequential trawling through the whole antigenic repertoire , to a chaotic strain structure .various aspects of the overlapping antigenic repertoires have already been investigated in a number of models , but so far the effects of symmetry in such systems have remained largely unexplored . as an illustration, we now use symmetry perspective to analyse simulation results in two different multi - strain models . 
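the symmetric structure that underlies these solutions is fixed entirely by which strains share alleles. before turning to the two models, a minimal sketch of how the antigenic-overlap pattern of a two-locus system (here three alleles at the first locus and two at the second, i.e. the six strains discussed above) can be constructed, and how a candidate relabelling of strains can be tested as a symmetry of that pattern; the representation of strains as allele tuples is an illustrative choice.

```python
import itertools
import numpy as np

def strain_space(n1, n2):
    """All strains as (allele at locus 1, allele at locus 2) pairs."""
    return list(itertools.product(range(n1), range(n2)))

def overlap_matrix(strains):
    """A[i, j] = number of shared alleles between strains i and j (i != j)."""
    n = len(strains)
    A = np.zeros((n, n), dtype=int)
    for i, s in enumerate(strains):
        for j, t in enumerate(strains):
            if i != j:
                A[i, j] = sum(a == b for a, b in zip(s, t))
    return A

def is_symmetry(A, perm):
    """True if relabelling strains by perm leaves the overlap pattern unchanged."""
    P = np.eye(len(perm), dtype=int)[perm]
    return np.array_equal(P @ A @ P.T, A)

strains = strain_space(3, 2)            # six strains
A = overlap_matrix(strains)
# cycling the three alleles at the first locus permutes the strains but preserves A
perm = [strains.index(((s[0] + 1) % 3, s[1])) for s in strains]
print(is_symmetry(A, perm))             # expected: True
```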
in the first model ,analysed by calvez et al ( 2005 ) , each strain is characterized by a combination of alleles at immunologically important loci , and the strength of cross - immunity between different strains increases with the number of alleles they share .after some rescaling , the model for such a system can be written in the form , } \end{array}\ ] ] where is the fraction of individuals who have never been infected with the strain , is the faction of individuals who have never been infected with any strain sufficiently close to strain including strain itself , is the rescaled fraction of individuals currently infectious with strain , and , is the host life expectancy , is an average period of infectiousness , is the transmission rate . assuming the probability of cross - protection between strains and to be ( i.e. infection with strain reduces the probability that the host will be infected by strain is ) , the force of infection is taken as when this system is considered with three loci and two alleles at each locus , this results in an eight - dimensional strain space , as illustrated in fig . [clus_fig](a ) .analysis of possible dynamics for such a strain space suggests that `` _ ... it is already not so clear why in the eight - strain system the cluster structure of second type ( two clusters of four strains ) appears _ '' ( calvez et al 2005 ) , which is the solution shown in fig .[ clus_fig](b)-(c ) .the authors found this tetrahedral solution unexpected , and indeed stated that `` _ this second type of clustering can hardly be expected a priori _ '' ( calvez et al 2005 ) . at the same time , when considered from the equivariant bifurcation theory perspective , system ( [ gfa ] ) has the octahedral symmetry , and therefore , has three maximal isotropy subgroups : the dihedral group , the permutation group , and a reflection group ( jiang et al 2003 , melbourne 1986 ) .hence , the bifurcation of a fully symmetric steady state into a tetrahedral periodic solution with symmetry should be naturally expected as a result of an equivariant hopf theorem and an underlying symmetric structure of the antigenic space ( fiedler 1988 , jiang et al 2003 ) .this example highlights the importance of including symmetry properties of multi - strain epidemic models into consideration of possible steady states and periodic orbits , as it provides a systematic approach to understanding what types of periodic solutions should be expected in the system from a symmetry perspective . as another example , we consider a model for the population dynamics of dengue fever , which is characterized by an infection with one of four serotypes co - circulating in population .one of the main current theories explaining the observed dynamics of dengue fever is that of antibody - dependent enhancement ( ade ) , whereby cross - reactive antibodies elicited by a previously encountered serotype bind to the newly infecting heterologous serotype , but fail to neutralize it .this leads to the development of dengue haemorrhagic fever ( dhf ) and dengue shock syndrome ( dss ) , characterized by up to 20% mortality rate ( gubler 2002 , halstead 2007 ) . 
in order to explain the observed temporal patterns of disease dynamics , recker et al ( 2009 ) have proposed a model , which assumes that a recovery from an infection with any one serotype is taken to provide permanent immunity against that particular serotype , but it can lead to an enhancement of other serotypes upon secondary infection after which individuals acquire complete immunity against all four serotypes . in this modelthe population is divided into the following classes : denotes the fraction of the population that has not yet been infected with any of the serotypes and is thus totally susceptible ; is the proportion infectious with a primary infection with serotype , is the proportion recovered from primary infection with serotype ; is the proportion infectious with serotype , having already recovered from infection with serotype ; and , finally , is the proportion of completely immune ( those who have recovered after being exposed to two serotypes ) .the model equations are given as follows where is the average host life expectancy , and is the average duration of infectiousness .the force of infection with a serotype , is given by where is the transmission coefficient of serotype , and the ade is represented by two distinct parameters : the enhancement of susceptibility to secondary infections , , and the enhancement of transmissibility during secondary infection , .although in this case the antigenic space again consists of four distinct serotypes , but unlike earlier examples of dihedral symmetry the system now has an symmetry of four nodes with an all - to - all coupling . for simplicity , it is assumed that all serotypes enhance each other in identical way , i.e. , and also transmissibility is enhanced in the same way , implying that . hence , we fix all other parameters , and vary and to explore possible dynamical regimes .figure [ den_fig ] illustrates different types of behaviour that can be exhibited by the system ( [ reck_eq ] ) as the enhancement of susceptibility and enhancement of transmissibility are varied . in the case when both and are sufficiently small ( equal to or just above 1 ) , the system approaches a stable fully symmetric steady state shown in fig .[ den_fig](a ) .as the enhancement of transmissibility increases , the fully symmetric steady state loses stability via a hopf bifurcation , giving rise to a fully symmetric periodic orbit , as illustrated in fig . [ den_fig](b ) .depending on the values of and , it is possible to observe other types of periodic solutions : a solution where three serotypes have identical dynamics , and the fourth serotype has a different dynamics ( see fig .[ den_fig](c ) ) , and a solution with the symmetry of reflections across diagonals shown in fig .[ den_fig](d ) , where antigenically distinct strains have the same behaviour . for higher values of and ,the dynamics becomes quasi - periodic and eventually chaotic .in this paper we have shown how one can use the techniques of equivariant bifurcation theory to systematically approach the analysis of stability of steady states and classification of different periodic solutions in multi - strain epidemic models .once the underlying symmetry of the system has been established , the steady states can be grouped together using conjugacy classes of the corresponding isotropy subgroups , which significantly reduces computational effort associated with studying their stability . 
moreover, isotypic decomposition of the phase space based on irreducible representations of the symmetry group provides a convenient way of identifying the specific symmetry of a periodic solution emerging from a hopf bifurcation of the fully symmetric equilibrium .the theorem provides an account of possible types of spatial and temporal symmetries that can be exhibited by periodic solutions , and hence is very useful for systematic classification of observed periodic behaviours .an important question is to what degree real multi - strain diseases can be efficiently described by mathematical models with symmetry , bearing in mind that in reality systems of antigenic strains may not always fully preserve the assumed symmetry .there are several observations suggesting that the results of analysis of symmetric models are still applicable for understanding the dynamics of real multi - strain infections .the first of these comes from the fact that many features of the model solutions , such as single - strain dominance and sequential appearance of antigenically related strains in a manner similar to the discrete travelling wave solution discussed earlier , are also observed in epidemiological data ( gupta et al 1998 , minaev and ferguson 2009 , recker et al 2009 , recker et al 2007 ) .another reason why the conclusions drawn from symmetric models may still hold stems from an argument based on normal hyperbolicity , which is a generic property in such models , suggesting that the main phenomena associated with symmetric models survive under perturbations , including symmetry - breaking perturbations .the discussion of this issue in the context of modelling sympatric speciation using symmetric models can be found in golubitsky and stewart ( 2002 ) .andreasen et al ( 1997 ) have discussed the situation when the basic reproductive ratios of different strains may vary , showing that in this case the endemic equilibrium persists and can still give rise to stable periodic oscillations through a hopf bifurcation .similar issue was discussed by dawes and gog ( 2002 ) who also noted that despite the possibility of oscillations in multi - strain models , quite often the period of such oscillations is comparable to the host lifetime and hence is much longer than the periodicity of real epidemic outbreaks .one possibility how this limitation may be overcome is when there is a sufficiently large number of co - circulating strains , so that the combinations of some of them rising or falling would result in a rapid turnover of the dominant strain , as has been shown in gupta et al ( 1998 ) .reaching a definitive conclusion regarding the validity of symmetric or almost - symmetric multi - strain models requires a precise measurement of population - level transmission rates individual strains , as well as degrees of immunological cross - protection or cross - enhancement , and despite major advances in viral genotyping and infectious disease surveillance , this still remains a challenge .a really important methodological advantage of the approach presented in this paper is its genericity in a sense that the analysis of stability and periodic dynamics relies on the symmetries in immunological interactions between strains , rather than any specific information regarding their individual dynamics as prescribed by a disease under consideration .the fact that some of the fundamental dynamical features in the behaviour of multi - strain diseases appear to be universal suggests a possibility to make significant inroads 
in the understanding generic types of dynamics using the analysis of some recurring motifs of strain interactions with relatively simple topology . in the modelanalysed in this paper , we were primarily concerned with symmetric properties of the matrix of antigenic connectivity and assumed that the strength of immunological cross - reactivity is the same for all strains .one can make the model more realistic by explicitly including the antigenic distance between strains in manner similar to the hamming distance ( adams and sasaki 2009 , calvez et al 2005 , recker and gupta 2005 ) , which would not alter the topology of the network of antigenic variants but introduce different weights for connections between different strains in the network .another possibility is to consider the effects of time delay in latency or temporary immunity ( arino and van den driessche 2006 , blyuss and kyrychko 2010 , lloyd 2001 ) , which although known to play an important role in disease dynamics , have so far not been studied in the context of multi - strain diseases .the author would like to thank jon dawes for useful discussions and anonymous referees for their helpful comments and suggestions .this appendix contains detailed proofs of * theorems 1 - 3*. + * proof of theorem 1 .* stability of the fully symmetric steady state changes when one of the eigenvalues of the jacobian ( [ jbd ] ) goes through zero along the real axis or a pair of complex conjugate eigenvalues crosses the imaginary axis . due to the block - diagonal form of the jacobianit suffice to consider separately possible bifurcations in the matrices , . for the matrix given in ( [ cd ] ) , the characteristic equation takes the form with +e^4 + 9r^4y^4}{(e+ry)(e+3ry)}>0,}\\\\ \displaystyle{a_3=\frac{r^2 ye[r^2 y^2(9 - 8\gamma)+rye(6 - 4\gamma)+e^2]}{(e+ry)(r+3ry)}>0 . 
} \end{array}\ ] ] in this case , and also }{(e+ry)(e+3ry)}}\\\\ \displaystyle{+\frac{e^4 + 9r^4y^4-r^2 ye[r^2 y^2(9 - 8\gamma)+rye(6 - 4\gamma)+e^2]}{(e+ry)(e+3ry)}}\\\\ \displaystyle{=\frac{(e+ry)(e+3ry)[12ry(ry+e)+r^2ye(1 - 2\gamma)+22r^2y^2e+2e^3]}{(e+ry)(e+3ry)}=}\\\\ \displaystyle{=12ry(ry+e)+r^2ye+22r^2y^2e+2e^3>0 , } \end{array}\ ] ] which, according to the routh - hurwitz conditions ( [ rhcon ] ) , implies that all eigenvalues of the matrix are contained in the left complex half - plane for any values of system parameters .this means that the steady state is stable in the subspace .similarly , for the matrix we have the coefficients of the characteristic equation as +ry[8e^3 + 9r^3y^3+rye(22e+24ry+3r^2ye)]}{(e+ry)(e+3ry)}>0,}\\\\ \displaystyle{a_3=\frac{r^2 ye[3r^2y^2(3 - 2\gamma)+\mu^2(1 + 2\gamma)+6ery]}{(e+ry)(e+3ry)}>0 , } \end{array}\ ] ] and also +r^2y^2(22e+24ry+3r^2ye)\right]}{(e+ry)(e+3ry)}}\\\\ \displaystyle{+\frac{ry[8e^3 + 9r^3y^3]-r^2 ye[3r^2y^2(3 - 2\gamma)+\mu^2(1 + 2\gamma)+6ery]}{(e+ry)(e+3ry)}}\\\\ \displaystyle{=\frac{(e+ry)(e+3ry)[12ry(r^2y^2+e^2)+r^2ye(1 + 2\gamma)+2e(11r^2y^2+e^2)]}{(e+ry)(e+3ry)}}\\\\ = 12ry(r^2y^2+e^2)+r^2ye(1 + 2\gamma)+2e(11r^2y^2+e^2)>0 .\end{array}\ ] ] once again , using routh - hourwitz conditions ( [ rhcon ] ) we conclude that the eigenvalues of the matrix are always contained in the left complex half - plane , implying stability of the steady state in the even subspace .finally , for the matrix , the coefficients of the characteristic equation are ,}\\\\ \displaystyle{a_3=yr\gamma(w-4)+3y(1+r^2\gamma w)+2\gamma e(w-1)-e / r , } \end{array}\ ] ] substituting the value of from ( [ zw ] ) and computing gives as long as remain positive , and , the steady state will remain stable in the odd subspace .however , provided remain positive , but changes its sign , the steady state would become unstable through a hopf bifurcation in the odd subspace .if any of the or become negative , this would mean one of the eigenvalues going through zero along the real axis implying a steady - state bifurcation and the loss of stability of the steady state . + * proof of theorem 2 . *as it has already been explained , the steady states all lie on the same group orbit . in the light of equivariance of the system, this implies that all these states have the same stability type , and therefore it is sufficient to consider just one of them , for example , .the jacobian of linearisation near is given by with the characteristic equation for eigenvalues that can be factorized as follows (ry_1+e+\lambda)^3[(e+ry_1)\lambda^2+(e+ry_1)^2\lambda+r^2ey_1]\times\\ & & \big[\lambda+\frac{r^2y_1(1-\gamma)+e(r-1)+ry_1}{e+ry_1}\big]^2=0.\end{aligned}\ ] ] it follows from this characteristic equation that one of the eigenvalues is , and since the steady state is only feasible for , this implies that the steady state is unstable , and the same conclusion holds for , and . + * proof of theorem 3 . 
* using the same approach as in the proof of * theorem 2 * , due to equivariance of the system and the fact that within each cluster all the steady states lie on the same group orbit , it follows that for the analysis of stability of these steady states it is sufficient to consider one representative from each cluster , for instance , and .the jacobian of linearisation near the steady state is given by the associated characteristic equation for eigenvalues has the form (\lambda)=0 , } \end{array}\ ] ] where is a third degree polynomial in with + 4r^4y_2 ^4}{(ry_2+e)(2ry_2+e)}>0,}\\\\ \displaystyle{a_3=\frac{r^2ey_2[2y_2 ^ 2r^2(2-\gamma)+e^2(1+\gamma)+4\gamma e r]}{(ry_2+e)(2ry_2+e)}>0 . } \end{array}\ ] ] computing gives + 2(e^3 + 3r^3y_2 ^ 3),\ ] ] which with the help of routh - hurwitz criterion ( [ rhcon ] ) implies that all roots of lie in the left complex half - plane .it follows that all the roots of the characteristic equation ( [ ch_eq_12 ] ) have negative real part except , possibly , an eigenvalue given by substituting the expression for from ( [ y2def ] ) , it can be shown that this eigenvalue crosses zero when and . since , and due to the fact that the steady state is only biologically plausible for , it follows that stability of this steady state never changes as is varied irrespective of the value of , and , in fact , this steady state is always unstable .+ in a similar way , the jacobian of linearisation near the steady state has the form with the associated characteristic equation ^ 2=0 . }\end{array}\ ] ] all of the eigenvalues given by the roots of this characteristic equation have negative real part , except for solving the equation shows that the steady state is stable when and unstable otherwise . in the light of the restriction , the steady state can only be stable for . arino j , van den driessche p ( 2006 ) .time delays in epidemic models : modeling and numerical considerations , inn o. arino , m. l. hbid , & e. ait dads ( eds . ) , _ delay differential equations and applications_. springer verlag , new york ferguson n , andreasen v ( 2002 ) the influence of different forms of cross - protective immunity on the population dynamics of antigenically diverse pathogens , in s. blower , c. castillo - chavez , k.l .cooke , d. kirschner , p. van der driessche ( eds ) , _ mathematical approaches for emerging and re - emerging infections : models , methods and theory_. springer , newyork golubitsky m , stewart i ( 1986 ) hopf bifurcation with dihedral group symmetry : coupled nonlinear oscillators , pp 131 - 173 , in : golubitsky m , guckenheimer j ( eds . ) , multiparameter bifurcation theory .american mathematical society , providence recker m , blyuss kb , simmons cp , tinh hien t , wills b , farrar j , gupta s ( 2009 ) immunological serotype interactions and their effect on the epidemiological pattern of dengue . proc roy soc london b 276 : 2541 - 2548
in mathematical studies of the dynamics of multi-strain diseases caused by antigenically diverse pathogens, there is substantial interest in results that can be obtained analytically. using a generic model of a multi-strain disease with cross-immunity between strains, we show that significant insight into the stability of steady states and the possible dynamical behaviours can be gained when the symmetry of the interactions between strains is taken into account. techniques of equivariant bifurcation theory allow one to identify the type of a possible symmetry-breaking hopf bifurcation, as well as to classify different periodic solutions in terms of their spatial and temporal symmetries. the approach is also illustrated on other models of multi-strain diseases, where the same methodology provides a systematic understanding of bifurcation scenarios and periodic behaviours. the results of the analysis are quite generic and have wider implications for understanding the dynamics of a large class of multi-strain disease models.
an important problem in future telecommunication systems will be how to send large amount of data such as video through a wireless channel at high rate with high reliability in a mobile environment .one way to enable the high rate communication on the scattering - rich wireless channel is use of multiple transmit and receive antennas .it is well known that the capacity of a wireless channel linearly increases as the number of transmit and receive antennas under the condition that total power and bandwidth of signals are constant .a wireless communication system with multiple transmit and receive antennas is called multi - input multi - output ( mimo ) system and an encoding / modulation method for mimo system is called space - time code . in the design of space - time codes ,it is desirable to reduce the size of the circuit for encoding and decoding .jiang et al . and khan et al . derived a necessary and sufficient condition for symbolwise maximum likelihood ( ml ) decoding on linear dispersion codes ( ldc ) , and such code design is named single symbol decodable design ( ssdd ) .complex linear processing orthogonal design ( clpod ) , which is the subclass of ssdd , has full diversity and maximum coding gain because of an additional condition on codewords .many researchers have studied concrete construction , rate , ber and the capacity utilization efficiency of clpod .the capacity utilization efficiency of a code can be measured by calculating its attainable maximum mutual information ( mmi ) .mmi of a code is defined as the capacity of a channel which consists of an encoder of the target code and the original channel , so it is an upper bound on how much one can send information with vanishing error probability by using the code .therefore we know how much a space - time block code utilizes the capacity of wireless channel by calculating their mmi .mmi is also an important measure of an inner code of a concatenated code because it corresponds to maximum possible information rate with vanishing error probability by taking the block length of an outer code large .hassibi and hochwald computed mmi of alamouti s code and mmi of an example of rate-3/4 clpod and mentioned that these values are far below original channel capacity with more than one receive antenna .they also proposed ldcs whose mmi are close to original channel capacity for several numbers of transmit / receive antennas .sandhu and paulraj derived the expression of mmi of general clpod and showed that for rayleigh fading channel , the value equals to original channel capacity only when one receive antenna is used .however , there is no knowledge of mmi of ssdd , which is a subclass of ldc and includes clpod as a special case .the importance of this problem is also mentioned in the literature . in this paperwe compute mmi of ssdd over frequency non - selective quasi - static rayleigh fading channel and clarify the necessary symbol rate at which ssdd utilizes full capacity of original channel .this paper is organized as follows : in section [ sec.2 ] , we introduce a mathematical model for mimo systems and the definitions of ssdd and clpod . in section [ sec.3 ], we derive an upper bound on mmi of ssdd .also we give alternative derivation of the exact expression of mmi of clpod . in section [ sec.4 ], we show the tightness of the upper bound on mmi of ssdd by comparing it with delay optimal complex orthogonal design ( cod ) , which is a subclass of clpod at the same symbol rate. 
then we clarify the necessary symbol rate at which the mmi of ssdd can attain the original channel capacity. we show that this necessary symbol rate is much larger than the symbol rate of clpod, which is upper bounded by 3/4. finally, section [ sec.5 ] provides our conclusions. _notation:_ upper case letters denote matrices and bold lower case letters denote vectors; and denote the real and imaginary parts of a complex number, respectively; and denote transpose and hermitian transpose, respectively; and denote the entry of a matrix and the entry of a vector, respectively; denotes the identity matrix of size; and denote the determinant and trace of a matrix, respectively; is a diagonal matrix with on its diagonal; ] denote expectation over a random matrix and a random vector, respectively; the covariance matrix of a random vector is denoted as. we always index matrix and vector entries starting from 1. the complex and real fields are denoted as and, respectively. in this section, we introduce a mathematical model for the mimo system and define several space-time codes. we consider a communication system that uses transmit antennas and receive antennas. each transmit antenna simultaneously sends a narrow band signal through a frequency non-selective rayleigh fading channel. the fading is assumed to be quasi-static so that the fading coefficients are constant for channel uses. we can write the relation between a transmitted block (or codeword) and a received block as follows: the received block lies in c^{t x n}, the codeword s = [s_{tm}] in c^{t x m}, the channel matrix h = [h_{mn}] in c^{m x n}, and the noise v = [v_{tm}] in c^{t x n}. [table and figure: results in db for 2 to 8 transmit/receive antennas] in this paper we gave a tight upper bound on the mmi of ssdd and an alternative derivation of an exact expression for the mmi of clpod. we showed the necessary symbol rate at which the mmi of ssdd can attain the channel capacity, and these values are much larger than the symbol rate of clpod. to learn more about the performance of ssdd, further research on the maximum symbol rate of ssdd is needed. e. telatar, "capacity of multi-antenna gaussian channels," _europ. trans. telecommun._, vol. 10, pp. 585-595, nov. 1999. g. j. foschini, "layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," _bell labs tech. j._, vol. 1, pp. 41-59, autumn 1996. v. tarokh, h. jafarkhani and a. r. calderbank, "space-time block codes from orthogonal designs," _ieee trans. inform. theory_, vol. 45, pp. 1456-1467, july 1999. x. b. liang, "orthogonal designs with maximal rates," _ieee trans. inform. theory_, vol. 49, pp. 2468-2503, oct. 2003. h. w. wang and x. g. xia, "upper bound of rates of complex orthogonal space-time block codes," _ieee trans. inform. theory_, vol. 49, pp. 2788-2796, oct. 2003. b. hassibi and b. m. hochwald, "high-rate codes that are linear in space and time," _ieee trans. inform. theory_, vol. 48, pp. 1804-1824, july 2002. y. jiang, r. koetter and a. c. singer, "on the separability of demodulation and decoding for communications over multi-antenna block-fading channels," _ieee trans. inform. theory_, vol. 49, pp. 2709-2713, oct. 2003. m. z. a. khan, b. s. rajan and m. h. lee, "on single-symbol and double-symbol decodable stbcs," _proc. of isit2003_, june 30-july 5, yokohama, japan.
s. m. alamouti, "a simple transmit diversity technique for wireless communications," _ieee j. select. areas commun._, vol. 16, pp. 1451-1458, oct. 1998. t. m. cover and j. a. thomas, _elements of information theory_, wiley interscience, 1991. s. sandhu and a. paulraj, "space-time block codes: a capacity perspective," _ieee commun._, vol. 4, pp. 384-386, dec. 2000.
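as a rough numerical illustration of the capacity statements discussed in this paper, the gap between the original channel capacity and the mmi of a particular code can be estimated by monte carlo averaging over rayleigh fading realisations. the sketch below compares the ergodic capacity of an i.i.d. rayleigh channel with a commonly used expression for the mmi of alamouti's code (an effective scalar channel whose snr is half the total snr times the squared frobenius norm of the channel matrix); the snr convention and the alamouti expression are assumptions of this illustration and are not taken from the present paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(M, N, snr, trials=20000):
    """E[log2 det(I_N + (snr/M) H^H H)] for i.i.d. CN(0,1) entries of H (M x N)."""
    c = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        G = np.eye(N) + (snr / M) * H.conj().T @ H
        c += np.log2(np.linalg.det(G).real)
    return c / trials

def alamouti_mmi(N, snr, trials=20000):
    """MMI of the rate-1 Alamouti code with 2 transmit and N receive antennas."""
    c = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
        c += np.log2(1.0 + (snr / 2) * np.linalg.norm(H) ** 2)
    return c / trials

snr = 10 ** (20 / 10)   # 20 dB
print(ergodic_capacity(2, 2, snr), alamouti_mmi(2, snr))  # MMI falls short for N > 1
print(ergodic_capacity(2, 1, snr), alamouti_mmi(1, snr))  # the two agree for N = 1
```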
in this paper, we analyze the performance of space-time block codes that admit symbolwise maximum likelihood decoding. for a frequency non-selective quasi-static fading channel, we derive an upper bound on the maximum mutual information (mmi) of such codes. the mmi of a code is an upper bound on how much information one can send with vanishing error probability by using that code.
metastability in a stochastic process is described by rare , noise - induced dynamical events .for example , a brownian particle in a double well potential , where the fluctuations are weak compared to the force of the potential , occationally jumps back and forth between each well .metastability is of particular interest in gene regulation circuits because rare extreme shifts in the expression of a gene can have a profound effect on the behavior of a cell .the challenge for stochastic modeling is to elucidate possible metastable events and quantify the timescale on which those events are likely to occur .quantitative theoretical models can distinguish between events that may realistically occur on the timescale of cell division and those that occur on longer timescales .understanding the relative stability of metastable states in an artificial gene expression circuit is relevant in synthetic biology .because metastable events are by definition rare , an analysis based on direct simulation is computationally impractical . in this paper , we derive an asymptotic approximation using perturbation theory .one of the most difficult aspects of applying standard stochastic techniques to study gene regulation is accounting for reactions involving the gene .regulatory molecules , activators and repressors , bind to regulatory segments of dna and interact with the gene promotor to affect the transcription rate ( synthesis of mrna ) .there can be as few as one active copy of the gene in a given cell .the case of linear feedback regulation is analytically tractable and many exact results are available .however , for the general case of nonlinear regulation , approximation methods are necessary .metastable behavior necessarily occurs under weak noise conditions , where fluctuations , whatever their source , are weak compared to deterministic forces .a stochastic description of a given chemical reaction converges to deterministic mass action kinetics in the large system size limit where the number of molecules is large ; this limit is sometime referred to as the large limit , where is the characteristic number of molecules .hence , it is natural to consider weak noise conditions for a stochastic chemical reaction to occur when is large but finite .this is precisely the limit in which the chemical master equation is approximated by the chemical fokker - planck equation .clearly , no such limit is possible for a reaction involving a species having a single member .however , if the reaction involving the gene is fast , one can obtain a deterministic description by taking an adiabatic limit , where the gene is described as switching between its various states infinitely fast so that it obtains an averaged transcription rate . for example , a gene that switches between on and off states would , in the adiabatic limit , have an effective transcription rate scaled by the fraction of time spent in the on state . a stochastic gene regulation modelcan then be said to be under weak noise conditions when it switches between its different states fast but not infinitely fast .one could argue that mrna should also be regarded as an adiabatic species . in most situationsmrna copy number is quite low . while mrna are expensive to synthesize , a single copy is capable of producing many proteins .if a gene expression model displays metastable behavior ( i.e. 
, weak noise conditions ) and mrna is present in small numbers , then it follows that the mrna transcription and degradation must be fast ( on the same time scale as promotor switching ) .methods for approximating mean switching times are well known in the applied math literature for continuous markov processes described by a fokker planck equation . the rigorous mathematical basis of this theory is known as _ large deviation theory _ . the theory used to describemetastable behavior for chemical systems generally considers large--type weak noise conditions .the bistable switch has been analyzed using a variety of means to eliminate promotor switching from the problem , by using a diffusion approximation , by taking the adiabatic limit , or by assuming that mrna is synthesized in bursts .however , the first two approaches result in quantitatively inaccurate estimates for the mean switching times , and the latter is only applicable when the mrna degradation rate is large compared to the transcription rate and the promotor transition rates .the first to make progress on developing a general asymptotic approximation , assaf and coworkers obtained a partial description of bistable switching in a three - species stochastic model ( promotor , mrna , and protein ) .the result was significant because their model explicitly included mrna copy number and stochastic `` on - off '' promotor switching .however , their result does not account for more than two promotor states , and they assumed that mrna are present in sufficient numbers that it can be treated as a continuous quantity .additionally , they derived a logarithmically - accurate asymptotic estimate of the mean switching times , lacking a pre exponential factor ( pef ) .methods for computing the pef are well developed for the fokker planck equation , but they have not been widely applied to chemical systems .we argue that a different approach is necessary to solve the problem , one that applies to chemical systems where weak noise arises from species that can be either `` large '' or `` adiabatic '' . using theory first developed to study metastability in a molecular motor model with an adiabatic motor configuration , the authors later derived an approximation to the gene expression problem that accounts for an arbitrary number of promotor states and a mean switching time approximation that included the pef , but did not explicitly include mrna . in this paper, we develop a complete description of bistable switching in a simple gene regulation circuit that includes promotor switching , a discrete mrna reaction , and a protein concentration that regulates the promotor switching rates .our main assumption is that all of the transition rates ( the promotor switching rates , the mrna transcription and degradation rates , and the protein synthesis rate ) are large compared to the protein degradation rate .physically , this assumption is valid in a given system if ( i ) protein is present in sufficient quantity that it can be regarded as a concentration , ( ii ) mrna is present in small number , and ( iii ) intrinsic noise weakly affects the protein concentration . using a recently developed quasi - stationary analysis ( qsa ) , we obtain a arrhenius eyring kramers rate that includes the previously unknown pef .our result agrees with the logarithmically accurate approximation reported in under a less restrictive set of assumptions ( we make no assumption about the rate of transcription compared to the rate of mrna degradation ) . 
in addition to the kramers rate , the pef allows us to derive a uniformly accurate asymptotic approximation of the joint stationary probability distribution , including the discrete conditional distribution of mrna .the theory is independent of the particular choice of protein dependent promotor switching rates .the paper is organized as follows .first , we introduce the model in section [ sec : model ] , along with the deterministic limit . in section [ sec : quasi - stat - appr ] we introduce the qsa and the approximation formula for mean switching times .the wkb approximation of the stationary probability density function is calculated in section [ sec : wkb ] .finally , in section [ sec : results ] we compare our results with monte - carlo simulations ( obtained using the standard gillespie algorithm ) for a simple example of positive feedback regulation .let represent the gene state with when the gene is on and when it is off .when the gene is on , mrna is transcribed at a rate , and each mrna is removed at a constant rate .assume that the transitions are fast so that is a small parameter .each mrna synthesizes protein at a rate and each protein molecule is removed at a rate .then , we have the following set of chemical reactions , set the characteristic time to the average lifetime of a single mrna so that . then , is the average number of mrna , assuming the gene is permanently switched on .let be the number of proteins of type , and define the `` concentration '' of to be .note that is not a physical concentration since is a non dimensional parameter .assume that regulates the gene activity by affecting the promotor switching rates .the gene switches off ( ) and on ( ) randomly according to the two state markov process the analysis presented here is independent of the particular choice of and .the master equation for the process is {p}+ \mathbb{l}^{(x)}{p},\ ] ] where (s ) & \equiv ( 2s - 1)(\alpha f(0 ) - \beta f(1 ) ) , \\\mathbb{l}^{(m)}[f](m ) & \equiv s \sigma [ f(m-1 ) - f(m ) ] \\\nonumber & \qquad + \gamma[(m+1)f(m+1 ) - mf(m ) ] , \\\mathbb{l}^{(x)}[f](x ) & = \frac{1}{\epsilon}\left[m k_{d } ( e^{-\partial x } - 1)f + k ( e^{\partial x } - 1)xf \right].\end{aligned}\ ] ] formally , we write jump operators in terms of a taylor s series expansion with in the limit , the proceses becomes deterministic , with the concentration of protein satisfies assume that is bistable for a range of parameter values , having three fixed points , two of which are stable .label the two stable fixed points and the unstable fixed point so that . for a discusion on how the choice of and affect stabilitysee ref . .the master equation can be written as where we have defined the linear operator + \mathbb{l}^{(x)}.\ ] ] the solution to can be written in terms of the eigenvalues and eigenfunctions of with the process looks very different depending on whether it starts at or at .for the sake of illustration assume that . on intermediate time scales ,the solution will converge to a stationary density around that , figuratively speaking , does not see beyond to the other stable fixed point .slowly , over a long timescale , the solution converges to the full stationary density as probability slowly leaks out past toward .the timescale for this long - time convergence is exponentially large ( i.e. 
, ) ) .since a stationary solution exists , the smallest eigenvalue , called the principal eigenvalue , is , and the stationary density is the eigenfunction ( up to a normalization constant ) .the separation of time scales in the problem can be exploited to approximate the solution . to understand how this works consider the process where a boundary condition is placed at so that the process truly does not see beyond the unstable fixed point .we want to consider two different boundary conditions : reflecting and absorbing . to distinguish between each case ,we write the principal eigenvalue and eigenfunction ( dropping the subscript ) as and for absorbing and reflecting boundary conditions , respectively .if we place a reflecting boundary at the principal eigenvalue , but the eigenfunction is now restricted to ( or if we instead assume that ) .we call the _ quasi - stationary density _ ; it is a solution to note that is defined up to a normalization factor .one of the nice things about the quasi - stationary density is that it can be approximated using the wentzel kramers brillouin ( wkb ) method .now suppose that an absorbing boundary is imposed at .in this case , no stationary density exists , and the principal eigenvalue is perturbed by an exponentially small amount , that is , , for some .the eigenfunction is also perturbed , but away from the boundary , .thus , if we can calculate the eigenvalue and eigenfunction , we have an accurate approximation to the absorbing boundary problem with where is a normalization constant .the quantity we are most interested in calculating is the mean first exit times to switch between .let be the first exit time for the process , having started at , to reach . from, the survival probability is = \sum_{s , m}\int_{-\infty}^{\infty}{p}(s , m , x ) dx \sim e^{-{\lambda_{}^{(a ) } } t}.\ ] ] it follows that the first exit time is approximately an exponential random variable with mean .the quasi - stationary density and the principle eigenvalue are approximated as follows .the wkb approximation of proceeds with the anzatz , ^{-\frac{1}{\epsilon } \phi(x)},\end{gathered}\ ] ] where is the conditional distribution for the gene / mrna states and is called the quasipotential .the pef can be viewed as a normalization factor for .let us write the principle eigenvalue corresponding to as so that the mean exit time to transition from is given by . using a spectral projection method , one can derive an asymptotic approximation of the principle eigenvalue given by , } , \end{split}\ ] ] where is given by .the above formula is known in the literature as the arrhenius eyring kramers reaction rate formula . in the next sectionwe calculate the wkb approximation , which yields an approximation of the stationary density function and , using , the mean switching times . applying the jump operators defined by tothe wkb solution and expanding in powers of involves expressions of the type \sim \left[g(x)e^{\mp \phi'(x ) } + o(\epsilon)\right]e^{-\phi(x)/\epsilon},\ ] ] where is an arbitrary function . 
substituting into and collecting leading order terms in yields { \rho}(s , m | x ) = 0,\ ] ] where note that at so that has local minima / maxima at the deterministic fixed points .the goal of the first part of this section is to compute and ( the pef is determined at higher order ) .it is rarely possible to integrate to get a closed form solution for .however , using chebyshev interpolation , the solution can be efficiently computed numerically to any desired accuracy .there are many software packages that compute chebyshev approximations , including the gnu scientific library , which can be easily used from within python . for matlab, the chebfun package provides the necessary tools . for notational convenience ,let we proceed by developing a solution of the associated eigenvalue problem , r_{s , m}(x , p ) = 0,\ ] ] where is the eigenvalue and the eigenvector .then , is implicitly defined by setting .given , the conditional distribution is up to a normalization factor .for the case where the dimension of linear operator in is finite ( i.e. , a matrix ) , it follows from the perron frobenius theorem that there is a unique eigenvalue called the _ principal eigenvalue _ corresponding to a nonnegative eigenvector .the principal eigenvector is real , simple , and is greater than the real part of all other eigenvalues .we assume that the statement holds in the present situation when .define the generating function , multiplying both sides of by and summing over all yields ( in component form ) , we can transform the above system into a single second order equation . rearranging the first equation to obtain in terms of yields after substituting into ,changing variables with , and setting we obtain the second order equation , where .\end{aligned}\ ] ] recall that at fixed points , we must have , and notice that .if we set , , and in it simplifies to where .the solution is where is the so - called kummer function or confluent hypergeometric function ( sometimes written as ) and is the gamma function .the solution is consistent with results found in ref . for the generating function of the distribution of mrna transcribed by an on - off gene ( i.e. , ignoring protein synthesis and regulation ) .similar results utilizing generating function methods that involve have been obtained for a variety of linear feedback regulation models . for , we notice that there is a solution of the form provided that .of course , there are an infinite number of solutions , one for each of the eigenfunctions of the compact infinite dimensional linear operator .assuming there is a unique nonnegative eigenvector ( as is the case for appropriately defined finite dimensional matrices ) , we can confirm that we have selected the correct solution if the inverse transform of is nonnegative ( up to a normalization factor ) .setting yields the characteristic equation , \mu\\ - \left[(\alpha + u ) \frac{\sigma v}{1-v } - ( \alpha + \beta ) u - u^{2}\right ] = 0.\end{gathered}\ ] ] to obtain the wkb solution , we must solve for satisfying .substituting into yields - ( \alpha + \beta)u - u^{2 } = 0.\ ] ] let and rewrite as and . from the latter we have , which we substitute into to get after substituting into , we find that is a root of there is one root that vanishes when , namely ,\end{aligned}\ ] ] where is the deterministic dynamics ( for which by definition ) .then , using we obtain , interestingly , has the same form as the equivalent expression in ref . 
, which was derived under a stricter set of assumptions .because the wkb method is more commonly applied to large--type weak noise conditions , they made the initial assumption that mrna can be treated as a concentration ( i.e. , that ) .later in the analysis , after the wkb expansion , they use a fast slow analysis to obtain by assuming that the rate of mrna degradation is much larger than the transcription rate ( i.e. , ) , seemingly at odds with their initial assumption .the derivation of makes no assumption about the size of relative to . now that has been determined , the conditional distribution is where is a normalization factor given by hence , to determine we need the right eigenvector and its generating function .setting allows us to solve and obtain , where and are given by ( with , , and ) .recall that and is given by .the generating function is written in terms of using .we recover from the generating function using the inverse transform , after some calculation , we obtain ,\end{aligned}\ ] ] where is defined by . in practice, the validity of the approximation can be verified by confirming that the above distribution is nonnegative .a general proof of this based on precise assumptions about the model parameters is beyond the scope of this paper .however , it follows immediately that if then .we anticipate that this is true when all the parameters ( , , , , , and ) are positive .collecting terms in the wkb expansion yields {\rho}^{(1)}(s , m , x)\\ & \quad = { k}(x)\left[-{\frac{\partial^{2 } u}{\partial p \partial x } } + \frac{1}{2}\phi''(x)\left(m { \frac{\partial^{2 } v}{\partial p ^{2 } } } - { \frac{\partial^{2 } u}{\partial p ^{2 } } } \right ) \right]{\rho}\\ & \qquad+ \left(m{\frac{\partial v}{\partial p } } - { \frac{\partial u}{\partial p}}\right)\left({k}'(x){\rho}+ { k}(x){\frac{\partial { \rho}}{\partial x}}\right ) , \end{split}\ ] ] where and , defined by , and their derivatives are evaluated at , given by .recall that and are given by and .more details on obtaining the above expression ( namely the second order term in ) can be found in ref .the pef is determined by a solvability condition , which makes use of the left eigenvector , the derivation can be found in appendix [ sec : adjoint - problem ] . define the inner product according to it follows from the fredholm alternative theorem that a solution to exists provided that .\end{gathered}\ ] ] the inner products can be evaluated explicitly using the generating function for .it is simpler to use the unnormalized eigenvector to evaluate the inner products .recall that , where is a normalization factor defined by .hence , .using the generating function , given by and , the inner product of the left and right eigenvector is note that we have normalized so that .likewise , we define where .the various partial derivatives of the generating function simplify considerably when evaluated at ; they are listed in appendix [ sec : deriv - gener - funct ] . 
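before proceeding to the next order, note that in practice the quasipotential is obtained from its derivative by the chebyshev route mentioned above. a minimal numpy sketch, assuming a callable `phi_prime(x)` has already been implemented from the expression derived here, and with an illustrative interval, degree and reference point:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def quasipotential(phi_prime, x_left, x_right, x_ref, degree=50):
    """Chebyshev fit of phi'(x) on [x_left, x_right], integrated so phi(x_ref) = 0."""
    # Chebyshev-Lobatto nodes on [-1, 1], mapped to the working interval
    xs = np.cos(np.pi * np.arange(degree + 1) / degree)
    xs = 0.5 * (x_right - x_left) * (xs + 1.0) + x_left
    cheb = Chebyshev.fit(xs, [phi_prime(x) for x in xs], deg=degree,
                         domain=[x_left, x_right])
    phi = cheb.integ()                       # antiderivative as a Chebyshev series
    return lambda x: phi(x) - phi(x_ref)

# example usage: with phi fixed to zero at the stable fixed point x_minus, the
# barrier height phi(x_0) - phi(x_minus) controls the exponential factor of the
# mean switching time
# phi = quasipotential(phi_prime, 0.0, 3.0, x_ref=x_minus)
```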
with the above inner products ,we can write the pef as where the partial derivatives of and are evaluated at , which is given by .note that contains removable singularities at the fixed points , and is best evaluated using a chebyshev approximation .suppose that there is a background concentration of active inhibitor that binds to the dna and turns the gene off .suppose further that the protein deactivates the inhibitor through the reaction , where is the deactivated inhibitor .assuming that this reaction is fast , a simple way to include regulation in the model is to set where and are positive parameters .we compute the wkb and mean switching time approximations in python using the scipy package for plotting .we numerically integrate and using the chebyshev approximation toolbox from the gnu scientific library .all figure are generated using interpolation points on the interval . in fig .[ fig : stat ] , we show the wkb approximation of the marginal stationary density function where the normalization factor is to see the accuracy in the tails of the distribution , we also show . in fig .[ fig : mfpt ] we show the mean switching times as a function of .the approximations that ignore the pef are shown as dashed lines for comparison .using the qsa , we develop an accurate approximation of the stationary density function and the mean switching times .our only assumption is that the protein degradation rate is small compared to all other rates .physically , this corresponds to fast promotor and mrna dynamics and a relatively large number of proteins .our assumptions are valid for many physically relevant parameter regimes , including transcriptional bursting when .using the generating function for the right eigenvector , we obtain an analytical formula ( up to a numerical integration ) for the pef .the results from a positive feedback model of regulation show that the contribution from the pef to the stationary density approximation is most significant for small .it is no surprise then that the pef is critical for the accuracy of the mean exit time from the left well surrounding stable fixed point to the right well .there are also interesting possibilities for how the asymptotic approximation can be used to construct an efficient simulation algorithm . for continuous markov processes, many simulation tools have been developed to study rare events , including importance sampling , which can be used in conjunction with the type of asymptotic approximation developed here to speed up simulation time .the results are derived independent of how regulation is modeled ( how and depend on ) .it should be possible to extend these results to more complicated gene regulation circuits and gene networks .for example , one might consider additional chemical species that interact with the protein synthesized by the gene .more possibilities exist for metastable behavior in higher dimensions , and analyzing such systems is possible using a recently derived large deviation principle .we make use of the left eigenvector satisfying ^ { * } + mv - u \right \ } l_{s , m } = 0 , \\\label{eq:46 } { \left \langle l , r \right \rangle } = 1,\end{gathered}\ ] ] where ^{*}l_{s , m } = [ s\beta - ( 1 - s)\alpha](l_{0 , m } - l_{1 , m } ) \\ + s\sigma ( l_{s , m+1 } - l_{s , m } ) + m ( l_{s , m-1 } - l_{s , m}).\end{gathered}\ ] ] consider the trial solution where and are unknown constants .first , notice that if we substitute into with , we find that . 
with nonzero and, substituting into yields. setting the determinant of the above matrix to zero yields an expression equivalent to the characteristic equation for the principal eigenvalue, which indicates that we have correctly guessed the left eigenvector we need. using the normalization condition, we have, with given by. the generating function is given by and. define and, where and are given by and is defined by. for ease of notation, we write partial derivatives of with a subscript: then,. the derivatives evaluated at are; the derivatives evaluated at are; and the derivatives evaluated at are. [references (dois): 10.1038/nature09326; 10.1103/physreve.72.051907; 10.1103/physrevlett.101.118104; 10.1103/physreve.79.031923; http://stacks.iop.org/1751-8121/44/i=35/a=355001; 10.1007/bf01304226; 10.1137/s0036139994271753; 10.1007/978-3-642-25847-3; 10.1088/0305-4470/9/9/009; 10.1051/jphys:019850046090146900; 10.1103/physreva.29.371; 10.1063/1.467139; 10.1103/physrevlett.88.048101; 10.1103/physreve.74.041115; 10.1073/pnas.0509547102; 10.1039/c4sc00831f; 10.1103/physreve.71.011902; 10.1007/s00285-013-0723-1; 10.1103/physrevlett.106.248102; 10.1137/10080676x; 10.1088/1478-3975/9/2/026002; 10.1021/jp045523y; 10.1103/revmodphys.62.251; 10.1371/journal.pbio.0040309; 10.1137/110842545]
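the monte carlo results used for comparison throughout the paper are generated with the standard gillespie algorithm applied to the reaction scheme of section [ sec : model ]. in the minimal sketch below the placement of the 1/epsilon factors in the propensities and the forms of the switching rates alpha(x) and beta(x) are illustrative assumptions rather than the exact choices used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie(alpha, beta, sigma, gamma, kd, k, eps, n_protein0, t_max):
    """Gillespie simulation of the gene (s), mRNA (m), protein (n) circuit.

    The protein 'concentration' is x = eps * n.  The propensities follow the
    reaction scheme described in the text; the 1/eps scaling of the gene and
    protein reactions is an illustrative assumption.
    """
    t, s, m, n = 0.0, 0, 0, n_protein0
    times, xs = [t], [eps * n]
    while t < t_max:
        x = eps * n
        rates = np.array([
            (1 - s) * alpha(x) / eps,   # gene switches on
            s * beta(x) / eps,          # gene switches off
            s * sigma,                  # transcription while the gene is on
            gamma * m,                  # mRNA degradation
            kd * m / eps,               # protein synthesis from each mRNA
            k * x / eps,                # protein degradation (equals k * n)
        ])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(6, p=rates / total)
        if r == 0:   s = 1
        elif r == 1: s = 0
        elif r == 2: m += 1
        elif r == 3: m -= 1
        elif r == 4: n += 1
        else:        n -= 1
        times.append(t)
        xs.append(eps * n)
    return np.array(times), np.array(xs)

# hypothetical Hill-type regulation (not the paper's actual choice of alpha, beta):
# alpha = lambda x: 4.0 * x**2 / (1.0 + x**2)
# beta  = lambda x: 1.0
```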
a simple stochastic model of a self-regulating gene that displays bistable switching is analyzed. while on, a gene transcribes mrna at a constant rate. transcription factors can bind to the dna and affect the gene's transcription rate. before an mrna is degraded, it synthesizes protein, which in turn regulates gene activity by influencing the activity of transcription factors. protein is slowly removed from the system through degradation. depending on how the protein regulates gene activity, the protein concentration can exhibit noise-induced bistable switching. an asymptotic approximation of the mean switching rate is derived that includes the pre-exponential factor, which improves upon a previously reported logarithmically accurate approximation. with the improved accuracy, a uniformly accurate approximation of the stationary probability density describing the gene, mrna copy number, and protein concentration is also obtained.
since the ilc is designed to allow measurements of masses and cross-sections of standard model particles, as well as of possible new particles, at the permille level, the beam parameters such as beam energy, polarisation, and luminosity also have to be controlled to this precision. while for the beam energy this goal has already been achieved at previous colliders, the most precise polarisation measurement to date, at sld, reached a precision of 0.5%. the overall polarimetry scheme at the ilc therefore combines the measurements of two compton polarimeters, located upstream and downstream of the interaction point, with data from the annihilations themselves. for optimized compton polarimeters, a factor of two improvement over the sld is expected. the working principle of compton polarimeters has been described in detail for example in . the longitudinally polarised electron (or positron) beam is crossed at a small angle by a circularly polarised laser. the energy spectrum of the scattered particles depends on the product of laser and beam polarisations. the rate asymmetry with respect to the laser helicity is directly proportional to the polarisation: the analyzing power, which contains all the dependence on the experimental setup, corresponds to the asymmetry expected for 100% polarisation. obviously, a large analyzing power is favourable for a precise measurement. in order to collect statistics fast enough, the laser intensity is chosen such that typically on the order of 1000 electrons are scattered per bunch. since the scattering angle in the laboratory frame is restricted to less than 10 , a magnetic chicane has to be employed to transform the energy spectrum into a spatial distribution, which is finally measured with an array of cherenkov detectors. figure [ fig : chicane ] shows the setup foreseen for the ilc upstream polarimeter, with the compton ip between the second and third dipole triplet. the blue line shows the beam trajectory for a beam energy of 250 gev; the blue shade indicates the fan of (detectable) compton-scattered electrons. based on the experience from sld, it has to be expected that the largest contribution to the overall error budget is due to the analyzing power. however, with the dedicated chicane design foreseen at the ilc and with a detector operating without a preradiator, this uncertainty is expected to be reduced by about a factor of 2 with respect to sld, yielding a contribution of 0.2%. to achieve this goal it is important to control all system parameters continuously. for example, a misalignment of the polarimeter detector with respect to the beam of 0.5 mm leads to a 0.1% effect on the polarisation. apart from the analyzing power, additional sources of uncertainty are non-linearities of the detector and its read-out chain (0.1%) and the measurement of the laser polarisation (0.1%). if operated at a fixed magnetic field for all beam energies, the chicane keeps the position of the compton edge at the detector surface the same for all beam energies, which ensures a homogeneous detector acceptance and the same measurement quality at all center-of-mass energies. in that case, however, the compton ip moves laterally with the beam energy, as indicated in figure [ fig : chicane ] by the red dashed line for 45.6 gev. while the laser path can easily be adjusted with movable mirrors, the mps collimator would be more difficult and expensive to move.
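as a schematic illustration of the measurement principle described above, the snippet below extracts a polarisation value from per-channel counts recorded for the two laser helicities. all numbers, the channel-by-channel analyzing powers and the simplified error treatment are invented for the example; this is not the actual ilc analysis chain.

```python
import numpy as np

# toy counts per cherenkov channel for the two laser helicity states
n_plus  = np.array([52000., 61000., 70500., 80200., 90300.])
n_minus = np.array([48100., 52300., 55600., 58400., 60900.])

# assumed analyzing power per channel (asymmetry expected for a 100%
# polarised beam) and an assumed laser polarisation
analyzing_power = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
laser_pol = 0.998

# measured asymmetry per channel and per-channel polarisation estimate
asym = (n_plus - n_minus) / (n_plus + n_minus)
pol_channel = asym / (analyzing_power * laser_pol)

# approximate statistical error on the asymmetry (small-asymmetry limit)
# and a statistically weighted average over channels
err = 1.0 / np.sqrt(n_plus + n_minus) / (analyzing_power * laser_pol)
weights = 1.0 / err**2
pol = np.sum(weights * pol_channel) / np.sum(weights)

print(np.round(pol_channel, 3))
print("weighted polarisation estimate:", round(pol, 3))
```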
therefore it has been proposed to operate the chicane with a magnetic field which scales with the beam energy, thus keeping the compton ip at a constant location. in such a scenario, the compton edge position varies with the beam energy, squeezing the entire compton spectrum further and further into the beampipe until no measurement is possible anymore. figure [ fig : disp_cov ] shows the dispersion of the chicane, i.e. the transverse distance of the compton ip from the neutral beam line, as a function of the beam energy for the original fixed-field case (blue line). it has been chosen to maximize the spread of the compton spectrum at the detector position while avoiding a significant emittance blow-up. the red dashed line indicates the minimum dispersion required to perform any polarisation measurement at all (two detector channels below the zero crossing of the asymmetry). with fixed dispersion, the whole range of ilc beam energies from 45.6 gev up to 500 gev can only be covered by at least three field ranges, indicated by the green line. the minimal scale factor for the magnetic field with respect to the fixed-field case is 0.45. in addition, figure [ fig : disp_cov ] shows the detector coverage as a function of the beam energy. in the fixed-field scenario, the compton spectrum is spread out over the whole colored area, where the red area corresponds to the two innermost detector channels, while the dashed line indicates the zero crossing of the asymmetry. in the three-step scaled-field scenario, the compton spectrum would only cover the red and green areas. in this case, the achievable precision will depend on the beam energy. the analyzing power of each detector channel depends on the compton edge position with respect to the beam. for a precise polarisation measurement, it is therefore of utmost importance to control this parameter at all times, preferably without interfering with polarimetry data taking. the fewer detector channels contribute to the measurement, the more sensitive the polarisation measurement becomes to the compton edge position. this is illustrated in figure [ fig : edge_calib ], which shows the effect of some simulated misalignment between detector and beam for an input polarisation of 80% (green line). the red curve corresponds to the case of a fixed magnetic field, where the compton spectrum is spread out over about 20 cm. if the edge position is not to contribute more than 0.1% to the total error budget, it needs to be known to better than 0.4 mm. in the scaled-field scenario as discussed above, the magnetic field is reduced down to 0.45 times the nominal value. in this case (blue line), the dependence of the polarisation on the alignment is much steeper, since each channel integrates a larger fraction of the highly non-linear spectrum, and the uncertainty on the edge position must not exceed 0.2 mm in order to stay within a 0.1% deviation of the polarisation. on the other hand, the effective position resolution of the detector gets worse if fewer channels are covered by the spectrum. with a simple algorithm which estimates the position of the edge within the last covered bin by comparing its contents with the expectation extrapolated from its neighboring bins, the edge position can be determined to cm in the fixed-field case after 10000 bunch crossings, where the difference to the true position is given as systematic uncertainty.
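one possible reading of the simple edge-finding algorithm just described, sketched in python: the last bin with content above threshold is taken as the edge bin, its expected content is extrapolated linearly from the two preceding bins, and the ratio of observed to expected content locates the edge within that bin. the binning, the threshold and the toy spectrum are assumptions made for illustration, not the actual implementation.

```python
import numpy as np

def compton_edge(counts, bin_edges, threshold=0.0):
    """Estimate the Compton edge position from a binned spectrum."""
    above = np.nonzero(counts > threshold)[0]
    if len(above) < 3:
        raise ValueError("spectrum covers too few bins")
    i = above[-1]                                   # last covered bin
    expected = 2.0 * counts[i - 1] - counts[i - 2]  # linear extrapolation
    frac = np.clip(counts[i] / expected, 0.0, 1.0)  # filled fraction of bin
    return bin_edges[i] + frac * (bin_edges[i + 1] - bin_edges[i])

# toy spectrum: flat up to an edge at 7.3 cm, binned in 0.5 cm steps
edges = np.linspace(0.0, 10.0, 21)
centres = 0.5 * (edges[:-1] + edges[1:])
counts = np.where(centres < 7.3, 1000.0, 0.0)
counts[14] *= (7.3 - edges[14]) / 0.5   # partially filled edge bin

print(compton_edge(counts, edges))      # ~7.3
```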
in the scaled-field case, again at a scale factor of 0.45, the same method yields cm, showing clearly that the difference to the true edge position is large enough to introduce a bias of more than 0.1% on the polarisation measurement. this is illustrated by figure [ fig : pol_bias ], which shows the difference between reconstructed and input polarisation as a function of the scale factor, using the edge position determined from the simulated data instead of its true value. figure [ fig : vac_chamber ] shows a sketch of the vacuum chamber foreseen for the fixed-field scenario. at the last dipole triplet, it has to provide an aperture of about 30 cm for the fan of scattered compton electrons as well as for the through-going beam. in order to avoid wake fields, the aperture grows slowly along the entire chicane. the effect of a collimator at a place where the aperture of the vacuum chamber is already about 20 cm is currently unclear. the collimator will create significant backgrounds for the polarimeter detector, which is not compatible with a high-precision measurement. last but not least, the collimator would either have to be movable, or a scaled-field operating mode of the chicane would be required, endangering precision polarimetry, as explained in the previous section. in addition to the collimator, it has been suggested to install laser-wire based emittance diagnostics right upstream of the polarimeter chicane, with a detector for the scattered particles in front of the second dipole triplet, cf. figure [ fig : chicane ]. this detector is expected to create backgrounds at the level of 60% of the polarimeter signal. such amounts of background are clearly incompatible with meaningful polarimetry. thus, the laser wire and polarimeter could only be operated on alternating bunches. alternating operation, however, cannot resolve the spatial conflicts: if the compton-scattered photons from the laser wire are used, a converter target plus an electron detector is needed in order to distinguish the compton photons from the synchrotron background coming out of the linac. at high beam energies, the dispersion of the chicane is only a few centimeters (cf. figure [ fig : disp_cov ]), yielding not enough clearance for converter and detector with respect to the through-going beam. the obvious alternative is to detect the compton electrons directly, which are deflected out of the neutral beam line by the first dipole triplet of the chicane. this approach seems promising, especially since it does not create additional backgrounds for the polarimeter. however, if the chicane is operated with a scaled field, there is not enough beam clearance at low beam energies. in summary, the best compatibility of emittance diagnostics and polarimetry in one chicane is achieved when operating on alternating bunches, with a fixed magnetic field and electron detection for the laser wire. the upstream polarimeter as it is foreseen in the ilc rdr has several shortcomings which make it impossible to reach the high-precision goals. in particular, the chicane magnets should be operated at a constant magnetic field for all beam energies. furthermore, significant additional background sources must be avoided. therefore it is recommended to separate the locations of the polarimeter, the mps collimator and the emittance diagnostics. a full list of recommendations can be found in . [ aleph, delphi, l3, opal and sld collaborations ], phys. rept. 427 (2006) 257; v. gharibyan, n.
meyners and p. schuler, ``the tesla compton polarimeter,'' lc-det-2001-047; g. a. blair, l. deacon, s. malton, i. v. agapov, a. latina and d. schulte, in the proceedings of the 11th european particle accelerator conference (epac 08); b. aurand et al., arxiv:0808.1638 [ physics.acc-ph ].
the physics programme of the ilc requires polarimetry with as yet unprecedented precision. this note focusses on aspects of the upstream polarimeter, as described in the ilc reference design report, which are not compatible with these extraordinary precision goals. in conclusion, recommendations for improving the design are given.
the multifractal behaviour of the financial time - series has become one the acknowledged stylized facts in the literature ( see : ) .many works have been dedicated to its empirical characterization , reporting strong evidence of its presence in financial markets , and models .understanding which is the origin of the measured multifractality in financial markets is still an open research challenge .this question has been raised first in where the authors pointed out that the power law tails and the autocorrelation of the analysed time - series must be the two sources of the measured multifractality . in the first case ,the multifractal behaviour is a consequence of the broadness of the unconditional distribution of the returns ; while in the second case , the multifractal behaviour is associated with the causal structure of the time - series .after , many papers have investigated the relative contribution of these two sources to the measured multifractality , however no agreement exists .for example in the author points out that the autocorrelation structure has a minor impact on the measured multifractality while the power law tails are the major source of it . in also report that the power law tails give the major contribution , but they also point out that the presence of unknown autocorrelations might introduce a negative bias effect in the quantification of multifractality .conversely , in the authors find that the autocorrelation gives the major contribution while for a specific time - series the `` extreme events are actually inimical to the multifractal scaling '' .this lack of agreement motivated our work , leading us to investigate what the source of the measured multifractality is and how it can be detected . in this paperwe quantify the two contributions by using synthetic times series where the two contributions can be separated .specifically we analyze brownian motion with innovations drawn from a t - student distribution , multifractal random walk and normalized version of the multifractal random walk .the measured multifractality on these synthetic series are compared with measures on both real financial log - returns and on a normalized version of the real log - returns where the heavy tails are removed .the results show the aggregation horizon has a strong effect on the quantification of multifractality .we verify however that there are regions of the aggregation horizon that can be used in practice to extract reliable multifractality estimators .the rest of the paper is organized as follows : in sec .[ background ] we perform a brief literature review introducing the tools we used for our analysis and discussing the results from previous works . in sec .[ section_methods ] we review the theoretical models we used and we define the multifractality estimators that shall be used throughout the paper . secs .[ sec_source ] and [ section_real ] are dedicated respectively to the analysis of artificial and real data . 
in sec .[ section_discussion ] we discuss the results while in sec .[ section_summary ] we summarize the results and conclude .among the methods which are used for the empirical measurement of the scaling exponents , in this work we will use only the _generalized hurst exponent method _ ( ghe ) , see which relies on the measurement of the direct scaling of the - order moments of the distribution of the increments and it has been shown to be one of the most reliable estimators .let us call a process with stationary increments .the ghe method considers the following function of the increments =k(q)\tau^{qh(q)},\ ] ] where is the time horizon over which the increments are computed and is the generalized hurst exponent .the function is concave and depends also on . in particular , ghe considers the logarithm of eq .( [ moments_scaling ] ) \right)=\zeta(q)\ln(\tau)+\ln\left(k(q)\right),\ ] ] and , if linearity with respect to holds , it computes the slopes of the straight lines at different .the slopes are computed in the following way : for every , several linear fits are computed taking ] ; the output estimator for is the average of these values for a given .this method gives also the errors which are the standard deviations of these values .however , in this paper we do not perform any average over different values of , and we instead consider just one linear fit for a given range ] , following the prescription of other works ( ) , and ] and ] . [ cols="^,^,^",options="header " , ] we observe first that the results change considerably .secondly , within the significance level the shuffled time - series can be considered uniscaling , as it happens for the tbm , so there is not an increase in multifractality .thirdly the normalized time - series keeps its concavity , thus it is not affected anymore by the negative bias mentioned previously .this therefore demonstrates that a statistically significant multiscaling behaviour is present in financial time - series .our analyses provide clear evidence that the estimation of the scaling exponents are affected by the aggregation horizon .we chose two regions : 1 ) ] .we observed that the analyses on the region ] we observed that the spurious multiscaling found on tbm processes and on the indu time - series is lower with respect to the measurements performed in the ] are reliable and reveal that some degree of multifractality is present in real financial log - return time series and it has to be ascribed to the effect of the causal structure of the process . at this pointa question to address is why there is a so big difference in the two regions of . for what concerns the effect of the tails we explain this difference via the speed of convergence of the central limit theorem ( clt ) . 
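a compact sketch of the ghe estimator described above, which also makes the dependence on the fitting window explicit: the q-th order moments of the increments are regressed against the horizon on a chosen window, once at small horizons and once at larger ones. the synthetic series (a random walk with student-t innovations), the window boundaries and the q grid are illustrative choices only, not the values used for the figures.

```python
import numpy as np

def zeta(x, qs, tau_min, tau_max):
    """Scaling exponents zeta(q) = q*H(q) from the direct scaling of the
    q-th order moments of the increments of the series x, fitted on the
    window [tau_min, tau_max]."""
    taus = np.arange(tau_min, tau_max + 1)
    out = []
    for q in qs:
        moments = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
        out.append(np.polyfit(np.log(taus), np.log(moments), 1)[0])
    return np.array(out)

rng = np.random.default_rng(0)
qs = np.linspace(0.5, 4.0, 8)

# random walk with student-t innovations (finite variance, power law tails)
tbm = np.cumsum(rng.standard_t(3, size=500_000))

print(np.round(qs / 2, 3))                 # theoretical uniscaling line q/2
print(np.round(zeta(tbm, qs, 1, 19), 3))   # small-horizon fitting window
print(np.round(zeta(tbm, qs, 35, 100), 3)) # larger-horizon fitting window
```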
in particular , for processes exhibiting increments with power law tails , with tails index bigger than two , it is well - known that under aggregation they behave , in the asymptotic limit , as a bm .the speed of convergence depends on how heavy the tails are but if the aggregation is finite , whatever the tails index is , there will always be a region in the final part of the tails of the probability density which will have a power law behaviour .the effect of increasing the aggregation horizon is to push this region further in the tail .this explains why , increasing the aggregation horizon , the spurious power law tails concavity tends to disappear , reconciling with the theoretical expectations .counter - intuitively processes with increments exhibiting tails with exponents less then two are less affected by this problem , since their convergence under aggregation is ruled by a generalized central limit theorem and they keep their power law nature in the tails of the distribution so the convergence is faster .concerning the autocorrelation negative bias , our interpretation is that it may be caused by the fact that the average of a strongly correlated variable does not necessarily converge to the expectation value . in this respectthe effect might be reduced in the region ] has not been chosen optimizing the performance of the multifractal estimator .however it proved to be sufficient to give us valuable insights and improved our estimation of the scaling parameters .let us make few other observations concerning the measurements . since the measures , as proposed here ( cfr .[ subsec_multifractality ] ) , depend on two parameters , and , we report that in general , rules the precision while the accuracy .so a bigger value of would reflect in measured values nearer to expected ones . on the other hand taking bigger values of up in including more oscillating values in the analysis , thus in a larger standard deviation .however for a process like the mrw , attention must be paid , since , if becomes bigger than the autocorrelation length , no multifractal behaviour holds anymore , since the increments of the process become independent .so the range of must be taken large enough to reduce as much as possible the power law tails effect , but not too much to exceed the time - span where the correlations are relevant .finally , we notice that it appears evident that at small ranges of the power law tails concavity has a bigger impact to the measures with respect to the convexity induced by the autocorrelation .in this paper we studied the multiscaling behaviour of financial time - series by studying synthetic and real datasets at different aggregation horizons .we started by analysing the mrw , finding that , for small aggregation horizons , the multiscaling behaviour after shuffling , appears to increase , in agreement with previous works on empirical datasets .however for larger aggregation horizons this effect disappears . 
since the shuffling procedure destroys the temporal structure of a time - series , but preserves its unconditional distribution , we focused our attention on the scaling properties of another process , the tbm which is a unifractal process .it turned out that for small aggregation horizons the presence of power law tails induces a concavity in the scaling exponents , indicating therefore a multiscaling behaviour which is however not predicted by the theory .we turned then our attention to the causal structure of a time - series .in this case we observed that , at small aggregation horizons , the presence of autocorrelation introduces a negative bias , _ i.e. _ a reduced concavity which ended up in a convexity of the scaling exponents , both for synthetic and real time - series .these numerical findings explain well the puzzling increase in multifractality found in previous works after shuffling : as long as both power law tails and autocorrelation are kept , the spurious multiscaling contribution of the tails is lessen by the presence of the autocorrelation , while after shuffling , only the tails effect is present .we pointed out that the aggregation of the returns is crucial . indeed for higher aggregation horizons all these issues disappear or at least strongly lessen . for what concerns the tails we interpret this effect as a consequence of the central limit theorem and its speed of convergence on time - series with power law tails but finite variance .in particular the range of tail exponents between two and five turned out to affect the most the measurements .this is due to the fact that under aggregation a residual of the power law tail is always present in the unconditional distribution and the nearer the exponent is to two , the stronger the effect .we finally note that , choosing higher values of aggregations can reduce this effect but this requires to have longer time - series .we plan in the future to study in more detail this issue trying to provide a recipe for the best choice of the region of which is capable to capture the multifractality of the empirical time - series .the authors wish to thank bloomberg for providing the data and n. musmeci for useful discussions .acknowledges support of the uk economic and social research council ( esrc ) in funding the systemic risk centre ( es / k002309/1 ) .tdm wishes to thank the cost action td1210 for partially supporting this work .mantegna , h. e. stanley , `` an introduction to econophyisics : correlations and complexity in finance '' , cambridge univ pr , 2000 ; m.m .dacorogna , r. genay , u. a. mller , r. b. olsen , o. v. pictet , `` an introduction to high frequency finance '' , academic pr , 2001 ; r. n. mantegna , h. e. stanley , `` scaling behaviour in the dynamics of an economic index '' , nature , 376 , 46 - 49 , 1995 ; t. di matteo , `` multi - scaling in finance '' , quantitative finance , vol . 7 , no .1 , 2136 , 2007 ; r. cont , `` empirical properties of asset returns : stylized facts and statistical issues '' , quantitative finance , 1:2 , 223 - 236 2001 ; s. ghashghaie , w. breymann , j. peinke , p. talkner , y. dodge , `` turbulent cascades in foreign exchange markets '' , nature , 381 , 767770 1996 ; l. calvet , a. fisher , `` multifractality in asset returns : theory and evidence '' , review of economics and statistics , 84(3 ) : 381406 2002 ; r. liu , t. di matteo , t. 
lux , advances in complex systems , `` multifractality and long - range dependence of asset returns : the scaling behaviour of the markov - switching multifractal model with lognormal volatility components '' , 11(5 ) , 669684 2008 ; m. bartolozzi , c. mellen , t. di matteo , t. aste , `` multi - scale correlations in different future markets '' , the european physical journal b , 58(2 ) , 207220 , 2007 ; l. kristoufek , `` fractal markets hypothesis and the global financial crisis : scaling , investment horizons and liquidity '' , advances in complex systems , 15(06 ) , 2012 ; z .- q .- q .jiang , w .- x .zhou , `` multifractality in stock indexes : fact or fiction ? '' , physica a : statistical mechanics and its applications , 387(14 ) , 36053614 , 2008 ; b.b .mandelbrot , a. fisher , l. calvet , `` a multifractal model of asset returns '' , cowles foundation discussion paper no . 1164 , yale university , 1997 ; l. calvet , a. fisher , `` forecasting multifractal volatility '' , journal of econometrics , 105(1 ) , 2758 , 2001; t. lux , `` the markov - switching multifractal model of asset returns '' , journal of business & economic statistics , 26(2 ) , 194210 , 2008 ; m. segnon , t. lux , `` multifractal models in finance : their origin , properties , and applications '' , kiel working paper no . 1860 , august 2013 ;r. liu , t. di matteo , t. lux , `` true and apparent scaling : the proximities of the markov - switching multifractal model to long - range dependence '' , physica a , 383(1 ) , 3542 , 2007 ; e. bacry , j. delour , j .- f .muzy , `` multifractal random walk '' , physical review e , 64(2 ) , 026103 , 2001 ; e. bacry , j .- f .muzy , `` log - infinitely divisible multifractal processes '' , communications in mathematical physics , 236 , 449475 , 2003 ; e. bacry , l. duvernet , j .- f muzy , `` continuous - time skewed multifractal processes as a model for financial returns '' , journal of applied probability , volume 49 , number 2 , 482 - 502 , 2012 ; j. w. kantelhardt , s. a. zschiegnera , e. koscielny - bundec , s. havlind , a. bundea , h. e. stanley , `` multifractal detrended fuctuation analysis of nonstationary time series '' , physica a , 316 87114 , 2002 ; w .- x zhou , `` the components of empirical multifractality in financial returns '' , europhysics letters , 88(2 ) , 28004 , 2009 ; j. barunik , t. aste , t. di matteo , r. liu , `` understanding the source of multifractality in financial markets '' , physica a , 391 , 4234 - 4251 , 2012 ; e. green , w. hanan , d. heffernan , `` the origins of multifractality in financial time series and the effect of extreme events '' , the european phyisical journal b , 78:129 , 2014 ; t. di matteo , t. aste , m.m .dacorogna , `` scaling behaviors in dfferently developed markets '' , physica a , 324 , 183188 , 2003 ; j. barunik , l. kristoufek , `` on hurst exponent estimation under heavy - tailed distributions '' , physica a , 389(18 ) , 3844 - 3855 , 2010 ; t. di matteo , t. aste , m. m. dacorogna , `` long - term memories of developed and emerging markets : using the scaling analysis to characterize their stage of development '' , journal of banking & finance , elsevier , volume 29(4 ) , 827 - 851 , 2005 ; r. morales , t. di matteo , r. gramatica , t.aste , `` dynamical hurst exponent as a tool to monitor unstable periods in financial time series '' , physica a , 391 , 3180 - 3189 , 2012 ; n.l .johnson , s. kotz , n. 
balakrishnan , `` continuous univariate distributions '' , volume 2 , 2nd edition , wiley , isbn 0 - 471 - 58494 - 0 , 1995 ; h. nakao , `` multi - scaling properties of truncated lvy flights '' , physics letters a , 266 , 282289 , 2000 ; a. v. chechkin , v. yu .gonchar , `` self and spurious multi - affinity of ordinary levy motion , and pseudo - gaussian relations '' , chaos , solitons & fractals , volume 11 , 14 , 2379 - 2390 , 2000 ; k. abadir , j. magnus , `` the central limit theorem for student s distribution '' , econometric theory , 20(6 ) , 12611264 , 2004 ; e. bacry , j. delour , j .- f .muzy , `` modelling financial time series using multifractal random walks '' , physica a , 299(1 ) , 2001 , 84 - 92 ; a. chakraborti , i. m. toke , m. patriarca , f. abergel , `` econophysics review : i. empirical facts '' , quantitative finance , 11:7 , 991 - 1012 , 2011 ; d. stoi , d. stoi , t. stoi , h.e .stanley , `` multifractal analysis of managed and independent float exchange rates '' , physica a , 428 , 1318 , 2015 ; r.morales , t.di matteo and t.aste , `` non - stationary multifractality in stock returns '' , physica a , 392 , 6470 - 6483 , 2013 ; a. clauset , c.r .shalizi , m.e.j .newman , `` power - law distributions in empirical data '' , siam review 51(4 ) , 661 - 703 , 2009 ; y. virkar , a. clauset , `` power - law distributions in binned empirical data '' , annals of applied statistics 8(1 ) , 89 - 119 , 2014 .
we discuss the origin of multiscaling in financial time-series and investigate how best to quantify it. our methodology consists in separating the different sources of measured multifractality by analysing the multi-/uni-scaling behaviour of synthetic time-series with known properties. we use the results from the synthetic time-series to interpret the measured multifractality of real log-return time-series. the main finding is that the aggregation horizon of the returns can introduce a strong bias in the measure of multifractality. this effect becomes especially important when the return distributions have power law tails with exponents in the range . we discuss the choice of aggregation horizon that mitigates this bias. multiscaling, multifractality, central limit theorem, power law tails, autocorrelation.
it is one of the most well - known and unintuitive features of quantum mechanics that entangled quantum systems can , in a way that disturbed einstein , instantaneously affect each other .specifically , the famous einstein , podolsky , and rosen ( epr ) paper of 1935 , which made the first prediction of this feature , used it to argue that quantum mechanics itself must be incomplete .the epr paper presents a thought experiment involving a maximally entangled state of two systems , for which measurement of the first ( alice s ) system forces the second ( bob s ) system into one of a set of basis states , with the basis depending on the choice of measurement made upon the first .that is , alice s choice of measurement determines which of bob s observables is predictable by her . but epr implicitly rule out instantaneous action - at - a - distance , assuming that no real change can take place in the second system in consequence of anything that may be done to the first system , " ( that is , bob s system is not disturbed , explaining why einstein was ) .hence they conclude that these different observables must have well - defined values regardless of alice s choice of measurement . butquantum mechanics forbids simultaneous values for non - commuting observables .thus , they say , `` the wave function does not provide a complete description of the physical reality . ''contrary to epr , schrdinger argued , in the same year , that quantum mechanics was not incomplete , but idealised .he used the term `` steering '' for the effect epr identified , namely that `` as a consequence of two different measurements performed upon the first system , the second system may be left in states with two different [ types of ] wavefunctions . '' but he thought this was unrealistic when describing systems that are spatially distant , because some sort of decoherence would prevent the entanglement from being established in such situations . in this way , he , too , thought that instantaneous action _ at a distance _ could be kept out of the most fundamental description of reality .the epr paper advocated the possibility of local hidden variables ( lhvs ) in quantum systems which would account for the illusory ( in their view ) nonlocality in the theory .however , it was proved by bell in 1964 that there exist predictions of quantum mechanics for which no possible lhv model could account . finally , in 1982 , examples of bell nonlocality were experimentally realised . even without a loophole - free test of bell nonlocality , it has become widely accepted that ( contrary to schrdinger s hope ) entanglement can exist over long distances , and that bell nonlocality is real . entanglement and bell nonlocalityhave been rigorously defined for decades ; however , it was not until relatively recently ( 2007 ) that the particular class of nonlocality described in the epr paper was actually formalised .the ability of an entangled quantum state to nonlocally affect another ( though not necessarily vice versa ; see also ] , where here is a singlet state in the two - qubit subspace , and is understood to act only on this subspace . by construction , alice can not steer bob , but bob can steer alice because alice ( now considered trusted ) can simply consider steering in her qubit subspace . ] ) has come to be known as _ epr - steering _ .the nonlocality described in the epr paper had been studied mainly in the context of their position momentum example ( see , e.g. , ) but the formal notion introduced in ref . 
has opened the door to a series of new experiments . following the first demonstration of this general notion of epr - steering in , three experiments have each closed the detection loophole in tests of epr - steering .one did so while also closing the locality loophole over 48 m ( thus definitively disproving schrdinger s suggested resolution of the epr paradox ) .another closed the detection loophole with only two different measurements ( as in the original epr scenario ) by employing state - of - the - art transition edge detectors .the remaining paper closed the detection loophole using commonplace photon detectors while also enduring the losses of transmitting the measured photons through an extra kilometre of fibre - optic cable .the accomplishments of this third paper are due to the highly loss - tolerant epr - steering criteria that it employed to rigorously close the detection loophole .reference describes the formulation of these criteria in more detail , also showing them to be more loss - tolerant than another class of epr - steering criteria ( which includes those used in refs . ) . in this paper , we reconsider those criteria and reveal that they are actually not optimally loss - tolerant epr - steering criteria . in doing so , we demonstrate a method for optimising similar tests of epr - steering , and show that the optimal measurement strategies for such an experiment are just as practicable , significantly more more loss - tolerant in some regimes , and are ( unlike those used in ref . ) applicable for an arbitrary number of different measurements by alice . in sec .ii of this paper , we briefly review the operational definition of epr - steering and the family of states we consider in this paper . in sec .iii we review linear epr - steering criteria , including postselection , then identify and close the inefficient detection loophole this potentially incurs .we then review , in sec .iv , the epr - steering criteria obtained when using platonic solid measurement strategies .we discuss the limitations of platonic solid strategies , including their inherent restrictions in measurement number ( i.e. , ) , and consider geodesic solid strategies ( introduced for in ref . ) , which circumvent this restriction . going from platonic solids to geodesic solids is a more radical step than it may first appear . because it is no longer the case that every vertex is equivalent to every other, a non - trivial constraint can be used to obtain stronger criteria ( than those in ref . ) : that , when post - selection by alice is allowed , the probability of a null result be independent of alice s measurement choice . moreover , there is no longer any symmetry - based justification for all vertices to be equally weighted ; for a geodesic solid comprising two dual platonic solids ( such as the of ref . , and here ) even tighter criteria will result from weighting the two sets differently .all this is introduced in sec .iv , and serves as a springboard to the completely general consideration in sec .v. there , we allow arbitrary arrangements of vertices , with arbitrary weighting of each vertex , and find still tighter criteria for ranging from to . 
for the states we consider ,these are the most loss - tolerant epr - steering criteria possible for any chosen number of measurements , .we conclude in sec .vi with a discussion of experimental practicalities and future work .therein , we address the benefits and difficulties presented by the most optimal measurement strategies for each , and consider whether optimality alone necessarily makes these the best possible choices for constructing experimental tests of epr - steering .the operational definition of epr - steering that we employ in this paper is such that one experimental party , bob , possesses a quantum state , and another party , alice , claims to possess a state that is entangled with bob s .bob asks alice to make one out of a pre - specified set of measurements on her state , and inform him of her results .using both alice s results and the results of his own measurements on his system , bob then calculates the value of some epr - steering parameter and is only convinced that alice is telling the truth if there is no local hidden state ( lhs ) model which could attain the same value .lhs models assume that bob s quantum state is preexisting , and can only depend on alice s results as much as can be explained by some local ( to alice ) hidden variable that may be correlated with bob s state .this is used to define epr - steering bounds by constructing a theoretical limit on some property of bob s system , based on the assumption that bob s quantum system can not be nonlocally affected by alice s measurements .thus , a violation of this limit demonstrates epr - steering .the epr - steering criteria that we will use are based upon measurements of qubit observables ( typcially photon polarisation , but we will also use the terminology of spin ) . moreover , we specialise to criteria suitable for two - photon entangled states that are werner states : where represents the spin singlet state : .the and superscripts respectively denote properties of alice s and bob s subsystems .the second term represents pairs of qubits that are uncorrelated , and the first term represents qubits that are maximally entangled .thus the purity parameter determines the degree of entanglement in the ensemble , with entanglement being present for .we will consider epr - steering criteria that are analogous to ( linear ) entanglement witnesses .that is , the expectation value of a correlation function between alice and bob s spin measurements , summed over the measurement settings . since in tests of epr - steeringwe can not trust alice s detectors or the results she states , this correlation function must be defined generally as a classical expectation value over alice s reported result , denoted by , as follows : , \label{eq : sn}\end{aligned}\ ] ] where each denotes a particular measurement setting on the bloch sphere , and denotes the total number of such settings .bob s qubit observable is , and is the result alice submits for her measurement .we can restrict alice s results to these values of equal magnitude because of the symmetry of the werner state .if alice , and her detectors , were trustworthy , then the result would correspond to a measurement of her qubit observable .then the correlation function between alice and bob s results can be written as where is the state of bob s system , conditioned upon being the result of alice s measurement . 
if alice and bob share an entangled state as in eq .( [ eq : werner ] ) , and , then the value of this function is easily shown to be .however , bob must consider that alice might not share an entangled state with him , and could be employing an lhs model , in which case would be calculated from where represents the local hidden variable(s ) inherent to bob s system , upon which alice bases her knowledge of bob s state . in this scenario, bob receives each state with probability , and alice submits results dependent upon both and .this expression relies on the assumption that there is an lhs model of bob s system , the existence of which means that there is a bound upon eq .( [ eq : lhs ] ) that is not present in a quantum mechanical system . in order to ensure that this is as rigorous a test as possible , in defining our epr - steering bound we will assume that alice controls anything that depends upon the hidden variable(s ) , ; namely , , , and .note that the only thing that does not have any dependence upon is bob s choice of measurement .the assumption of locality in this lhs model is manifested in alice s inability to influence or predict bob s measurement choice . to this end, we must assume that bob randomises the order in which he performs each of his measurements , and that alice does not have foreknowledge of , or access to , his random number generation ( this is referred to in other works as the free will assumption , which we will not be further addressing ) . under the above conditions , it is apparent that is bounded above by , which is always achievable by choosing a suitable sign for . a proof of this , and of which ensembles of states a cheating alice can use to attain this optimal value , are given in ref .but if the only concern is to maximise ( an assumption to which we will return in sec .[ sec : globaloptimal ] ) then this can clearly be achieved for a single state . even if there were more than one state that maximised , there is no reason ( at this stage ) for alice to use more than one .therefore , we can take for that state , and will now denote any choice that maximises . the values corresponding to this choice are obviously . however , to evaluate the bound on it is more convenient to keep , writing with the representation on the right being included to highlight that this entire value can be considered as the expectation value of an operator . to seek out the largest possible value of this expression ,we will use the fact that the largest possible expectation value of any operator is equal to the largest eigenvalue of that operator .therefore , the epr - steering bound we can derive for is ,\label{eq : kbound}\ ] ] where denotes the maximum eigenvalue of this operator , and the other maximisation is over the values of .it should be noted that the normalisation factor of in all of the above expressions , stemming from its introduction in eq .( [ eq : sn ] ) , is generally paired with the sum over measurements so that the values of ( and related quantities ) are limited to .this restricts the values of to the same range for any -value , allowing meaningful comparison between them .while it seems logical to weight each measurement result equally , by applying to each term or to the whole sum , we will re - evaluate this assumption in sec .iv.c . in keeping with our assumption of locality ,any null results that bob obtains for his measurements can not be predicted by , or used to any advantage by a cheating alice in an lhs model . 
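the bound of eq. ( [ eq : kbound ] ) is straightforward to evaluate numerically for any finite set of measurement directions: since the largest eigenvalue of a spin operator along a vector is simply the length of that vector, the maximisation over alice's sign choices reduces to maximising the length of a signed vector sum. a brute-force sketch, with the octahedron and cube axis sets used purely as examples:

```python
import numpy as np
from itertools import product

def lhs_bound(axes):
    """Largest value of the (equally weighted) correlation function
    attainable with a single local hidden state: max over sign choices
    of |sum_k s_k n_k| / n."""
    axes = np.asarray(axes, dtype=float)
    best = 0.0
    for signs in product([1.0, -1.0], repeat=len(axes)):
        best = max(best, np.linalg.norm(np.array(signs) @ axes))
    return best / len(axes)

octahedron = np.eye(3)                       # n = 3 orthogonal axes
cube = np.array([[1, 1, 1], [1, 1, -1],
                 [1, -1, 1], [1, -1, -1]]) / np.sqrt(3)   # n = 4 axes

print(lhs_bound(octahedron))   # 1/sqrt(3) ~ 0.577
print(lhs_bound(cube))         # also 1/sqrt(3)
```

allowing alice to decline some of the settings enlarges her options, which is what the discussion of null results below addresses.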
because we trust that bob s state , and his measurement thereof ,is governed by our quantum mechanical model of it , we can assume that bob s probability of missing any result is independent of the value that result would have taken ( had it not been null ) .therefore , we will assume that the probability distribution of the results bob did not obtain would have been the same as the probability distribution of bob s measured results .this is known as a fair sampling assumption ( fsa ) , and is generally valid for quantum systems as it is based upon the principles of quantum mechanics ( in the behaviour of detectors ) . however , since we can not assume that alice s results are generated through measurement of a quantum state , we can not apply any fsa to her results in any test of epr - steering ( which is , in part , a test of quantum mechanics itself ) . to simply postselect out any of alices null results would open an inefficient detection loophole in our test . even though the fsa can not be made for alice, this does not mean that it is not permissible to postselect on alice getting ( or claiming to get ) a non - null result .this postselection is permissible as long as the bound in the inequality eq .( [ eq : kbound ] ) is adjusted ( to a higher value , naturally ) , to take into account the extra flexibility offered to a dishonest alice if she is allowed to submit null results with a certain probability .since bob has no way of knowing whether this probability is due to genuine inefficiencies or not , we refer to ( such that ) , as alice s _ apparent efficiency_. alice s optimal cheating strategies , which gives us the new bounds for the post - selected correlation function , were derived in ref . , with more details in ref .the analysis in the remainder of the present paper builds on this , so we briefly review it here . if alice chooses to submit non - null results only for a predetermined set of measurement settings , with , her optimal is defined by the values of these settings .such a strategy can be referred to as a _ deterministic _ strategy , and the maximal values obtainable with such a strategy are calculated to be , \label{eq : platdet}\ ] ] where is the apparent efficiency associated with any such strategy , which is necessarily constrained to be .the sum in the above expression can be over either or settings , since the maximisation over is constrained such that a portion of the values will be nonzero .an experimental determination of would require many repetitions for each of the settings , and alice is not constrained to choose the same measurements to be null in every iteration , nor even to choose the same number of nulls in every iteration . if alice uses a combination of deterministic strategies a _ nondeterministic _ strategy she is also able to avoid constraining her apparent efficiency to be .if using a nondeterministic strategy , the maximal value attainable for any apparent efficiency is ,\label{eq : ndbound}\ ] ] where defines the weighting with which alice uses each deterministic strategy , each of which is defined by its apparent efficiency , .thus , the sum over indexes all optimal deterministic strategies alice could use ( there is no benefit for alice to ever use suboptimal deterministic strategies , so they are not considered ) .the weightings are normalised by , and constrained such that .it can be seen from the form of eq .( [ eq : ndbound ] ) that . 
[ platbound ] the above construction gives the bound a dishonest alice can achieve for the non - postselected correlation function . since she declares non - null results with probability ( which is a quantity bob directly calculates from the statistics of her declared results ) , the bound on the post - selected function will be epr - steering criteria of the above form have been studied before , both with the fsa for alice and without ( i.e. , closing the detection loophole ) . in all of these works , measurement orientations that are regularly spaced about the bloch sphere were used .that is , the spacing between vertices is the same for any pair of nearest neighbours .the only such arrangements that exist are those with 2 , 3 , 4 , 6 , or 10 different measurement axes , which correspond to the vertices of the three - dimensional platonic solids ( with the exception of for which the tetrahedron , whose vertices do not come in antipodal pairs , was replaced by the square ) .regularly spaced measurements are as far apart as it is possible to be from their nearest neighbours on the bloch sphere , and in this sense are as different as possible .this minimises the ability of alice to choose a state that leads to high values of for many .intuitively , this seems like a good choice for making it as hard as possible for a cheating alice to obtain high values , thereby making the rigorous epr - steering bounds as low as possible , and thus making it as easy as possible for an honest alice to violate the bound .it should be noted that this reasoning would not necessarily apply for all kinds of photon polarisation states , as it relies on the symmetry of werner states , which are invariant under identical unitary transformations performed on both sides .figure [ fig : platonic ] displays the epr - steering bounds calculated from eq .( [ knep ] ) with measurement orientations defined by platonic solid vertices . looking closely at this graph, one can observe that the platonic solid measurements for are clearly not optimal in general , since they give a bound above that for for .an optimal set of four measurements would never require a higher degree of correlation to demonstrate epr - steering than any set of three measurements .we will return to this issue in sec .[ sec : globaloptimal ] ., using platonic solid measurements ( and geodesic measurements , for ) . ]recall that for a werner state the degree of post - selected correlation is , which can approach unity .thus we see that for close to one , the bounds are quite loss - tolerant , especially as increases . 
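the construction above can be assembled numerically for a given measurement set: for every number of answered settings, the best deterministic value of the non-postselected correlation is found by brute force over which settings are answered and with which signs; the bound at a given apparent efficiency is then the best mixture of two deterministic strategies, and dividing by the apparent efficiency gives the corresponding bound on the postselected correlation. the sketch below implements this for the octahedron axes; the implementation details are ours, and the brute-force search is only practical for small numbers of settings.

```python
import numpy as np
from itertools import combinations, product

def det_bounds(axes):
    """Best deterministic value S(m) of the non-postselected correlation
    when Alice answers only m of the n settings, for m = 0..n."""
    axes = np.asarray(axes, dtype=float)
    n = len(axes)
    S = [0.0]
    for m in range(1, n + 1):
        best = 0.0
        for subset in combinations(range(n), m):
            sub = axes[list(subset)]
            for signs in product([1.0, -1.0], repeat=m):
                best = max(best, np.linalg.norm(np.array(signs) @ sub))
        S.append(best / n)
    return np.array(S)

def postselected_bound(axes, eps):
    """Bound on the postselected correlation at apparent efficiency eps:
    best mixture of two deterministic strategies, divided by eps."""
    n = len(axes)
    S = det_bounds(axes)
    effs = np.arange(n + 1) / n
    best = 0.0
    for i in range(n + 1):
        for j in range(i, n + 1):
            lo, hi = effs[i], effs[j]
            if lo <= eps <= hi:
                w = 0.0 if hi == lo else (eps - lo) / (hi - lo)
                best = max(best, (1 - w) * S[i] + w * S[j])
    return best / eps

octahedron = np.eye(3)
for eps in [1.0, 5/6, 2/3, 1/2]:
    print(eps, round(postselected_bound(octahedron, eps), 4))
```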
indeed ,if , epr - steering is demonstrable so long as .moreover , in almost all places , use of more measurements results in epr - steering bounds that are more loss - tolerant .however , regularly spaced measurementsets do not exist for any above 10 , so we must abandon our scheme of using regularly spaced measurements if we wish to use .but on the other hand , our restriction to regularly spaced measurements was based upon the intuition that they were the best choice for their respective numbers of measurements , whereas this is demonstrably not true everywhere , as discussed above .therefore , there may be little reason to continue imposing this condition , and little reason to thusly limit our measurement number .the reader may notice that fig .[ fig : platonic ] includes not only the platonic solid bounds mentioned above , but also includes a bound for measurements , which can not correspond to any platonic solid .this was derived , and employed experimentally , in ref .the measurement orientations used to obtain this bound correspond to the vertices of a shape that incorporates the vertices of the icosahedron ( ) and the dodecahedron ( ) , face - centred on one another ( as these two shapes are a dual pair ) .the resulting arrangement of vertices creates a shape that is a geodesic solid each face is an isoceles triangle , so its neighbouring vertices are not regularly spaced , but are quite close to it .this characteristic is true of any geodesic solid , so given the obvious benefits of using this arrangement , it would seem that geodesic solids are one possible solution for obtaining high- measurement sets with robust bounds .construction of a geodesic solid does not require two platonic solids to be superimposed , but only requires vertices to be added to the face centres of a platonic solid , or another geodesic solid .thus , they can not be constructed with arbitrary numbers of vertices , but there does not exist any upper bound upon the number of vertices that can be used to construct one . 
having seen that the platonic solids are not necessarily optimal anyway , the fact that the vertices of a geodesic solid are not regularly spaced is not really much of a drawback .indeed , the viability of geodesic solids may even raise the point of whether a little asymmetry may be more optimal than regularly spaced measurements even for small .this will be fully explored in sec .[ sec : globaloptimal ] .meanwhile , we will use the geodesic solids as a first investigation into the way asymmetry can affect the derivation of epr - steering bounds , enabling more loss - tolerant tests than any previously calculated .when a cheating alice suspects that bob is keeping track of her null result distribution , her foremost consideration in optimising will be to ensure that this distribution reflects the same profile as that of an honest alice .this means that alice should ensure the probability of her reporting a null result on any given measurement is equal to the probability of her reporting a null result on any other measurement .she must do this , even if submitting nulls more often for some measurements would allow her to obtain a higher value .in other words , if bob does verify that the null rate is independent of alice s supposed setting , then he will be convinced of the reality of epr - steering for a lower value than without this verification , thereby making the test more loss - tolerant .the uniform spacing of the platonic solids vertices grants them large symmetry groups ; the group of all transformations which leave the polyhedron invariant .in particular , all vertices are equivalent under the action of each solid s symmetry group .therefore any cheating strategy alice adopts performs precisely as well if it is symmetrised by application of the symmetry group , and this ensures that the null - rate can be made independent of alice s supposed setting .for example , when for any of the platonic measurement sets , alice s optimal choice of is any state with its spin axis centred on an edge of the platonic solid ( i.e. , equidistant between any pair of adjacent vertices ) .such a strategy is equally optimal regardless of which adjacent vertex pair is chosen because all edges are the same length .but for any geodesic solid , not all edges are the same length , so ( considering again ) not all edge - centres correspond to optimal strategies .thus , it may not necessarily be possible to use a nondeterministic strategy that both attains the maximal value _ and _ keeps alice s null probabilities equal. such limitations would be expected to become even more important for the more complicated cheating strategies [ which would be strategies near : the middle of each curve ] . with and without consideration given to the symmetry condition .] for illustration , let us consider the geodesic solid that is constructed by combining the and platonic solid vertices ( because they are dual to one another ) , to obtain . 
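the seven measurement axes of this octahedron-plus-cube set are easy to write down explicitly; a short sketch follows, in which the overall orientation is a free choice, with the octahedron axes taken along the coordinate axes and the cube axes along the body diagonals (the dual-aligned arrangement).

```python
import numpy as np

# octahedron: 3 measurement axes along the coordinate directions
octahedron = np.eye(3)

# cube (dual to the octahedron): 4 axes along the body diagonals,
# one representative per antipodal pair of vertices
cube = np.array([[1, 1, 1], [1, 1, -1],
                 [1, -1, 1], [1, -1, -1]]) / np.sqrt(3)

geodesic7 = np.vstack([octahedron, cube])
print(geodesic7)

# the seven axes are not all equally spaced: the angle between an
# octahedron axis and a neighbouring cube axis differs from the angle
# between two neighbouring cube axes
print(np.degrees(np.arccos(abs(octahedron[0] @ cube[0]))))   # ~54.7
print(np.degrees(np.arccos(abs(cube[0] @ cube[1]))))         # ~70.5
```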
to simply maximise the numerical value of , without constraining her null probabilities for each measurement to be equal , alice can obtain the `` asymmetric '' bound in fig .[ fig : n7 ] .if a cheating alice takes care to obey this symmetry condition , then the maximum she can attain is the `` symmetric '' bound in fig .[ fig : n7 ] .the difference is negligible for most efficiencies , and is most significant near .a clearer plot of the numerical difference between these two bounds is shown later .figure [ fig : weighting7 ] shows how a cheating alice must depart from her reasonably simple asymmetric strategy in order to attain the maximum bound under the symmetry condition .the partitions in this figure show the optimal mixture of deterministic strategies by alice , for each possible -value .the height of each partition represents the weighting with which alice must send bob each of the ensembles displayed on the shape within that section , in order to attain the maximal value of . for example , alice s optimal symmetric strategy for requires alice to choose bob s states such that : come from the ensemble shown on the solid labelled ( 0,1 ) , from the ( 1,1 ) solid , and from the ( 1,2 ) solid .the states in each ensemble must also be submitted equally frequently , e.g. , in this strategy , the eight states on the ( 0,1 ) solid must each be submitted of the time , in total .the bracketed numbers that label each solid in fig .[ fig : weighting7 ] respectively represent the number of non - null responses , for the associated deterministic strategy , to bob s and measurements ( that make up the set ) . for a deterministic strategy , identified with the pair , we calculate the deterministic bound quite similarly to before , as , \label{eq : dn34}\ ] ] where corresponds to bob s measurements for the ( the first sum ) , and for the measurements for ( the second sum ) . the index is over all possible combinations of and thus the maximisation considers the optimal deterministic ensembles for every such combination [ there will be of these ] .an optimal nondeterministic strategy is composed of these as , \label{eq : kngeo}\ ] ] where is the weighting of each deterministic strategy , , and is constrained such that , and such that the apparent efficiency of the strategy is .although constructed slightly differently , these are the same relations as given in sec .[ platbound ] . in order for to give the optimal _ symmetric_ nondeterministic bound , we must also constrain alice s null probability to be independent of bob s measurement orientation .this can be done by constraining such that the mixing of strategies must be in proportions where , over the entire nondeterministic strategy , the null probability for is equal to that for .therefore , must also satisfy for both and . without this constraint , the optimal cheating strategies for alice [ those shown in fig .[ fig : weighting7](a ) ] would lead to very asymmetric reporting of null results .for example , at there is a single deterministic strategy ; the strategy , with an apparent efficiency of .this strategy requires and , which means alice would never report a null result for one of bob s measurements drawn from the cube ( but would report a null result of the time for one drawn from the octahedron ( ) .we have seen that for , a cheating alice is able to attain a symmetric bound almost always as high as her asymmetric bound , but only if she employs more elaborate mixings of her deterministic strategies . 
indeed , at almost every -value , the optimal symmetric mixings include more deterministic strategies than just the strategies used to attain the optimal asymmetric bounds at that -value . clearly , only when using two ( or more ) geometrically inequivalent subsets of measurement direction ( as in geodesic solids ) could any cheating strategy attain a higher with an asymmetric null distribution than is possible with a symmetric null distribution .thus , only when using such inequivalent measurement subsets can a symmetry condition be used to improve our epr - steering bounds ( as we observed for ) . from this observation, one may come to suspect a further advantage that may be gained in this situation , as follows .say an optimal asymmetric cheating strategy involves alice reporting more null results for one of the measurement sets ( e.g. , the set ) .this suggests that a cheating alice would prefer not to have to report outcomes for this set at all .therefore , if alice were not only forced to report results for these measurements equally often , but actually _ more _ often than other measurements , this would , intuitively , make it harder for a cheating alice to achieve a high correlation , averaged over all reported results , especially when we impose the restriction that the cheating strategy be symmetric .thus , using different weights for different measurement sets in the expression for the epr - steering correlation function could conceivably lower our epr - steering bounds even further . like the symmetry condition , such an advantage would clearly only be available to bob if the set of measurements he employs are not regularly spaced . to make use of this , we should recall that each measurement was equally weighted in all of our previous calculations . indeed , for any of the platonic solid measurements , unequal weightings could predictably lead to higher bounds ( attainable by a dishonest alice by aligning bob s lhs closer to the more highly weighted measurements ) , but offer no prospect of lower bounds . the only goal for any choice of weighting ( or , indeed , any choice of measurement set ) is to limit the values of that can possibly be obtained with any cheating strategies . for an honest alice, will be solely dependent upon her state s entanglement parameter , and her efficiency .thus , the only way in which measurement weightings can affect an honest alice s capabilities is if a change in weightings changes our bounds .this is to say ; if unequal weightings can lower our epr - steering bounds , we can be certain that this is the only consequence they will effect .to investigate how the epr - steering bound is affected when our measurement weightings are not necessarily equal , we will designate the measurement weighting for the octahedral ( ) measurements as , and for the cubic ( ) measurements as .our previous expressions for used equal weightings for all measurements , so removing this restriction from eq .( [ eq : dn34 ] ) to include a dependence upon and , we obtain , \label{eq : dn34p}\ ] ] with . note( from this expression ) that does not define equal measurement weightings because there are three octahedron ( ) measurements and four cube ( ) measurements in our set of seven .therefore , each measurement is chosen equally often with the _ balanced _weightings , .when bob chooses unbalanced weightings ( which we will refer to as in the general case ) , alice s optimal deterministic strategies will likely change . 
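because the normalisation of the weights in eq. ([eq:dn34p]) leaves a single free parameter, finding bob's best weighting at a given apparent efficiency amounts to a one-dimensional scan. the sketch below outlines such a scan, reusing the `symmetric_bound` helper from the previous sketch and assuming a hypothetical routine `deterministic_values(p3, p4)` that retabulates alice's optimal deterministic strategies for the given octahedron and cube weights; it is meant as an outline of the search, not as the implementation behind fig. [p3p4].

```python
import numpy as np

def optimal_weighting(eps, deterministic_values, grid=np.linspace(0.0, 1.0 / 3.0, 101)):
    """scan the octahedron weight p3 (with the normalisation 3*p3 + 4*p4 = 1)
    and return the weighting that minimises the symmetric bound at apparent
    efficiency eps.  `deterministic_values(p3, p4)` is a hypothetical helper
    returning (value, eff, null_o, null_c) arrays for alice's optimal
    deterministic strategies under the weighted correlation function."""
    best_weights, best_bound = None, np.inf
    for p3 in grid:
        p4 = (1.0 - 3.0 * p3) / 4.0          # normalisation of the seven weights
        value, eff, null_o, null_c = deterministic_values(p3, p4)
        bound = symmetric_bound(eps, value, eff, null_o, null_c)  # lp from the previous sketch
        if bound is not None and bound < best_bound:
            best_weights, best_bound = (p3, p4), bound
    return best_weights, best_bound
```

note that the deterministic strategies do have to be retabulated at every candidate weighting, which is why `deterministic_values` is called inside the loop.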
however , even with eq .( [ eq : dn34 ] ) replaced by eq .( [ eq : dn34p ] ) , alice s optimal nondeterministic strategies are still described by eq .( [ eq : kngeo ] ) , and we will still constrain alice to satisfy the symmetry condition , eq .( [ eq : sym7 ] ) . upon calculating the values of these bounds as a function of , we find that bob can indeed alter his values to lower the epr - steering bound for almost all -values .figure [ p3p4 ] plots the values of and that yield the lowest possible epr - steering bounds for our geodesic measurements . from this figure , it is clear that epr - steering can be more easily demonstrated by using unbalanced measurement weightings . ) and cubic measurements ( ) for the variable- bounds .the dashed line indicates balanced weighting .the line stops at because epr - steering is impossible below that point . ]geodesic bounds to : ( a ) the symmetric bounds , ( b ) the variable probability bounds , and , ( c ) the optimal bounds . ]moreover , the optimal way to unbalance the correlation function is in line with the intuitive argument we used to motivate this unbalancing at the beginning of this section : to more heavily weight the measurements which give lower results in alice s cheating strategies .appendix a gives more detail as to how this is shown by the behaviour of fig .[ p3p4 ] . however , at most -values , the magnitude of the improvement we obtain in by using the optimal weightings shown in fig .[ p3p4 ] is on the same scale as the difference between the two bounds in fig .[ fig : n7 ] ; so it would not be very useful to plot the postselected values for these bounds .instead , we have shown , in fig .[ comparison7 ] , the difference between the optimally weighted bounds and the original asymmetric bounds for ( as function b " ) .this figure also includes the difference between the asymmetric bounds and the symmetric bounds ( function a " ) , so the spacing between these two functions is the degree of improvement that the optimally weighted bounds offer over the symmetric bounds . on this figure , which is approximately one - fourteenth the vertical scale of fig .[ fig : n7 ] , we can observe that the optimally weighted bounds offer improvement at almost every -value , but given the scale upon which this change is visible , it can be said that there is not a significant improvement anywhere except near . the function c " in this graph shows the improvement gained from further types of optimisation that we discuss in the next section .our choices of measurement sets thus far have all been built upon the idea that regularly spaced measurement orientations should be of the most benefit for a rigorous test of epr - steering . however , in fig . [fig : platonic ] we saw that regularly spaced measurements for are definitely not optimal , and in sec .iv we have observed several distinct advantages that only exist for measurements that are not regularly spaced ( because they combine two platonic solids ) . 
including these advantages for the geodesic solid gives bounds better than the platonic bounds for around .however , they are actually worse than the platonic bounds for \cup [ 0.52,0.82]$ ] , meaning that even this scheme can not be optimal for .these observations motivate considering the even more general case , where we do not have two ( or more ) sets of measurements , but rather where we treat each measurement setting independently .that is , we fix only the number of settings , and , for each , optimise the directions defining the measurements , and the weightings defining the correlation function .to investigate this , we must return to our definition of , redefining it as generally as possible .our use of already allows arbitrary measurement directions , so we need only define a weight for each -term , which we will denote , normalised according to . thus , the ( non - postselected ) form of that we consider is {r}. \label{eq : snformal}\ ] ] in this scenario ( just as in sec .[ sec : ivc ] ) , it is not actually necessary for bob to experimentally choose measurement setting with probability in order to calculate eq .( [ eq : snformal ] ) ; he can choose different settings with arbitrary frequency and merely weight each term appropriately in his calculation of . to obtain the strongest bounds for variable measurement sets, it will clearly be necessary to employ our symmetry condition . in a form which is independent of measurement orientations ( or relationships thereof ), the condition a cheating alice must meet for her null probabilities to be independent of measurement orientation is where for a given deterministic strategy , is the result she reports when bob measured with setting , and is the probability with which she chooses each strategy .note that is the efficiency for measurement under strategy .alice s optimal deterministic strategies are those which attain the maximum in the expression , .\label{eq : dngen}\ ] ] this looks markedly more similar to eq .( [ eq : platdet ] ) than it does to eq .( [ eq : dn34 ] ) or eq .( [ eq : dn34p ] ) , but its only deviation from any of these equations is that , to define it with generality , we must take the index to denote the optimal deterministic strategies for each possible permutation of null / non - null values for _ all _ measurements .that is , for measurements , where we now label , we must consider the optimal deterministic strategies for all possible values of the list of .thus , to employ our generalised symmetry constraint , the maximal nondeterministic bound on can not be defined by eq .( [ eq : ndbound ] ) , which is not compatible with eq . ( [ eq : symcond ] ) , but must be defined as , \label{eq : kngen}\ ] ] where , most generally , indexes the set of all possible deterministic strategies , .this is because if alice s numerically optimal nondeterministic strategies can not be arranged to satisfy the symmetry condition , she will need to use some suboptimal deterministic strategies in order to satisfy this condition ( and to maintain a reasonably high value for ) . while bob s choice and implementation of are of no consequence to an honest alice ( except in their capacity to lower the epr - steering bound ) , it merits brief observation that a cheating alice can not attain the bounds on without knowing what will be , since the optimal deterministic strategies defined by eq .( [ eq : dngen ] ) involve .( the same is true of the in sec .[ sec : ivc ] . 
) but , as described above , bob s only priority in choosing is to make the epr - steering bound as low as possible .so , given some measurement set and set of weights , we can calculate from eq .( [ eq : kngen ] ) .thus , in terms of the post - selected , as we have been using , the epr - steering bound is .calculating which measurements and weightings minimise requires searching simultaneously over all and variables .thus , we can only define the optimal value of as .\label{eq : opt}\ ] ] we can minimise the dimensionality of this problem by holding static the direction of the first and the plane of the second , and defining one from the other of them ( using their completeness relation ) , but this still leaves a search space of scalar variables .moreover , such an optimisation is required for every different value . performing such optimisations numericallydoes not require unreasonable amounts of computational power for moderate .takes about an hour per data point in matlab on a standard personal computer .however , every time increases by 1 , the variable space requires three more dimensions .even with efficient optimisation algorithms , our solving time still increases exponentially with , almost doubling with every increase in , being approximately proportional to . ] the sets and that achieve the minimum bound , , define the optimal steering experiment using werner states , measurement settings , and an apparent efficiency of .we observed earlier in fig .[ fig : platonic ] that for , the platonic solid epr - steering bound for ( cube ) was not as loss - tolerant as the ( octahedron ) bound , which would not be possible were it an optimal set of four measurements .this makes the obvious place to start for our optimisation .this optimisation was performed for at 18 different values , with spacing between each value .the epr - steering bounds yielded by each optimised measurement strategy are shown in fig .[ optbounds4 ] . for comparison, this figure also displays the platonic solid bounds for and .one might expect us to show also the optimised bounds , but it turns out that the octahedral measurement strategy for is already an optimal measurement strategy for every .( at least this is what we found after performing the optimisation for over a large range of values . )it was concluded that the same is true of the square strategy for . bounds with and platonic bounds . ] in fig .[ optbounds4 ] , the points on the platonic solid curves are optimal deterministic strategies , and the lines are the nondeterministic strategies corresponding to the optimal bounds for these measurement sets , as usual .but on the optimised measurement curve , the only bounds which are definitely optimal are the data points , as these are the only values for which optimisations have been performed .the curve connecting these points is calculated from nondeterministic mixings of these optimised bounds .however , analysis of these data points indicates that the optimal values of and vary quite slowly relative to , so this curve almost certainly closely approximates the intermediate optimal bounds .as we can see , the optimal bounds for are lower than the bounds in all places , which more than fulfils our motivating requirement that optimal bounds should have . 
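to make the structure of this search concrete, the sketch below parametrises the n measurement directions by polar and azimuthal angles and the weights by a softmax, and hands each candidate set to a hypothetical routine `steering_bound(directions, weights, eps)` that carries out the inner maximisation of eqs. ([eq:dngen]) and ([eq:kngen]) over alice's symmetry-constrained strategies. a nelder-mead search with random restarts then looks for the lowest bound. fixing the direction of the first measurement and the plane of the second, as noted above, would remove redundant parameters; this is omitted here for brevity, and the sketch is an illustrative skeleton rather than the code used to produce fig. [optbounds4].

```python
import numpy as np
from scipy.optimize import minimize

def unpack(x, n):
    """map a flat parameter vector to n unit vectors (polar/azimuthal angles)
    and n positive weights that sum to one (softmax)."""
    th, ph, w = x[:n], x[n:2 * n], x[2 * n:]
    dirs = np.stack([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=1)
    mu = np.exp(w - w.max())
    return dirs, mu / mu.sum()

def optimise_settings(n, eps, steering_bound, restarts=20, seed=0):
    """search over measurement directions and weights for the lowest
    epr-steering bound at apparent efficiency eps.  `steering_bound(dirs, mu, eps)`
    is a hypothetical callable performing the inner maximisation over alice's
    symmetry-constrained cheating strategies; it dominates the running time."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(restarts):
        x0 = np.concatenate([rng.uniform(0.0, np.pi, n),
                             rng.uniform(0.0, 2.0 * np.pi, n),
                             np.zeros(n)])
        res = minimize(lambda x: steering_bound(*unpack(x, n), eps), x0,
                       method="Nelder-Mead",
                       options={"maxiter": 2000, "xatol": 1e-3, "fatol": 1e-4})
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return unpack(best_x, n), best_val
```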
indeed, the optimised bounds are also visibly lower than the platonic solid bounds for , but converge with the platonic bounds as .performing the minimisation in eq .( [ eq : opt ] ) for , with a large number of -values , reveals that the optimal measurement strategy for is still to use equally weighted cubic vertices for , but as increases , the optimal measurement strategy deviates from the cube ( as a seemingly continuous function of ) , approaching the spatial configuration shown in fig .[ n4ep5solid ] , which represents the optimal measurement strategy for .the optimal values of at this point are such that the two measurements in the same plane the ones that define the square visible in fig .[ n4ep5solid](b)have weightings of , and the other two measurements have weightings of . when , from two different angles .the two vertices at the top of ( a ) are the same two vertices in the centre of ( b ) . ] as increases above , this optimal measurement strategy undergoes another continuous transition , and at , the optimal arrangement becomes that shown in fig .[ optsolids](a ) : three measurements almost ( but not quite ) equally spaced in the same plane the optimal lengths of their edges seem to be around 1.03 , 1.00 , and 0.97 , and this performs better than exactly equally spaced measurements and a fourth perpendicular to them .the weightings associated with these measurements are for the three planar measurements , and for the extraplanar measurement . ) for ( a ) : , ( b ) : , ( c ) : , and ( d ) : .( b ) and ( c ) can be thought of as top - down " compared to the perspective of ( a ) . ] although we found the platonic solid bound for to be an optimal bound , the clear improvement of the optimal bounds over their platonic solid counterparts strongly suggests that there may be room for improvement in the other platonic solid measurement strategies . upon calculating a series of optimal strategies for , this suggestion becomes an insistence , since we find that the optimal bounds for are again better than the platonic bounds for in a range near ( we plot the curve later , in fig .[ allopt ] ) . calculating optimal measurement strategies for gives bounds that are , as expected , equal to the platonic bounds in some places , but slightly better in most . in fig .[ comparison ] , we have plotted the quantitative improvement that the optimised bounds offer over the platonic bounds for and ( more visibly displayed than the form of fig .[ optbounds4 ] allows ) . and .] the maxima and minima in fig .[ comparison ] are indicative of the advantages that optimised measurement strategies offer over platonic measurements , so we explain in appendix b what causes them to occur . at each point, it seems that the most beneficial measurement sets should generally be reasonably close to being regularly spaced , but not quite .the most beneficial sets merely augment these properties , with most being close to equal , but slightly higher for measurements that are the most outlying . based on an exploration of optimised strategies for ( though less comprehensively for and ), similar behaviours seem to be generally applicable to the optimal strategies for any .indeed , the optimal measurement arrangements for and 7 have obvious traits in common with those for . if we define the vertices of a solid from our optimised measurement orientations , we obtain solids for and that have almost the same arrangement of three equatorial vertex pairs that elicits . 
for and ,the only substantial difference from the case is that the single vertex at the top of that figure is replaced by a pair of vertices for , and a ( scalene , but nearly equilateral ) triangle of vertices for .this property is made as visible as possible in figs .[ optsolids](b ) and [ optsolids](c ) , with their three planar vertex pairs being the six outermost vertices visible on both of those images .the optimal solid for , on the other hand , breaks with this pattern , but still shows a noticeable similarity to the shape . shown in fig .[ optsolids](d ) , this solid has the same top - down profile as the solid , centred on a top - bottom " vertex pair . unlike ,the remaining vertices are not arranged in a single plane , but are arranged in two parallel planes with three vertex pairs defining each one which is the source of the similarity between our and shapes .the optimal solid shown in fig .[ n8solid ] does not bear an immediate resemblance to any of the other optimal solids in figs .[ n4ep5solid ] or [ optsolids ] , but can easily be seen to approximate two parallel planes of vertices with and another vertex orthogonal to them .the solids shown in figs .[ optsolids ] and [ n8solid ] were all generated from optimisations , and do change slightly with , but retain the same general arrangements at all points . when , from two different angles .the vertex at the top of ( a ) is the same vertex as in the centre of ( b ) . ] in addition to this , the optimal epr - steering bounds for and 8 all seem to adopt the same general behaviour that we have observed in our analysis of the above bounds . if we return to fig .[ comparison7 ] , we can see that the improvement of the optimised bounds over the other examples does follow a similar pattern to that observed in fig .[ comparison ] for and 6 .around and , our optimised bounds offer little improvement upon their more regularly spaced counterparts , for the same reasons described above. however , fig .[ comparison7 ] shows that for ( at least ) , the improvements of the optimised bounds at are largely due to the advantage of unequal measurement weightings . returning to fig .[ comparison ] , a final trend to discuss is that the improvement in bounds , at all points , exceeds the improvement in bounds .seeing as the platonic bounds were the only ones to be outperformed by another platonic solid at any point , this is not surprising .however , perhaps a better framing of the reasons for this can be seen in the tendency of higher -values to yield bounds ever closer to the infinite measurement limit the lowest possible values that epr - steering bounds can take , regardless of measurement number analytically calculated in ref .this limit can be expressed as a diagonal line on our graphs , and is shown in fig .[ allopt ] .as increases , the platonic bounds ( in fig . [ fig : platonic ] ) approach this diagonal , but with every step towards it being smaller than the last ( with respect to their increases in ) .we would expect that optimised epr - steering bounds should also approach this limit in a similarly asymptotic manner , albeit more swiftly than sub - optimal bounds .therefore , it should be reasonable to expect that the closer a platonic bound is to the line , the smaller the advantage conferred by optimising it , just as with the advantage conferred by increasing measurement number . and 8 , with the analytical infinite measurement limit included .the and bounds are equal to the platonic bounds . 
]as expected , we find that the optimised epr - steering bounds do approach the bound more quickly ( with respect to ) than the platonic bounds do . the optimised bounds for and 8 are shown in fig .[ allopt ] , and at almost every -value , the optimised bounds seen here are actually closer to the diagonal than the platonic bounds are ( especially around , where the bounds are inferior to every optimised bound with ) .indeed , the proximity of the bounds in fig . [ allopt ] to the diagonal limit shows that with , these measurement strategies are considerably loss - tolerant , and have very little room for improvement .however , any optimised strategy with is guaranteed to be at least as loss - tolerant , and at least as close to the diagonal as the best bounds in fig .[ allopt ] ( and will necessarily be incrementally closer for _ at least _ some range near ) . in fig .[ allopt ] , this trend is easily observed , but here we can also see the relevance of regularly spaced measurements being close to optimal around and : around these places , the platonic bounds ( for and 10 , at least ) were already reasonably close to their graph s diagonal .thus , it stands to reason that these would be values where the possible advantages of any other measurement strategies would generally be most limited .this also offers insight as to why the greatest advantages for our optimised bounds were around and .such behaviour is reassuring to see in optimised bounds , since it is reasonable that only with optimal bounds can we see higher -values necessarily leading to bounds that are incrementially closer to a diagonal line each time .in our consideration of epr - steering tests for two - qubit werner states , we have confirmed our earlier conclusion that the detection loophole in these tests can be closed without necessarily placing any particularly demanding experimental constraints upon one s detection efficiency .this can be accomplished by employing a large number of measurements in each test .however , we have also shown , contrary to the assumptions of previous experiments , that measurement sets based upon platonic solids are , in general , suboptimal . of course , platonic solids are suboptimal in that they are restricted to , but this limit can be overcome by combining platonic solids to make geodesic solids ( which can be defined for arbitrarily large if desired ) .the more interesting point is that platonic solids are demonstrably suboptimal even for as small as .specifically for some values of alice s efficiency , there are werner states which do not violate the epr - steering inequality for the platonic solid , when we know that epr - steering can be demonstrated even with . 
considering geodesic solids and how to test alice s steering ability most rigorously pointed the way to defining the optimal steering tests for any , even those for which there exist no platonic solid or geodesic solid .this means that more measurements can always yield more loss - tolerant tests of epr - steering .more importantly , it means that even with relatively small , tests of epr - steering can be much more loss - tolerant than with platonic solids , or any other merely intuitive strategies .we calculated and explored the optimal measurement strategies for measurement numbers of and 8 , but were prevented from easily exploring the optimal strategies for by the computational demands of their numerical derivations .for this reason , we should conclude that geodesic measurement sets may be a more practical alternative than truly optimal measurements for loss - tolerant experimental tests of epr - steering for large . optimising the epr - steering inequalities forgeodesic measurements do require numerical minimisation , but the number of parameters scales only logarithmically with the number of settings .this is significantly less demanding than generating a fully optimal measurement strategy and inequality for settings , which has free parameters .we note that a recent paper has suggested an alternate method for demonstrating steering with large numbers of measurements , by using random bases , although without consideration of inefficiency or loss .finally , we note that further work would be required to turn the epr - steering inequalities we have derived here into truly experimentally applicable inequalities .there are two reasons for this .first , we have assumed that bob s detectors are completely characterised , with no unknown systematic errors .second , we have allowed bob to place restrictions on alice s reported results ( that the frequencies of nulls are independent of his setting ) which can not be exactly verified from any finite data set . a completely rigorous experimental test would have to include the ( very small ) increase in the ability of an untrusted alice to cheat by exploiting the imperfections of bob s measurement apparatus , and any allowed deviation of her null - rates from the average .we thank cyril branciard for initially pointing out that the cubic arrangement could not possibly be optimal for .this work was supported by arc centre of excellence grant no .the behaviour of fig . [ p3p4 ] is clearly meaningful , though its implications are not immediately obvious .it supports the general hypothesis that optimal weightings entail lower weights for measurements with higher results in alice s optimal cheating strategies .for example , the peak in at in fig .[ p3p4 ] occurs in a region where the average results of the measurements are lower than the measurements . 
at this point ,alice s optimal mixing of deterministic strategies is the same as at in fig .[ fig : weighting7 ] , this being a 50 mix of the ( 1,2 ) and ( 2,2 ) strategies .the optimal configurations of non - null measurements in the ( 2,2 ) strategy are such that the optimal lhs orientations shown on the ( 2,2 ) solid in fig .[ fig : weighting7 ] are always the optimal orientations for the ( 2,2 ) strategy , regardless of , and these give slightly higher results for the measurements , on average .the ( 1,2 ) strategy is not as symmetric as the ( 2,2 ) strategy , and its optimal lhs orientations do change with , but they still give higher average results for the measurements with any such that ( the difference between these average and results being much greater than the same difference in the ( 2,2 ) strategy ) . above that, the ( 1,2 ) ensembles start giving higher results , on average .thus , the results of alice s optimal cheating strategy at are lowest when bob chooses . the particularly obvious discontinuity at in fig .[ p3p4 ] is also caused ( in part ) by the marked symmetry of the ( 2,2 ) strategy , but in conjunction with the ( 1,4 ) strategy .just as for the ( 2,2 ) strategy , the optimal lhs orientation for the ( 1,4 ) strategy does not change no matter how changes . unlike ( 2,2 ) , the ( 1,4 ) strategy yields higher average results for the measurements than the measurements .however , the symmetry condition restricts alice s mixture of these two strategies at to weight the ( 2,2 ) strategy four times more heavily than the ( 1,4 ) strategy ( as can be seen in fig .[ fig : weighting7 ] ) .the difference between the average and results in ( 2,2 ) is not greater than that difference in ( 1,4 ) , but this difference in ( 2,2 ) is greater than a quarter of the difference in ( 1,4 ) .since the symmetry condition requires alice to use an 80:20 ratio of these two strategies , any increase in will thus lower . because does not affect the optimal lhs orientations for ( 2,2 ) or ( 1,4 ) , the optimal value of at is therefore .this kind of collapse is only possible when the optimal mixture is composed only of strategies with the symmetric characteristics of the ( 2,2 ) and ( 1,4 ) strategies .this mixture itself is only the optimal one because the differences between the average and results , in both the ( 2,2 ) and ( 1,4 ) strategies , is small enough that even when , there simply does not happen to be any other possible mixture of strategies that can attain as high a value of while still maintaining .indeed , we can see in fig .[ comparison7 ] that the improvement at is only about .thus , this anomaly provides a good example of how strategically unbalanced weightings allow us to optimally utilise a chosen set of geodesic measurements .[ [ no - advantage - at - epsilon - lesssim-0.4 . ] ] no advantage at .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + at , all measurement strategies necessarily yield the same bounds , and for other efficiencies close to , a cheating alice can easily choose nulls for most of her measurement results , and select states for bob that are equidistant between the non - null measurements .thus , for these values , being as far apart as possible ( i.e. 
, regularly spaced ) is of most importance in a measurement strategy , which is why the platonic measurements are optimal or quite nearly optimal in this range .a similar principle applies for the minima to the right of the central peak in fig .[ comparison ] , when alice must choose most of her results to be non - null . with non - regularly spaced measurements, alice can often compose her deterministic strategies for to have non - null arrangements that are more closely spaced than platonic measurements allow ( in a non - regular set of orientations , it s easy to find one or two that are more isolated from the others , whereas in a regular set , this is impossible ) .the symmetry condition curbs this ability somewhat since it requires each measurement to have the same average non - null probability , but when alice has several -values between and ( i.e. , when is large ) , it becomes easier for her to mix these closely spaced high- ( ) strategies with low- strategies that give higher expectation values for the outlying measurements .this is why the platonic strategies are close to optimal around , and moreso for than . in this regime ,alice s optimal strategy is to choose roughly the same number of nulls and non - nulls in each deterministic strategy .and low- strategies .however , this would combine the weaknesses of the low- and high- strategies without optimally employing their strengths .alice s most effective cheating strategies at low are those which maximise bob s results for a minority measurements by disregarding his results for the majority ( which are assigned nulls ) . at high ,alice s most effective strategies are those which maximise bob s results for as many measurements as possible , with her ability to do so being limited by how many measurements she can afford to assign nulls for ( and therefore not care about their values ) .choosing roughly the same number of nulls and non - nulls in each deterministic strategy allows her to most effectively prioritise the maximisation of half of bob s measurements over the other half , which she can afford to not care about ( and submit nulls for ) in each deterministic strategy . ] to do this , alice would need to find closely - spaced configurations of measurements to be non - null , and must find such configurations in as many directions as possible .this task is trivial with platonic measurements , as their symmetry groups are such that a configuration of nearest - neighbour measurements is the same configuration for _ any _ nearest neighbours .therefore , choosing a set of measurements that are not regularly spaced offers an advantage in this region . for the solid in fig .[ n4ep5solid ] , for example , every measurement pair is farther apart than the measurement pairs in the cubic arrangement , with the exception of the pair of lowest - weighted measurements .thus , there is only one pair of measurements that can offer deterministic strategies lower than the cube s , and they have very low weightings .it is in this way that non - regularity of the optimal measurement sets is easily used to outperform platonic measurement sets .alice s strategies are most strongly restricted at , where a cheating alice s optimal strategy is to align bob s state with the spatial average of all of his measurement axes , with a suitable choice of sign . 
for the platonic solids ,this means choosing ensembles that are either face - centred or vertex - centred all about the platonic solid .however , optimised measurement strategies for tend to have ( up to , at least ) most of their measurements defining a single plane ( or two parallel planes ) of vertices , and the rest clustered near the directions perpendicular to this plane .the benefit this offers is that the spatial average(s ) of all of these measurements will be farther from most of them than the platonic averages are from their constituent measurements .v. hndchen , t. eberle , s. steinlechner , a. samblowski , t. franz , r. f. werner , r. schnabel , nature photonics * 6 * , 596 ( 2012 ) .j. bowles , t. vrtesi , m. t. quintino , n. brunner , phys .lett . * 112 * , 200402 ( 2014 ) .e. g. cavalcanti , s. j. jones , h. m. wiseman , m. d. reid , phys .a * 80 * , 032112 ( 2009 ) .a. j. bennet , d. a. evans , d. j. saunders , c. branciard , e. g. cavalcanti , h. m. wiseman , g. j. pryde , phys .x * 2 * , 031003 ( 2012 ) .d. j. saunders , m. s. palsson , g. j. pryde , a. j. scott , s. m. barnett , h. m. wiseman , new j. phys . *14 * , 113020 ( 2012 ) .m. d. reid , phys .a * 40 * , no . 2 , 913 ( 1989 ) .m. d. reid _et al . _ , rev .* 81 * , 1727 ( 2009 ) .b. wittmann , s. ramelow , f. steinlechner , n. langford , n. brunner , h. wiseman , r. ursin , a. zeilinger , new j. phys .* 14 * , 053030 ( 2012 ) .d. h. smith _et al . _ , nature commun . * 3 * , 625 ( 2012 ) .d. a. evans , e. g. cavalcanti , h. m. wiseman , phys .a * 88 * , 022106 ( 2013 ) .
it has been shown in earlier works that the vertices of platonic solids are good measurement choices for tests of einstein - podolsky - rosen ( epr)-steering using isotropically entangled pairs of qubits . such measurements are regularly spaced , and measurement diversity is a good feature for making epr - steering inequalities easier to violate in the presence of experimental imperfections . however , such measurements are provably suboptimal . here , we develop a method for devising optimal strategies for tests of epr - steering , in the sense of being most robust to mixture and inefficiency ( while still closing the detection loophole , of course ) , for a given number of measurement settings . we allow for arbitrary measurement directions , and arbitrary weightings of the outcomes in the epr - steering inequality . this is a difficult optimisation problem for large , so we also consider more practical ways of constructing near - optimal epr - steering inequalities in this limit .
fisher information ( fi ) is a measure of the minimum error in estimating an unknown parameter of a distribution , and its importance is related to the cramr - rao inequality for unbiased estimators . by introducing a location parameter , the de bruijn s identity indicates that the fundamental quantity of fi is affiliated with the differential entropy of the minimum descriptive complexity of a random variable .furthermore , in known weak signal detection , a locally optimum detector ( lod ) , as an alternative to the neyman - pearson detector , has favorable properties for small signal - to - noise ratios ( snrs ) .with sufficiently large observed data and using the central limit theorem , it is demonstrated that the lod is asymptotically optimum and its asymptotic efficiency is upper bounded by the fi of the distribution .however , the fundamental nature of fi is not adequately recognized for processing known weak signals . to extend the heuristic studies of , in this paper, we will theoretically demonstrate that , for a known weak signal buried in additive white noise , the performance of a locally optimum processor ( lop ) is completely determined by the fi of a standardized even probability density function ( pdf ) of noise .we show this for three signal processing case studies : ( i ) the maximum snr gain for a periodic signal ; ( ii ) the asymptotic relative efficiency ( are ) of a lod for signal detection ; ( iii ) the best cross - correlation gain ( cg ) for an aperidoic ( random ) signal transmission .moreover , for estimating an unknown parameter of a weak signal , the minimum mean square error of the unbiased estimator can be reduced to a straightforward form expressed by the fi of the distribution .the physical significance of fi , resulting from the reciprocal of fi delimiting the minimum mean square error of unbiased estimators , provides a upper bound of the performance for locally optimum processing .it is well known that a standardized gaussian pdf has minimum fi of unity . as a consequence , for any non - gaussian noise, it is always possible to achieve the performance ( snr gain , are or cg ) of a lop larger than unity for the three considered situations . in the sense of a realizable lop , an example of a gaussian mixture noise pdfis investigated .it is found that arbitrarily large fi can be achieved by the corresponding lop , and even when the noise is dichotomous noise associated with infinite fi .since the known weak signal might be periodic or aperiodic , three signal processing cases are illustrated for exploring the significance of fi in locally optimum processing .first , consider a static processor with its output ,\ ] ] where the nonlinearity is odd and the input is a signal - plus - noise mixture .the component is a known weak periodic signal with a maximal amplitude ( ) and period .a zero - mean white noise , independent of , has an even ( symmetric ) pdf and the root - mean - square ( rms ) amplitude ( if it exists , or it is a scale parameter . ) .a family of even ( symmetric ) pdfs is frequently encountered in practical signal processing tasks . 
in the case of , we have a taylor expansion around at a fixed time as \approx g(z)+s(t)g'(z),\ ] ] where we assume the derivative exists for almost all ( similarly hereinafter ) .thus , we have &\approx & { \rm e}[g(z)]+s(t){\rm e}[g'(z)]=s(t){\rm e}[g'(z ) ] , \label{expectation}\\ { \rm var}[y(t)]&= & { \rm e } [ y^2(t)]-{\rm e}[y(t)]^2 \approx { \rm e}[g^2(z)],\label{variance}\end{aligned}\ ] ] where =\int_{-\infty}^{\infty}\cdots f_z(z)dz ] , =0 ] is neglected , resulting in eq .( [ variance ] ) .the input snr at can be defined as the power contained in the spectral line divided by the power contained in the noise background in a small frequency bin around , this is \rangle|^2}{\sigma_z^2 \delta b \delta t},\ ] ] with indicating the time resolution or the sampling period in a discrete - time implementation and the temporal average defined as .since is periodic , is in general be a cyclostationary random signal with period .similarly , the output snr at is given by \exp[-i 2\pi t / t]\rangle|^2}{\langle{\rm var}[y(t)]\rangle \delta b \delta t},\ ] ] with nonstationary expectation ] .here , we assume the sampling time and observe the output for a sufficiently large time interval of ( ) . substituting eqs .( [ expectation ] ) and ( [ variance ] ) into eq .( [ outsnr ] ) and noting eq .( [ insnr ] ) , we have \rangle|^2}{\delta b \delta t } \frac{{\rm e}^2[g'(z)]}{{\rm e}[g^2(z)]}\!=\!r_{\rm in } \;\sigma_z^2 \frac{{\rm e}^2[g'(z)]}{{\rm e}[g^2(z)]}.\end{aligned}\ ] ] thus , the output - input snr gain of eq .( [ system ] ) is }{{\rm e}[g^2(z ) ] } \leq \sigma_z^2 { \rm e}\left[\frac{f'^2_z(z)}{f_z^2(z)}\right]=\sigma_z^2 i(f_z ) , \end{aligned}\ ] ] with the equality occurring as becomes a lop , viz . by the schwarz inequality for a constant and . it is noted that the lop of eq .( [ lop ] ) is odd and accords with the above assumption .more interestingly , the expectation ] , and there exists a finite bound such that . in the asymptotic case of and , the test statistic , according to the central limit theorem , converges to a gaussian distribution with mean =0 ] under the null hypotheses . using eqs .( [ expectation ] ) and ( [ variance ] ) , is asymptotically gaussian with mean \approx{\rm e}[g'(x)]\sum_{n=1}^ns_n^2 ] under the hypothesis .then , given a false alarm probability , the detection probability of the detector of eq .( [ gcd ] ) is expressed as ,\end{aligned}\ ] ] with /\sqrt{2\pi}\;dt ] ( ) in terms of the generalized neyman - pearson lemma .based on the bayesian criterion , two hypotheses and are endowed with prior probabilities and .similarly , for the weak signal and the sufficiently large , the test statistic in eq .( [ gcd ] ) has gaussian distribution and its performance is evaluated by the error probability which is also a monotonically decreasing function of and has a minimum as of eq .( [ eq : deflection ] ) for .interestingly , with ( called the signal energy - to - noise ratio of the data vector ) achieved by a matched filter as a benchmark , the asymptotic relative efficiency ( are ) provides an asymptotic performance improvement of a detector of eq .( [ gcd ] ) over the linear matched filter when both detectors operate in the same noise environment .the equality of eq .( [ arpf ] ) is achieved as .thirdly , we transmit a known weak aperiodic signal through the nonlinearity of eq .( [ system ] ) . 
here, the signal is with the average signal variance =\sigma_s^2\ll \sigma_z^2 ] and the upper bound a ( ) .for example , can be a sample according to a uniformly distributed random signal equally taking values from a bounded interval .the input cross - correlation coefficient of and is defined as }{\sqrt{{\rm e}[s^2(t)]}\sqrt{{\rm e}[x^2(t)]}}= \frac{\frac{\sigma_s}{\sigma_z}}{\sqrt{\frac{\sigma_s^2}{\sigma_z^2}+1}}\approx \frac{\sigma_s}{\sigma_z}.\end{aligned}\ ] ] using eqs .( [ taylor])([variance ] ) , the output cross - correlation coefficient of and is given by }{\sigma_s \sqrt{{\rm var}[y(t)]}}\approx\frac{\sigma_s { \rm e}[g'(z)]}{\sqrt{{\rm e}[g^2(z)]}}.\end{aligned}\ ] ] then , the cross - correlation gain ( cg ) is given by }{\sqrt{{\rm e}[g^2(z)]}}\leq \sqrt{i(f_{z_0})},\end{aligned}\ ] ] which has its maximal value as of eq .( [ lop ] ) .finally , for the observation components , we assume the signal are with an unknown parameter . as the upper bound ( ), the cramr - rao inequality indicates that the mean squared error of any unbiased estimator of the parameter is lower bounded by the reciprocal of the fi given by \nonumber \\ & \approx & \sum_{n=1}^n { \rm e}\left[\left(\frac{df_z(z_n)/dz}{f_z(z_n)}\bigr|_{z_n = x_n - s_n } \bigl(-\frac{\partial s_n}{\partial \theta}\bigr)\right)^2\right]\nonumber \\ & = & i(f_z ) \sum_{n=1}^n \bigl(\frac{\partial s_n}{\partial \theta}\bigr)^2= \frac{i(f_{z_0})}{\sigma_z^2 } \sum_{n=1}^n \bigl(\frac{\partial s_n}{\partial \theta}\bigr)^2,\end{aligned}\ ] ] which indicates that the minimum mean square error of any unbiased estimator is also determined by the fi of distribution with a location shift , as is fixed .therefore , just as the fi represents the lower bound of the mean squared error of any unbiased estimator in signal estimation , the physical significance of the fi is that it provides a upper bound of the performance for locally optimum processing for the three considered problems .some interesting questions arise : which type of noise pdf has a minimal or maximal fi , and how large is the extreme value of ? does the corresponding lop in eq .( [ lop ] ) exist for the noise pdf with extreme ? these questions will be investigated as follows . for a standardized even pdf , we have { \rm e}\left[z_0 ^ 2 \right ] \geq { \rm e}\left[\frac{f'_{z_0}(z_0)}{f_{z_0}(z_0)}\ ; z_0 \right]^2\!=\!1,\end{aligned}\ ] ] with \!\!=\!\!\sigma_{z_0}^2\!\!=\!\!1 ] . in order to be a pdf , and the normalized constant .this is a standardized gaussian pdf .contrarily , any standardized non - gaussian pdf has the fi , which indicates that the performance ( snr gain , are or cg ) is certainly larger than unity via a lop of eq .( [ lop ] ) for processing a known weak signal . a standardized generalized gaussian noise pdf ,\end{aligned}\ ] ] with and .the fi of eq .( [ ggauss ] ) becomes \gamma\left(\frac{3}{2}-\frac{1}{2}\beta\right)}{\gamma^2\left[\frac{1}{2}(1+\beta)\right]},\end{aligned}\ ] ] with the corresponding normalized lop where is the sign or signum function .the curve of versus ( cf .fig . 10.10 of ref . ) clearly indicates that , for , is the minimum corresponding to the standardized gaussian pdf .it is also noted that , as or , . is the maximal infinite for and or not , andis the corresponding lop simply implemented ?the answer is negative , because the lop of eq .( [ nlop ] ) is not realizable as for and . when , the lop of eq .( [ nlop ] ) has a singularity at . 
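the gain inequality above, and the fact that the lop of eq. ([lop]) saturates it, are straightforward to check numerically for a smooth noise family. the sketch below uses a generalized-gaussian pdf in one common unit-variance parametrisation, f(z) proportional to exp(-|z/a|^p), which may not coincide with the paper's beta-convention; it evaluates i(f) as the expectation of the squared score and compares the snr gain of the lop with that of a simple soft limiter. the expectation of g' is computed by integration by parts so that no numerical differentiation of g is needed.

```python
import numpy as np
from scipy.special import gamma

def gen_gaussian(p):
    """standardized pdf proportional to exp(-|z/a|**p) with unit variance
    (one common parametrisation; p = 2 is the standard gaussian).
    returns the pdf and the score -f'(z)/f(z)."""
    a = np.sqrt(gamma(1.0 / p) / gamma(3.0 / p))       # unit-variance scale
    c = p / (2.0 * a * gamma(1.0 / p))                 # normalisation
    pdf = lambda z: c * np.exp(-np.abs(z / a) ** p)
    score = lambda z: (p / a) * np.abs(z / a) ** (p - 1) * np.sign(z)
    return pdf, score

def snr_gain(g, pdf, score, z, dz):
    """sigma^2 * E[g'(z)]^2 / E[g(z)^2] for unit-variance noise, with E[g']
    evaluated by parts as E[g(z) * score(z)]."""
    fz = pdf(z)
    eg_prime = np.sum(g(z) * score(z) * fz) * dz
    return eg_prime ** 2 / (np.sum(g(z) ** 2 * fz) * dz)

z = np.linspace(-15.0, 15.0, 300001)
dz = z[1] - z[0]
for p in (1.5, 2.0, 4.0):
    pdf, score = gen_gaussian(p)
    fisher = np.sum(score(z) ** 2 * pdf(z)) * dz               # I(f) = E[(f'/f)^2]
    limiter = lambda x: np.clip(x, -1.0, 1.0)                  # a simple sub-optimal g
    print(f"p = {p}:  I(f) = {fisher:.3f}, "
          f"gain(lop) = {snr_gain(score, pdf, score, z, dz):.3f}, "
          f"gain(limiter) = {snr_gain(limiter, pdf, score, z, dz):.3f}")
```

in this parametrisation the score diverges at the origin once the exponent drops below one, which is the numerical counterpart of the singularity of the lop remarked on above.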
in this sense, an arbitrary large fi can not be reached for the generalized gaussian noise given in eq .( [ ggauss ] ) .next , we consider gaussian mixture noise with its pdf \!,\end{aligned}\ ] ] with variance and parameters .note that eq .( [ gaussmix0 ] ) has another expression as /\sqrt{2\pi\epsilon^2},\end{aligned}\ ] ] with ] .the function of fi versus is shown in fig .[ fig : one ] , and can be calculated as ( no explicit expression exists ) ^ 2\right\}.\end{aligned}\ ] ] versus parameter of the gaussian mixture noise pdf of eq .( [ gaussmix ] ) . ]interestingly , fig .[ fig : one ] shows that , as , eq .( [ gaussmix ] ) is the standardized gaussian pdf with the fi . while , as . in eq .( [ gaussmix0 ] ) , for , and , the term =\delta(z) ] .moreover , refs . have pointed out that there exists a scheme allowing a perfect recovery of corrupted by dichotomous noise with the pdf of eq .( [ gaussmix2 ] ) .however , a practical difficulty in eq .( [ nlop ] ) is that the rms needs to be known .the above analysis indicates that the lop of eq .( [ lopopt ] ) can recover the weak signal perfectly as .thus , according to the optimum performance of the lop of eq .( [ fisher ] ) ( eq .( [ arpf ] ) or eq .( [ cg ] ) ) , the fi contained in the type of pdf of eq .( [ gaussmix2 ] ) , as shown in fig .[ fig : one ] . using eq .( [ gaussmixtan ] ) , the fi of eq .( [ mgfisher ] ) can be computed as ={\rm e}\left[\frac{d^2y(z_0)}{dz_0 ^ 2}\right ] \nonumber \\&=&\lim\limits _ { m\rightarrow 1 } \int_{-\infty}^{\infty}\!\!\ !\frac{\bigl[\ ! 1\!-\!m^2\!-\!m^2{\rm sech}^2\!\bigl(\frac{m z_0}{1\!-\!m^2}\bigr)\bigr]}{(1-m^2)^2}\;\frac{\exp[-y(z_0)]}{\sqrt{2\pi ( 1-m^2 ) } } dz_0\!=\!\infty,\end{aligned}\ ] ] where =0 $ ] , the numerator is the infinitesimal and the denominator is a higher - order infinitesimal in the integral .in this paper , for a known weak signal in additive white noise , it was theoretically demonstrated that the optimum performance of a lop can be completely determined by the fi of the corresponding standardized even noise pdf , as illustrated by three signal processing case stuides : ( i ) the maximum snr gain for a periodic signal ; ( ii ) the optimal are for signal detection ; ( iii ) the best cg for signal transmission .thus , our study of the performance of a lop focused on the measure of the fi of a standardized noise pdf .it is well known that the minimal fi is unity for a standardized gaussian noise pdf , and the matched filter is the corresponding optimal processor . while , for any non - gaussian noise , the fi and hence the optimum performance of the lop is certainly larger than unity .illustratively , we observed that the generalized gaussian noise pdf and the gaussian mixture noise pdf have an arbitrary large fi .there are some types of noise pdf possessing an infinite fi , such as uniform noise and dichotomous noise .however , we argue that only if the lop is practically realizable , can the performance predicted by the fi be reached in practice . 
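both the growth of the fisher information as the mixture narrows and the near-perfect recovery of a weak signal by the corresponding lop are easy to reproduce numerically. the sketch below assumes the mixture is an equal-weight pair of gaussians centred at plus and minus m with width eps and m^2 + eps^2 = 1, a reading consistent with the limits quoted above (a standard gaussian at eps = 1 and dichotomous noise as eps tends to 0). it first evaluates i(f) on a fine grid for several eps, and then runs a small monte carlo in which the tanh-form lop is applied to a weak random signal buried in near-dichotomous noise, comparing input and output cross-correlation coefficients; all parameter values are illustrative only.

```python
import numpy as np

def mixture_fisher(eps, zmax=12.0, n=400001):
    """numerical I(f) for f(z) = [N(z; -m, eps^2) + N(z; +m, eps^2)] / 2
    with m = sqrt(1 - eps^2), so that the pdf is standardized."""
    m = np.sqrt(1.0 - eps ** 2)
    z = np.linspace(-zmax, zmax, n)
    dz = z[1] - z[0]
    a = np.exp(-(z + m) ** 2 / (2.0 * eps ** 2))
    b = np.exp(-(z - m) ** 2 / (2.0 * eps ** 2))
    f = (a + b) / (2.0 * np.sqrt(2.0 * np.pi) * eps)
    df = (-(z + m) * a - (z - m) * b) / (2.0 * np.sqrt(2.0 * np.pi) * eps ** 3)
    keep = f > 1e-300                       # avoid 0/0 in the far tails
    return np.sum(df[keep] ** 2 / f[keep]) * dz

for eps in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(f"eps = {eps:4.2f}:  I(f) = {mixture_fisher(eps):9.2f}")

# monte carlo: cross-correlation gain of the tanh-form lop for a weak
# aperiodic signal in near-dichotomous mixture noise (illustrative values).
rng = np.random.default_rng(1)
m, amp, n_samp = 0.995, 0.01, 2_000_000
eps2 = 1.0 - m ** 2
lop = lambda z: (z - m * np.tanh(m * z / eps2)) / eps2    # g(z) = -f'(z)/f(z)
s = rng.uniform(-amp, amp, n_samp)                        # weak random signal
z = m * rng.choice([-1.0, 1.0], n_samp) + np.sqrt(eps2) * rng.standard_normal(n_samp)
x = s + z
rho_in = np.corrcoef(s, x)[0, 1]
rho_out = np.corrcoef(s, lop(x))[0, 1]
print(f"input corr {rho_in:.4f}, output corr {rho_out:.4f}, cg = {rho_out / rho_in:.1f}")
```

the tanh nonlinearity used here is a simple, realizable processor, so the large fisher information of the near-dichotomous mixture is in fact accessible in practice.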
in this sense, it is found that dichotomous noise has an infinite fi and that a simple lop structure can be realized for it in practice. some interesting questions remain open. for instance, it is known that for a weak signal already corrupted by initial additive white noise, there is usually a lop that yields the maximal output-input gain. can adding an extra amount of noise to the initial data improve the performance of the lop updated for the resulting noise pdf? this question connects directly to the stochastic resonance phenomenon. another important question is the influence of a finite observation time on the performance of locally optimum processing.
for a known weak signal in additive white noise, the asymptotic performance of a locally optimum processor (lop) is shown to be given by the fisher information (fi) of a standardized even probability density function (pdf) of the noise in three cases: (i) the maximum signal-to-noise ratio (snr) gain for a periodic signal; (ii) the optimal asymptotic relative efficiency (are) for signal detection; (iii) the best cross-correlation gain (cg) for signal transmission. the minimal fi is unity, corresponding to a gaussian pdf, whereas the fi is strictly larger than unity for any non-gaussian pdf. in the sense of a realizable lop, it is found that the dichotomous noise pdf possesses an infinite fi, and a known weak signal can be perfectly processed by the corresponding lop. the significance of the fi lies in the fact that it provides an upper bound on the performance of locally optimum processing.
leveraging feedback to obtain the channel state information at the transmitter ( csit ) enables a wireless system to adapt its transmission strategy to the varying wireless environment .the growing number of wireless users , as well as their increasing demands for higher data rate services impose a significant burden on the feedback link . in particular for ofdma systems , which have emerged as the core technology in 4 g and future wireless systems , full csit feedback may become prohibitive because of the large number of resource blocks .this motivates more efficient feedback design approaches in order to achieve performance comparable to a full csit system with reduced feedback . in the recent years , considerablework and effort has been focused on limited or partial feedback design , e.g. , see and the references therein . to the best of our knowledge ,most of the existing partial feedback designs are homogeneous , i.e. , users feedback consumptions do not adapt to the underlying channel statistics . in this paper , we propose and analyze a heterogeneous feedback design , which aligns users feedback needs to the statistical properties of their wireless environments .current homogeneous feedback design in ofdma systems groups the resource blocks into subband which forms the basic scheduling and feedback unit . since the subband granularity is determined by the frequency selectivity , or the coherence bandwidth of the underlying channel, it would be beneficial to adjust the subband size of different users according to their channel statistics .empirical measurements and analysis from the channel modeling field have shown that the root mean square ( rms ) delay spread which is closely related to the coherence bandwidth , is both location and environment dependent .the typical rms delay spread for an indoor environment in wlan does not exceed hundreds of nanoseconds ; whereas in the outdoor environment of a cellular system , it can be up to several microseconds . intuitively , users with lower rms delay spread could model their channel with a larger subband size and require less feedback resource than the users with higher rms delay spread .herein , we investigate this heterogeneous feedback design in a multiuser opportunistic scheduling framework where the system favors the user with the best channel condition to exploit multiuser diversity .there are two major existing partial feedback strategies for opportunistic scheduling , one is based on thresholding where each user provides one bit of feedback per subband to indicate whether or not the particular channel gain exceeds a predetermined or optimized threshold . the other promising strategy currently considered in practical systems such as lte is the best - m strategy , where the receivers order and convey the m best channels . the best - m partial feedback strategy is embedded in the proposed heterogeneous feedback framework .apart from the requirement of partial feedback to save feedback resource , the study of imperfections is also important to understand the effect of channel estimation error and feedback delay on the heterogeneous feedback framework .these imperfections are also considered in our work .an important step towards heterogeneous feedback design is leveraging the match " among coherence bandwidth , subband size and partial feedback . 
under a given amount of partial feedback , if the subband size is much larger than the coherence bandwidth , then multiple independent channels could exist within a subband and the subband - based feedback could only be a coarse representative of the channels . on the other hand , if the subband size is much smaller than the coherence bandwidth , then channels in adjacent subbands are likely to be highly correlated and requiring feedback on adjacent subbandscould be a waste of resource ; or a small amount of subband - based partial feedback may not be enough to reflect the channel quality . in order to support this heterogeneous framework, we first consider the scenario of a general correlated channel model with one cluster of users with the same coherence bandwidth .the subband size is adjustable and each user employs the best - m partial feedback strategy to convey the m best channel quality information ( cqi ) which is defined to be the subband average rate .the simulation result shows that a suitable chosen subband size yields higher average sum rate under partial feedback conforming the aforementioned intuition .this motivates the design of heterogeneous feedback to match " the subband size to the coherence bandwidth . the above - mentioned study , though closely reflects the relevant mechanism , is not analytically tractable due to two main reasons .firstly , the general correlated channel model complicates the statistical analysis of the cqi .secondly , the use of subband average rate as cqi makes it difficult to analyze the multi - cluster scenario .therefore , a simplified generic channel model is needed that balances the competing needs of analytical tractability and practical relevance . in order to facilitate analysis ,a subband fading channel model is developed that generalizes the widely used frequency domain block fading channel model .the subband fading model is suited for the multi - cluster analysis . according to the subband fading model , the channel frequency selectivity is flat within each subband , and independent across subbands .since the subband sizes are different across different clusters , the number of independent channels are heterogeneous across clusters and this yields heterogeneous partial feedback design .another benefit of the subband fading model is that the cqi becomes the channel gain and thus facilitate further statistical analysis . under the multi - cluster subband fading model and the assumption of perfect feedback ,we derive a closed form expression for the average sum rate . additionally , we approximate the sum rate ratio for heterogeneous design , i.e. , the ratio of the average sum rate obtained by a partial feedback scheme to that achieved by a full feedback scheme , in order to choose different best - m for users with different coherence bandwidth .we also compare and demonstrate the potential of the proposed heterogeneous feedback design against the homogeneous case under the same feedback constraint in our simulation study .the average sum rate helps in understanding the system performance with perfect feedback . in practical feedback systems ,imperfections occur such as channel estimation error and feedback delay .these inevitable factors degrade the system performance by causing outage .therefore , rather than using average sum rate as the performance metric , we employ the notion of average goodput to incorporate outage probability . 
under the multi - cluster subband fading model , we perform analysis on the average goodput and the average outage probability with heterogeneous partial feedback .in addition to examining the impact of imperfect feedback on multiuser diversity , we also investigate how to adapt and optimize the average goodput in the presence of these imperfections .we consider both the fixed rate and the variable rate scenarios , and utilize bounding technique and an efficient approximation to derive near - optimal strategies . to summarize , the contributions of this paper are threefold : a conceptual heterogeneous feedback design framework to adapt feedback amount to the underlying channel statistics , a thorough analysis of both perfect and imperfect feedback systems under the multi - cluster subband fading model , and the development of approximations and near - optimal approaches to adapt and optimize the system performance .the rest of the paper is organized as follows .the motivation under the general correlated channel model and the development of system model is presented in section [ system ] .section [ perfect ] deals with perfect feedback , and section [ imperfect ] examines imperfect feedback due to channel estimation error and feedback delay .numerical results are presented in section [ numerical ] .finally , section [ conclusion ] concludes the paper .this part provides justification for the adaptation of subband size with one cluster of users under the general correlated channel model , and motivates the design of heterogeneous partial feedback for the multi - cluster scenario in section [ multicluster ] .consider a downlink multiuser ofdma system with one base station and users .one cluster of user is assumed in this part and users in this cluster are assumed to experience the same frequency selectivity .the system consists of subcarriers . , the frequency domain channel transfer function between transmitter and user at subcarrier , can be written as : where is the number of channel taps , for represents the channel power delay profile and is normalized , i.e. , , denotes the discrete time channel impulse response , which is modeled as complex gaussian distributed random processes with zero mean and unit variance and is i.i.d . across and .only fast fading effect is considered in this paper , i.e. , the effects of path loss and shadowing are assumed to be ideally compensated by power control .the received signal of user at subcarrier can be written as : where is the average received power per subcarrier , is the transmitted symbol and is the additive white noise distributed as . from ( [ system : eq_1 ] ) , it can be shown that is distributed as .the channels at different subcarriers are correlated , and the correlation coefficient between subcarriers and can be described as follows : in general , adjacent subcarriers are highly correlated . in order to reduce feedback needs , subcarriers are formed as one resource block , and resource blocks are grouped into one subband .thus , there are resource blocks and subbands . 
in this manner , each user performs subband - based feedback to enable opportunistic scheduling at the transmitter . since the channels are correlated and there is one cqi to represent a given subband , the cqi is a function of all the individual channels within that subband . herein , we employ the following subband ( aggregate ) average rate as the functional form of the cqi for user at subband : each user employs the best - m partial feedback strategy and conveys back the best cqi values selected from . a detailed description of the best - m strategy can be found in . after the base station receives feedback , it performs opportunistic scheduling and selects the user for transmission at subband if user has the largest cqi at subband . also , it is assumed that if no user reports cqi for a certain subband , scheduling outage happens and the transmitter does not utilize it for transmission . [ fig_1 caption : average sum rate for different subband sizes and amounts of partial feedback with respect to the number of users ; a general correlated channel model with an exponential power delay profile is assumed . ] [ fig_2 caption : subband fading model for two clusters with different subband sizes under a given number of resource blocks ; the channel frequency selectivity is flat within each subband and independent across subbands , the subband sizes can be heterogeneous across clusters , and the model approximates the general correlated channel model and is useful for statistical analysis . ] now we demonstrate the need to adapt the subband size to achieve the potential " match " among coherence bandwidth , subband size and partial feedback through a simulation example . the channel is modeled according to the exponential power delay profile : for , where the parameter is related to the rms delay spread . the simulation parameters are : , , , , db . the subband size can be adjusted and ranges from to resource blocks . we consider partial feedback with and . the average sum rate of the system for different subband sizes and partial feedback with respect to the number of users is shown in fig . [ fig_1 ] . under the given coherence bandwidth , several observations can be made . firstly , the curves with have the smallest increasing rate because a larger subband size gives a poor representation of the channel . secondly , the curve with has the smallest average sum rate because a small amount of partial feedback is not enough to reflect the channel quality . thirdly , the two curves and possess similar increasing rates . this is because the underlying channel is highly correlated within resource blocks and thus having -best feedback with yields a similar effect as having -best feedback with . from the above observations , matches the frequency selectivity and there would be a performance loss or a waste of feedback resources when a subband size is blindly chosen . in a multi - cluster scenario where users in different clusters experience diverse coherence bandwidths , this advocates heterogeneous subband sizes and heterogeneous feedback . the general correlated channel model as well as the non - linearity of the cqi , though useful to demonstrate the need for heterogeneous feedback , does not lend itself to tractable statistical analysis .
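the best - m feedback and the opportunistic scheduling rule just described can be mimicked with a few lines of code ; the sketch below uses a per - subband rayleigh - fading gain as a stand - in for the subband average rate cqi , and the numbers of users , subbands and reported values are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_sub, M, rho = 8, 16, 4, 10.0      # users, subbands, best-m, snr (assumed values)

gains = rng.exponential(size=(K, n_sub))     # flat gain per subband (stand-in model)
cqi = np.log2(1 + rho * gains)               # subband average rate used as the cqi

# best-m partial feedback: each user reports only its m largest cqi values
reported = np.full((K, n_sub), -np.inf)
for k in range(K):
    best = np.argsort(cqi[k])[-M:]
    reported[k, best] = cqi[k, best]

# opportunistic scheduling: per subband, serve the user with the largest reported cqi;
# a subband without any report is in scheduling outage and stays unused
rate, outages = 0.0, 0
for n in range(n_sub):
    if np.all(np.isneginf(reported[:, n])):
        outages += 1
        continue
    rate += cqi[np.argmax(reported[:, n]), n]
print("average rate per subband:", rate / n_sub, "scheduling outages:", outages)
```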
to develop a tractable analytical framework ,an approximated channel model is needed .a widely used model is the block fading model in the frequency domain due to its simplicity and capability to provide a good approximation to actual physical channels .according to the block fading model , the channel frequency selectivity is flat within each block , and independent across blocks .herein , we generalize the block fading model to the subband fading model for the multi - cluster scenario .we assume that users possessing similar frequency selectivity are grouped into a cluster and the subband size is perfectly matched to the coherence bandwidth for a given cluster .according to the subband fading model , for a given cluster with a perfectly matched subband size , the channel frequency selectivity is flat within each subband , and independent across subbands .[ fig_2 ] demonstrates the subband fading model for two different clusters with different subband sizes under a given number of resource blocks .we now present the multi - cluster subband fading model . consider a downlink multiuser ofdma system with one base station and clusters of users .the system consists of resource blocks and the total number of users equals .users in cluster are indexed by the set for , with and . in our framework , users in the same cluster group their resource blocks into subbands in the same manner while each cluster can potentially employ a different grouping which enables the subband size to be heterogeneous between clusters .denote as the subband size for cluster , and .the s are ordered such that .based on the assumption for , the number of subbands in cluster equals .let be the frequency domain channel transfer function between transmitter and user in cluster at subband , where . is distributed as . according to the subband fading model , assumed to be independent across users and subbands in cluster .the feedback for different clusters is at different granularity , and so to model the channel for the different clusters of users at the same basic resource block level , some additional notation is needed .let with denote the resource block based channel transfer function .then the received signals of user in cluster at resource block can be represented by : where is the average received power per resource block , is the transmitted symbol and is additive white noise distributed with .let denote the cqi for user in cluster at subband .in order to reduce the feedback load , users employ the best - m strategy to feed back their cqi . in the basic best - m feedback policy , users measure cqi for each resource block at their receiver and feed back the cqi values of the best resource blocks chosen from the total values . for each resource block , the scheduling policy selects the user with the largest cqi among the users who fed back cqi to the transmitter for that resource block .however , in our heterogeneous partial feedback framework , since the number of independent cqi for cluster is , a fair and reasonable way to allocate the feedback resource is to linearly scale the feedback amount for users in cluster . to be specific , user in ( i.e. , the cluster with the largest subband size ) is assumed to feed back the best cqi selected from , whereas user in conveys the best cqi selected from . 
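a minimal sketch of the multi - cluster subband fading model and of the linear scaling of the feedback amount follows ; the cluster sizes , subband sizes and the base best - m value are illustrative , and the scaling rule ( a user in a cluster with half the subband size reports twice as many cqi values ) is the fairness assumption stated above .

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                      # resource blocks
eta = [8, 4]                # subband sizes per cluster; eta[0] is the largest
users = [3, 3]              # users per cluster
M_base, rho = 2, 10.0       # best-m for the largest-subband cluster, snr (assumed)

reports = {}                # (cluster, user) -> {subband index: gain}
for g, (sz, Kg) in enumerate(zip(eta, users)):
    n_sub = N // sz
    Mg = M_base * (eta[0] // sz)                 # linearly scaled feedback amount
    for k in range(Kg):
        sub_gain = rng.exponential(size=n_sub)   # flat within a subband, independent across
        best = np.argsort(sub_gain)[-Mg:]
        reports[(g, k)] = {int(b): sub_gain[b] for b in best}

# resource-block-level scheduling on the subband-level reports
rate = 0.0
for n in range(N):
    cand = [(v[n // eta[g]], g, k) for (g, k), v in reports.items() if n // eta[g] in v]
    if cand:                                     # an empty cand would mean scheduling outage
        gain, _, _ = max(cand)
        rate += np.log2(1 + rho * gain)
print("average rate per resource block:", rate / N)
```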
after receiving feedback from all the clusters , for each resource block the system schedules for transmission the user with the largest cqi . it is useful to note that the user feedback is based on the subband level , while the base station schedules transmission at the resource block level . in this section , the cqi are assumed to be fed back without any errors and the average sum rate is employed as the performance metric for system evaluation . we derive a closed form expression for the average sum rate in section [ spectral_efficiency ] for the multi - cluster heterogeneous feedback system . in section [ ratio ] we analyze the relationship between the sum rate ratio and the choice of the best - m . according to the assumption , the cqi is i.i.d . across subbands and users , and thus let denote the cdf . because only a subset of the ordered cqi are fed back , from the transmitter s perspective , if it receives feedback on a certain resource block from a user , it is likely to be any one of the cqi from the ordered subset . we now aim to find the cdf of the cqi seen at the transmitter side as a consequence of partial feedback . let denote the reported cqi viewed at the transmitter for user in at resource block . also , let represent the subband - based cqi seen at the transmitter for user in at subband , then . the following lemma describes the cdf of ( the index is dropped for notational simplicity ) , which is denoted by . [ lemma_1 ] the cdf of is given by : where the vector and the proof is provided in appendix [ appena ] . let denote the selected user at resource block , then according to the scheduling policy : where is the set of users who convey feedback for resource block , with representing the number of users belonging to in cluster . it can be easily seen that in the full feedback case , i.e. , , . for the general case when , the probability mass function ( pmf ) of is given by : _ remark : _ only the largest subband size appears in the expression of instead of the vector . this is due to our heterogeneous partial feedback design , which lets users in cluster convey back the best cqi out of values . now we turn to determine the conditional cdf of the cqi for the selected user at resource block , conditioned on the set of users providing cqi for that resource block . since users are equally likely to be scheduled according to the fair scheduling policy , the condition on is not described explicitly , and so we denote the conditional cdf as , where is the conditional cqi of the selected user at resource block . notice from lemma [ lemma_1 ] that possesses a different distribution for different due to the heterogeneous feedback from different clusters .
using order statistics yields as : the polynomial form of can be obtained , which is stated in the following theorem .[ theorem_1 ] the cdf of is given by : where the vector , , the proof is provided in appendix [ appena ] .after obtaining the conditional cdf , let denote the average sum rate and it can be computed using the following procedure .\notag\\ & \mathop{=}\limits^{(a)}\mathbb{e}_{\mathcal{u}}\left[\int_0^\infty \log_2(1+\rho x)d(f_{x|\mathcal{u}}(x))\right]\notag\\ & \mathop{=}\limits^{(b)}\mathbb{e}_{\mathcal{u}}\left[\sum_{m=0}^{\phi(m,\boldsymbol\eta,\boldsymbol\tau)}\theta_{g-1}(n , m,\boldsymbol\eta,\boldsymbol\tau , m)\int_0^\infty \log_2(1+\rho x)d(f_z(x))^{\sum_{g=1}^g\frac{n}{\eta_g}\tau_g - m}\right]\notag\\ \label{perfect : eq_9}&\mathop{=}\limits^{(c)}\sum_{\boldsymbol\tau\neq\mathbf{0}}\mathbb{p}(\mathcal{u } ) \sum_{m=0}^{\phi(m,\boldsymbol\eta,\boldsymbol\tau)}\theta_{g-1}(n , m,\boldsymbol\eta,\boldsymbol\tau , m)\mathcal{i}_1\left(\rho,\sum_{g=1}^g\frac{n}{\eta_g}\tau_g - m\right),\end{aligned}\ ] ] where and is given by ( [ perfect : eq_4 ] ) .( a ) follows from the conditional expectation of and the identically distributed property ( let and represent and respectively ) , ( b ) follows from ( [ perfect : eq_6 ] ) in theorem [ theorem_1 ] , ( c ) follows from ( [ perfect : eq_4 ] ) , and define . is computed in appendix [ appena ] to be : where is the exponential integral function .the average sum rate for the full feedback is a special case and is given by : _ remark : _ it is noteworthy to mention that the functional form of in ( [ perfect : eq_9 ] ) consists of two main parts .the first part , which involves and , accounts for the randomness of the set of users who convey feedback as well as the scheduling policy .this part is inherent to the heterogeneous partial feedback strategy , and is independent of the system metric for evaluation , such as the average sum rate employed in this paper .the second part depends on statistical assumption of the underlying channel and the system metric , and it is impacted by partial feedback as well .we now examine how to determine the smallest that results in almost the same performance , in terms of average sum rate , as the full feedback case . applying the same technique as in ,define as the spectral efficiency ratio and the problem can be formulated as : the above problem can be numerically solved by substituting the expressions for and .in order to obtain a simpler and tractable relationship between and given , i.e. , the tradeoff between the amount of partial feedback and the number of users given existing heterogeneity of channel statistics in frequency domain , an approximation is utilized similar to that in , by observing that in ( [ perfect : eq_10 ] ) is slowly increasing in with fixed ( this phenomenon is due to the saturation of multiuser diversity ) . 
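the building block of the closed form above is an integral of the type computed in the appendix , i.e. the expected log - rate of the largest of a number of i.i.d . channel gains ; for exponentially distributed gains ( the rayleigh - fading assumption of the subband fading model ) this has a standard expression in terms of the exponential integral , which the following sketch cross - checks by monte carlo .

```python
import numpy as np
from math import comb, log
from scipy.special import exp1

def I1_closed(rho, n):
    # E[log2(1 + rho * max of n i.i.d. exp(1) gains)], via the exponential integral E1
    return sum(comb(n, k) * (-1) ** (k - 1) * np.exp(k / rho) * exp1(k / rho)
               for k in range(1, n + 1)) / log(2)

def I1_monte_carlo(rho, n, trials=200000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.exponential(size=(trials, n)).max(axis=1)
    return np.mean(np.log2(1 + rho * x))

print(I1_closed(10.0, 8), I1_monte_carlo(10.0, 8))   # the two values should agree closely
```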
observing and employing the binomial theorem yields the approximation for the spectral efficiency ratio as : from ( [ perfect : eq_12 ] ) and ( [ perfect : eq_13 ] ) , the minimum required can be obtained as follows : _ remark : _ it can be seen that depends on the system parameters ( ) as well on the largest subband size .it is also a consequence of our heterogeneous partial feedback assumption to let users in cluster convey back the best cqi out of values .this results in the fact that obtaining feedback information from users belonging to different clusters have almost the same statistical influence on scheduling performance .after analyzing the heterogeneous partial feedback design with perfect feedback , we turn to examine the impact of feedback imperfections in this section .we develop the imperfect feedback model due to channel estimation error and feedback delay in section [ imperfect_model ] , and investigate the influence of imperfections on two different transmission strategies in section [ fix_rate ] and [ vari_rate ] .then we propose how to optimize the system performance to adapt to the imperfections in section [ optimize ] .the imperfect feedback model is built upon the subband fading model for the perfect feedback case . to differentiate from the notation for the perfect feedback case and focus on the imperfect feedback model, the resource block index is dropped .let denote the frequency domain channel transfer function of user ( users in different clusters are not temporally distinguished to avoid notational overload ) . due to channel estimation error , the user only has its estimated version , and the relationship between and can be modeled as : where is the channel estimation error .the channel of each resource block is assumed to be estimated independently , which yields the channel estimation errors i.i.d . across users and resource blocks ,i.e. , .it is clear that the base station makes decision on scheduling and adaptive transmission depending on cqi , a function of .thus this information can be outdated due to delay between the instant cqi is measured and the actual instant of use for data transmission to the selected user .let be the actual channel transfer function and we employ a first - order gaussian - markov model to describe the time evolution and to capture the relationship with the delayed version as follows : where accounts for the innovation noise and is distributed as .the delay time between and is not explicitly written for notational simplicity , and ] , where the random variable is defined to have cdf .firstly , ] , which shows its potential in accurately tracking the performance of average goodput . ) and jensen approximation ( ) for the variable rate scenario under different .( , , , , , db , and db ) ] now we calculate the average outage probability . since it does not involve the function , it can be computed into closed form as follows : \right]\notag\\ \label{imperfect : eq_19}&=\sum_{\boldsymbol\tau\neq\mathbf{0}}\mathbb{p}(\mathcal{u})\sum_{m=0}^{\phi(m,\boldsymbol\eta,\boldsymbol\tau)}\theta_{g-1}(n , m,\boldsymbol\eta,\boldsymbol\tau , m)\left(1-\mathcal{i}_4\left(\beta_1,\sum_{g=1}^g\frac{n}{\eta_g}\tau_g - m\right)\right),\end{aligned}\ ] ] where , , , , , .( a ) follows from change of variables ; ( b ) follows from applying ( * ? ? ?* b.48 ) . 
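the imperfect feedback model just introduced , estimation error plus a first - order gauss - markov evolution over the feedback delay , is easy to simulate ; the error variance , the time - correlation coefficient and the backoff factor below are placeholders for the unspecified values , and the outage event is the rate set from the outdated estimate exceeding what the actual channel supports .

```python
import numpy as np

rng = np.random.default_rng(3)

def cgauss(size):           # circularly symmetric complex gaussian with unit variance
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

sigma_e2 = 0.05             # channel estimation error variance (assumed)
alpha = 0.95                # time correlation over the feedback delay (assumed)
rho, backoff = 10.0, 0.8    # snr and rate backoff factor (assumed)

h = cgauss(100000)                                            # channel when the cqi is measured
h_est = h + np.sqrt(sigma_e2) * cgauss(h.shape)               # what the user actually estimates
h_act = alpha * h + np.sqrt(1 - alpha**2) * cgauss(h.shape)   # channel at transmission time

target = np.log2(1 + rho * backoff * np.abs(h_est) ** 2)      # rate adapted to the old estimate
support = np.log2(1 + rho * np.abs(h_act) ** 2)               # rate the channel really supports
print("outage probability:", np.mean(target > support))
```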
in the case of full feedback ,the average outage probability becomes : we have obtained the relationship between the system parameter ( or ) and the system average goodput , and we now aim to maximize the average goodput by adapting the system parameters. consider the optimization of to obtain the optimal backoff factor .it is observed from ( [ imperfect : eq_13 ] ) that directly optimizing is tedious , and a near - optimal method is now proposed to obtain .this method is inspired by the results in section [ ratio ] , which show that the minimum required can be chosen to achieve almost the same performance as a system with full feedback .thus an optimal for the full feedback scenario can be optimized first , and then is obtained to match " the system performance .looking again at fig .[ fig_3 ] with emphasis on different number of partial feedback , as gets larger , the optimal converges to the full feedback case .in this example , is adequate to match the system performance .it is noteworthy to mention that this adaptation philosophy can be applied to partial feedback systems wherein system parameters are optimized according to full feedback assumption first and minimum required partial feedback is chosen subsequently .note that a closed form approximation has been obtained to track in section [ vari_rate ] , which is denoted as .the following proposition demonstrates the optimal property of when optimizing .[ proposition_3 ] there exists a unique global optimal that maximizes . the proof is provided in appendix [ appenb ] . from the above analysis, the optimization strategy can be described as : since it is proved in proposition [ proposition_3 ] that is quasiconcave in , numerical approach such as newton - raphson method can be applied to obtain . as discussed before ,once is found , the minimum required can be obtained by solving ( [ perfect : eq_12 ] ) or relying on ( [ perfect : eq_14 ] ) .the same strategy can be carried over to the optimization of , which is presented as follows : the impact of imperfections on system parameter adaptation , and the comparison between the fixed rate and variable rate strategies will be examined through simulations in section [ numerical_imperfect ] .in this section , we conduct a numerical study to verify the results developed and to draw some insight . [ cols="^,^ " , ] fig .[ fig_6 ] exhibits the comparison between the fix rate and variable rate outage scenarios as well as the effect of the number of users on the optimization of and . in order to show the system performance of the two scenarios in one figure ,a normalized parameter is defined . while examining the variable rate plots and when considering the fixed rate plots .the system parameters are : , , and db .it can be seen that for both scenarios , larger number of users yields better system performance , i.e. , higher average goodput and lower average outage probability .this is a consequence of increased multiuser diversity gain to combat the imperfections in the feedback system .-cluster system with fix rate and variable rate strategies using optimized and .the average goodput is calculated using the best - m partial feedback scheme when the minimum required is computed after obtaining or .( , , , , , db ) ] fig .[ fig_7 ] illustrates the effect of channel estimation error and feedback delay on the optimal value of and . 
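since the goodput approximation is quasiconcave in the backoff factor , any one - dimensional search ( newton - raphson on the derivative , a bounded scalar minimizer , ... ) locates the unique maximizer ; in the sketch below a monte carlo goodput estimate stands in for the closed form approximation , so the numbers are illustrative rather than those used in the paper .

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
rho, alpha, sigma_e2, n_mc = 10.0, 0.95, 0.05, 50000   # assumed parameters

def cgauss(n):
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h = cgauss(n_mc)
h_est = h + np.sqrt(sigma_e2) * cgauss(n_mc)
h_act = alpha * h + np.sqrt(1 - alpha**2) * cgauss(n_mc)

def goodput(beta):
    target = np.log2(1 + rho * beta * np.abs(h_est) ** 2)     # rate chosen from outdated cqi
    support = np.log2(1 + rho * np.abs(h_act) ** 2)
    return np.mean(np.where(target <= support, target, 0.0))  # only successful slots count

res = minimize_scalar(lambda b: -goodput(b), bounds=(0.05, 1.5), method="bounded")
print("near-optimal backoff factor:", res.x, "achieved goodput:", -res.fun)
```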
here is varied from to , and is varied from to in steps of . it can be observed from the changing profiles that both the optimal values of and get smaller as the imperfections become worse . therefore , the system should adjust the system parameters to adapt to the encountered imperfections . now we consider the adaptation of the system parameters ( or ) and partial feedback in a -cluster heterogeneous feedback system . the system parameters are : , , , and db . for both transmission strategies and for a given number of users , the optimal value of or is first optimized according to the full feedback case discussed in section [ optimize ] . then , a minimum required is obtained by matching the system performance to that in the full feedback case . fig . [ fig_8 ] demonstrates the average goodput for both transmission strategies with and ( or ) . we observe that there is almost a constant performance gain for the variable rate strategy compared with the fixed rate one . this is due to the fact that for the variable rate scenario , the system is adapting the transmission parameters conditioned on the past memory even if it is outdated . if the channel estimation error and feedback delay are not severe , the imperfections can be compensated by multiplying with the backoff factor and relying on the past feedback . in this paper , we propose and analyze a heterogeneous feedback design adapting the feedback resource according to the users frequency domain channel statistics . under the general correlated channel model , we demonstrate the gain of achieving the potential " match " among coherence bandwidth , subband size and partial feedback . to facilitate statistical analysis , we employ the subband fading model for the multi - cluster heterogeneous feedback system . we derive a closed form expression for the average sum rate under the perfect partial feedback assumption , and provide a method to obtain the minimum heterogeneous partial feedback required to obtain performance comparable to a scheme using full feedback . we also analyze the effect of imperfections on the heterogeneous partial feedback system . we obtain a closed form expression for the average goodput of the fixed rate scenario , and utilize a bounding technique and a tight approximation to track the performance of the variable rate scenario . methods adapting the system parameters to maximize the average system goodput are proposed . the heterogeneous feedback design is shown to outperform the homogeneous one with the same feedback resource . with imperfections , the system adjusting the transmission strategy and the amount of partial feedback is shown to yield better performance . the developed analysis provides a theoretical reference to understand the approximate behavior of the proposed heterogeneous feedback system and its interplay with practical imperfections . dealing with the general channel correlation and the corresponding nonlinear nature of the cqi are interesting directions for the heterogeneous feedback system . [ [ appena ] ] _ proof ( sketch ) of lemma [ lemma_1 ] : _ the methodology is an extension of the work in which deals with the homogeneous feedback case with one cluster of users and one specific subband size . let denote the cdf of .
substituting the subband size and the number of reported cqi for user in cluster makes satisfy ( [ perfect : eq_1 ] ) .it can be shown that , which concludes the proof ._ proof of theorem [ theorem_1 ] : _ substituting the expressions of from lemma [ lemma_1 ] and combining ( [ perfect : eq_5 ] ) yield : applying to a finite - order power series in ( [ appendix : eq_1 ] ) , can be expressed as , where the expression for is described in theorem [ theorem_1 ] .note that the coefficients of can be computed in a recursive manner .then we employ for the multiplication of power series . for , be calculated from and as : for , can be computed from and in the following manner : this concludes the proof . _derivation of : _ from the definition of , and .thus , where the last equality follows from the binomial theorem .therefore , . applying yields ( [ perfect : eq_10 ] ) .[ [ appenb ] ] _ derivation of : _ from the definition of , it can be shown that and .then can be calculated as : where , , .( a ) is obtained by substituting the expression of and using change of variables ; ( b ) follows from applying ( * ? ? ?* b.18 ) ; ( c ) follows from using the fact that ._ proof of proposition [ proposition_1 ] : _ where , , , , and is the gaussian hypergeometric function .( a ) is obtained by substituting the expression of ; ( b ) follows from the fact that when , is a tight upper bound for ; note that change of variables are used ; ( c ) follows from applying ( * ? ? ?* b.60 ) . _ proof ( sketch ) of proposition [ proposition_2 ] : _ define ] converges to in probability . for , it is now shown that : )}-1\right|\geq\epsilon\right)=&\mathbb{p}\left(\left|\frac{s(\check{\chi}_b)-s(\mathbb{e}[\check{\chi}_b])}{s(\mathbb{e}[\check{\chi}_b])}\right|\geq\epsilon\right)\notag\\ \label{appendix : eq_6}&\mathop{\leq}\limits^{(a)}\mathbb{p}\left(\frac{s(|\check{\chi}_b-\mathbb{e}[\check{\chi}_b]|)}{s(\mathbb{e}[\check{\chi}_b])}\geq\epsilon\right)\mathop{\rightarrow}\limits^{(b)}0,\end{aligned}\ ] ] where ( a ) follows from the concave and monotonically increasing property of : ; ( b ) follows from the asymptotic scaling rate of ] , and the utilization of the chebyshev s inequality . from extreme value theory and asymptotic analysis of order statistics , it is known that the tail behavior of converges to type gumbel distribution , which enables ] to scale as .then a method similar to that in can be employed to prove the uniformly integrable property of )} ] is strictly quasiconcave in . this property can be proved by log - concavity . it is shown in that is log - concave in for . also , is concave thus log - concave in . since log - concavityis maintained in multiplication , is log - concave in . 
from the definition of ,it is now proved to be log - concave in since $ ] is irrelevant to .therefore , it is quasiconcave in because log - concave functions are also quasiconcave .in addition , it is clear that .also , it is now shown that : }-\alpha_w\alpha\sqrt{\mathbb{e}[\check{\chi}]})^2}{2}\right)\log_2(1+\rho\beta_1\mathbb{e}[\check{\chi}])\notag\\ \label{appendix : eq_5}&\mathop{=}\limits^{(b)}\mathop{\lim}\limits_{\beta_1\rightarrow\infty}\frac{\rho}{2\alpha_w^2\ln2}\frac{1}{\left(1+\rho\beta_1\mathbb{e}[\check{\chi}]\right)\left(1-\frac{\alpha}{\sqrt{\beta_1}}\right)}\exp\left(-\frac{(\alpha_w\sqrt{\beta_1\mathbb{e}[\check{\chi}]}-\alpha_w\alpha\sqrt{\mathbb{e}[\check{\chi}]})^2}{2}\right)=0,\end{aligned}\ ] ] where ( a ) follows from the upper bound for ; ( b ) follows from applying lhospital s rule .therefore , there exists a unique global optimal which maximizes .the authors want to express their deep appreciation to the anonymous reviewers and the associated editor for their many valuable comments and suggestions , which have greatly helped to improve this paper .d. j. love , r. w. heath , v. k. n. lau , d. gesbert , b. d. rao , and m. andrews , `` an overview of limited feedback in wireless communication systems , '' _ ieee j. sel .areas commun ._ , vol . 26 , no . 8 , pp .13411365 , oct .2008 .h. asplund , a. a. glazunov , a. f. molisch , k. i. pedersen , and m. steinbauer , `` the cost 259 directional channel model - part ii : macrocells , '' _ ieee trans .wireless commun ._ , vol . 5 , no . 12 , pp .34343450 , dec . 2006 .r. knopp and p. a. humblet , `` information capacity and power control in single - cell multiuser communications , '' in _ proc .ieee international conference on communications ( icc ) _ , jun .1995 , pp . 331335 .v. hassel , d. gesbert , m. s. alouini , and g. e. oien , `` a threshold - based channel state feedback algorithm for modern cellular systems , '' _ ieee trans .wireless commun ._ , vol . 6 , no . 7 , pp24222426 , jul . 2007 .b. c. jung , t. w. ban , w. choi , and d. k. sung , `` capacity analysis of simple and opportunistic feedback schemes in ofdma systems , '' in _ proc . international symposium on communications and information technologies ( iscit ) _ , oct .2007 , pp . 203208 .j. y. ko and y. h. lee , `` opportunistic transmission with partial channel information in multi - user ofdm wireless systems , '' in _ proc .ieee wireless communications and networking conference ( wcnc ) _ , mar .2007 , pp . 13181322 .k. pedersen , t. kolding , i. kovacs , g. monghal , f. frederiksen , and p. mogensen , `` performance analysis of simple channel feedback schemes for a practical ofdma system , '' _ ieee trans .technol . _ , vol .58 , no . 9 , pp .53095314 , nov .j. leinonen , j. hamalainen , and m. juntti , `` performance analysis of downlink ofdma resource allocation with limited feedback , '' _ ieee trans .wireless commun ._ , vol . 8 , no . 6 , pp . 29272937 , juns. donthi and n. mehta , `` joint performance analysis of channel quality indicator feedback schemes and frequency - domain scheduling for lte , '' _ ieee trans . veh ._ , vol . 60 , no . 7 , pp . 30963109 , sept . 2011. y. huang and b. d. rao , `` environmental - aware heterogeneous partial feedback design in a multiuser ofdma system , '' in _ proc .asilomar conference on signals , systems , and computers _2011 , pp . 970974 .v. lau , w. k. ng , and d. s. w. 
hui , `` asymptotic tradeoff between cross - layer goodput gain and outage diversity in ofdma systems with slow fading and delayed csit , '' _ ieee trans .wireless commun ._ , vol . 7 , no . 7 , pp . 27322739 , jul. 2008 .a. kuhne and a. klein , `` throughput analysis of multi - user ofdma - systems using imperfect cqi feedback and diversity techniques , '' _ieee j. sel .areas commun ._ , vol . 26 , no . 8 , pp . 14401450 , oct . 2008 .s. n. donthi and n. b. mehta , `` an accurate model for eesm and its application to analysis of cqi feedback schemes and scheduling in lte , '' _ ieee trans .wireless commun . _ , vol .10 , no .10 , pp . 34363448 , oct .l. wan , s. tsai , and m. almgren , `` a fading - insensitive performance metric for a unified link quality model , '' in _ proc .ieee wireless communications and networking conference ( wcnc ) _ , apr .2006 , pp .21102114 .m. r. mckay , p. j. smith , h. a. suraweera , and i. b. collings , `` on the mutual information distribution of ofdm - based spatial multiplexing : exact variance and outage approximation , '' _ ieee trans .inf . theory _54 , no . 7 , pp . 32603278 , jul. 2008 .yichao huang ( s10m12 ) received the b.eng .degree in information engineering with highest honors from the southeast university , nanjing , china , in 2008 , and the m.s . and ph.d .degrees in electrical engineering from the university of california , san diego , la jolla , in 2010 and 2012 , respectively .he then join qualcomm , corporate r&d , san diego , ca .he interned with qualcomm , corporate r&d , san diego , ca , during summer 2011 and summer 2012 .he was with california institute for telecommunications and information technology ( calit2 ) , san diego , ca , during summer 2010 .he was a visiting student at the princeton university , princeton , nj , during spring 2012 .huang received the microsoft young fellow award in 2007 from microsoft research asia .he received the ece department fellowship from the university of california , san diego in 2008 , and was a finalist of qualcomm innovation fellowship in 2010 .his research interests include communication theory , optimization theory , wireless networks , and signal processing for communication systems .bhaskar d. rao ( s80m83sm91f00 ) received the b.tech .degree in electronics and electrical communication engineering from the indian institute of technology , kharagpur , india , in 1979 , and the m.sc . and ph.d .degrees from the university of southern california , los angeles , in 1981 and 1983 , respectively .since 1983 , he has been with the university of california at san diego , la jolla , where he is currently a professor with the electrical and computer engineering department .he is the holder of the ericsson endowed chair in wireless access networks and was the director of the center for wireless communications ( 20082011 ) .his research interests include digital signal processing , estimation theory , and optimization theory , with applications to digital communications , speech signal processing , and human computer interactions .rao s research group has received several paper awards .his paper received the best paper award at the 2000 speech coding workshop and his students have received student paper awards at both the 2005 and 2006 international conference on acoustics , speech , and signal processing , as well as the best student paper award at nips 2006 .a paper he coauthored with b. song and r. cruz received the 2008 stephen o. 
rice prize paper award in the field of communications systems .he was elected to the fellow grade in 2000 for his contributions in high resolution spectral estimation .he has been a member of the statistical signal and array processing technical committee , the signal processing theory and methods technical committee , and the communications technical committee of the ieee signal processing society .he has also served on the editorial board of the eurasip signal processing journal .
current ofdma systems group resource blocks into subbands to form the basic feedback unit . a homogeneous feedback design with a common subband size is not aware of the heterogeneous channel statistics among users . under a general correlated channel model , we demonstrate the gain of matching the subband size to the underlying channel statistics , motivating a heterogeneous feedback design with different subband sizes and feedback resources across clusters of users . employing the best - m partial feedback strategy , users with a smaller subband size would convey more partial feedback to match the frequency selectivity . in order to develop an analytical framework to investigate the impact of partial feedback and potential imperfections , we leverage the multi - cluster subband fading model . the perfect feedback scenario is thoroughly analyzed , and the closed form expression for the average sum rate is derived for the heterogeneous partial feedback system . we proceed to examine the effect of imperfections due to channel estimation error and feedback delay , which leads to additional consideration of system outage . two transmission strategies , the fixed rate and the variable rate , are considered for the outage analysis . we also investigate how to adapt to the imperfections in order to maximize the average goodput under heterogeneous partial feedback . to appear in ieee transactions on signal processing . index terms : heterogeneous feedback , ofdma , partial feedback , imperfect feedback , average goodput , multiuser diversity
solutions to integrable differential equations in terms of theta - functions were introduced with the works of novikov , dubrovin , matveev , its , krichever , ( see ) for the korteweg - de vries ( kdv ) equation .such solutions to e.g. the kdv , the sine - gordon , and the non - linear schrdinger equation describe periodic or quasi - periodic solutions , see .they are given explicitly in terms of riemann theta - functions defined on some riemann surface .though all quantities entering the solution are in general given in explicit form via integrals on the riemann surface , the work with theta - functional solutions admittedly has not reached the importance of soliton solutions .the main reason for the more widespread use of solitons is that they are given in terms of algebraic or exponential functions . on the other handthe parameterization of theta - functions by the underlying riemann surface is very implicit .the main parameters , typically the branch points of the riemann surface , enter the solutions as parameters in integrals on the riemann surface .a full understanding of the functional dependence on these parameters seems to be only possible numerically . in recent yearsalgorithms have been developed to establish such relations for rather general riemann surfaces as in or via schottky uniformization ( see ) , which have been incorporated successively in numerical and symbolic codes , see and references therein ( the last two references are distributed along with maple 6 , respectively maple 8 , and as a java implementation at ) . for an approach to express periods of hyperelliptic riemann surfaces via theta constants see .these codes are convenient to study theta - functional solutions of equations of kdv - type where the considered riemann surfaces are ` static ' , i.e. , independent of the physical coordinates . in these casesthe characteristic quantities of the riemann surface have to be calculated once , just the comparatively fast summation in the approximation of the theta series via a finite sum as e.g.in has to be carried out in dependence of the space - time coordinates .the purpose of this article is to study numerically theta - functional solutions of the ernst equation which were given by korotkin . in this casethe branch points of the underlying hyperelliptic riemann surface are parameterized by the physical coordinates , the spectral curve of the ernst equation is in this sense ` dynamical ' .the solutions are thus not studied on a single riemann surface but on a whole family of surfaces .this implies that the time - consuming calculation of the periods of the riemann surface has to be carried out for each point in the space - time .this includes limiting cases where the surface is almost degenerate .in addition the theta - functional solutions should be calculated to high precision in order to be able to test numerical solutions for rapidly rotating neutron stars such as provided e.g. 
by the spectral code ` lorene ` .this requires a very efficient code of high precision .we present here a numerical code for hyperelliptic surfaces where the integrals entering the solution are calculated by expanding the integrands with a fast cosine transformation in matlab .the precision of the numerical evaluation is tested by checking identities for periods on riemann surfaces and by comparison with exact solutions .the code is in principle able to deal with general ( non - singular ) hyperelliptic surfaces , but is optimized for a genus 2 solution to the ernst equation which was constructed in .we show that an accuracy of the order of machine precision ( ) can be achieved at a space - time point in general position with 32 polynomials and in the case of almost degenerate surfaces which occurs e.g. , when the point approaches the symmetry axis with at most 256 polynomials .global tests of the numerical accuracy of the solutions to the ernst equation are provided by integral identities for the ernst potential and its derivatives : the equality of the arnowitt - deser - misner ( adm ) mass and the komar mass ( see ) and a generalization of the newtonian virial theorem as derived in .we use the so determined numerical data for the theta - functions to provide ` exact ' boundary values on a sphere for the program library ` lorene ` which was developed for a numerical treatment of rapidly rotating neutron stars . `lorene ` solves the boundary value problem for the stationary axisymmetric einstein equations with spectral methods .we show that the theta - functional solution is reproduced to the order of and better .the paper is organized as follows : in section [ sec : ernsteq ] we collect useful facts on the ernst equation and hyperelliptic riemann surfaces , in section [ sec : spectral ] we summarize basic features of spectral methods and explain our implementation of various quantities .the calculation of the periods of the hyperelliptic surface and the non - abelian line integrals entering the solution is performed together with tests of the precision of the numerics . in section [ sec : integrals ] we check integral identities for the ernst potential .the test of the spectral code ` lorene ` is presented in section [ sec : lorene ] . in section [ sec : concl ] we add some concluding remarks .the ernst equation for the complex valued potential ( we denote the real and the imaginary part of with and respectively ) depending on the two coordinates can be written in the form the equation has a physical interpretation as the stationary axisymmetric einstein equations in vacuum ( see appendix and references given therein ) .its complete integrability was shown by maison and belinski - zakharov . for real ernst potential, the ernst equation reduces to the axisymmetric laplace equation for .the corresponding solutions are static and belong to the so called weyl class , see .algebro - geometric solutions to the ernst equation were given by korotkin .the solutions are defined on a family of hyperelliptic surfaces with corresponding to the plane algebraic curve where is the genus of the surface and where the branch points , are independent of the physical coordinates and for each subject to the reality condition or .hyperelliptic riemann surfaces are important since they show up in the context of algebro - geometric solutions of various integrable equations as kdv , sine - gordon and ernst . 
whereas it is a non - trivial problem to find a basis for the holomorphic differentials on general surfaces ( see e.g. ) , it is given in the hyperelliptic case ( see e.g. ) by which is the main simplification in the use of these surfaces .we introduce on a canonical basis of cycles , .the holomorphic differentials are normalized by the condition on the -periods the matrix of -periods is given by .the matrix is a so - called riemann matrix , i.e. it is symmetric and has a negative definite real part .the abel map with base point is defined as , where the jacobian of .the theta - function with characteristics corresponding to the curve is given by where is the argument and are the characteristics .we will only consider half - integer characteristics in the following .the theta - function with characteristics is , up to an exponential factor , equivalent to the theta - function with zero characteristic ( the riemann theta - function is denoted with ) and shifted argument , we denote by a differential of the third kind , i.e. , a 1-form which has poles in with respective .residues and .this singularity structure characterizes the differentials only up to an arbitrary linear combination of holomorphic differentials .the meromorphic differentials can be normalized by the condition that all -periods vanish .we use the notation for the infinite points on different sheets of the curve , namely as .the differential is given up to holomorphic differentials by .it is well known that the -periods of normalized differentials of the third kind can be expressed in terms of the abel map ( see e.g. ) , in a physically interesting subclass of korotkin s solution was identified which can be written in the form where and where is a piece - wise smooth contour on and is a non - zero hlder - continuous function on .the contour and the function have to satisfy the reality conditions that with also and ; both are independent of the physical coordinates . in the followingwe will discuss the example of the solution constructed in which can be interpreted as a disk of collisionless matter .for a physical interpretation see .the solution is given on a surface of the form ( [ hyper1 ] ) with genus 2 .the branch points independent of the physical coordinates are related through the relations , and .the branch points are parameterized by two real parameters and . writing with real , , we have the contour is the piece of the covering of the imaginary axis in the upper sheet between ] degenerates to a point .as was shown in , the ernst potential can be written in this limit in terms of theta - functions on the elliptic surface defined by , i.e. the surface with the cut ] and ] by the relation .\ ] ] they satisfy the differential equation the addition theorems for sine and cosine imply the recursion relations for the polynomials and for their derivatives .the chebyshev polynomials are orthogonal on with respect to the hermitian inner product we have where and otherwise .now suppose that a function on is sampled at the points and that is the interpolating polynomial .defining , for in the discrete case and the numbers we have this looks very much like a discrete cosine series and in fact one can show that the coefficients are related to the values of the function by an inverse discrete fourier transform ( dct ) note , that up to a numerical factor the dct is idempotent , i.e. 
, it is its own inverse .this relationship between the chebyshev polynomials and the dct is the basis for the efficient computations because the dct can be performed numerically by using the fast fourier transform ( fft ) and pre- and postprocessing of the coefficients .the fast transform allows us to switch easily between the representations of the function in terms of its sampled values and in terms of the expansion coefficients ( or ) .the fact that is approximated globally by a finite sum of polynomials allows us to express any operation applied to approximately in terms of the coefficients .let us illustrate this in the case of integration .so we assume that and we want to find an approximation of the integral for , i.e. , the function so that .we make the ansatz and obtain the equation expressing in terms of the using and comparing coefficients implies the equations between the coefficients which determines all in terms of the except for .this free constant is determined by the requirement that which implies ( because ) these coefficients determine a polynomial of degree which approximates the indefinite integral of the -th degree polynomial .the exact function is a polynomial of degree whose highest coefficient is proportional to the highest coefficient of .thus , ignoring this term we make an error whose magnitude is of the order of so that the approximation will be the better the smaller is . the same is true when a smooth function is approximated by a polynomial .then , again , the indefinite integral will be approximated well by the polynomial whose coefficients are determined as above provided the highest coefficients in the approximating polynomial are small . from the coefficients we can also find an approximation to the definite integral by evaluating thus , to find an approximation of the integral of a function we proceed as described above , first computing the coefficients of , computing the and then calculating the sum of the odd coefficients . the riemann surface is defined by an algebraic curve of the form where in our case we have throughout . in order to compute the periods and the theta - functionsrelated to this riemann surface it is necessary to evaluate the square - root for arbitrary complex numbers . in order to make this a well defined problemwe introduce the cut - system as indicated in fig .[ fig : cut - system ] . on the cut surfacethe square - root is defined as in as the product of square - roots of monomials the square - root routines such as the one available in matlab usually have their branch - cut along the negative real axis .the expression ( [ eq : root ] ) is holomorphic on the cut surface so that we can not simply take the builtin square - root when computing .instead we need to use the information provided by the cut - system to define adapted square - roots .let be the argument of a complex number with values in -\pi,\pi[ ] with branch - cut along the ray with argument by computing for each the square - root with the available matlab routine and then putting {z } = \left\ { \begin{array}{rl } s & \alpha/2<\arg(s)<\alpha/2 + \pi\\ -s & \text{otherwise } \end{array } \right . 
.\ ] ] with this square - root we compute the two factors {k - p_1}\sqrt[(\alpha)]{k - p_2}.\ ] ] it is easy to see that this expression changes sign exactly when the branch - cut between and is crossed .we compute the expression ( [ eq : root ] ) by multiplying the pairs of factors which correspond to the branch - cuts .this procedure is not possible in the case of the non - linear transformations we are using to evaluate the periods in certain limiting cases . in these casesthe root is chosen in a way that the integrand is a continuous function on the path of integration .the quantities entering formula ( [ ernst2 ] ) for the ernst potential are the periods of the riemann surface and the line integrals and .the value of the theta - function is then approximated by a finite sum .the periods of a hyperelliptic riemann surface can be expressed as integrals between branch points .since we need in our example the periods of the holomorphic differentials and the differential of the third kind with poles at , we have to consider integrals of the form where the , denote the branch points of .in general position we use a linear transformation of the form to transform the integral ( [ period1 ] ) to the normal form where the are complex constants and where is a continuous ( in fact , analytic ) complex valued function on the interval ] .this has the effect that only the -period diverges logarithmically in this case whereas the remaining periods stay finite as tends to 0 . in the cut systems [ fig : cut - system ] ,all periods diverge as .since the divergence is only logarithmical this does not pose a problem for values of .in addition the integrals which have to be calculated in the evaluation of the periods are the same in both cut - system .thus there is no advantage in using different cut systems for the numerical work . to test the numerics we use the fact that the integral of any holomorphic differential along a contour surrounding the cut ] .it can be seen in fig .[ fig : test_periods ] that 128 polynomials are sufficient to obtain machine precision even in almost degenerate situations .the cut - system in fig . [ fig : cut - system ] is adapted to the limit in what concerns the -periods , since the cut which collapses in this limit is encircled by an -cycle .however there will be similar problems as above in the determination of the -periods .for we split the integrals for the -periods as above in two integrals between and , and and . for the first integral we use the integration variable , for the second .since the riemann matrix ( the matrix of -periods of the holomorphic differentials after normalization ) is symmetric , the error in the numerical evaluation of the -periods can be estimated via the asymmetry of the calculated riemann matrix .we define the function as the maximum of the norm of the difference in the -periods discussed above and the difference of the off - diagonal elements of the riemann matrix .this error is presented for a whole space - time in fig .[ fig : error ] .the values for and vary between and . on the axis and at the diskwe give the error for the elliptic integrals ( only the error in the evaluation of the -periods , since the riemann matrix has just one component ) . 
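the branch - adapted square root described above amounts to conditionally flipping the sign of the principal square root so that the cut lies along a prescribed ray , and multiplying the two factors that belong to one branch cut ; the sketch below ( python / numpy , in place of the matlab routine mentioned above ) illustrates this , with the cut direction alpha chosen freely .

```python
import numpy as np

def sqrt_ray(z, alpha):
    """square root of z with branch cut along the ray arg(z) = alpha."""
    s = np.sqrt(complex(z))                    # principal branch, cut on the negative axis
    if (np.angle(s) - alpha / 2) % (2 * np.pi) < np.pi:
        return s                               # arg(s) in (alpha/2, alpha/2 + pi): keep
    return -s                                  # otherwise flip the sign

def cut_factor(k, p1, p2, alpha):
    """sqrt((k - p1)(k - p2)) changing sign only across the cut joining p1 and p2;
    alpha is the argument of the cut direction (an assumption of this sketch)."""
    return sqrt_ray(k - p1, alpha) * sqrt_ray(k - p2, alpha)

alpha = np.pi / 3
print(sqrt_ray(np.exp(1j * (alpha + 0.01)), alpha),   # the value jumps across the chosen ray ...
      sqrt_ray(np.exp(1j * (alpha - 0.01)), alpha))
print(sqrt_ray(-1 + 0.01j, alpha),                    # ... but is continuous at the negative axis
      sqrt_ray(-1 - 0.01j, alpha))
```

the chebyshev integration behind the period and line - integral computations works entirely on expansion coefficients : sample the integrand , obtain its chebyshev coefficients , build the antiderivative coefficients through the standard recursion b_k = ( a_{k-1} - a_{k+1} ) / ( 2k ) , fix the free constant ( here by requiring the antiderivative to vanish at -1 ) , and read off the definite integral , which only involves the odd coefficients . the sketch below uses numpy's chebyshev utilities instead of the fft - based transform of the matlab code , and a simple test integrand .

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_antiderivative(c):
    """coefficients b (numpy convention, F = sum b_k T_k) of the antiderivative of
    f = sum c_k T_k, normalized so that F(-1) = 0."""
    a = np.array(c, dtype=float)
    a[0] *= 2.0                                 # switch to the halved-a_0 convention, so that
    a = np.concatenate([a, [0.0, 0.0]])         # b_k = (a_{k-1} - a_{k+1}) / (2k) holds for k >= 1
    b = np.zeros(len(c) + 1)
    for k in range(1, len(b)):
        b[k] = (a[k - 1] - a[k + 1]) / (2 * k)
    b[0] = -C.chebval(-1.0, np.concatenate([[0.0], b[1:]]))   # enforce F(-1) = 0
    return b

f = lambda x: np.exp(x) * np.cos(3 * x)
c = C.chebinterpolate(f, 32)                    # chebyshev coefficients of the interpolant
b = cheb_antiderivative(c)
integral = C.chebval(1.0, b)                    # definite integral over [-1, 1]
odd_sum = 2.0 * b[1::2].sum()                   # the same value from the odd coefficients alone
exact = (np.exp(1) * (np.cos(3) + 3 * np.sin(3))
         - np.exp(-1) * (np.cos(3) - 3 * np.sin(3))) / 10.0
print(integral, odd_sum, exact)
```

finally , the theta - functions entering the solution are approximated by truncating the series to a finite sum ; a minimal genus - 2 sketch is given below , using one common convention , theta ( z | b ) = sum over integer vectors n of exp ( n^t b n / 2 + n^t z ) , consistent with a riemann matrix whose real part is negative definite , and a purely illustrative matrix b in place of one computed from an actual surface . half - integer characteristics can then be handled by the shift of argument and exponential prefactor mentioned above .

```python
from itertools import product
import numpy as np

def theta(z, B, bound=10):
    """riemann theta function, truncated to |n_i| <= bound in every component."""
    z = np.asarray(z, dtype=complex)
    total = 0.0 + 0.0j
    for n in product(range(-bound, bound + 1), repeat=len(z)):
        n = np.asarray(n, dtype=float)
        total += np.exp(0.5 * n @ B @ n + n @ z)
    return total

B = np.array([[-2.0 + 0.5j, 0.3 + 0.1j],       # illustrative riemann matrix (symmetric, with
              [0.3 + 0.1j, -1.5 - 0.2j]])      # negative definite real part), not from a surface
print(theta(np.array([0.1 + 0.2j, -0.3j]), B, bound=12))
```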
for asymptotic formulas for the ernst potential are used .the calculation is performed with 128 polynomials , and up to 256 for .it can be seen that the error is in this case globally below .the line integrals and in ( [ ernst2 ] ) are linear combinations of integrals of the form in general position , i.e. not close to the disk and small enough , the integrals can be directly calculated after the transformation with the chebyshev integration routine . to test the numerics we consider the newtonian limit ( ) where the function is proportional to ,i.e. we calculate the test integral we compare the numerical with the analytical result in fig .[ fig : line ] . in general position machine precision is reached with 32 polynomials .when the moving cut approaches the path , i.e. , when the space - time point comes close to the disk , the integrand in ( [ eq : testline ] ) develops cusps near the points and . in this casea satisfactory approximation becomes difficult even with a large number of polynomials .therefore we split the integration path in ] and ] ( typically ) as a new value in the source to provide numerical stability .the iteration is in general stopped if .the ernst equation ( [ ernst1 ] ) is already in the form ( [ eq : poisson ] ) , but it has the disadvantage that the equation is no longer strongly elliptic at the ergo - sphere where . in physical terms , this apparent singularity is just a coordinate singularity , and the theta - functional solutions are analytic there . the ernst equation in the form ( [ eq : poisson ] ) has a right - hand side of the form ` ' for which causes numerical problems especially in the iteration process since the zeros of the numerator and the denominator will only coincide for the exact solution .the disk solutions we are studying here have ergo - spheres in the shape of cusped toroids ( see ) .therefore it is difficult to take care of the limit by using adapted coordinates .consequently the use of the ernst picture is restricted to weakly relativistic situations without ergo - spheres in this framework . to be able to treat strongly relativistic situations, we use a different form of the stationary axisymmetric vacuum einstein equations which is derived from the standard -decomposition , see .we introduce the functions and via where is the component of the metric leading to the ernst potential , see ( [ eq : wlp ] ) in the appendix .expressions for in terms of theta - functions are given in .the vacuum einstein equations for the functions ( [ eq : nun ] ) read by putting we obtain the flat 3-dimensional laplacian acting on on the left - hand side , since the function can only vanish at a horizon , it is globally non - zero in the examples we are considering here .thus the system of equations ( [ eq : nu ] ) and ( [ eq : v ] ) is strongly elliptic , even at an ergo - sphere .the disadvantage of this regular system is the non - linear dependence of the potentials and on the ernst potential and via ( [ eq : nun ] ) .thus we loose accuracy due to rounding errors of roughly an order of magnitude . though we have shown in the previous sections that we can guarantee the numerical accuracy of the data for and to the order of , the values for and are only reliable to the order of . 
to test the spectral methods implemented in ` lorene ` , we provide boundary data for the disk solutions discussed above on a sphere around the disk .for these solutions it would have been more appropriate to prescribe data at the disk , but ` lorene ` was developed to treat objects of spherical topology such as stars which suggests the use of spherical coordinates .it would be possible to include coordinates like the disk coordinates of the previous section in ` lorene ` , but this is beyond the scope of this article .instead we want to use the poisson - dirichlet routine which solves a dirichlet boundary value problem for the poisson equation for data prescribed at a sphere .we prescribe the data for and on a sphere of radius and solve the system ( [ eq : nu ] ) and ( [ eq : v ] ) iteratively in the exterior of the sphere .if the iteration converges , we compare the numerical solution in the exterior of the sphere with the exact solution .since spherical coordinates are not adapted to the disk geometry , a huge number of spherical harmonics would be necessary to approximate the potentials if is close to the disk radius .the limited memory on the used computers imposes an upper limit of 64 to 128 harmonics .we choose the radius and the number of harmonics in a way that the fourier coefficients in drop below to make sure that the provided boundary data contain the related information to the order of machine precision .the exterior of the sphere where the boundary data are prescribed is divided in two domains , one from to and one from to infinity . in the second domain used as a coordinate . for the dependence which is needed only for the operator in ( [ eq : v ] ) ,4 harmonics in are sufficient . since ` lorene ` is adapted to the solution of the poisson equation , it is to be expected that it reproduces the exact solution best for nearly static situations , since the static solutions solve the laplace equation .the most significant deviations from the exact solution are therefore expected for . for the case , we consider 32 harmonics in on a sphere of radius .the iteration is stopped if which is the case in this example after 90 steps .the exact solution is reproduced to the order of .the absolute value of the difference between the exact and the numerical solution on a sphere of radius 3 is plotted in fig .[ fig : maxdifftheta ] in dependence of .there is no significant dependence of the error on .the maximal deviation is typically found on or near the axis . as can be seen from fig .[ fig : maxdiffr ] which gives the dependence on on the axis , the error decreases almost linearly with except for some small oscillations near infinity .we have plotted the maximal difference between the numerical and the exact solution for a range of the physical parameters and in fig .[ fig : gamma ] .as can be seen , the expectation is met that the deviation from the exact solution increases if the solution becomes more relativistic ( larger ) .as already mentioned , the solution can be considered as exactly reproduced if the deviation is below .increasing the value of for fixed leads to less significant effects though the solutions become less static with increasing ., for , the ultra - relativistic limit corresponds to a space - time with a singular axis which is not asymptotically flat , see . 
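the iterative treatment of the system ( [ eq : nu ] ) , ( [ eq : v ] ) , solving a flat poisson equation with the source evaluated at the previous iterate and under - relaxing the update for stability , can be illustrated on a one - dimensional toy problem ; the sketch below uses finite differences and an assumed relaxation parameter purely for illustration , whereas ` lorene ` of course works spectrally in three dimensions .

```python
import numpy as np

N = 201
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def source(u):                       # toy nonlinear source standing in for the right-hand side
    return np.sin(u) + 1.0

# second-difference operator with homogeneous dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(N - 2)) + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1)) / h**2

u, lam = np.zeros(N), 0.5            # relaxation parameter (an assumed, typical value)
for it in range(200):
    u_new = np.zeros(N)
    u_new[1:-1] = np.linalg.solve(A, source(u)[1:-1])   # invert the flat laplacian
    diff = np.max(np.abs(u_new - u))
    u = lam * u_new + (1.0 - lam) * u                   # relaxed value fed back into the source
    if diff < 1e-12:
        break
print("iterations:", it + 1, "max |u|:", np.max(np.abs(u)))
```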
since ` lorene ` expands all functions in a galerkin basis with regular axis in an asymptotically flat setting , solutions close to this singular limit can not be approximated .convergence gets much slower and can only be achieved with considerable relaxation . for and needed nearly 2000 iterations with a relaxation parameter of .the approximation is rather crude ( in the order of one percent ) . for higher values of convergence could be obtained .this is however due to the singular behavior of the solution in the ultra - relativistic limit . in all other cases ,` lorene ` is able to reproduce the solution to the order of and better , more static and less relativistic cases are reproduced with the provided accuracy .in this article we have presented a scheme based on spectral methods to treat hyperelliptic theta - functions numerically .it was shown that an accuracy of the order of machine precision could be obtained with an efficient code .as shown , spectral methods are very convenient if analytic functions are approximated . close to singularities such as the degeneration of the riemann surface, analytic techniques must be used to end up with analytic integrands in the discussed example .the obtained numerical data were used to provide boundary values for the code ` lorene ` which made possible a comparison of the numerical solution to the boundary value problem with the numerically evaluated theta - functions . for a large range of the physical parametersthe numerical solution was of the same quality as the provided data .the main errors in ` lorene ` are introduced by rounding errors in the iteration .this shows that spectral methods provide a reliable and efficient numerical treatment both for elliptic equations and for hyperelliptic riemann surfaces .however , to maintain the global quality of the numerical approximation an analytical understanding of the solutions is necessary in order to treat the non - analyticities of the solutions .the ernst equation has a geometric interpretation in terms of the stationary axisymmetric einstein equations in vacuum .the metric can be written in this case in the weyl - lewis - papapetrou form ( see ) where and are weyl s canonical coordinates and and are the commuting asymptotically timelike respectively spacelike killing vectors . in this casethe vacuum field equations are equivalent to the ernst equation ( [ ernst1 ] ) for the complex potential . for a given ernst potential , the metric ( [ eq : wlp ] )can be constructed as follows : the metric function is equal to the real part of the ernst potential .the functions and can be obtained via a line integration from the equations and this implies that is the dual of the imaginary part of the ernst potential .the equation ( [ kxi ] ) for follows from the equations where is the ( three - dimensional ) ricci tensor corresponding to the spatial metric .this reflects a general structure of the vacuum einstein equations in the presence of a killing vector . for the ricci scalarone finds we denote by the determinant of the metric .the komar integral of the twist of the timelike killing vector over the whole spacetime establishes the equivalence between the asymptotically defined adm mass and the komar mass , where the integration is carried out over the disk , where is the normal at the disk , and where is the energy momentum tensor of the disk given in . in other wordsthe adm mass can be calculated either asymptotically or locally at the disk . 
to obtain an identity which does not involve only surface integrals , we consider as in an integral over the trace of equation ( [ eq : ricci ] ) for the ricci - tensor , to avoid numerical problems at the set of zeros of , the so - called ergo - sphere ( see for the disk solutions studied here ) , we multiply both sides of equation ( [ eq : ricci ] ) by . integrating the resulting relation over the whole space - time, we find after partial integration here the only contributions of a surface integral arise at the disk , since for and since the axis is regular ( vanishes on the axis ) .if we replace via ( [ kxi ] ) , we end up with an identity for the ernst potential and its derivatives , this identity ( as the identity given in ) can be seen as a generalization of the newtonian virial theorem .the relation ( [ virial20 ] ) coincides with the corresponding relation of only in the newtonian limit .this reflects the fact that generalizations of a newtonian result to a general relativistic setting are not unique .our formulation is adapted to the ernst picture and avoids problems at the ergo - spheres , thus it seems optimal to test the numerics for ernst potentials in terms of theta - functions .we thank a. bobenko , d. korotkin , e. gourgoulhon and j. novak for helpful discussions and hints .ck is grateful for financial support by the marie - curie program of the european union and the schloessmann foundation .belinskii , v.e .zakharov , integration of the einstein equations by the methods of inverse scattering theory and construction of explicit multisoliton solutions , _ sov .* 48 * ( 1978 ) 985 - 994 .
a code for the numerical evaluation of hyperelliptic theta - functions is presented . characteristic quantities of the underlying riemann surface such as its periods are determined with the help of spectral methods . the code is optimized for solutions of the ernst equation where the branch points of the riemann surface are parameterized by the physical coordinates . an exploration of the whole parameter space of the solution is thus only possible with an efficient code . the use of spectral approximations allows for an efficient calculation of all quantities in the solution with high precision . the case of almost degenerate riemann surfaces is addressed . tests of the numerics using identities for periods on the riemann surface and integral identities for the ernst potential and its derivatives are performed . it is shown that an accuracy of the order of machine precision can be achieved . these accurate solutions are used to provide boundary conditions for a code which solves the axisymmetric stationary einstein equations . the resulting solution agrees with the theta - functional solution to very high precision .
the fundamental essence of any information protocol resides in the amount of accessible information in a given physical system . in a realistic quantum process , the system is rarely isolated and commonly interacts with an environment , which results in the information being scattered in the typically large hilbert space of the environment . in most theoretical models ,an open system is studied through the dynamics of the reduced density matrix , upon tracing over the environmental degrees of freedom .the dynamics of the system is described by completely positive semigroup maps , or equivalently by the solution of a master equation in lindblad form .this formalism assumes weak system - environment coupling , short environment correlation time and a memoryless " transfer of information from the system to the environment leading to information erasure .such a dynamical model for open quantum systems is called markovian .however , it is readily observed that the markovian formalism is not always optimal or justified when dealing with many important open quantum systems , especially in complex biological models or interacting many - body systems in condensed matter physics .more accurately , the system - environment interaction needs to be treated as non - markovian , which tends to deviate from the completely positive semigroup dynamics , thus making the corresponding mathematical formalism difficult .incidentally , non - markovian dynamics are not memoryless " and allow a backflow of information from the environment ( cf . ) to the system , a fact which has interesting ramifications from the perspective of quantum information theory . for example ,non - markovian processes have been shown to preserve entanglement in many - body and biomolecular systems , and have been exploited in quantum key distribution , enhancing precision in quantum metrology , and implementing certain quantum information protocols .non - markovianity also plays a detrimental role in quantum darwinism , thus impeding the emergence of classical objectivity from a quantum world . with recent development of experimental techniques to engineer and control system - environment interactions ( see also ref . 
) , there is considerable interest in characterizing and quantifying non - markovian dynamics and investigating possible applications in scalable quantum technologies that are robust against environment - induced decoherence or phenomena such as entanglement sudden death .although the concept of non - markovianity is well established in the classical realm , its quantum extension is often riddled with inconsistency and subtle variations .this has led to a substantial amount of literature attempting to quantitatively characterize non - markovianity based primarily on the nonmonotonic time evolution of some quantum information measure ( for reviews , see ) .such nonmonotonic behavior arises from the nondivisibility of the completely positive and trace preserving ( cptp ) maps that describe the dynamics of the open quantum system , which is perhaps the most established marker of non - markovianity ( cf .the nondivisibility of a cptp map is necessary for the occurrence of information backflow from the environment or the presence of the environment memory .( color online ) .a scheme showing the protocol under consideration .the system interacts with the environment , while the quantum interferometric power is computed by applying local unitaries on the ancilla part , which serves as the measuring apparatus ., scaledwidth=45.0% ] a number of non - markovian measures and witnesses have been proposed . among the most important ones ,let us mention those based on the deviation of the dynamical maps from divisible cptp maps and those based on the nonmonotonicity of the trace distance or distinguishability , entanglement , quantum mutual information , and channel capacities .other significant attempts to quantify non - markovianity include the flow of quantum fisher information , fidelity between dynamical time - evolved states , distinguishability in gaussian quantum systems in terms of fidelity , volume of gaussian states , backflow via accessible information , and local quantum uncertainty .recent proposals have also been made to characterize non - markovianity in direct analogy to entanglement theory and to study the links between system - environment quantum correlations and non - markovian behavior .interestingly , although the different non - markovian measures and witnesses emanate from the dynamical divisibility criteria , the inverse implication is not always true , which makes them incompatible with each other for general open system dynamics ( cf . ) . in this work , we propose to characterize the non - markovianity of an open system evolution through the nonmonotonic behavior of a quantum metrological figure of merit , called the quantum interferometric power ( qip ) , which is defined in terms of the minimal quantum fisher information obtained by local unitary evolution of one part of the system .the qip is an important information - theoretic tool that also quantifies discordlike quantum correlations in a bipartite system and is related to the minimum speed of evolution of a quantum system in the projective hilbert space . to capture the non - markovianity in open quantum evolutions , we consider a single qubit ( say , ) as the principal _ system _ , interacting with an environment . 
a second qubit ( say , ) plays the role of an ancilla .we consider the action of the environment on the system in terms of the dephasing and amplitude damping channels .using the jamiokowski - choi isomorphism , single - qubit operations can be used to study the bipartite ( system + ancilla ) behavior .the qip of the system is measured by applying local unitaries on the ancilla , which acts as a measuring apparatus .the non - markovianity of the evolution is characterized by quantifying the nonmonotonic behavior of the qip .the paper is organized as follows . in sec .ii , we briefly present the concept and definition of the qip and introduce a non - markovianity measure based on its nonmonotonic evolution . in sec .iii , we consider a prototypical single - qubit dephasing model for the two - qubit ( system + ancilla ) state , and show that the non - markovianity measure derived using qip is qualitatively consistent with measures based on distinguishabilty , divisibilty , and quantum mutual information . in sec .iv , we consider a single - qubit amplitude damping model and investigate the flow of qip in the non - markovian regime .we observe that the measure appropriately captures the backflow of information and is more robust compared to a non - markovianity measure based on entanglement .we discuss the results , possible extensions , and potential benefits of the introduced non - markovianity indicator in sec .the qip is a metrological figure of merit that quantifies the guaranteed precision enabled by a bipartite probe state for the task of black - box quantum parameter estimation .let us consider a bipartite , system + ancilla state , , such that the ancilla ( ) is subject to a local unitary evolution . in this picture, the ancilla acts as a measuring device for any operation performed on the system ( ) ( for an illustration , see fig .[ fig:0 ] ) .the system + ancilla hamiltonian is given by = , where is the local hamiltonian acting on , and is the identity operator acting on .for any bipartite state , and local hamiltonian , the optimal available precision for the estimation of a parameter encoded in the local unitary , is governed by the quantum fisher information , as derived using the cramr - rao bound , which is defined as follows . for a quantum state , written in its spectral decomposition as = , where 0 and = 1 , the quantum fisher information associated with the local evolution generated by can be written as , the above expressioncan be equivalently rewritten as follows , if the generator of the local evolution on the ancilla is not known _ a priori _ , as in the black - box paradigm for quantum metrology , then the guaranteed precision enabled by the state is given by the qip ( ) , defined as the minimum quantum fisher information over all local hamiltonians of a fixed spectral class ( a canonical choice is to consider the minimization to run over all with nondegenerate , equispaced eigenvalues ) , namely where the factor is a convenient normalization . in the relevant case where the system has arbitrary dimension ,while the ancilla is a qubit , the choice of local hamiltonians is reduced to = , where = 1 and = \{ } is the vector of the pauli matrices . 
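A direct numerical transcription of the quantum Fisher information formula quoted above is given below (a minimal Python sketch; the Bell-state check at the end only verifies the pure-state value 4 Var(H) and is not taken from the source).

```python
import numpy as np

def quantum_fisher_information(rho, H):
    """Quantum Fisher information for unitary phase encoding exp(-i*phi*H),
    from the spectral decomposition of rho:
        F = 2 * sum_{l,m: q_l+q_m>0} (q_l - q_m)**2/(q_l + q_m) * |<psi_l|H|psi_m>|**2,
    which is the formula quoted in the text."""
    q, psi = np.linalg.eigh(rho)          # eigenvalues q_l, eigenvectors as columns
    Hmat = psi.conj().T @ H @ psi         # matrix elements <psi_l|H|psi_m>
    F = 0.0
    for l in range(len(q)):
        for m in range(len(q)):
            s = q[l] + q[m]
            if s > 1e-12:
                F += 2.0 * (q[l] - q[m]) ** 2 / s * abs(Hmat[l, m]) ** 2
    return F

# check on a Bell state with the local ancilla Hamiltonian H = 1 (x) sigma_z
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(bell, bell.conj())
print(quantum_fisher_information(rho, np.kron(np.eye(2), sz)))  # ~4 (= 4*Var(H) for a pure state)
```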
for such local hamiltonians , the minimization in eq .( [ qip ] ) can be performed analytically , so that the qip is computable in closed form and given by the expression , where is the highest eigenvalue of the real symmetric matrix with elements , in general , the qip is a _ bona fide _ measure of bipartite quantum correlations beyond entanglement , of the so - called discord type ( see for a review ) , in the quantum state .namely , is known to vanish for states with zero discord from the perspective of subsystem ( known as quantum - classical states ) , is invariant under local unitary operations , and reduces to an entanglement monotone for pure quantum states .most importantly for the aims of the present paper , the qip is a monotonically decreasing function under the action of arbitrary local cptp maps on the system .furthermore , the qip can be interpreted as the minimal global speed of evolution for the state under all local unitary transformations on the ancilla , as a consequence of the connection between the quantum fisher information and the bures metric .the evaluation of the qip remains computationally tractable for higher - dimensional ( ) systems , although a closed analytical form may not be available .the problem can be recast in the form of a minimization of the hamiltonian with respect to a finite number of variables spanning a compact space .this follows by noting that the unitary evolution , corresponding to acting on the ancilla , can be chosen within the special unitary group , without any loss of generality .furthermore , the qip can also be reliably computed for two - mode gaussian states in the continuous variable regime .let us consider an open quantum system undergoing an evolution given by the time - local master equation , where is the liouvillian superoperator , given by - \sum_i \gamma_i(t)\bigg[a_i(t)\rho(t)a_i^\dag(t)- \nonumber\\ & & \frac{1}{2}\left\{a_i^\dag(t)a_i(t),\rho(t)\right\}\bigg].\end{aligned}\ ] ] here are the lindblad operators , and is the time - dependent relaxation rate .the quantum evolution is markovian when for each instant of time , .the dynamical quantum process can be then defined in terms of time - ordered cptp maps , such that , ] , and are the elements of the initial system state , . for a zero - temperature reservoir with spectral density , the decoherence rate is given by the relation , to analyze the non - markovianity , we need to calculate the measure , given by eq .( [ n ] ) , for the composite system + ancilla state , ( where the system undergoes dephasing while the ancilla is not subject to decoherence ) optimized over all possible initial states .such an optimization process is complicated and can be solved only for specific instances .alternately , a lower bound on can be obtained by considering the particular situation where the initial composite system + ancilla state , , is maximally entangled , say , where is a bell state , .the composite dynamical map is given by , and the non - optimized measure of non - markovianity is given by the relation , we need to calculate the qip in the evolved state using the measure , given by eqs .( [ qip])([w ] ) . 
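The minimization over local ancilla Hamiltonians of the form n.sigma can also be performed numerically: since the quantum Fisher information is a quadratic form in the unit vector n, its minimum is the smallest eigenvalue of a real symmetric 3x3 matrix assembled from the same spectral data, which is the content of the closed form quoted above. The sketch below follows this route (with the 1/4 normalization); the dephased-Bell example at the end is only a consistency check of the sketch itself.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
       np.array([[0, -1j], [1j, 0]], dtype=complex),    # sigma_y
       np.array([[1, 0], [0, -1]], dtype=complex)]      # sigma_z

def qip(rho, dim_sys=2):
    """Quantum interferometric power of a (system x qubit-ancilla) state:
    minimum quantum Fisher information over ancilla Hamiltonians n.sigma,
    times the 1/4 normalization.  The QFI is a quadratic form n^T A n, so the
    minimum equals the smallest eigenvalue of the 3x3 matrix A built below."""
    q, psi = np.linalg.eigh(rho)
    ops = [psi.conj().T @ np.kron(np.eye(dim_sys), s) @ psi for s in sig]
    A = np.zeros((3, 3))
    for l in range(len(q)):
        for m in range(len(q)):
            tot = q[l] + q[m]
            if tot > 1e-12:
                w = 2.0 * (q[l] - q[m]) ** 2 / tot
                for i in range(3):
                    for j in range(3):
                        A[i, j] += w * (ops[i][l, m] * ops[j][m, l]).real
    return 0.25 * np.linalg.eigvalsh(A)[0]

# consistency check: a Bell state dephased on the system qubit, coherence c
c = 0.6
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = rho[3, 3] = 0.5
rho[0, 3] = rho[3, 0] = 0.5 * c
print(qip(rho))   # ~c**2 = 0.36
```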
for the state , given by eq .( [ dej ] ) , the maximum eigenvalue of the matrix , , is equal to .hence , since for all , it follows that when , which is consistent with other well - established definitions of non - markovianity .the measure of non - markovianity based on qip for the initially maximally entangled , composite system + ancilla state , , under single - qubit dephasing on the system , is given by interestingly , the quantification of non - markovianity in terms of qip ( ) , for the paradigmatic single - qubit dephasing model and maximally entangled initial states , is numerically equivalent to the previously introduced measure in terms of the distinguishability of a pair of evolving states using the trace distance .the distinguishability witness for non - markovianity is closely associated with the backflow of information from the environment to the system , which results in the increase of quantum correlations in the dephased bipartite state as detectable through the qip .hence , the non - markovianity defined in terms of the local quantum fisher information , whose minimization defines the qip , exactly captures the intrinsic backflow of information in the system - environment interaction . to further compare the measure against other measures of non - markovianity , we consider specifically the above single - qubit dephasing model with an ohmic reservoir spectral density , where is the ohmicity parameter , is the dimensionless coupling constant , and is the cutoff spectral frequency . for a zero reservoir temperature ,relation ( [ j ] ) can be written as , \gamma_0(s)}{(1+\omega_c^2 t^2)^{s/2 } } , \label{g}\ ] ] where is the euler gamma function .it is known that the dephasing dynamical map corresponding to the spectral density is divisible in the parameter range .hence , the non - markovian regime corresponds to the super - ohmic parameter range 2 , which can be experimentally obtained in ultracold systems utilizing control over atomic noise . to calculate , we need to integrate over the relevant range of , where and .( color online ) . 
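As a brief aside before the comparison of the measures, the zero-temperature dephasing rate for the ohmic class of spectral densities admits a standard closed form (the expression assumed below is the usual one that the rate formula above refers to), and the non-Markovian window can be located simply by checking where the rate becomes temporarily negative; the sketch reproduces the super-ohmic threshold quoted in the text.

```python
import numpy as np
from scipy.special import gamma as Gamma

def dephasing_rate(t, s, omega_c=1.0):
    """Zero-temperature dephasing rate for J(w) ~ w**s / omega_c**(s-1) * exp(-w/omega_c);
    the closed form assumed here is
        gamma(t) = omega_c * Gamma(s) * sin(s*arctan(omega_c*t)) / (1 + (omega_c*t)**2)**(s/2)."""
    x = omega_c * t
    return omega_c * Gamma(s) * np.sin(s * np.arctan(x)) / (1.0 + x ** 2) ** (s / 2.0)

t = np.linspace(1e-4, 20.0, 4000)
for s in (1.0, 2.0, 3.0):
    rate = dephasing_rate(t, s)
    print(s, "non-Markovian" if np.any(rate < 0.0) else "Markovian")
# only the super-ohmic case s > 2 develops a temporarily negative rate
```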
in this figure , we compare the measure (= ) , with other quantifiers of non - markovianity , for a single - qubit dephasing channel with an ohmic spectral density , .the blue dashed region , corresponding to 0 , is the markovian regime .this corresponds to = 0 , for all known measures of non - markovianity .the region 0 denotes the non - markovian regime and we plot the behavior of the measures based on qip ( , green bar ) , quantum mutual information ( , red bar ) , and divisibility criteria ( , purple bar ) .we observe that the non - markovian regime lies in the super - ohmic region , .the behavior of the measures , and , is similar : both indicators initially increase , followed by a decrease , with increasing , with vanishing values for 5 .however , in contrast the measure monotonically increases with in the non - markovian region .this implicitly shows that the measures based on qip and mutual information are independent of the divisibility criteria .the inset figure shows the measures of non - markovianity around the critical transition parameter value , = 2 .we note that , for this model , the non - markovianity measure based on distinguishability is identical to the one derived in this work ., scaledwidth=48.0% ] figure [ fig:1 ] compares different measures of non - markovianity based on qip , quantum mutual information , distinguishability , and divisibility criteria .for a set of maximally entangled initial states , we observe that the measures based on the first three quantities behave in a similar manner as opposed to the non - markovian measure based on divisibility .the measures based on qip and distinguishability are equivalent for this model .we now consider the flow of the qip ( ) in the dissipative dynamics governed by the single - qubit amplitude damping channel .the qubit dynamics can be modelled using the hamiltonian given by , where , ( ) is the raising ( lowering ) pauli operator .the resulting dynamics is given by the differential equation , + \gamma(t)\left(\sigma^-\rho\sigma^+ - \frac{1}{2}\left\{\sigma^+\sigma^-,\rho \right\}\right),\ ] ] where is time dependent .the functions and are defined in terms of the integro - differential equation for the time - dependent function , , given by where , = 2im and = 2re .the function is characteristic of the nature of the environment used to model the local noise in the dynamics .it is defined in terms of the correlation function , that is derived from the fourier transform of the spectral density of the environment , : the dynamics of the system qubit ( ) , is given by the dynamical map , such that the single - qubit amplitude damping dynamics can be extended to the system + ancilla bipartite system . 
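The memory-kernel equation for the amplitude G(t) can be integrated with an elementary quadrature; in the sketch below the exponential correlation function f is an assumed example (it corresponds to the Lorentzian reservoir taken up next), and the time step and parameter values are placeholders.

```python
import numpy as np

def amplitude_damping_G(t_max, dt, gamma0, lam, delta=0.0):
    """Numerically integrate dG/dt = - int_0^t f(t-t1) G(t1) dt1 with G(0)=1,
    using an Euler step and a rectangle-rule memory integral.  The correlation
    function f(tau) = (gamma0*lam/2)*exp(-(lam - 1j*delta)*tau) is an assumed
    exponential example (Lorentzian reservoir)."""
    n = int(t_max / dt) + 1
    t = np.arange(n) * dt
    f = 0.5 * gamma0 * lam * np.exp(-(lam - 1j * delta) * t)
    G = np.empty(n, dtype=complex)
    G[0] = 1.0
    for k in range(1, n):
        mem = dt * np.sum(f[k - 1::-1] * G[:k])   # memory integral up to t_{k-1}
        G[k] = G[k - 1] - dt * mem
    return t, G

# strong coupling (lam < 2*gamma0): |G(t)| is nonmonotonic, signalling memory effects
t, G = amplitude_damping_G(t_max=50.0, dt=0.01, gamma0=1.0, lam=0.1)
print("nonmonotonic |G|:", bool(np.any(np.diff(np.abs(G)) > 1e-12)))
```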
in the computational basis ,the evolved density matrix of the composite two - qubit system is given as follows ( omitting the subscript for simplicity ) .the diagonal matrix elements of are given by the nondiagonal elements are given by where ( ) .let us now consider a reservoir spectral density with a lorentzian distribution , },\ ] ] where is the central frequency of the distribution and is the system - reservoir coupling constant .the spectral width of the distribution , , is the inverse of the reservoir correlation time .the system - reservoir coupling is related to the markovian decay of the system , and is thus the inverse of the system relaxation time .the markovian nature of the dynamics is related to the strength of the system - reservoir coupling and the interplay of the system relaxation and reservoir correlation times . for weak coupling ,the relaxation time of the system is greater than the reservoir correlation time , ( ) , and the dynamics is essentially markovian . for , ( ) or in the strong coupling regime ,the dynamics is non - markovian .hence , the non - markovian character of the considered dynamical map is ingrained in the behavior of the function , which for a lorentzian spectral distribution is of the form , , \label{jt}\ ] ] where , and (= ) is the system - reservoir frequency detuning .for the dynamics to be markovian , in the weak coupling regime , the function needs to have a monotonic decrease with time . fornon - markovian dynamics , the monotonicity of does not hold , consistent with the breakdown of the divisibility of the dynamical cptp map .now , let us consider our initial bipartite state to be a bell - diagonal werner state of the form , where the werner parameter is given by , ] , the maximum eigenvalue of the matrix , eq .( [ w ] ) , is equal to .hence , the qip is given by , therefore , for a maximally entangled initial state undergoing a single - qubit amplitude damping , the nonmonotonic flow of the qip in the non - markovian regime is exactly governed by the nonmonotonicity of the function ( cf .interestingly , also measures the maximal trace distance between a pair of system states and hence quantifies the non - markovianity in terms of distinguishability for single - qubit amplitude damping channels .we find therefore that the flow of the qip is again closely related to the backflow of quantum information in the non - markovian regime .( color online ) the flow of qip ( ) in a two - qubit system with maximal entanglement at =0 under a single - qubit amplitude damping channel .the initial entanglement is set by setting = 1 in the werner state given by eq .( [ wer ] ) . is equal to , as shown by eq .( [ equal ] ) .we observe the flow of in both markovian and non - markovian regimes of the dynamical evolution .the flow under the markovian regime ( red dashed line ) corresponds to the ratio of the reservoir correlation to system relaxation , = 10 . 
under the non - markovian regime ,the flow is shown for = 0.5 ( blue dotted line ) and = 0.1 ( green solid line ) .the system - reservoir frequency detuning is set at , = 0.01 .the backflow of quantum correlation , in terms of qip , is observed by the nonmonotonic increase of during the evolution ., scaledwidth=42.0% ] figure [ fig:2 ] shows the flow of the qip measure for an initial maximally entangled state , given by eq .( [ wer ] ) , with = 1 .the markovian and non - markovian regimes of the dynamics can be studied in terms of the function , for a lorentzian reservoir spectral distribution , as mentioned in eq .( [ jt ] ) .the markovian regime corresponds to 0.5 , as shown in the figure for = 10 .the non - markovian regime , corresponding to strong system - reservoir coupling 0.5 , is shown for = 0.1 and 0.5 .the system - reservoir detuning is = 0.01 .the figure shows that the non - markovian flow of qip is nonmonotonic , with increase in during certain evolution times .the non - markovianity can be numerically evaluated using the expression for in eqs .( [ n ] ) and ( [ nn ] ) .( color online ) .the flow of qip ( ) for an initially mixed two - qubit system under a single qubit amplitude damping channel .the initial mixed state is obtained for = 0.45 , in the werner state given by eq .( [ wer]).the parameters pertaining to the lorentzian reservoir spectral distribution are set at , and .the figure shows the non - markovian evolution of ( blue solid line ) , concurrence , ( black dotted line ) , quantum mutual information , ( red broad - dashed line ) , and the scaled function ( brown dashed line ) ., scaledwidth=40.0% ] we have seen that the flow of qip for a maximally entangled initial state is determined by the nature of spectral distribution and is equal to the integro - differential function . 
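In practice, once the interferometric power has been sampled along the evolution, the non-Markovianity measure reduces to summing its positive increments; a minimal sketch (with an artificial test curve rather than data from the model) is the following.

```python
import numpy as np

def non_markovianity_measure(P):
    """Discrete estimate of the measure used in the text: integrate dP/dt over
    the intervals where the interferometric power P(t) increases, i.e. sum the
    positive increments of the sampled curve."""
    dP = np.diff(np.asarray(P, dtype=float))
    return float(np.sum(dP[dP > 0.0]))

t = np.linspace(0.0, 10.0, 1001)
P = np.exp(-t) * np.cos(2.0 * t) ** 2          # artificial curve with revivals
print(non_markovianity_measure(P))             # > 0: backflow detected
```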
however , the situation is not so straightforward if the initial system + ancilla state is mixed .this can be obtained easily from the werner state in eq .( [ wer ] ) , by setting the werner parameter 1 .in such instances , the flow of qip is still governed by the ( non-)monotonic behavior of .we observe that the qip ( ) has a dynamical behavior that quite closely replicates the flow of the function , as compared to other measures such as the quantum mutual information or the entanglement ( quantified by the concurrence ) .figure [ fig:3 ] , shows the dynamics of quantum information measures in the case of an initially mixed werner state , defined for = 0.45 , in the non - markovian regime of the single - qubit amplitude damping channel .the reservoir relaxation is set at , and the detuning is .the figure shows specifically the flow of qip , concurrence , and the mutual information in comparison to the behavior of the scaled function .for the considered model , the behavior of closely follows the nonmonotonic and discontinuous evolution of the function .this is in contrast to entanglement and mutual information , which both evolve smoothly with time .furthermore , entanglement decays quickly and vanishes for finite ranges of time ( so - called entanglement sudden death ) and hence can not qualitatively capture the backflow of quantum information in selected intervals of time .non - markovianity is an ubiquitous feature of quantum dynamical maps , and is nowadays recognized as a resource for certain applications of quantum technology , such as metrology , cryptography , and communication .the role of non - markovianity in enhancing the robustness of quantum correlations in systems exposed to noisy environments has been studied by means of various quantitative approaches . in this work ,we have adopted the quantum interferometric power ( qip ) as our reference figure of merit to assess non - markovianity of dynamical maps applied to a system coupled to an ancilla , which plays the role of a measuring apparatus for the operations occurring on the system .the qip has been very recently acknowledged as a physically insightful , operationally motivated , and computable measure of quantum correlations of the most general kind , including and beyond entanglement .the qip corresponds to the guaranteed metrological precision that a system + ancilla probe state enables the estimation of a phase shift on the ancilla part , in the black - box quantum metrology paradigm . when the system is subject to a non - markovian evolution, the qip between system and ancilla ( measured from the perspective of the ancilla ) can undergo a nonmonotonic evolution , with revivals in time .we have shown that such nonmonotonic behavior is closely related to , and can precisely capture , the backflow of information from environment to system which is a clear marker of non - markovianity . in operative terms ,such a dynamical rise of quantum correlations translates into an increase of the guaranteed precision of phase estimation on the ancilla , thanks to the non - markovian noise affecting the system .while here we considered paradigmatic dynamical maps applied to single - qubit systems only , it has been shown in that non - markovian noise affecting a register of qubits can lead to an enhancement in the metrological scaling which is intermediate between the shot noise and the heisenberg limit. 
it will be interesting to investigate how the measure of non - markovianity proposed here in terms of qip can be employed to investigate the metrological scaling in the black - box paradigm for quantum metrology with multiqubit probes . in this workwe have proposed to quantify non - markovianity in terms of the nonmonotonicity of the qip , similarly to previous proposals to quantify non - markovianity in terms of the nonmonotonicity of entanglement or total correlation measures . by analyzing two simple models of single - qubit noisy dynamics ,we have shown how our measure reliably captures the non - markovian regime , and is quantitatively more sensitive than measures based on entanglement .another advantage associated with the use of the qip to characterize non - markovianity , is that such a measure also has been extended to gaussian states of continuous variable systems , resulting in a computable and reliable measure of quantum correlations in that relevant setting as well . in a subsequent work, it will be worth analyzing non - markovianity in gaussian dynamical maps in terms of the nonmonotonic flow of the gaussian qip .the qip has therefore the potential to offer a unified picture of non - markovianity extending from qubits to infinite - dimensional systems .we hope that the present analysis can stimulate further research in order to pin down the relevance of non - markovian dynamics in quantum information processing and in the description and simulation of complex quantum systems in the biological , physical , and social domains . c. addis , p. haikka , s. mcendoo , c. macchiavello , and s. maniscalco , phys .a * 87 * , 052109 ( 2013 ) ; s. haseli , g. karpat , s. salimi , a. s. khorashad , f. f. fanchini , b. akmak , g. h. aguilar , s. p. walborn , and p. h. souto ribeiro , phys . rev .a * 90 * , 052118 ( 2014 ) .m. thorwart , j. eckel , j. h. reina , p. nalbach , and s. weiss , chem .. lett . * 478 * , 234 - 237 ( 2009 ) ; f. caruso , a. w. chin , a. datta , s. f. huelga , and m. b. plenio , j. chem .131 , 105106 ( 2009 ) ; phys .rev . a 81 , 062346 ( 2010 ) ; p. rebentrost and a. aspuru - guzik , j. chem .phys . * 134 * , 101103 ( 2011 ) ; a. w. chin , j. prior , r. rosenbach , f. caycedo - soler , s. f. huelga , and m. b. plenio , nature phys .* 9 * , 113 ( 2013 ) . c. j. myatt , b. e. king , q. a. turchette , c. a. sackett , d. kielpinski , w. m. itano , c. monroe , and d. j. wineland , nature * 403 * , 269273 ( 2000 ) ; j. wang , h. m. wiseman , and g. j. milburn , physa * 71 * , 042309 ( 2005 ) ; m. j. biercuk , h. uys , a. p. vandevender , n. shiga , w. m. itano , and j. j. bollinger , nature * 458 * , 996 ( 2009 ) ; f. verstraete , m. m wolf , and j. i. cirac , nature phys . * 5 * , 633636 ( 2009 ) ; s. g. schirmer and x. wang , phys .a * 81 * , 062306 ( 2010 ) ; f. lucas and k. hornberger , phys .lett . * 110 * , 240401 ( 2013 ) ; r. sweke , i. sinayskiy , and f. petruccione , phys . rev . a * 90 * , 022331 ( 2014 ) .p. haikka , j. d. cresser , and s. maniscalco , phys .a * 83 * , 012112 ( 2011 ) ; d. chruciski , a. kossakowski , and .rivas , phys .a * 83 * , 052128 ( 2011 ) ; f. f. fanchini , g. karpat , l. k. castelano , and d. z. rossatto , phys . rev .a * 88 * , 012105 ( 2013 ) . the hamiltonian belongs to a set of local hamiltonians , acting on the ancilla , with a fixed nondegenerate eigenvalue spectrum .for every chosen spectrum , by minimization of the fisher information over the corresponding set of hamiltonians one obtains a qip measure . 
Here, we can focus on the canonical choice of Hamiltonians with nondegenerate and equispaced eigenvalues. In this way, the unitary shift acting on the ancilla reduces to a root-of-unity unitary operation; see the references cited above for details.
non - markovian evolution in open quantum systems is often characterized in terms of the backflow of information from environment to system and is thus an important facet in investigating the performance and robustness of quantum information protocols . in this work , we explore non - markovianity through the breakdown of monotonicity of a metrological figure of merit , called the quantum interferometric power , which is based on the minimal quantum fisher information obtained by local unitary evolution of one part of the system , and can be interpreted as a quantifier of quantum correlations beyond entanglement . we investigate our proposed non - markovianity indicator in two relevant examples . first , we consider the action of a single - party dephasing channel on a maximally entangled two - qubit state , by applying the jamiokowski - choi isomorphism . we observe that the proposed measure is consistent with established non - markovianity quantifiers defined using other approaches based on dynamical divisibility , distinguishability , and breakdown of monotonicity for the quantum mutual information . further , we consider the dynamics of two - qubit werner states , under the action of a local , single - party amplitude damping channel , and observe that the nonmonotonic evolution of the quantum interferometric power is more robust than the corresponding one for entanglement in capturing the backflow of quantum information associated with the non - markovian process . implications for the role of non - markovianity in quantum metrology and possible extensions to continuous variable systems are discussed .
in its original version sph was formulated adopting a constant particle smoothing length ( spatial smoothing resolution length or resolving power ) , where the adopted interpolation kernel works , to perform free lagrangian gas dynamics .asph methods are currently adopted with the aim of performing better spatial interpolations mainly in expanding or in shock gas dynamics .high physical viscosity accretion discs are well - bound structures around the primary compact star even in low compressibility conditions . in order to build up a well - bound accretion disc in inviscid conditions , the ejection rate at the disc s outer edge must be at least two or three times smaller than the accretion rate at the disc s inner edgewhenever this condition is fulfilled , the disc s outer edge , as well as the whole disc , does not disperse in spite of high pressure forces which are also dependent on the gas compressibility : .therefore , low compressibility gases are naturally more easily sensitive to the loss of blobs of gas at the disc s outer edge itself , towards the empty external space , if the gravitational field is not able to keep disc gas in the gravitational potential well .such effects are enhanced and strongly evident in inviscid conditions and the moderate contribution of artificial viscosity terms does not prevent such effects . such a viscosity does not work like a true physical one since it operates only when different fluid components approach each other , being zero during fluid particle repulsion .high compressibility gas dynamics does not allow us to distinguish the truth regarding whether a technique is able to perform a correct fluid dynamics .in fact , in such a modelling , accretion discs would be formed anyway even in physically inviscid conditions and the role of kernel choice and of its resolving power are hidden .to stress such an idea , in this work a low compressibility polytropic index is adopted throughout , working with the same binary system parameters such as stellar masses and their separation and adopting the largest value ( )as for the shakura and sunyaev viscosity prescription . in this paper ,physically inviscid and viscid disc models are shown , where a more suitable gaussian - derived kernel formulation , as far as both transport mechanisms and expanding or collapsing gas dynamics are concerned , is adopted . throughout the accretion disc models ,the same supersonic mass transfer condition at l1 are adopted .the numerical scheme here adopted , as any other numerical method , is characterized by the assumed spatial smoothing resolution length .the mass and angular momentum radial transport is also affected by the sph particle smoothing resolution length .too small values prevent the radial transport , while large values produce a too effective radial transport of matter towards the centre of the gravitational potential well , as well as of angular momentum toward the disc s outer edge .a large ensures a high particle overlapping ( interpolation ) but at the same time it produces a strong particle repulsion rate due to pressure forces especially in low compressibility regimes on the disc s outer fe . 
on the contrary ,a too small smoothing resolution length compromises any fluid dynamic behaviour and shock handling .the artificial viscosity term prevents spurious heating and handles shocks as a `` shock capturing method '' .the artificial viscosity is a function of the smoothing resolution length itself or of some kind of spatial length .a too small value does not prevent particle interpenetration , destroying any fluid behaviour , because of lack of artificial viscosity . and what we statistically define as a well - defined and bound accretion disc .as far as the numerical resolution is concerned , a number of disordered neighbour particles of the order of 10 ( more or less ) is considered , in principle , the minimum number of neighbours in order to achieve an adequate 3d numerical interpolation , although a number of neighbours larger than 30 is currently adopted to achieve a higher accuracy .this is the criterion we adopted to define a well - bound accretion disc .lesser neighbours for each particle are considered an unsuitable number as far as both interpolation efficiency and disc binding into the primary s gravitational potential well are concerned . in the next sections , after discerning the artificial and the turbulent physical viscosities , we describe how asph techniques work and their limits when the viscous transport and/or fe conditions are involved , as well as why gaspher could be a solution .in particular , in 2 we compare how artificial and turbulent physical viscosities differently work ; in 3 we show how gaspher works and why it does not suffer of some sph and/or asph lack .at last , in 5 we report 3d accretion disc results showing some interesting features in our viscous simulations , whilst in 6 we discuss on the accuracy of sph - derived techniques . in the appendix , after showing the mathematical background underlying sph - derived schemes ( for readers knowing how sph and asph work , this mathematical section can be easily skipped without any difficulty , being instead essential for others ) , we also compare results of gaspher , sph and asph non viscous 1d and 2d selected tests . a comparison with analytical solutions is also given , whenever it is possible .in our physically viscous disc modelling , the shakura and sunyaev prescription is adopted with the largest value to stress numerical reliability of results . the sph formulation of viscous contributions in the navier - stokes and energy equations has been developed by .these goals are not obtained by artificial viscosity which is , however , introduced in both models to resolve shocks numerically and to avoid spurious heating .artificial viscosity vanishes when the limit value of the particle interpolation domain goes to zero . 
and demonstrated that the linear component of the artificial viscosity itself , in the continuum limit , yields a viscous shear force .in particular , the last two authors have explicitly formulated such an artificial viscosity contribution in the momentum and energy equations .moreover , and found an analogy between the shear viscosity generated by the linear artificial viscosity term and the well - known shakura and sunyaev shear viscosity , in the continuum limit .sph method , like other finite difference schemes , is far from the continuum limit ; moreover we need the quadratic ( , von neumann - richtmyer - like viscosity ) artificial viscosity term to handle strong shocks .linear and quadratic artificial viscosity terms ( usually and sometimes , in some specific cases , ) are chosen and , respectively . in the viscous models ,the viscous force contribution is represented by the divergence of the symmetric viscous stress tensor in the navier - stokes equation . a symmetric combination of the symmetric shear tensor times the particle velocity has been added to the energy equation as a viscous heating contribution .the bulk physical viscosity contribution has not been considered for the sake of simplicity .artificial and turbulent physical viscosities are independent from each other .the artificial viscosity terms should be smaller than the physically viscous ones , otherwise the physical viscosity role would be negligible. the relevance of viscous forces could be even comparable to the gas pressure forces , especially if .an analytical formulation , describing the numerical artificial viscosity coefficient , is reported in : , where is the sound velocity . according to such a definition ,its ratio with the shakura - sunyaev viscosity coefficient is : , for each sph particle . for , where is the scale - height of the disc , .this implies that the role of artificial viscosity could be significant , compared to the physical viscosity role , if small and large values are taken into account . according to and to the shear viscosity with .according to their results , the numerical artificial viscosity coefficient is even smaller if .in fact , the ratio . 
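For reference, the pairwise artificial viscosity term with the linear and quadratic contributions discussed above can be sketched in its standard Monaghan-type form; the parameter values are placeholders and the expression is the textbook one, quoted here only as a reference sketch.

```python
import numpy as np

def artificial_viscosity(r_ij, v_ij, rho_i, rho_j, c_i, c_j, h,
                         alpha=1.0, beta=2.0, eps=0.01):
    """Standard Monaghan-type pairwise artificial viscosity Pi_ij.  It acts only
    for approaching particles (v_ij . r_ij < 0), which is why it behaves as a
    shock-capturing term and not as a true shear viscosity, and it vanishes
    linearly with the smoothing length h."""
    vr = float(np.dot(v_ij, r_ij))
    if vr >= 0.0:
        return 0.0                                  # receding pair: no dissipation
    mu = h * vr / (float(np.dot(r_ij, r_ij)) + eps * h * h)
    c_bar = 0.5 * (c_i + c_j)
    rho_bar = 0.5 * (rho_i + rho_j)
    return (-alpha * c_bar * mu + beta * mu * mu) / rho_bar

print(artificial_viscosity(np.array([0.1, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                           1.0, 1.0, 1.0, 1.0, h=0.05))
```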
hence , for , .this implies that , the role of artificial viscosity can be comparable to the role of a very low physical viscosity , because of the correlation between the sph artificial viscosity parameter and the shakura - sunyaev viscosity parameter is : without any bulk viscosity contribution and supposing gas incompressibility ( ) .notice that , according to these correlations , the shakura - sunyaev parameter ( non zero only for approaching particles ) is not the shakura - sunyaev viscous parameter for physically viscid gases , but the transformation of the artificial viscosity term into the shakura - sunyaev formalism .such results show that the gas compressibility has a relevant role since the physical viscosity mainly works when the density varies on a length - scale of the order of the velocity length - scale , not only as a bulk viscosity , but also as a shear viscosity .moreover , notice that the assumption of an adaptive sph or a constant sph could also have a role both in artificial viscosity and in physical viscosity roles .these results show that the role of a fully viscous fluid dynamics is still far from any conclusion and that physical assumptions as well as numerical hypotheses and boundary conditions are also determinant .as for viscous gas hydrodynamics , the relevant equations to our model are : + \nonumber \\ & & \frac{1}{\rho } \nabla \cdot \bmath{\tau } \ \ \ \ \ \hfill \mbox{navier - stokes momentum equation } % \nonumber\end{aligned}\ ] ] the most of the adopted symbols have the usual meaning : stands for the lagrangian derivative , is the gas density , is the thermal energy per unit mass , is the effective gravitational potential generated by the two stars and is the angular velocity of the rotating reference frame , corresponding to the rotational period of the binary system .self - gravitation has not been included , as it appears irrelevant .the adiabatic index has the meaning of a numerical parameter whose value lies in the range between and , in principle . is the viscous stress tensor , whose presence modifies the euler equations for a non viscous fluid dynamics in the viscous navier - stokes equations .in its original formulation gaussian kernels as : have been adopted in sph , where represents the module of the radial distance between particles and . also , an example of `` super gaussian kernel '' has also been described .even a factorization of gaussian kernels for each dimension has also been adopted in an asph formulation , adopting 3d ellipsoid kernel geometry to achieve a higher accuracy , according to an anisotropic -dependent spatial particle concentrations ; or according to the mean particle spacing , as it varies in time , space , and direction around each particle .kernels based on cubic splines since the end of the 80 s have also widely been adopted . typically , in 3d ,such cubic spline kernels are in the form : where .the hidden problem is whether asph , as previously formulated , are effective whatever is the compressibility regime considered , especially when fe conditions are adopted on the edges of the particle envelope .high compressibility gas dynamics prevents us from distinguishing the truth regarding whether a sph - like technique is able to perform a correct fluid dynamics , since accretion discs would be formed anyway even in physically inviscid conditions . 
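The cubic spline kernel referred to above has, in three dimensions, the standard form sketched below (the textbook expression; the integral at the end is only a crude check of the normalization).

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline (M4) SPH kernel with compact support 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# crude normalization check: the volume integral of W should be ~1
r = np.linspace(0.0, 2.0, 20001)
dr = r[1] - r[0]
print(np.sum(4.0 * np.pi * r ** 2 * cubic_spline_kernel(r, 1.0)) * dr)   # ~1.0
```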
in this case , the roles of the kernel choice and of its resolving power are hidden .gas loss effects in low compressibility conditions naturally develop , especially at the disc s outer edge , because of the pushing action towards the outer space of particles just below the disc s surfaces and below the disc s outer edge , if the gravitational field is not able to keep gas particles in the gravitational potential well . in asphinterpolation particle domains swell at both free edges ( inner and outer ) .normally , in an accretion disc , the density is a decreasing function of the radial distance from the central star .this implies that particle adaptive should decrease towards the inner disc bulk , without any restriction imposed on the number of particle neighbours .problems deriving from the inadequacy of artificial viscosity role and the particle interpolation / interpenetration could be relevant .even the choice of a threshold value for as a lower limit would be arbitrary and no differences would appear in results compared to classical sph results , adopting the same .if asph is adopted , even restricting the particle neighbours to a fixed number in its conservative form , the behaviour of for each particle is contrary , swelling also within the disc bulk and producing enhanced gas loss effects at the disc s outer edge and on the disc surfaces , in spite of the viscosity eventually introduced , as well as a draining effect of the disc s inner edge toward the central compact star due to a stressed radial transport . whenever and wherever spatial isotropy and homogeneity hold , a modulation of spatial smoothing resolution length does not affect results , in principle , in so far as is large enough to prevent particle interpenetration and neighbour particles are enough to allow good interpolationshowever , the situation is rather different if spatial gradients exist .it is quite normal that a smaller threshold limit is imposed on particle because problems on the ineffectiveness of artificial viscosity in handling shock fronts would arise if , together with a too short time step computed according to the friedrich - courant - lewy conditions .artificial viscosity vanishes when the limit value of the particle interpolation domain goes to zero , due to the fact that in its analytical expression it is linearly dependent on the smoothing length .its role , limited to a filing effect , should not be dominant compared to gas pressure terms .therefore such condition is fully altered if , according to eqs .( 18 , 19 ) , , see app. a. other formulations of artificial viscosity , depending on particle mutual distance , do not modify the problem . therefore , asph results would be deeply influenced by a dominant role of artificial terms if such a condition is mostly realized in sonic and subsonic regimes and/or in progressive turbulent rarefying regimes .some authors handle artificial viscosity switching it off , especially in low density conditions , when particle increases or in high temperature conditions when particle sound velocity is subsonic . as a result, the switching on / off of the artificial viscosity limits its role , but low density sonic and subsonic conditions stay still be critical . in both situations ,the problem of a correct hydrodynamics involves not only the bulk of the gas structure in the computational domain , but mainly the physics of the fe of the computational domain .in particular the outer one for gas expansion problems and the inner one for collapse problems . 
as for physically viscous asph simulations , mass and angular momentum transportare deeply affected by the particle smoothing resolution length .we expect a higher particle transport when particle statistically increases and the opposite effect when statistically decreases . in a low compressibility regime, showed that physical turbulent viscosity hampers particle repulsion , due to pressure forces , contributing to accretion disc consistency and limiting particle loss at the disc s outer edge .however , if particle smoothing resolution length increases in asph , and radial transport becomes unnaturally too much effective , the opposite effect arises so much that the inner edge of the disc could be indefinite . in gaspher modelling , a radial gaussian - derived kernel , related to the well - known `` error function '' with a constant smoothing length equivalent to its hwhmis considered : in such a kernel we stress that its interpolation radial extension is unlimited , although its typical smoothing length is spatially and permanently constant . in gaspher , to collect an adequate particle neighbours number is not a problem because of the unlimited spatial extension of its kernel . in the continuum limit ,the three interpolation kernels give the same interpolation integrals for 1d flows , as well as the last two kernels give the same interpolation integrals for 2d flows .the origin of this kernel function relies in the well known `` error function '' : whose `` complementary error function '' is : for , for , equals the zero order gaussian integral : in performing 3d integral , hence , .also , as well as , this last , considering the well known properties of gaussian integrals : , and in particular .fig . 1 displays , and as a function of . , and , as well as , and are significant for 3d integrations .1 displays the much better gaspher interpolation capabilities , with respect to the current sph or asph techniques using other kernels , not only because 3d interpolations are more weighted toward , but also because as it should be , avoiding the well known `` particle pairing instability '' effect , affecting the other two behaviours ( displays a minimum for ) . in the conversion from mathematical integrals to computational summations in 3d ,the role of is equivalent to .thus , wherever , and spatial gradients exist , the effectiveness of the adopted interpolation kernel comes out . in the resolution of the euler or of the navier stokes equations ,spatial derivatives have to be calculated . in the calculation of in the momentum equation, two particles can not coincide because the pressure force is physically infinite .moreover , also for the in the energy equation or in the continuity equation , this non physical case should be carefully avoided because no velocity divergence can exist if particle mutual separation is zero .summing up , both indexes and .in the unrealistic case of , spatial derivatives to compute gradients or divergences can be bypassed because unphysical . 
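Because the Gaussian-derived kernel has formally unlimited support, it is useful to quantify how quickly its weight accumulates with radius. The sketch below does this for a plain 3D Gaussian kernel W proportional to exp(-r^2/h^2), with h the e-folding length (which differs from the HWHM used in the text by a constant factor of order unity); the closed form follows from the error-function integrals recalled above.

```python
import numpy as np
from scipy.special import erf

def gaussian_weight_within(a):
    """Fraction of the total weight of a 3D Gaussian kernel W ~ exp(-r**2/h**2)
    enclosed within radius r = a*h:  erf(a) - (2a/sqrt(pi))*exp(-a**2)."""
    return erf(a) - (2.0 * a / np.sqrt(np.pi)) * np.exp(-a * a)

for a in (1.0, 2.0, 2.5, 3.0):
    print(a, gaussian_weight_within(a))
# more than 99% of the interpolation weight already lies within ~2.5 smoothing
# lengths, so truncating or capping the neighbour list barely changes the sums
```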
in the case of a very short particle mutual separation, natural computational difficulties can arise only for a very small particle separation .this is unavoidable when a very high compression characterize the fluid , because calculated pressure and individual pressure forces are always naturally very high .however , in particular for accretion or collapse processes , the particle merging in a new particle , created at the centre of mass , conserving mass , energy and momentum could be the best solution .this is a useful physical expedient , also used in asph , whenever a strong gas compression occurs .it avoids a too short explicit time step calculation , according to the well known friedrich - courant - lewy .in particular , for asph technique only , it also avoids any artificial viscosity inadequacy in handling shocks .such an allowed expedient could be correctly also used in gaspher in such conditions .7 ) , ( eq .6 ) , as well as of gaspher ( eq . 8) . , , , as well as the radial derivatives times both useful for 3d calculations are also reported.,width=302,height=302 ] we pay attention that in 3d interpolations , it is not the role of the kernel that it is important . instead , it is the that is to be taken into account , as fig .1 clearly displays . hence , on particle , when , converges toward a finite value .if this is the explanation regarding the continuum limit , in the spatial discretization , this role is carried out by the particle density which , in the sph formulation , divides . only when gaspher formulation becomes : .on the other hand , whenever the concept of dimension is meaningless .this implies that , if , and especially if , the 1d formulation of kernel can also be taken into account to simplify computational complications in some selected cases , whenever the 2d or the 3d fluid kinematics flows along one selected direction .this kernel choice resolves the problem of neighbours inadequacy , as well as the problem of the sph and asph `` particle pairing instability '' for due to the fact that when , does not become infinite . for practical reasons in computational resources , even a limitation to several of the order of with be considered with very small modifications in results , keeping constant the resolving power of all particles .in fact , theoretically considering a homogeneous and isotropic 3d particle distribution , if ( where is the particle concentration ) is the number of neighbours closer than for each ith particle , it increases up to i.e. up to times .alternatively , neighbours can be limited to a selected number ( in our 3d models ) . in both cases , a very small modifications in results , neglecting further interpolating particles ,is made because the most important neighbours in the interpolation are the closest ones . in this case , if neighbours are a large number , due to a very high particle concentration , it is easily possible to merge more particles in a single new particle , created at the centre of mass , conserving mass , energy and momentum .so doing , the asph s risk to decrease the spatial smoothing resolution length to values involving an ineffective artificial viscosity behaviour , as well as the danger to get a too small computed time step in the courant - friedrich - lewy condition when , are avoided . a fixed number of neighbours can be a serious risk by limiting the interaction to neighbours only the inner `` flat part '' of the kernel contributes to sph sums . 
a large contribution from other particles outside this `` flat part '' would be wrongly neglected . in gaspherthis does not occur because the kernel slope is not `` flat '' for , instead .of course , also asph techniques try to avoid the unpleasant `` particle pairing instability '' .however , the particle resolving power can not decrease too much in regions of very high particle concentrations otherwise the artificial dissipation due to the artificial viscosity does not work well . moreover , at the same time , the time step explicitly computed according to the friedrich - courant - lewy condition becomes too short if .the possibility of adopting a numerical sph code , including the physical viscosity , considering results , makes us able to answer the problem whether asph s and/or gaspher methods are reliable in improving fluid dynamics compared to the original sph , where the smoothing length is constant .although some authors adopted gaussian kernels , their methods belong to the asph numerical schemes where a spatially and temporarily variable smoothing length is adopted .although many efforts try to conciliate a reliable adaptive interpolation technique with computational resources , asph methods are unsatisfactory in describing a correct gas dynamics because hidden numerical errors exist inside an adaptive interpolation , better revealed in a viscous transport process inside a definite potential well .all asph s difficulties in handling the artificial viscosity dominant role in subsonic and/or expanding regimes , as discussed before , are prevented in gaspher by the fact that the particle resolving power is constant and equal to hwhm of spatially unlimited gaussian kernels .gaspher technique limits the problem of particle disorder in computing particle , as discussed in and in as far as shear flows are concerned because , even considering disordered flows , particle disorder is tamed by gaspher extended interacting particle domains .in fact , the longer the particle interpolation range , the better the computational result , without any modification of particle resolving power .finally , an adequate fixed smoothing resolution length allows us to resolve gas turbulence within the confined integration domain even in low compressibility regimes . in non viscous conditions the local reynolds number , considering , and , more stressing , considering , because of . being the whole disc structure typically supersonic , even for , .hence , a moderate turbulence is effective in non viscous conditions , where gas collisions are relevant . 
instead in viscous conditions , in the shakura and sunyaev formulations , no turbulence is recorded .if an adaptive method is adopted in low compressibility conditions , the increasing of the smoothing resolution length , up to an order of magnitude , prevents any turbulence resolution in an accretion disc , even for supersonic regimes .the evaluation of the minimum linear dimensions of the integration domain , able to solve turbulence adopting an parameter of the order of , gives a value of the order in order to get a reynolds number , the smaller value is for supersonic regimes .the integration domain ( the length of the primary s potential well ) of the order of .therefore , how to handle an adaptive sph with the problem of solving the turbulence is a real difficulty , and the adopted fixed is correct in order to solve this problem .larger ( and adaptive ) values are in open conflict with in order to solve turbulence .these conclusions on turbulence in accretion discs having free edge boundaries are not those concerning the concept of turbulence wherever fixed static boundaried are considered .whenever particles move within a confined box , both sph and asph results are traditionally correct in so far as is not too small . in this casethe problem regards the particle chaotic collisions in a close environment where the particle mean free path is less than two or three times the particle smoothing resolution length .looking at our sph results in a physically viscous low compressibility regime as a reference , where the particle smoothing length is constant and a typical cubic spline function as a smoothing function have been assumed , we systematically perform a series of gaspher simulations with the aim of getting a physically viscous well - bound accretion disc in a close binary .we show that such transport phenomenology , in a low compressibility regime , is significant in deciding the reliability of the adopted kernel formulation for sph fluid dynamics simulations , especially whenever free edge boundary conditions must be taken into account .the characteristics of the binary system are determined by the masses of the two companion stars and their separation .we chose to model a system in which the mass of the primary compact star and the mass of the secondary normal star are equal to and their mutual separation is .the primary s potential well is totally empty at the beginning of each simulation at time .the injection gas velocity at l1 is fixed to while the injection gas temperature at l1 is fixed to , taking into account , as a first approximation , the radiative heating of the secondary surface due to lightening of the disc .gas compressibility is fixed by the adiabatic index .supersonic kinematic conditions at l1 are discussed in , especially when active phases of cb s are considered .however , results of this paper are to be considered as a useful test to check whether disc structures ( viscous and non ) show the expected behaviour .the reference frame is that centred on the primary compact star and corotating , whose rotational period , normalized to , coincide with the orbital period of the binary system .this explain why in the momentum equation ( eq .2 ) , we also include the coriolis and the centrifugal accelerations . in our modelsthe unknowns are : pressure , density , temperature , velocity , therefore we solve the continuity , momentum , energy , and state ( perfect gas ) equations . 
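Since the system of unknowns listed above is closed by the perfect-gas equation of state, a minimal sketch of that closure is given below. The adiabatic index and the mean molecular weight are illustrative defaults, not values recoverable from the extracted text.

```python
K_B = 1.380649e-23     # Boltzmann constant [J K^-1]
M_P = 1.67262192e-27   # proton mass [kg]

def perfect_gas_closure(rho, u, gamma=5.0 / 3.0, mu_mol=1.0):
    """Perfect-gas closure linking the SPH unknowns: pressure from density and
    specific thermal energy, p = (gamma - 1) * rho * u, and the corresponding
    temperature, T = (gamma - 1) * mu_mol * m_p * u / k_B."""
    p = (gamma - 1.0) * rho * u
    temp = (gamma - 1.0) * mu_mol * M_P * u / K_B
    return p, temp
```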
in order to make our equations dimensionless, we adopt the following normalization factors : for masses , for lengths , for speeds , so that the orbital period is normalized to , for the density , for pressure , for thermal energy per unit mass and for temperature , where is the proton mass and is the boltzman constant .the adopted kernel resolving power in the gaspher modelling is .the geometric domain , including moving disc particles , is a sphere of radius , centred on the primary .the rotating reference frame is centred on the compact primary and its rotational period equals the orbital one .we simulated the physical conditions at the inner and at the outer edges as follows : \a ) inner edge : + the free inflow condition is realized by eliminating particles flowing inside the sphere of radius , centred on the primary .although disc structure and dynamics are altered near the inner edge , these alterations are relatively small because they are counterbalanced by a high particle concentration close to the inner edge in supersonic injection models .\b ) outer edge : + the injection of `` new '' particles from l1 towards the interior of the primary roche lobe is simulated by generating them in fixed points , called `` injectors '' , symmetrically placed within an angle having l1 as a vertex and an aperture of . normally ,as adopted since our first paper on sph accretion disc in cb , the radial elongation of the whole ensemble of injectors is .the initial injection particle velocity is radial with respect to l1 . in order to simulate a constant and smooth gas injection , a `` new '' particleis generated in the injectors whenever `` old '' particles leave an injector free , inside a small sphere with radius , centred on the injector itself .particle masses are determined by the assumed local density at the inner lagrangian point l1 : ( as typical stellar atmospheric value for the secondary star ) , equal to .the formulation adopted for the 3d sph viscous accretion disc models is the well - known and parametrization : , where is the sound velocity , and is a dimensionless estimate of the standard disc thickness , where is the cylindrical radial coordinate of the ith particle . in this paperwe adopt to point out evident differences in disc structure and dynamics between our disc models .we carried out our low compressibility ( ) simulations until we achieved fully stationary configurations .this means that particles injected into the primary potential well ( which is not deep , according to the primary small mass ) are statistically balanced by particles accreted onto the primary and by particles ejected from the outer disc edge .plots for both the inviscid ( ) and the viscous ( ) disc model . the final time and the total particle number , as well as the injection velocity from the inner lagrangian point l1 and , are also reported.,width=302,height=302 ] the orders of magnitude of the mass transfer injection rate from l1 : , the accretion rate and the ejection rate are , and , for the non viscous model and , and for the viscous model , respectively . , is the conversion factor from particle / time to .such values ( also adopted in ) are representative of active phases of cb whenever either the restricted problem of three bodies in terms of the jacobi constant or the bernoulli s theorem are taken into account during such phases , considering the conservation of the flux momentum in the crossing of l1 from the two roche lobes . 
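The alpha parametrization recalled above can be sketched per particle as follows. This is an assumed, generic form: the disc half-thickness is estimated from vertical hydrostatic equilibrium as c_s / Omega_K, which is the standard choice, and gm_primary stands for the primary's gravitational parameter in code units; neither is guaranteed to match the exact expression adopted in the paper.

```python
import numpy as np

def shakura_sunyaev_viscosity(alpha, cs, r_cyl, gm_primary=1.0):
    """Per-particle kinematic viscosity nu_i = alpha * c_s,i * H_i.

    H_i is estimated from vertical hydrostatic equilibrium, H_i ~ c_s,i / Omega_K,
    with Omega_K = sqrt(GM / r_i**3) the Keplerian angular velocity at the
    particle's cylindrical radius r_i (gm_primary is an assumed code-unit value,
    not one taken from the paper)."""
    omega_k = np.sqrt(gm_primary / r_cyl ** 3)
    h_disc = cs / omega_k
    return alpha * cs * h_disc
```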
fig .2 displays xy plots of both the physically inviscid and viscous disc models ( and ) . represents the total number of particles in each model . in classical sph ,no well - bound structures with a definite disc s outer edge come out ( molteni et al .1991 ; lanzafame et al .1992 , for and ) .the inviscid gaspher disc model shows a higher particle concentration at the disc s inner edge , close to the primary star .instead , a well - defined structure comes out in the viscous disc model in stationary conditions .2 also displays the plots obtained by folding all disc bulk particles onto a plane containing the axis and being perpendicular to the xy orbital plane .an evident latitudinal spread appears for the inviscid model .computed latitudinal angular spread is for inviscid disc model .this results compare to that obtained in and in , as far as the non viscous model , and to that obtained in as far as the viscous model , are concerned . as for the viscous model ,the latitudinal spread is .2 clearly displays the coming out of spiral patterns in the xy plot of the viscous model .these particular structures did not come out in sph results , where both the same supersonic injection conditions from l1 , and the same stellar masses , as well as the same low gas compressibility were adopted in viscid conditions . however , an exhaustive literature exists , showing which conditions favour the development of such structures ( e.g. tidal torques , external and/or outer edge perturbations ) . in particular, showed that high angular momentum injection condition from l1 produces these patterns .this beyond doubt shows the better effectiveness of gaspher kernel choice ( 33 ) compared to the common cubic spline sph kernel analytical formulation .the comparison with results ensures us that gaspher technique performs not only correct calculations , but also that particles at disc s outer edge are not isolated .moreover , the full radial transport can not be affected by any `` particle pairing instability '' because the kernel formulation ( 33 ) prevents such unpleasant inconsistency in the disc bulk .low compressibility gas loss effects affect the non viscous gaspher disc surfaces and outer edge .the same result were obtained in , as well as in , working in sph and adopting two different spatial smoothing resolution lengths and sonic injection transfer conditions from l1 .supersonic injection conditions from l1 are now taken into account , as described in for active phases of cb .therefore , non viscous gas loss effects from disc s outer edge and surfaces are _ a fortiori _ correctly expected , taking into account of the higher injection mechanical energy from l1 . 
at the same time , even though the low density non viscous disc structure is statistically rarefied , no neighbour inadequacy affect gaspher interpolation .the low total number of particles within the primary s potential well ( ) in non viscous conditions is due to the absence of any physical viscosity able to keep bound particles against pressure forces responsible of particle removal from the disc outer free edge for whenever low mass cb s are considered .this result is well known .this particular is not trivial because the bound of the edge of the computational domain prevents any particle removal allowing to get the wished particle concentration in spite of the effective particle repulsion for high values .sph free surface flows were developed by , with the aim of solving the unpleasant problem of fe layer in sph techniques , mainly to simulate breaking waves , but at relatively low resolution . a reduction in noise , with smooth - free surfaces and regular particle distribution ,was obtained by and , developing sph models where first order completeness was enforced , that is that first order polynomials are exactly reproduced .error estimates in a sph interpolant are evaluated in .however in this paper , the lack of completeness of sph interpolants is not taken into account .a formulation for the total error , determining how simulation parameters should be chosen and taking into account of the order of completeness is still not written in the literature . adopted modified kernel gradients into the classical sph equations . however , the hidden problem with this approach is that modified kernels no longer have the property that spatial gradients with respect to their two position arguments are exactly opposite between two contact particles .this kernel property is essential in sph equations . showed `` an expression for the error in an sph estimate , accounting for completeness , an expression that applies to sph generally '' , paying attention to the conservation principles .they found that a common method , enforcing completeness , violates the conservation principle of kernel spatial gradients must be opposite between two contact particles .they also showed some examples of discretization errors : numerical boundary layer errors .errors for a sph summation interpolant are functions of both particle distribution and particle smoothing length . in an exact formulation ,such errors are described by both volume and surface integrals of both neighbour particle distribution and their smoothing resolution length .therefore , in fe layer conditions , not only relevant errors in interpolations , but also unnatural pressure gradients in fe conditions at the edge of the computational domain occur in asph . ) , and the viscous models ( ) .the final time and the total particle number , as well as the injection velocity from the inner lagrangian point l1 and , are also reported.,width=302,height=302 ] a reasonable sph accuracy is related to the number of space neighbours of each sph particle . as a free lagrangian numerical method ,classical sph methods , are free from errors as far as momentum and angular momentum are concerned .instead , errors can occur as to energy as for asph variants .in particular , it is remarkable the evaluation of the energy error propagator ( eepr ) , computed for each particle , to have the correct idea of temporal propagation of energy errors . 
if sph - like methods involve a systematic error in energy[ |_{err} ] .this means that , if a long time is necessary to achieve a fully stationary configuration , errors in energy conservation could be significant .the evaluation of the gaspher eepr for inviscid disc model is .instead , the gaspher eepr for viscous disc model is .being errors in energy of this order of magnitude we do not usually allow to distinguish if numerical simulations correspond to a fluid physical behaviour . thus , the numerical error in energy on particle over - expansion / over - compression is not dominant step by step .unfortunately , it accumulates in time .this implies that numerical simulations , limited only to explosive or collapse short time tests , would not be reliable in testing asph codes . therefore , once more , this conclusion strengthens numerical tests and simulations based on a transport mechanism .physical viscosity naturally works where the particle mutual velocities ( and separations ) change in time , namely when a mutual acceleration exists , contrasting gas dynamics ( rarefaction or compression ) and converting kinetic energy in thermal energy .such a mechanism clearly supports the development of well - bound accretion discs inside the primary potential well , in spite of the low compressibility , at least for both in classical sph and in gaspher approach .we want to point out that adopting does help emphasize differences in disc structure and dynamics compared to the physically inviscid model .however , values of smaller than the unity may be more realistic according to some thin disc analytical models .we recall that our physical viscosity is only a shear viscosity . for the sake of simplicity ,no bulk viscosity has been considered , as explicitly mentioned in the paper .in fact , a value for the bulk viscosity should be too high . fig .3 displays , in a logarithmic scale , the angular momentum and temperature radial distribution , for all models .such radial distributions for the gaspher viscous model are very close to that of the standard model and .this can be explained considering that , in stationary conditions , an accretion disc redistributes the angular momentum injected at the outer edge into the disc bulk , according to outer edge boundary conditions only , as already shown in and .physical viscosity plays a role in regions where particle velocity gradients are significant .this means that physical viscosity plays a relevant role mainly in the radial transport , while it has scarce influence on the tangential dynamics .a strong difference appears when looking at the temperature radial distribution .in fact , the heating effect of the physical viscosity is particularly evident in the disc s inner zones .we recall that the disc itself is in an equilibrium stationary state where the heated particles are directly accreted towards the primary .this as far as particle advection is concerned .as for conduction , although it is much less important , notice that the temperature decreases towards the exterior , thus dispersing heat outside .however , discs could also radiate energy . 
in disc models without explicit inclusion of radiative terms in the energy equation ( almost all models , since , this inclusion complicates things considerably ) , the effect of radiative cooling is better simulated with s less than .plots for the inviscid and the viscous ( and ) disc models for a different particle smoothing resolution lengths .the final time and the total particle number , as well as the injection velocity from the inner lagrangian point l1 , are also reported.,width=302,height=302 ] , the particle smoothing resolution lengths , as well as the final time , the total particle number , and the injection velocity from the inner lagrangian point l1 , are also reported.,width=302,height=302 ]to study how fe fluid dynamics is affected by the initial smoothing resolution length choice , we performed two more simulations ( non viscous and viscous ) adopting a smaller smoothing resolution length : , improving spatial resolution since particle injection . taking into account of injection condition from our previous simulations for , particle masses are scaled , conserving the same mass density from l1 , according to the ratio of particle volumes : . thus , the mass transfer rate from l1 is self - consistent and automatically comparable to that relative to simulations with , without any variation of the injection velocity . to do this, it is necessary to recalculate newly the total number of injectors , by adopting the simple scale law : .hence , according to these simple scaling laws , we keep injection conditions comparable both for the initial density and for the mass transfer rate at l1 .results of such further simulations are displayed in figg .4 and 5 , where plots of such 3d gaspher simulations are displayed , free of any difficulty on the sufficient number of neighbour particles , as explained before .the total number of disc particles shows a monotonic increase by decreasing .thus , both disc density is comparable in both disc models as well as the mass of the simulated discs being , as well as constant , within statistical fluctuations . in the non viscous regimethe larger number of disc particles are still affected by a gas chaotic collisional component on top of the spiral disc s kinematics . moreover , the reynolds number increases because of the reduction of the particle smoothing resolution length , from to . 
instead , whichever is the gaspher adopted particle smoothing resolution length , in a viscous regime , both the radial transport of mass and angular momentum , as well as the radial temperature profile , are not sensitive to any adopted particle resolving power as figg .2 to 5 clearly display .their radial behaviour is strictly comparable to that of the typical standard disc , whose specific angular momentum and whose mean temperature .hence , this result is a further confirming check that gaspher result , in their general aspect , as far as the radial transport and thermal properties , are not strongly dependent on the assumed spatial resolution .moreover , the local physical properties are clearly comparable with each other , being the particle spatial resolution in the graphs different , but not their physical values , that is denser ( more rarefied ) , lighter ( heavier ) particles to get the same density , as an example .from the astrophysical point of view , our results show that in gaspher modelling , where particle interpolation radial extension is conceptually unlimited - although particle smoothing length is spatially and permanently constant - solve the problem of neighbours inadequacy .moreover , physical viscosity supports the development of a well - bound accretion disc in the primary potential well , even in the case of a low compressibility gas dynamics .such results , also shown in , mean , once more , that the initial angular momentum injection conditions at the disc s outer edge are responsible for the disc tangential dynamics , while viscosity is mainly responsible for the thermodynamical disc properties , even for low compressibility disc models ( , here ) when gas loss effects are physically expected according to the low compressibility gas dynamics and to the low stellar mass of the central accretor .moreover , in gaspher viscous fluid dynamics , further details of the flow are revealed ( e. g. the coming out of spiral patterns in disc structures ) . from the numerical point of view, reliable results are reproduced in a gaspher , despite fe conditions are adopted . without considering the injected particle stream , such simulations could also be considered as accretion and transport general tests within a gravitational potential well .typical tests as far as non viscous 1d shock tube show that gaspher technique produce results in a very good comparison with analytical ones , having the advantage to solve the fe difficulties without any `` particle pairing instability '' . simulation , carried out in low compressibility and in high viscosity conditions , to stress out results , is significant to understand the quality of numerical code .the transformation of sph codes in a gaspher code , without further numerical efforts , seems likely to be an interesting future challenge .as far as the computational cpu time is concerned , there is not conceptually any disadvantage in such transformation , if particle neighbours are fixed ( e.g. 
or ) for each particle , by the introduction of a boundaries counter / limiter because the number of particle neighbours rules the computational cpu time .the necessity to perform better sph numerical interpolations on contact surfaces , or at fe layers , recently inspired authors to develop sph - derived techniques to achieve a higher accuracy .an sph dynamic refinement has recently been developed by to calculate boundary contact forces in fluid flow problems through boundary particle splitting .such a technique could also be very interesting and competitive in solving fe problems .however , this is beyond the scope of this paper .we conclude that although high compressibility inviscid results among different schemes could compare with each other especially if constraints are imposed on boundaries of the computational domain , differences arise either if fe and/or if viscous flows are involved . in such conditions , gaspher technique shows a regular behaviour and better conserve the total energy , as well as reduces the influence of the artificial viscosity for non viscous ideal shear flows free of any gas compression ( see appendix ) .computational cpu time is mainly governed by the number of neighbour particles for each particle .therefore , no disadvantages arise , in principle , in adopting a gaspher code with respect to an asph code if the neighbour particle statistical number is the same .the sph method is a lagrangian scheme that discretizes the fluid into moving interacting and interpolating domains called `` particles '' .all particles move according to pressure and body forces .the method makes use of a kernel useful to interpolate a physical quantity related to a gas particle at position according to : , the interpolation kernel , is a continuous function - or two connecting continuous functions whose derivatives are continuous even at the connecting point - defined in the spatial range , whose limit for is the dirac delta distribution function .all physical quantities are described as extensive properties smoothly distributed in space and computed by interpolation at . in sph termswe write : where the sum is extended to all particles included within the domain , is the number density relative to the jth particle . is the adopted interpolation kernel whose value is determined by the relative distance between particles and . in sph conversion of mathematical equations ( eq . 1 to eq .4 ) there are two principles embedded .each sph particle is an extended , spherically symmetric domain where any physical quantity has a density profile . besides, the fluid quantity at the position of each sph particle could be interpreted by filtering the particle data for with a single windowing function whose width is .so doing , fluid data are considered isotropically smoothed all around each particle along a length scale . therefore , according to such two concepts , the sph value of the physical quantity is both the overlapping of extended profiles of all particles and the overlapping of the closest smooth density profiles of .this means that the compactness of the kernel shape gives the principal contribution to the interpolation summation to each particle by itself and by its closest neighbours . inboth approaches the mass is globally conserved because the total particle number is conserved . in sph formalism ,equations ( 2 ) and ( 3 ) take the form : where , , is the mass of jth particle and _ artificial pressure term_. 
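Before turning to the viscous terms, the summation interpolant recalled above can be made concrete. The sketch below uses a generic, spatially unlimited 3D Gaussian kernel of the kind underlying GASPHER; the error-function-based kernel actually adopted in the paper is not reproduced here, so the kernel normalization and the function names are illustrative assumptions.

```python
import numpy as np

def w_gauss_3d(r, h):
    """Spatially unlimited 3D Gaussian kernel, normalized to unit volume
    integral; its limit for h -> 0 is the Dirac delta distribution."""
    return np.exp(-(r / h) ** 2) / (np.pi ** 1.5 * h ** 3)

def sph_estimate(r_i, positions, masses, densities, field, h):
    """Summation interpolant <A>(r_i) = sum_j (m_j / rho_j) * A_j * W(|r_i - r_j|, h)."""
    d = np.linalg.norm(positions - r_i, axis=1)
    return np.sum(masses / densities * field * w_gauss_3d(d, h))

def sph_density(r_i, positions, masses, h):
    """Density estimate rho(r_i) = sum_j m_j * W(|r_i - r_j|, h), i.e. the same
    interpolant applied to the density field itself."""
    d = np.linalg.norm(positions - r_i, axis=1)
    return np.sum(masses * w_gauss_3d(d, h))
```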
.the viscous stress tensor includes the positive first and second viscosity coefficients and which are velocity independent and describe shear and tangential viscosity stresses ( ) , and compressibility stresses ( ) : where the shear in these equations and are spatial indexes while tensors are written in bold characters . for the sake of simplicitywe assume , however our code allows us also different choices .defining as the sph formulation of , the sph equivalent of the shear is : a full justification of this sph formalism can be found in . in this schemethe continuity equation takes the form : or , as we adopt , it can be written as : which identifies the natural space interpolation of particle densities according to equation ( 9 ) .the pressure term also includes the artificial viscosity contribution given by and , with an appropriate thermal diffusion term which reduces shock fluctuations .it is given by : where with being the sound speed of the ith particle , , and .these and parameters of the order of the unity are usually adopted to damp oscillations past high mach number shock fronts developed by non - linear instabilities .these and values were also adopted by .smaller and values , as adopted by , would develop more turbulence in the disc and possibly only one shock front at the impact zone between the infalling particle stream and the returning particle stream at the disc s outer edge . in the physically inviscid sph gas dynamics, angular momentum transport is mainly due to the artificial viscosity included in the pressure terms as : where is the intrinsic gas pressure .the advantage of an asph is to perform better particle interpolations ensuring a large enough number of interpolating particle neighbours .several authors have more recently adopted a criterion where the number of sph particle neighbours for each time - step calculation is a fixed number , generally of the order of , decoupling the resolving power calculation by any physical quantity . instead , in previous papers the smoothing length has been considered a function of time by relating it to the local particle density .a spatial and temporal smoothing length together with an appropriate symmetrization concerning particle pairs have also been proposed . in original 3d asph in space and time .symmetry in both indexes is widely adopted , where the evaluation of a symmetrized and a symmetrized kernel are required according to : where indexes and refer to time - step .such a choice is widely considered better than : where and refer to initial values at time zero . such a preference is due to the fact thatbecause of non - linearity , instabilities can easily be produced especially in anisotropic volume changes and flow distortion .equivalently , a further equation able to compute the `` new '' at time - step from the `` old '' at time - step is : \ ] ] or , by considering the continuity equation ( 1 ) : ,\ ] ] whose integration over time gives eq .this equation is easily obtained by performing the derivative of the equation const , expressing the conservation of particle mass : + , , etc .. however eqs .( 22 ) , ( 23 ) or ( 24 ) are more convenient than eq .( a14 ) . 
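Before moving to the adaptive variants discussed next, the standard artificial viscosity term recalled above can be written out explicitly. The sketch adopts the most common Monaghan form with simple pair averages for the smoothing length, sound speed and density; the exact symmetrization and the softening constant used in the paper are not recoverable from the extracted text and are assumptions here.

```python
import numpy as np

def artificial_viscosity(v_ij, r_ij, h_ij, rho_ij, cs_ij,
                         alpha_av=1.0, beta_av=1.0, eta2=0.01):
    """Monaghan-type artificial viscosity Pi_ij for one particle pair.

    v_ij, r_ij          : differences v_i - v_j and r_i - r_j
    h_ij, rho_ij, cs_ij : pair-symmetrized smoothing length, density and sound
                          speed (plain averages are a common choice)
    The term is switched on only for approaching pairs (v_ij . r_ij < 0), which
    is also why it is spuriously activated in pure shear flows."""
    vr = float(np.dot(v_ij, r_ij))
    if vr >= 0.0:
        return 0.0
    mu_ij = h_ij * vr / (float(np.dot(r_ij, r_ij)) + eta2 * h_ij ** 2)
    return (-alpha_av * cs_ij * mu_ij + beta_av * mu_ij ** 2) / rho_ij
```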
and , proposed an adaptive method splitting the 3d scheme into three 1d schemes formulating a factorized gaussian kernel of three 1d gaussian components .in such a scheme a tensorial computation of sph equations has been developed and each asph particle enlarges or contracts as a spheroid rather than a spherule .they successfully applied their technique to a shock front cosmological problem where asph spheroids give a better shock resolution compared to typical sph spherule without adopting any artificial viscosity term . in a further paper the authors , admitting that artificial viscosity terms are necessary , especially in the momentum equation , handle such artificial viscosity terms suppressing or turning on them according to some physical circumstances ( mainly in rarefaction conditions ) .a technique turning on / off the artificial viscosity has also been described in .asph models adopt the sph same formulation , where either : instead of sph , and , or : instead of sph , are adopted .the second formulation is mostly more currently adopted .non - isotropic asph adopt an anisotropic algorithm to compute ellipsoid particle deformation and , consequently , the anisotropic smoothing length , according to the local particle concentration .such a scheme is mainly used in simulations of 2d and 3d oblique shocks and of contact fluid surfaces .the algorithm computes the element , where , of the symmetric matrix : ,\ ] ] where , is the projection of the ellipsoid characteristic semiaxes on the cartesian axes .the eigenvectors of the matrix are the directions along the three axes of the ellipsoid and the corresponding eigenvalues are the dimensions of the ellipsoid along each axis .the determinant of the same matrix determines the normalization volume of each particle .the sph conversion of eq .( a20 ) , similarly to the sph expression of the is : .\end{aligned}\ ] ] showed that energy conservation improves if are introduced into both sph momentum and energy equations .the inclusion of such terms modify substantially those equations in a non practical form .the formal difficulties were overcome by who derived an effective asph conversion of the pressure gradient contribution in the momentum equation ( eq .2 ) , conserving energy and entropy , according to the conservative asph equation : where , and refers to the artificial viscosity contribution . smoothing length computed requiring that a fixed mass is contained within a smoothing volume : where refers to the global mass of neighbours related to the particle .each particle neighbour has a mass .no further modifications to the energy equation are required . in a further paper similar conclusion , as far as both sph and xsph methods are concerned , were reached with the aim of achieving better energy and entropy conservation .the term is easily connected to the by the simple relation : where the derivative strictly involves also the derivative of the in 3d as : . in this scheme , the conservative asph conversion of the navier - stokes equation ( eq .a7 ) is : . as far as the conservative asph energy balance equation for the total energy concerned , where , includes artificial viscosity terms .in conservative asph approach , it is easy to update the particle smoothing resolution length , fixing the number of particle neighbours .in fact , according to the sph interpolation criterion , particle concentration .we remind that kernel is a normalized smooth function of the ratio . 
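A minimal sketch of the neighbour-based choice of the smoothing length described just above, assuming the usual 3D relation between the smoothing volume and the fixed neighbour mass; the dimensionless prefactor eta and the default neighbour number are illustrative.

```python
import numpy as np

def adaptive_smoothing_length(m_i, rho_i, n_neigh=50, eta=1.0):
    """Smoothing length chosen so that a fixed neighbour mass n_neigh * m_i is
    contained within the smoothing sphere, (4/3) * pi * h**3 * rho_i ~ n_neigh * m_i.
    Differentiating h**3 * rho = const in time gives the equivalent update
    dh/dt = -(h / (3 * rho)) * drho/dt = (h / 3) * div(v)."""
    return eta * (3.0 * n_neigh * m_i / (4.0 * np.pi * rho_i)) ** (1.0 / 3.0)
```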
therefore , if represents the fixed number of neighbours , .in this section , results of some tests are here reported regarding models where either 1d shock problems , or 2d free edge , or 2d transport themes have to be taken into account to respect the argument declared in this paper . comparison among gaspher ,sph and asph numerical results are reported as well as theoretical analytical ones , whenever the theoretical analytical solution is known .the particle smoothing resolution length , normally adopted throughout , is ( in asph as the initial value ) , but than when explicitly written . throughout .once stated the validity of gaspher for shock collisional modelling , a particular attention is addressed both to free edge and to radial transport results regarding the main argument of this paper . in this section a comparison of analytical and gaspher 1d inviscid shock tube test results ,notice that the so called analytical solution of the 1d shock tube test is obtained through iterative procedures left - right , applying to the discontinuity the rankine - hugoniot `` jump '' solution .figg . b1 andb2 display results concerning the particle density , thermal energy per unit mass , pressure and velocity , after a considerable time evolution at time .the whole computational domain is built up with particles from to , whose mass is different , according to the shock initial position . at time all particles are motionless . , while the ratios and , and and as displayed at the edges of figg . 4 and 5 , between the two sides left - right .the first and the last particles of the 1d computational domain , keep fixed positions and do not move .the choice of the final computational time is totally arbitrary , since the shock progresses in time . at the beginning of each simulation .hence , the adimensional temporal unity is chosen so that . being the sound velocity initially constant , this mathematically means .sph results , adopting the same initial and boundary conditions , as well as the same particle smoothing resolution length , together with the analytical solutions are also displayed in the same plots .our gaspher results , are in a good comparison with the analytical solution .discrepancies involve only particle smoothing resolution lengths at most .this means that , gaspher interpolations are effective in the case of shock collision case in so far as the mach number flows regard the weak shock regimes when the mach number ranges within ] and at $ ] , and the horizontal one at the bottom from to , while the fourth side at , is free to expand towards the outer space .particles , whose mass are regularly located so that their mutual separation equals .the initial thermal energy is , while the initial throughout .the three fixed edges are composed of two lines of fixed particles , whose velocity is abruptly put to zero time by time .notice that the above mentioned constraints have to be considered as geometric conditions not pertinent only to the edge particles of the square box . in this way , particles can move only toward the direction , from onwards , and any particle horizontal translation is mechanically prevented . averaged for each particle for the sph , asph and gaspher simulations of the 2d expansion of the free edge in a box . 
values are expressed in units .time is reported on a logarithmic scale.,width=302,height=302 ] since an analytical solution is unknown , we pay attention to the conservation of the total energy per unit mass averaged for each particle and , at the same time to the regular face of the expansion of the free horizontal edge at the top .b4 displays the advance of the free front at three selected times for the sph , asph and gaspher simulations .sph and particularly asph fronts are without any doubt more advanced than the gaspher front .this effect is the result of an incorrect computation of the pressure forces on the free edge of the computational domain as discussed in 5 .this conclusion is stressed not only by the fact that the gaspher flow is more regular and free from defects , but also , as it is shown in fig .b5 , by the fact that the total energy per unit mass is much better conserved than in the other two cases . as an order of magnitude ,the degradation of the total energy is for a totality of particles after a time for both sph and asph .this involves that on a single particle , .instead , in gaspher this energy degradation is times smaller .this implies that the choice of the interpolation kernel is crucial in the conservation of prime integrals .the 2d radial spread and migration of an isothermal keplerian annulus ring is widely described in in the case of a constant physical viscosity . at time , the surface density , as a function of the radial distance , is described by a dirac function : , where is the mass of the entire ring and is its initial radius . as a function also of time ,the surface density is computed via standard methods as a function of the modified bessel function : where , . const equals the annulus mass throughout .time is normalized so that is the keplerian period corresponding to the ring at .examples of sph viscous spread on this argument can be found in , as well as in in sph physically inviscid hydrodynamics on the basis that the shear dissipation in non viscous flows can be compared to physical dissipation . in particularan exhaustive comparison can also be found in . in a non viscous particle lagrangianfluid dynamics , any deviation from the initial strictly keplerian kinematics is incorrectly due to the activation of artificial viscosity dissipation in the shear flow when two particles approach each other .this is an unavoidable consequence of the fact that dissipation is currently used to handle the direct head - on collision between pair of particles . to establish whether the adopted kernel has a significant role in the spatial transport phenomena, any gas pressure force component must be removed leaving active only the artificial viscosity dissipation in the momentum equation in a strictly isothermal fluid dynamics .thus , the whole flow should keep its keplerian behaviour because pressure forces are artificiously erased , in so far as artificial viscosity dissipation stays inactive . 
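For reference, the constant-viscosity analytical profile against which the numerical spread is judged (the classical Lynden-Bell and Pringle solution, expressed through the modified Bessel function of the first kind of order 1/4) can be evaluated as below; the normalization follows the standard textbook form, which is assumed here since the exact expression is garbled in the extracted text.

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind

def ring_surface_density(R, t, m_ring=1.0, R0=1.0, nu=1.0):
    """Surface density of a ring of mass m_ring, initially a Dirac delta at
    radius R0, spreading under a constant kinematic viscosity nu:

        Sigma(x, tau) = m_ring / (pi * R0**2 * tau) * x**(-1/4)
                        * exp(-(1 + x**2) / tau) * I_{1/4}(2 * x / tau),

    with x = R / R0 and tau = 12 * nu * t / R0**2."""
    x = np.asarray(R, dtype=float) / R0
    tau = 12.0 * nu * t / R0 ** 2
    return (m_ring / (np.pi * R0 ** 2 * tau) * x ** -0.25
            * np.exp(-(1.0 + x ** 2) / tau) * iv(0.25, 2.0 * x / tau))
```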
in this test ,the two marginal edges of the annulus ( the inner and the outer ones ) are considered as fe boundaries .adopting the same artificial viscosity formulation and the same parameters ( and - see app .a ) both for sph and for asph and for gaspher simulations , our aim is to check which technique shows the smaller deviation from the initial keplerian tight particle distribution in isothermal conditions , keeping constant both the sound velocity and the specific thermal energy .sph - derived techniques turns on the artificial viscosity dissipation whenever two close particles approach with each other .this happens also for shear flows .however for inviscid ideal shear flows this is an incorrect result without any gas compression . a significant comparison of gaspher to sph and to asph is displayed in fig .b6 , where density contour map plots are shown at the same .the radial distributions of surface density are displayed in fig .b7 , according to the restricted hypotheses of the standard mechanism of physical dissipation ( constant dissipation , zero initial thickness ) . as in ,the initial ring radius is at , whose thickness is , is composed of equal mass ( ) pressureless keplerian ( , at ) sph particles , with , with , and with initial density radial distribution corresponding to the analytical solution at time , whose . to this purpose, a random number generator has been used .the central accretor has mass normalized to .the kinematic shear dissipation is estimated as .gaspher radial spread is without any doubt the narrower one , while the asph one is naturally the larger because of the increasing particle smoothing resolution length affecting the artificial viscosity analytical expression .for this reason , its physical dissipation counterpart can not be kept constant even preventing any disc heating . only in the case of asph modelling, the initial value is assumed to compute .notice that these results are obtained according to the correlation in the expression where , which appears as the most appropriate .in fact , considering with , it is necessary a time ten times longer to get the same .this involves a larger annulus spread as far as the numerical results are concerned .according to this results , the kernel choice is determinant also in the generation of kinematic deviations from the initial keplerian distribution due to the incorrect sph dissipation because of the particle shear approaching in the non viscous ideal flows .notice that the density radial distribution , as far as the asph modelling is concerned , better fits the spread of the theoretical radial distribution ( here not represented ) .this is a fair result in so far as we are interested in determining which artificial dissipation , coupled with the choice of the interpolation kernel , determines a density radial profile , to be compared with the theoretical one , when a physical dissipation is considered .however , this is another aspect , regarding the study of either the physical dissipation in a viscous fluid dynamics or its artificial numerical dissipation counterpart in a non viscous approach , which is far from the scope of the test here proposed .99 belvedere , g. , lanzafame , g. , molteni , d. 1993 , a&a , 280 , 525 benz , w. , bowers , r.l . ,cameron , a.g.w . , press , w. 1990 , apj , 348 , 647 bonet , j. , lok , t.s.l .1999 , comp. meth . in app .mechanics and engineering , 180 , 97 bonet , j. , kulasegaram , s. , rodriguez - paz , m.x . , profit , m. 2004 , comp . meth . in app . 
mechanics and engineering , 193 , 1245 boris , j.p . , book , d.l .1973 , jcoph , 11 , 38 costa , v. , pirronello , v. , belvedere , g. , del popolo , a. , molteni , d. , lanzafame , g. , 2010 , mnras , 401 , 2388 drimmel , r. 1996 , mnras , 282 , 982 evrard , a.e .1988 , mnras , 235 , 911 feldman , j. , bonet , j. 2007 , int . j. num .eng . , 72 , 295 flebbe , o. , mnzel , h. , riffert , h. , herold , h. 1994a , mem .s.a.it , 65 , 1049 flebbe , o. , mnzel , h. , herold , h. , riffert , h. , ruder , h. 1994b , apj , 431 , 754 frank , j. , king , a.r . ,raine , d.j ., 2002 , `` accretion power in astrophysics '' , cambridge univ .fulbright , m.s . ,benz , w. , davies , m.b .1995 , apj , 440 , 254 hernquist , l. , katz , n. 1989 , apjs , 70 , 419 kaisig , m. 1989 , a&a , 280 , 525 katz , n. , weinberg , d.h . , hernquist , l. 1996 , apjss , 105 , 19 imaeda , y. , inutsuka , s 2002 , apj , 569 , 501 lanzafame , g. 2003 , a&a , 403 , 593 lanzafame , g. 2008a , `` the role of physical viscosity in accretion disc dynamics in close binaries and agn '' , `` numerical modeling of space plasma flows / astronum 2007 '' , n. v. pogorelov , e. audit , and g. p. zank eds ., asp conf .series 385 , p.115 lanzafame , g. 2008b , pasj , 60 , 259 lanzafame , g. 2009 , an , 330 , 843 lanzafame , g. , belvedere g. , molteni d. 1992 , mnras , 258 , 152 lanzafame , g. , belvedere g. , molteni d. 1993 , mnras , 263 , 839 lanzafame , g. , maravigna f. , belvedere g. 2000 , pasj , 52 , 515 lanzafame , g. , maravigna f. , belvedere g. 2001 , pasj , 53 , 139 lanzafame , g. , belvedere g. , molteni , d. 2006 , a&a , 453 , 1027 lasota , j.p .2001 , new astr ., 45 , 449 lattanzio , j.c . , monaghan j.j . ,pongracic , h. , schwarz , m.p . , 1985 , mnras , 215 , 125 liu , m.b . ,liu , g.r . , lam , k.y . 2006 , shock waves , 15 , 21 lubow , s.h . ,shu , f.h ., 1975 , mnras , 198 , 383 meglicki , z. , wickramasinghe , d. , bicknell , g.v .1993 , mnras , 264 , 691 miyama , s.m . ,hayashi , c. , narita , s. 1984 , apj , 279 , 621 molteni , d. , belvedere , g. , lanzafame , g. 1991 , mnras , 249 , 748 monaghan , j.j .1985 , comp .rept . , 3 , 71 monaghan , j.j .1992 , ara&a , 30 , 543 monaghan , j.j .1994 , jcoph , 110 , 399 monaghan , j.j .1997 , jcoph , 136 , 298 monaghan , j.j .2002 , mnras , 335 , 843 monaghan , j.j .2006 , mnras , 365 , 199 monaghan , j.j . , lattanzio , j.c .1985 , a&a , 149 , 135 morris , j. p. , monaghan , j.j .1997 , jcoph , 136 , 41 murray , j.r .1996 , mnras , 279 , 402 nelson , r.p . , papaloizou , j.c.b .1993 , mnras 270 , 1 nelson , r.p . , papaloizou , j.c.b .1994 , mnras 265 , 905 okazaki , a.t . ,bate , m.r . ,ogilvie , g.i . ,pringle , j.e .2002 , mnras 337 , 967 owen , j.m . ,villumsen , j.v . ,shapiro , p.r . ,martel , h. 1998 , apjss 116 , 155 pringle , j.e . , verbunt , f. , wade , r.a .1986 , mnras , 221 , 169 savonije , g.j . , papaloizou , j.c.b . ,lin , d.n.c .1994 , mnras , 268 , 13 sawada , k. , matsuda , t. 1992 , mnras , 255 , 17 sawada , k. , matsuda , t. , inoue , m. , hachisu , i. 1987 , mnras , 224 , 307 shakura , n.i .1972 , astron ., 49 , 921 shakura , n.i .1973 , sva , 16 , 756 , engl .shakura , n.i . ,sunyaev , r.a .1973 , a&a , 24 , 337 shapiro , p.r . , martel , h. , villumsen , j.v . ,owen , j.m .1996 , apjss , 103 , 269 sod , g.a .1978 , jcoph , 27 , 1 speith , r. , kley , w. , 2003 , a&a , 399 , 395 speith , r. , riffert , h. , 1999 , jcoam , 109 , 231 springel , v. , hernquist , l. 2002 , mnras 333 , 649 spruit , h.c . ,matsuda , t. , inoue , m. , sawada , k. 
1987 , mnras , 229 , 517 vaughan , g.l . , healy , t.r . , bryan , k.r . ,sneyd , a.d . , gorman , r.m .2008 , int .
adaptive spatial domains are currently used in smoothed particle hydrodynamics (sph) with the aim of performing better spatial interpolations, mainly for expanding or shock gas dynamics. in this work we propose a reformulation of the sph interpolating kernel that is also suitable for treating free edge boundaries of the computational domain. applications to both inviscid and viscous stationary low compressibility accretion disc models in close binaries (cb) are shown. the investigation carried out in this paper follows from the fact that low compressibility modelling is crucial for checking numerical reliability. + results show that physical viscosity supports the formation of a well-bound accretion disc, despite the low gas compressibility, when a gaussian-derived kernel (built from the error function) is assumed over an extended particle range, whose half width at half maximum (hwhm) is fixed to a constant value, without any spatial restriction on its radial interaction (hereinafter gaspher). at the same time, gaspher ensures adequate particle interpolation at free edge boundaries. both sph and adaptive sph (hereinafter asph) methods lack accuracy if no constraints are imposed on the boundary conditions, in particular at the edge of the particle envelope: free edge (fe) conditions. in sph, an inefficient particle interpolation involves too few neighbour particles; in asph, non-physical effects involve both the boundary layer particles themselves and the radial transport. + whether fe conditions affect the computational domain, or the fluid dynamics is viscous, or both, a gaspher scheme can rightly be adopted in such troublesome physical regimes. + despite the applied low compressibility condition, the viscous gaspher model shows clear spiral patterns, demonstrating the better quality of the results compared to the viscous sph ones. moreover, a successful comparison of gaspher 1d inviscid shock tube results with the analytical solution is also reported. accretion, accretion discs - hydrodynamics - methods: numerical, n-body simulations - stars: binaries: close, dwarf novae, cataclysmic variables
to detect a light signal attenuated at the single - photon level , the single - photon avalanche diode ( spad ) is of widespread use .spads have been employed to detect stars feeble light , to perform a sharp optical time - domain reflectometry and in the vast majority of quantum key distribution ( qkd ) realizations reported thus far , both in optical fibres and in free space .in particular , fibre - based qkd at the wavelength of nm , is an emerging technology which promises a high security level in telecommunications while retaining quite a high transmission rate . in this wavelength range ,the spad features quantum efficiencies up to and trigger rates of more than 1 ghz .let us briefly describe some particular aspects of the spad .first , in order to increase the detector s gain , the detector is operated in the so called `` geiger mode '' , where a reverse bias voltage higher than the detector s breakdown voltage is applied .if , despite the high gain , the dark count rate is low , the spad can be operated in `` free running '' mode ; this usually happens at wavelengths close to the visible range , e.g. 800 nm . if the dark count rate is high , as it happens in the infrared domain , e.g. at wavelengths of 1550 nm , then the spad is usually run in `` gated mode '' , where the bias voltage is raised above the breakdown voltage only for a short period of time ( the gate ) , when the photon to be detected is expected to arrive .this requires that the spad is well synchronized with the light source and the rest of the acquisition system .another important point to consider is that the dark count rate can be increased by a non zero afterpulse probability : a detection event is given in the spad by an avalanche of electric charges , some of which can remain trapped in the junction and can be released in the following gates , thus giving rise to additional counts not corresponding to the arrival of new photons . to reduce the afterpulse probability , an afterpulse blocking electronics is usually added to the spad with the aim of `` freezing '' it after the emission of an avalanche , for a time interval decided in advance by the operator .for example , one of the most popular single - photon detector in the third telecom window , the id201 from idq , features an afterpulse probability going to nearly zero in about 4 .this entails that , in order to minimize dark counts , a blocking time of at least 10 should be applied to this detector . during the blocking time, the detector is not capable to detect single photons anymore .hereafter , we generically define `` dead time '' any interval of time in which the spad looses its single - photon sensitivity . this can be due either to the natural response of the detector or to the afterpulse blocking circuit. moreover , we focus on a spad run in gated mode , suitable for working in a high - noise regime , leaving the free - running mode analysis for a future work . 
in order to use a gated - mode spad ,it is necessary to prepare an acquisition system capable of sending a trigger to it , read its output in a synchronized way and write the result into a memory location .however , during the preparation of such a system , we realized that a few problems arise when the triggering rate of the acquisition system is higher than the inverse of detector s dead - time .this situation is much more widespread than commonly believed .in fact , the acquisition system is usually much faster than the spad and its triggering rate coincides with the maximum rate allowed by the detector .for instance , the same id201 mentioned above , accepts a maximum trigger rate of 8 mhz , which is much higher than the 100 khz corresponding to the inverse of the dead - time value given above , i.e. 10 .one problem related to a spad running at its maximum trigger rate and with a non - zero dead - time , is the readout . if the acquisition system tries to read the spad during the dead - time period it only collects a sequence of futile counts which do not correspond to true detection events .the removal of these counts by a postprocessing algorithm is simple but very inefficient , time- and memory - consuming . a second , more important , problem is that when two or more spads are present in the same detection apparatus , one detector can be active while the other is blind .this happens , e.g. , when one of the two detectors registers a photon and enters the dead - time period while the other does not register any photon and remains active .this situation can be exploited to control the detectors of the receiving unit , as described in .furthermore , in a wholly passive way , this flaw can be exploited to threaten the security of the qkd technology , by making the key distilled by the users not perfectly random and not entirely secret , as we shall show .we group these security breaches under the name of `` self - blinding '' , because they are caused mainly by an improper use of the equipment by the legitimate users . in this paperwe describe an acquisition system that allows for a proper triggering of single - photon detectors .we use an fpga - based electronics to disable not only the trigger driving the spad gates but also the one driving the readout and the write up of the data to the memory . hence useless data are no more registered by the acquisition system ; the memory is better organized and the postprocessing procedures require a shorter time .moreover the triggering technique can be easily extended to two or more detectors , so to avoid self - blinding .the paper is organized as follows . in section[ sec : real_time_trig ] we describe our technique and apply it to a single spad . in section[ sec : pair - det - risk - qkd ] we consider the situation with a pair of spads and describe the security risks connected to self - blinding . in section [ sec : exp - coinc ]we apply our technique to a real setup and demonstrate its effectiveness .section [ sec : conclusion ] is left for concluding remarks .the trigger - disabling acquisition system is schematically shown in figure [ fig_scheme ] .the main element of the circuit is the clock buffer ( clk buf ) with clock enable ce input ( bottom part of the figure ) .its specifications are given in .this buffer drives the clock output which in turn is connected to the detector s trigger input and memory clock input . 
in normal regime ,the d flip - flop ( ffd ) is reset ; the output is inverted and then fed into the clock buffer so the trigger clock is simply transferred from the input to the output and then to the detector .when a positive detection occurs , the rising edge of the spad s avalanche drives the change in the ffd output , thus disabling the main clock and enabling the counter .the spad and the memory will not see any trigger in this state . the ffd remains in this state until it is reset .the asynchronous reset occurs after a given number of main clock cycles , depending on the counter and the binary comparator ( comp ) level indicated as cmp value in fig .[ fig_scheme ] .all the circuitry is realized in an fpga board ( xilinx sp605 ) .the trigger - disabling circuit has a main constraint i.e. the response time - interval that goes from the spad avalanche to the clk buf must be smaller than the inverse of the trigger rate : if this condition is not fulfilled , the trigger pulses continue to arrive at the detector until the clock buffer succeeds in disabling the trigger. this would result in maximal frequency limitation or in one or more futile zeroes registration by the qkd apparatus s cache memory .several factors contribute to the total response time of the feedback loop : the detector response ( 28 ns for id200/id201 , which are the spads used in our setup ) , the length of cables ( 8 ns ) and circuit inside fpga ( 2 ns ) .this sets a maximum limit for the trigger - disabling technique with our current electronics at 26 mhz , which is much above the maximal trigger rate for id201 .shortening the total response time would increase the maximal frequency of the circuit .however , among the above delays , only the one caused by cables can be decreased easily without changing any other piece of hardware . in order to demonstrate the circuit working, we applied our electronics to a real setup , composed by a laser source ( picoquant ldh - p-1550 ) triggered at 4 mhz , an optical attenuator , which fixes the average photon number at , and a spad id201 set at 2.5 ns gate and 10% efficiency . in this configuration , we analyzed the percentage of cache memory occupied by useful triggers over a total duration of 12 hours .the results are plotted in fig .[ fig : triggers ] . and , providing a percentage of 83.4% .nine hundred samples were taken in 12 hours .,width=288 ] if trigger - disabling is applied , the cache memory is occupied by useful data only , entailing that all the triggers sent to the spad have been effectively used for a meaningful acquisition . by consequence ,the trigger - percentage per cache reaches the maximum , 100% . on the contrary , if the disabling technique is not applied , the total amount of useful counts per cache varies according to the photon emission probability and detector dead - time. in particular , a lower percentage corresponds to a situation with more counts by the spad , i.e. to a higher intensity arriving at the detector .in fact , the more counts at the spad , the higher the number of futile zeroes present in the cache memory .any futile zero must be removed from the cache which causes that the percentage of useful triggers remains significantly lower than 100% .there are two ways of removal , either by post - processing or by disabling the trigger before it arrives to the spad . 
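A functional model of the disabling logic and of its effect on the cache occupancy may help fix ideas. The sketch below is behavioural Python, not the authors' FPGA firmware: it ignores the propagation delays quantified above, and the per-gate click probability and the 40 blind gates (a 10 microsecond dead time at a 4 MHz trigger rate) are illustrative values, chosen so that the useful-trigger fraction without disabling comes out close to the plateau quoted earlier.

```python
import numpy as np

def acquisition_run(n_triggers, p_click, dead_gates, disable_trigger, seed=0):
    """Behavioural model of the acquisition chain for one gated SPAD.

    p_click         : probability that an applied gate produces an avalanche
    dead_gates      : trigger periods covered by the dead time after an avalanche
    disable_trigger : if True, the feedback loop withholds the trigger (and the
                      memory write) during the dead time; if False, futile zeroes
                      are written to the cache and must be removed afterwards
    Returns the fraction of memory words that correspond to useful gates."""
    rng = np.random.default_rng(seed)
    useful = written = 0
    t = 0
    while t < n_triggers:
        useful += 1                     # this gate is actually applied and meaningful
        written += 1
        if rng.random() < p_click:      # avalanche: detector blind for dead_gates periods
            if not disable_trigger:
                written += dead_gates   # futile zeroes stored during the dead time
            t += 1 + dead_gates
        else:
            t += 1
    return useful / written

# with trigger disabling every stored word is useful (fraction 1.0); without it
# the fraction drops to roughly 1 / (1 + p_click * dead_gates), about 0.83 here
print(acquisition_run(1_000_000, p_click=0.005, dead_gates=40, disable_trigger=True))
print(acquisition_run(1_000_000, p_click=0.005, dead_gates=40, disable_trigger=False))
```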
in any casethe post - processing consumes more time than a hardware solution .further performance analysis shows that the amount of removed triggers from cache depends on the product of the average photon number and detector quantum efficiency .in fact , the probability of a positive detection per each light pulse is given by .each time a pulse is detected , we have 1 useful trigger and useless triggers , which fall inside the detector s dead time .these triggers have to be removed , either by post - processing or by disabling the trigger before it arrives to the spad . in any case , given total triggers arriving at the measuring setup and positive detections , the percentage of useful triggers is given by : the average number of positive detection is found through a numerical simulation .the result is directly substituted in eq . to find the average percentage as function of and .the parameters of the simulation were the following : , , , , resulting in and .this last value has been used in fig .[ fig : triggers ] to plot the dotted curve corresponding to the theoretical model .the number of useful triggers per block of cache memory fluctuates in time .short - term fluctuations are caused by detector s random behavior while long - term fluctuation are caused by an imperfect photon source whose intensity fluctuates .the trigger - disabling technique makes the amount of useful triggers per block of cache memory independent of these factors , as showed in fig .[ fig : triggers ] , because only useful triggers are present in that case .the technique described in the previous section can be fruitfully implemented when two or more spads are used at the same time , for instance in experiments where coincidence detection is important , or in qkd implementations , where each detector is assigned with a logical value of one bit , ` 0 ' or ` 1 ' , and with a measurement basis , or .the sequence of the bits will constitute , after a classical distillation procedure , the secret quantum key used for secure telecommunications . for the moment we consider a practical setup with only two detectors , which we call respectively and , assigned with the value ` 0 ' or ` 1 ' of the key bit .such a description well adapts to what routinely happens in qkd implementations when , e.g. , the bb84 protocol is performed with an active choice of the basis ( see e.g. and ) . in this caseit is crucial for security that both detectors are active or inactive at the same time .if one detector is active and the other one is blind , e.g. during the dead - time following a positive detection event , then a security risk comes about . 
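the flavour of this risk can be conveyed with a toy monte carlo. the sketch below is purely illustrative and is not the simulation used in this paper: it assumes a poissonian click probability p = 1 - exp(-μη) per gated trigger, a dead-time equal to a whole number of trigger periods, no basis choice, and a naive sifting rule that keeps a bit whenever exactly one of the two detectors fires. under these assumptions, neighbouring sifted bits become strongly anti-correlated when the two detectors are gated independently, while the effect disappears when a shared trigger-disable is modelled.

```python
# Toy Monte Carlo of "self-blinding" with two gated SPADs (illustrative only;
# not the analysis used in the paper). Assumptions: Poissonian click probability
# per live gate, dead-time expressed as a whole number of trigger periods, and
# a naive sifting rule with no basis choice.
import math
import random

MU_ETA = 0.05                       # assumed product of mean photon number and efficiency
TRIGGER_RATE = 4e6                  # Hz, trigger rate used in the experiments above
DEAD_TIME = 10e-6                   # s, dead-time value quoted above
DEAD_GATES = round(DEAD_TIME * TRIGGER_RATE)   # gates blinded after each click (= 40)
N_GATES = 200_000
P_CLICK = 1.0 - math.exp(-MU_ETA)   # assumed Poissonian click probability per live gate

def sifted_bits(shared_disable):
    """Gate two SPADs; keep a bit whenever exactly one of them fires.
    With shared_disable=True, triggers are withheld from both detectors until
    both have recovered (a crude model of the trigger-disabling loop)."""
    blind = [0, 0]                  # remaining blind gates for detectors D0 and D1
    bits = []
    for _ in range(N_GATES):
        if shared_disable and max(blind) > 0:
            blind = [max(b - 1, 0) for b in blind]   # dead-time elapses, no gate sent
            continue
        clicks = [blind[i] == 0 and random.random() < P_CLICK for i in range(2)]
        for i in range(2):
            if clicks[i]:
                blind[i] = DEAD_GATES
            elif blind[i] > 0:
                blind[i] -= 1
        if clicks[0] != clicks[1]:                   # exactly one detector fired
            bits.append("0" if clicks[0] else "1")
    return "".join(bits)

for on in (False, True):
    s = sifted_bits(on)
    flips = sum(a != b for a, b in zip(s, s[1:]))
    print(f"trigger-disabling {'on ' if on else 'off'}: {len(s)} sifted bits, "
          f"fraction of anti-correlated neighbours = {flips / max(len(s) - 1, 1):.2f}")
```

with independent gating the fraction of anti-correlated neighbouring bits comes out far above the 0.5 expected from a random string, which is the non-randomness discussed below; with the shared disable it drops back to about 0.5.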
in the standard scenario of qkd there are two main users , traditionally called alice ( the transmitter ) and bob ( the receiver ) , who try to communicate privately over an insecure quantum channel , plus one eavesdropper , eve , who aims to steal information from the channel .eve can use any means allowed by the laws of physics to reach her goal .she can also exploit an imperfection of the setup to her own advantage .for example in it is described how to exploit the detectors dead - time to hack a particular qkd setup when the detectors are used in free - running mode .the same technique , which is a variant of the `` faked - state attack '' , can be adapted to hack also a setup based on gated - mode detectors , if a narrow temporal selection inside the gate is effected by the users to further clean their quantum signal .this technique , though very powerful , requires an active intervention by eve , who has to carefully design her light pulses and synchronize them with those transmitted by alice to bob .furthermore , it is not effective against a qkd setup based on gated - mode detectors in which there is no temporal selection inside the gate . in that case, the eavesdropping described in would cause an abnormal number of double counts , easily detectable by the legitimate users . in the following ,we restrict our attention to such a kind of qkd setup , using gated - mode spads with no temporal filter inside the gate .we describe a totally _ passive _ action by eve which is sufficient to create a security risk for the communication .our aim is not to provide an attack which gives eve 100 of the information sent by alice ; we only want to show that some extra bit could possibly be captured by eve without the legitimate users being aware of that .let us focus then on the typical situation of self - blinding : at bob s side one detector is active while the other one is blind .this can happen in a qkd setup with a trigger rate higher than the inverse of detector s dead - time , which we define here as .after the whole quantum communication is completed , bob announces on a public channel the addresses corresponding to his non - vacuum counts ; all the remaining addresses are associated to vacuum counts and are discarded by the users .eve will register all the counts announced by bob , in particular those which correspond to qubits distant in time less than .all these bits will be necessarily anti - correlated .in fact , suppose that at a certain time detector fires and bob registers a ` 0 ' .if a second event occurs at time with , it must come necessarily from , because is in its dead - time period .so bob will see a ` 1 ' .if another count occurs at time with it must necessarily be a ` 0 ' , and so on .the net result will be that with a small , but non - zero , probability there will be in the final key groups of bits which are anti - correlated ( e.g. 1010101 ... ) rather than perfectly random .this is already a security breach in the theory of qkd .in fact , the two requisites for unconditional secure communication are( i ) that the final key should be random and ( ii ) that it should be known only to the legitimate users .so the first requisite here is violated .now we want to show that also the second requisite of security can be violated by the passive strategy described above , i.e. 
some bits of the final key can be learned by eve without alice and bob being aware of that .in fact , after the addresses of the non - vacuum counts have been announced , the standard description of qkd prescribes that alice and bob proceed with two classical procedures known as error correction ( ec ) and privacy amplification ( pa ) .ec aims at correcting potential errors ( i.e. different bits ) in the users final keys . during thisprocedures a few bits are revealed on public by the users to localize and finally correct the errors .eventually the wrong bits and their positions will be known to eve too . with pa ,the users are supposed to reduce eve s knowledge about these bits to nearly zero .however it can happen that some of these bits fall into a group which contains anti - correlated bits , e.g. 10101 . in that caseit is plain that eve will immediately know all the members of the group as soon as she knows a single bit of the group .hence the pa will erase eve s knowledge about the single bit revealed , but not her knowledge about the remaining bits .the attack just described is entirely passive , as it is the result of a bad setup configuration by the users .for this reason we called it _self - blinding_. as already mentioned , self - blinding can occur when detector s dead - time is _ higher _ than the inverse of the trigger rate : the truer this inequality , the easier the self - blinding mechanism .it is worth noting that the above self - blinding condition , eq . , is easy to fulfill, so it could represent a common mistake when using the spads .the reason is that one usually sets the electronics for a long dead - time , to reduce the after - pulses , and at the same time for a high trigger speed , to increase the final transmission rate .one solution against self - blinding is to slow down the triggering rate of the qkd apparatus , until eq .is no more satisfied , but this dramatically reduces the total efficiency of the system .another possibility is to remove all the bits featuring a temporal distance less than by post - processing , but this can be very time - consuming . also , one could resort to alternative descriptions of qkd in which the bits for ec are encrypted , so the positions of the errors will be not known to eve at all .but this solves only the second flaw , not the first , i.e. , the bits of the final key would still be not purely random .our solution is based on hardware .we simply apply the trigger - disabling technique to guarantee that bob s detectors are both active or both inactive at the same time . in the next sectionwe describe how this can be put into practice and present the experimental results obtained with this technique .in order to disable the triggering clock in case of a pairwise detector configuration we used a modified version of the circuit shown in fig . [ fig_scheme ] .we connected both detectors outputs to the clock input port c of ffd through an or gate , and fed their trigger input with the same signal coming from clock output . to show the effectiveness of the trigger - disabling technique we applied the electronics to a real setup composed by two spads , one id201 , as before , and one id200 .both detectors are triggered at 4 mhz .we notice that the id201 detector is one of the receiving units in the `` all - vienna '' qkd system , used in two recent quantum networks . according to what reported in ref . , the system s average trigger rate was of 415 khz .. 
entails that self - blinding becomes important for that setup if a dead - time longer than 2.4 is used . on the other side ,the id200 detector was used in the qkd system described in , and in the setup for the asymmetric feedback adopted in . in ref . the trigger rate was 2.5 mhz and the dead - time 10 , so the self - blinding could have had an influence in that case .however , it should be noted that the qkd protocol used was not the bb84 but rather the lm05 , in which the two detectors are not directly associated to the logical bit value .hence the above analysis is not directly applicable to this case . in the present paper s experiment we studied two figures of the apparatus ,i.e. , the frequency of coincidence counts and the randomness of the final key . in both caseswe employed a light with intensity higher than the single - photon level , to simulate the detectors response under the eve attack described above . in the previous experimentthe value of was about .now it is . in order to see the frequency of coincidence counts , we collected more than events .we registered of coincidences with trigger disabling on and only with trigger disabling off .this confirms that our technique reduces the probability of a self - blinding , because the two detectors are always operational together .notice that the coincidence rate is not 100% even when trigger disabling is on , because the light is not intense enough .a much more intense light would cause 100% coincidence counts if the trigger disabling is on , but could also damage the very sensitive spad . on a second step we monitored the randomness of the strings obtained from detectors when trigger disabling was either on or off .specifically we assigned a click from detector ( ) and no click from detector ( ) with the value ` 0 ' ( ` 1 ' ) , and collected all the values so to form two strings , one corresponding to on and one to off .two short sequences extracted from such strings are reported in fig .[ fig : rand - nonrand ] ; firing and not firing ( empty circles ) or firing and not firing ( filled circles ) . _ top panel _ : a sequence of clearly non - random events , obtained when the trigger - disabling technique was off .the alternate occurrence of empty and filled circles is apparent . _bottom panel _ : a sequence of events looking like random , obtained when the trigger - disabling technique is on .the occurrence of relatively long sequences of empty and filled dots together with short sequences witnesses the random - like nature of the string . the events with both detectors firing have been removed for explicative purposes . for the measurement we employed a light with average photon - number .,title="fig:",width=264 ] + firing and not firing ( empty circles ) or firing and not firing ( filled circles ) . _ top panel _ : a sequence of clearly non - random events , obtained when the trigger - disabling technique was off .the alternate occurrence of empty and filled circles is apparent . _bottom panel _ : a sequence of events looking like random , obtained when the trigger - disabling technique is on .the occurrence of relatively long sequences of empty and filled dots together with short sequences witnesses the random - like nature of the string . the events with both detectors firing have been removed for explicative purposes . 
for the measurement we employed a light with average photon - number .,title="fig:",width=264 ] + the empty ( filled )circles correspond to the value ` 0 ' ( ` 1 ' ) .the difference is quite apparent .when trigger disabling is off ( upper string ) , the sequence is clearly non random , since almost always a single 0 is followed by a single 1 , and viceversa . on the contrary , when trigger disabling is on ( lower sequence ) , the occurrence of a 0 or a 1 is much less foreseeable .also , the presence of relatively long sub - strings filled with all 0 s or 1 s , is an additional evidence of a random behavior .note that while non - randomness can be easily demonstrated , for true randomness of finite sequences is not that easy since it does not exist a decisive test in this respect .nevertheless we performed the dieharder statistical tests on our strings , most of which were passed by the on string and none was passed by the off string .it could be argued that our choice is somewhat unusual in a qkd setup , where one has more often .our choice is motivated by the purpose of showing clearly in the experiment the effect of self - blinding .when is smaller , the consequences of self - blinding are smaller too , but they do not disappear as there still remains a non zero probability to find blocks in the final key containing anti - correlated , non random , bits .we introduced a fpga - driven technique to run a single - photon avalanche diode at its maximum trigger rate , regardless of dead - time limitation .while in standard situations some care should be paid when the trigger rate is higher than the inverse of dead - time , with our technique this hindrance is removed by a trigger - disabling loop which stops the trigger to and from the spad until it has properly recovered from the dead - time period .the presented circuit can also serve as a simple dead - time generator for single - photon detectors which are produced without their own generator .because of the absence of any futile counts , our technique is especially useful in case of implementation in embedded qkd systems as in this case one needs to record a huge amount of results in a limited local memory . moreover the technique allows to remove completely the post - processing related to futile zeroes discarding , what results in shorter acquisition time .we provided an experimental evidence of our technique applied to a single spad , by reporting the trigger distribution in the acquisition system s cache memory .additionally , we adapted the acquisition system to a pair of detectors used in coincidence and found an evident advantage of the trigger - disabling technique over a standard trigger usage in terms of distribution of the coincidence counts and in terms of randomness of the final string distilled from detectors counts .moreover , it removes at the roots the possibility of an external attack based on `` self - blinding '' , because it forces the detectors in the receiving unit to be active or inactive altogether , at the same time .the proposed technique can be easily implemented in many different setup by just adding a piece of electronics in front of a standard spad , thus making the detector response time as short as possible .this paves the way to a future straightforward implementation of this technique in integrated commercial qkd apparatuses .we thank l. widmer for his feedback on idq detectors and r. kumar for useful discussions .m.l . is supported by the grant c.f .81001910439 . 
here and throughout the paper the reader can think of single-photon-attenuated light as a coherent state with a small mean photon-number, like the light coming, e.g., from an attenuated pulsed diode-laser. this is a standard intensity in several qkd systems. in our setup the post-processing was performed on a microblaze architecture and took 99% of the total acquisition time. this means that of the 12 hours only about 7 minutes were really necessary for the experiment, the remaining time being devoted to post-processing. even though post-processing can certainly be done in a more efficient way, it cannot compete with the zero processing time assured by our hardware solution.
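with the same coherent-state picture, the fraction of useful triggers plotted in fig. [ fig : triggers ] can be estimated with a rough steady-state argument. the expression below is only a back-of-the-envelope approximation, not the formula or the numerical simulation used in the paper, and the value of μη is chosen purely so that the quoted 83.4% figure is roughly reproduced.

```python
# Rough steady-state estimate of the useful-trigger fraction (illustrative only;
# not the expression or the simulation used in the paper).
import math

def useful_trigger_fraction(mu_eta, trigger_rate_hz, dead_time_s):
    p_click = 1.0 - math.exp(-mu_eta)                   # Poissonian click probability per live gate
    dead_gates = round(dead_time_s * trigger_rate_hz)   # futile gates following each click
    # Each live gate is followed, on average, by p_click * dead_gates futile gates,
    # so the fraction of gates that are useful is roughly 1 / (1 + p_click * dead_gates).
    return 1.0 / (1.0 + p_click * dead_gates)

# Trigger rate and dead-time as quoted in the text; mu_eta is an assumed value.
print(f"{useful_trigger_fraction(mu_eta=0.005, trigger_rate_hz=4e6, dead_time_s=10e-6):.1%}")
```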
modern single-photon detectors based on avalanche photodiodes offer increasingly higher triggering speeds, thus fostering their use in several fields, prominently in the recent area of quantum key distribution. it is well known that after a detection event these detectors lose their single-photon sensitivity for a period of time generically called _dead time_. if the acquisition system connected to the detector is not properly designed, and the triggering function is not properly controlled, efficiency issues can arise when the triggering rate is faster than the inverse of the detector's dead-time. moreover, when this happens with two or more detectors used in coincidence, a security risk called `` self-blinding '' can jeopardize the distribution of a secret quantum key. in this paper we introduce trigger-disabling circuitry based on an fpga-driven feedback loop, so as to avoid the above-mentioned inconveniences. in the regime of single-photon-attenuated light, the electronics dynamically accepts a trigger only after the detector has completely recovered from its dead-time. this technique proves useful for operating detectors at their maximum speed and for increasing the security of a quantum key distribution setup. keywords: single-photon detector, quantum key distribution, quantum hacking, detector blinding, detector dead-time, high-speed detection, quantum efficiency.
high - energy physics has advanced over the years by the use of higher energy and higher intensity particle beams , more capable detectors and larger volumes of data .the tevatron collider at fermilab is used to study fundamental properties of matter by colliding protons and anti - protons at very high energy .the fermilab tevatron run ii project has increased the intensity and energy of the proton and anti - proton beams .the collider detector at fermilab ( cdf ) detector is a large general purpose cylindrical detector used to measure charged and neutral particles that result from the proton - anti - proton collision .the cdf detector has been upgraded to take advantage of the improvements in the accelerator .computing systems were also upgraded for processing larger volumes of data collected in run ii .the type of computing required for cdf data production can be characterized as loosely - coupled parallel processing .the data consists of a group of events " , where each event is the result of a collision of a proton and an anti - proton .a hardware and software trigger system is used to store and save as many of the most interesting collisions as possible .each event is independent in the sense that it can be processed through the offline code without use of information from any other event .events of a similar type are collected into files of a data stream .data is logged in parallel to eight data streams for final storage into a mass storage system .each file is processed through an event reconstruction program that transforms digitized electronic signals from the cdf sub - detectors into information that can be used for physics analysis .the quantities calculated include particle trajectories and momentum , vertex position , energy deposition , and particle identities .the cdf production farm is a collection of dual cpu pcs running linux , interconnected with 100 mbit and gigabit ethernet .this farm is used to perform compute and network intensive tasks in a cost - effective manner and is an early model for such computing .historically , fermilab has used clusters of processors to provide large computing power with dedicated processors ( motorola 68030 ) or commercial unix workstations .commodity personal computers replaced unix workstations in the late 1990s . the challenge in building and operatingsuch a system is in managing the large flow of data through the computing units .this paper will describe the hardware integration and software for operation of the cdf production farm .the first section will describe the requirements and design goals of the system .next , the design of the farm , hardware and software , will be given .the software system will be described in the next section .next , the performance and experiences with the system , including prototypes , will be described .finally , conclusions and general directions for the future are given .to achieve the physics goals of the cdf experiment at the fermilab tevatron , the production computing system is required to process the data collected by the experiment in a timely fashion .the cdf production farm is required to reconstruct the raw data with only a short delay that allows for the determination and availability of calibrations or other necessary inputs to the production executable .in addition the farm is expected to reprocess data and to process special data . to accomplish rapid data processing through the farms , adequate capacity in network and cpu is required . 
from 2001 through 2004 the cdf experiment collected a maximum of 75 events per second at a peak throughput of 20 mbyte/sec. the event processing requires 2-5 cpu seconds on a pentium iii 1 ghz pc; the exact number depends on the type of event, the version of the reconstruction code, and the environment of the collision. these numbers lead to a requirement equivalent to 190-375 pentium iii 1 ghz cpus, assuming 100% utilization of the cpus. the output of event reconstruction is split into many physics data-sets. the splitting operation is required to place similar physics data together on disk or tape files, allowing faster and more efficient physics analysis. the output event size is currently approximately the same as the input. each event is written 1.2 times on average because some events are written to more than one output data-set. therefore the system output capacity is also required to be approximately 20 mbyte/sec. in addition to providing sufficient data flow and cpu capacity for processing of data, the production farm operation is required to be easily manageable, fault-tolerant and scalable, with good monitoring and diagnostics. hardware and software options were explored to meet the requirements for the system; these included large symmetric multiprocessing (smp) systems, commercial unix workstations and alternative network configurations. prototype systems were built and tested before the final design was chosen and the production systems built. the cdf data production farm is constructed from cost-effective dual-cpu pcs. the farm consists of a large number of pcs that run the cpu-intensive codes (workers), pcs that buffer data into and out of the farm (readers and writers), and pcs providing various services (servers). the hardware architecture of the cdf production farm is shown in fig. [ fig : farm ]. it has two server nodes, cdffarm1 and cdffarm2. cdffarm1 is an sgi o2000 machine that hosts a batch submission system and a database server. cdffarm2 is a dual-pentium server running control daemons for resource management and job submission. these two servers have recently been replaced by a dell 6650 machine (cdffarm0). monitoring and control interfaces for farm operation include a java server connected to the control daemons and a web server for monitoring. the disk space is a `` dfarm '' file system, a distributed logical file system built from the ide hard-disks of all the dual-pentium nodes. the dfarm server is hosted on cdffarm1. job scheduling on the production farm is controlled by a batch management system called fbsng, developed by the computing division at fermilab. the cdf data handling group provides well-defined interfaces and operations to supply input data to the farm and to write output to a mass storage system (enstore). table [ tab : workers ] lists the farm nodes added over the years (xeon cpu power is scaled by 1.35 to the equivalent pentium iii cpu performance for cdf data reconstruction); the total in use in the summer of 2004 is 192 nodes (570 ghz). raw data from the experiment is first written to tape in the enstore mass storage system. raw data are streamed into eight data-sets listed in table [ tab : rawdata ]. these tapes are cataloged in the cdf data file catalog (dfc) as a set of tables in an oracle database (accessed via cdfora1 in fig. [ fig : farm ]).
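returning briefly to the capacity figures quoted at the start of this section, the cpu requirement can be cross-checked with a one-line estimate. the snippet below is only a back-of-the-envelope sketch and is not taken from the cdf computing model itself.

```python
# Back-of-the-envelope check of the reconstruction CPU requirement (illustrative only).
EVENT_RATE = 75.0                 # events per second at peak, as quoted above
CPU_SEC_PER_EVENT = (2.0, 5.0)    # seconds per event on a Pentium III 1 GHz PC, as quoted above

low, high = (EVENT_RATE * t for t in CPU_SEC_PER_EVENT)
print(f"equivalent Pentium III 1 GHz CPUs needed: {low:.0f} - {high:.0f}")
# -> 150 - 375; the 190-375 range quoted above corresponds to roughly 2.5-5 s per event.
```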
after the datais written to tape and properly cataloged , and once the necessary calibration constants exist , the data is available for reconstruction on the farms .the production farm is logically a long pipeline with the constraint that files must be handled in order .the input is fetched directly from enstore tapes and the outputs are written to output tapes .the data flow is illustrated in fig .[ fig : flow ] for the files moving through dfarm storage controlled by four production daemons .the daemons communicate with the resource manager daemon and the internal database to schedule job submission .the internal database is a mysql system used for task control , file - tracking , and process and file history .the dfc records are fetched at the beginning of staging input data .output files written to tapes are recorded in the dfc .job log files and other logs and files are collected to the user accessible fcdflnx3 node .operation status is monitored by a web server fnpcc .the operation daemons are configured specifically for production of a input `` data - set '' . for raw data ,each data stream is a data - set .the input files are sent to worker nodes for reconstruction .each worker node ( dual - cpu ) is configured to run two reconstruction jobs independently .an input file is approximately 1 gbyte in size and is expected to run for about 5 hours on a pentium iii 1 ghz machine .the output is split into multiple files , with each file corresponding to a data - set defined by the event type in the trigger system .an event may satisfy several trigger patterns and is consequently written to multiple data - sets that are consistent with that event s triggers .each data - set is a self - contained sample for physics analysis .the total number of output data - sets is 43 for the eight data streams used in the most recent trigger table .the cdf farm processing system ( fps ) is the software that manages , controls and monitors the cdf farm .it has been designed to be flexible and allows configuration for production of data - sets operated independently in parallel farmlets .a farmlet contains a subset of the farm resources specified for the input data - set , the executable and the output configuration for concatenation .since a farmlet is an independent entity , it is treated as such , that is , its execution is handled by its own daemons taking care of consecutive processing in production and its records are written in the internal database . 
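since fps is primarily coded in python (as described later), a highly simplified sketch of the kind of per-file bookkeeping a farmlet performs is given here. it is purely illustrative: the stage names, the in-memory structures and the consistency check are hypothetical and do not reproduce the actual fps code or its mysql schema.

```python
# Hypothetical, highly simplified sketch of per-file bookkeeping in a farmlet
# (illustrative only; not the actual FPS code or its MySQL tables).
from dataclasses import dataclass, field

STAGES = ("staged", "reconstructed", "collected", "concatenated")  # assumed stage names

@dataclass
class FileRecord:
    name: str
    step: int = -1                                   # -1 means "not yet staged"
    history: list = field(default_factory=list)

    @property
    def stage(self):
        return "new" if self.step < 0 else STAGES[self.step]

class Farmlet:
    """One input data-set processed by its own chain of daemons, in order."""
    def __init__(self, dataset):
        self.dataset = dataset
        self.files = {}

    def register(self, name):
        self.files[name] = FileRecord(name)

    def advance(self, name, stage):
        rec = self.files[name]
        # Files move through the pipeline strictly in stage order.
        if rec.step + 1 >= len(STAGES) or STAGES[rec.step + 1] != stage:
            raise ValueError(f"{name}: cannot go from '{rec.stage}' to '{stage}'")
        rec.history.append(rec.stage)                # keep the history, as the real DB does
        rec.step += 1

farmlet = Farmlet("stream_b")
farmlet.register("b0001.raw")
for s in STAGES:
    farmlet.advance("b0001.raw", s)
print(farmlet.files["b0001.raw"].name, farmlet.files["b0001.raw"].history)
```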
the task control by fps for a farmlet is illustrated in fig .[ fig : bookkeeping ] .the daemons of the fps farmlets are : * * stager * is a daemon that is responsible for finding and delivering data from tapes based on user selection for a set of data files or run range in the data - set .jobs are typically submitted one `` file - set '' at a time .a file - set is a collection of files with a typical size of 10 gbyte .the stager fetches dfc records for input and checks that proper calibration constants are available .the staging jobs are submitted to the input i / o nodes and the file - sets are copied to their scratch area , and afterward to dfarm .* * dispatcher * submits jobs through the batch manager to the worker nodes and controls their execution .it looks for the staged input file , which is then copied into the worker scratch area .the binary tarball ( an archive of files created with the unix tar utility ) containing the executable , complete libraries , and control parameter files are also copied .this allows the reconstruction program to run locally on the worker nodes and the output files , of various sizes from 5 mbyte to 1 gbyte , are written locally . at the end of the job the output filesare then copied back to dfarm . in case of abnormal system failure ,job recovery is performed and the job is resubmitted . * * collector * gathers any histogram files , log files and any additional relevant files to a place where members of the collaboration can easily access them for the need of validation or monitoring purposes . * * concatenator * writes the output data that is produced to the selected device ( typically the enstore tape ) in a timely organized fashion .it checks the internal database records for a list of files to be concatenated into larger files with a target file size of 1 gbyte .it performs a similar task as the dispatcher , with concatenation jobs submitted to output nodes .the output nodes collect files corresponding to a file - set size ( gbyte ) from dfarm to the local scratch area , execute a merging program to read events in the input files in increasing order of run and section numbers .it has a single output truncated into 1 gbyte files .these files are directly copied to tapes and dfc records are written .since all of the farmlets share the same sets of computers and data storage of the farm , the resource management is a vital function of fps for distribution and prioritization of cpu and dfarm space among the farmlets .the additional daemons are : * * resource manager * controls and grants allocations for network transfers , disk allocations , cpu and tape access based on a sharing algorithm that grants resources to each individual farmlet and shares resources based on priorities .this management of resources is needed in order to prevent congestion either on the network or on the computers themselves and to use certain resources more effectively . * * dfarm inventory manager * controls usage of the distributed disk cache on the worker nodes that serves as a front - end cache between the tape pool and the farm . 
* * fstatus * is a daemon that checks periodically whether all of the services that are needed for the proper functioning of the cdf production farm are available and to check the status of each computer in the farm .errors are recognized by this daemon and are reported either to the internal database which can be viewed on the web or through the user interfaces in real time .errors can also be sent directly to a pager with a copy to an e - mail address that is registered as the primary recipient of these messages .the fps framework is primarily coded in python .it runs on one of the server computers ( cdffarm2 ) and depends on the kernel services provided by cdffarm1 , namely the fbsng batch system , the fipc ( farm interprocess communication ) between the daemons and dfarm server governing available disk space on the worker nodes .daemons have many interfacing components that allow them to communicate with the other needed parts of the offline architecture of the cdf experiment . those include mainly the dfc ( data file catalog ) and the calibration database . with hundreds of files being processed at the same time it is important to track the status of each file in the farm .file - tracking is an important part of fps and the bookkeeping is based on a mysql database .the database stores information about each individual file , process and the history of earlier processing .three tables are implemented for each farmlet : for stage - in of input files ; reconstruction and output files ; and the concatenation . the processing steps tracked by the book - keeping and records in each tableare illustrated in fig .[ fig : bookkeeping ] .once a file is successfully processed , its records are copied over to the corresponding history tables .the file status is used in order to control the flow of data and to make sure that files are not skipped or processed more than once .the mysql database also includes detailed information about the status of each file at every point as it passes through the system .this information is available through a web interface to the collaboration in real time .this database server was designed to serve thousands of simultaneous connections . for our applicationit is a perfect match .emphasis is put on automatic error recovery and minimal human interaction in the course of processing . 
with the help of information that is stored in the internal database ,the system is able in most cases to recover and return to the previously known state from which it can safely continue to operate .the daemons checking the file history in the database are not instrumented to detect an abnormal failure for a job in process or a file lost to network or hardware problems .the concatenator often has to wait for output file in order to combine files in order .this bottleneck can be a serious problem and is a major consideration for relaxing strict ordering of files to improve overall system performance .the fps system status is shown in real time on a web page that gives the status of data processing , flow of data , and other useful information about the farm and data processing .the web page is hosted on a dual pentium node ( fnpcc on fig .[ fig : farm ] ) connected to the farm switch .the web interface was coded in the php language and rrdtool for efficient storage and display of time series plots .the structural elements in the schema include output from fps modules , a parser layer that transforms data into a format suitable for rrdtool , a rrdtool cache that stores this data in a compact way , and finally the web access to rrd files and queries from mysql for real time display of file - tracking information .the java fps control interface was designed for platform independent access to production farm control using an internet browser .information transfer between the client and server over the network is done using iiop ( internet inter - orb protocol ) which is part of corba .it has proved to be stable , and there have been no problems with short term disconnections and reconnections .an xml processor is used to generate and interpret the internal representation of data .abstract internal representation of data is important to cope with changes in the fps system .a java programming language , java web start technology was used for implementation of a platform independent client .the cdf experiment collected data samples in the tevatron run ii commissioning run in october , 2000 and the beginning of proton - antiproton collisions in april , 2001 .these events were processed through the cdf production farms .the events collected during the commissioning run were processed through two versions of the reconstruction code .the early 2001 data , taken under various beam and detector conditions , consist of about 7.6 million events and these were processed with one or two versions of the reconstruction code .this processing experience gave some confidence that the farm had the capacity to handle the volume of data coming from the detector and also uncovered many operational problems that had to be solved .beginning in june , 2001 , both the tevatron and the cdf detector ran well and began to provide significant samples for offline reconstruction .this early data was written in 4 streams and the output of the farms was split into 7 output data - sets .the cdf experiment wrote data at a peak rate of 20 mbyte / sec , which met the design goal .the farms were able to reconstruct data at the same peak rate .the output systems of the farm were adjusted to increase their capacity to handle the large output of the farms .more staging disk was added to provide a larger buffer and additional tape - drives were added .beginning in early 2002 the cdf detector and accelerator had reached a point where data was being recorded in the 8 final data streams defined for run 2 and the output was split into the 
final physics data - sets ( approximately 50 different data - sets ) .data was processed as quickly as possible and was normally run through the farms within a few days of having the final calibrations .approximately 500 million events were collected and processed during this period .upgrades were made to the farm with the addition of new nodes for processing as well as improved i / o capability .these improvements helped to maintain the processing capability as well as to provide some capability to catch up when calibrations were not ready in time or when the data taking rate was high or when reprocessing was necessary . a major reprocessing of all data collected from the beginning of 2002 was begun in the fall of 2003 using the version of cdf code 5.1.1 .the output of this processing was later reprocessed with improved calibration ( version 5.3.1 ) of calorimetry and tracking leading to higher efficiencies .the reprocessing was launched in march , 2004 and the production farm operated at full capacity for a six week period . the main characteristics and performance of the farmis described in the following sections .the cpu speed and data through - put rate are the factors that determine the data reconstruction capacity of the production farm .the computing time required for an event depends on the event characteristics determined by the event trigger in different data streams .in addition , the intensity of the proton and antiproton beams matters .more intense beams lead to multiple events per beam crossing which in turn lead to more cpu time per event .inefficiency in utilizing cpu comes from the file transfer of the executable and data files to and from the worker scratch area .the event size and cpu time varies for different raw data streams . in fig .[ fig : cpuevt ] the cpu time per event is illustrated for reconstruction of cdf software version 5.3.1 .the cpu time on a dual pentium iii 1 ghz machine varies from 1 to 10 sec depending on the beam intensity and event size .the input data files are staged from enstore tapes .the rate of staging data depends on how fast the link to enstore movers is established .once a mover is allocated , staging a file - set of 10 gbyte takes about 20 minutes .the data transmission rate varies file by file , the commonly observed rate is around 10 mbyte / sec .output of concatenated files are copied to tapes .the effectiveness in staging data to a tape is a concern because of the limited dfarm space and output bandwidth . a concatenation job on the output node collects files of a data - set with close to 10 gbyte at a speed that may reach the maximum ide disk transfer speed of 40 mbyte / sec .it takes an average 10 minutes to copy all the files requested .the concatenation program reads the numerous small files and writes output that is split into into 1 gbyte files . on a pentium 2.6 ghz nodethe cpu time is about 24 minutes for processing 10 gbyte .the job continues by copying the output to enstore ( encp ) at an average rate of close to 20 mbyte / sec .the encp takes about 10 minutes for writing 10 gbyte .further delay may be caused by having more than one job accessing the same hard disk in dfarm , or waiting to write to the same physical tape .the output of reprocessing does not require concatenation , ( one - to - one processing with output file size of mbyte ) .therefore the operation has one fewer step .after the files are collected to output nodes , they are copied to enstore tapes . 
on averagethe stage - out takes 25 minutes for writing a file - set of 10 gbyte to enstore .the tape writing is limited to one mover per data - set at a time , to ensure that files are written sequentially on tape .a tape is restricted to files of the same data - set .the instantaneous tape writing rate is 30 mbyte / sec .however , the average rate drops to below 20 mbyte / sec because of latency in establishing connection to the mass storage system ( this includes mounting and positioning the tape establishing the end - to - end communication ) . running only one data - set onthe farm limits the capability of the farm . running a mix of jobs from different data - sets in parallel increases the through - put of the farm by increasing the output data rate . a concatenation job ( 25 min for 10 gbyte ) spends less than half its time accessing an enstore mover ( 10 min ) . when two jobs are running for the same data - setthere is idle time while the mover is waiting for files .the observed data transmission rate is 9 mbyte / sec per data - set .the idle time for the mover is eliminated by adding one more job ( three concatenation for a data - set ) , and the data transmission rate increases to 15 mbyte / sec .the data reprocessing was performed with the revised cdf software version 5.3.1 . to maximize the farm efficiencythe data reprocessing was performed on five farmlets with each farmlet processing one data - set .the tapes were loaded one data - set at a time , therefore farm cpu usage came in waves shared by a couple data - sets at a time .the cpu usage for the week of march 18 is shown in fig .[ fig : week_cpu ] . a lag in cpu utilizationwas observed when the farm switched to a new data - set , seen as the dips in cpu in fig .[ fig : week_cpu].a , because of lack of input files .file - sets are distributed almost in sequence on a tape the lag at the beginning of staging in a data - set is because the files requested are stored on the same tape , causing all the stage - in jobs to wait for one tape .overall the stage - in is effective in feeding data files to dfarm .the cpu usage varies for data - sets .the `` minimum bias '' data - set has smaller file sizes and the cpu per event is about 40% less than the average . when this data - set was processed , the stage - in rate was not able to keep up with the cpu consumption .the production farm operation is efficient . the cpu usage for the month of april 2004 is shown in fig .[ fig : month_cpu ] .each shaded area seen is one data - set being processed .the output data logging rate is shown in fig .[ fig : stat531 ] for the number of files , number of events , and total file size written to enstore tapes .compressed outputs were also created for selected data - sets . therefore the total events in output was increased by about 25% . the event size was reduced and resulted to a net reduction in storage by about 20% . on averagewe had a through - put of over 2 tbyte ( 10 million events ) per day to the enstore storage .the data logging lasted two extra weeks for a large b physics data - set that accounted for about 20% of the total cdf data .it was the latest data - set processed and the tape logging rate was saturated at about 800 gbyte per day . the production farm uses cron jobs to check the online database for newly acquired data .timely processing is critical for detector monitoring . 
the express stream -a processing is used for monitoring data quality and beam - line calibrations are performed using stream - g data .data - sets of stream - b and stream - g are used for additional calibrations .full data processing is then carried out after final calibration constants are available .the load on the farm for new data , at a logging rate of 10 pb per week , is less than half of the cpu capacity .the raw - data volume collected and processed are shown in fig .[ fig : dual - line3 ] . in february 2004one of the major detector components was unstable .raw data processing was held except for detector studies .meanwhile the farm was put in use for 5.3.1 reprocessing .raw - data processing resumed in early may 2004 .data collected after february 2004 were processed with preliminary calibrations .later it was reprocessed with refined calibrations .the reprocessing was started in october 2004 .the cdf production pc farms have been successfully prototyped , installed , integrated , commissioned and operated for many years .they have been successful in providing the computing processing capacity required for the cdf experiment in run ii .the success of this system has in turn enabled successful analysis of the wealth of new data being collected by the cdf experiment at fermilab . the system has been modified and enhanced during the years of its operation to adjust to new requirements and to enable new capabilities .the system will continue to be modified in the future to continue to serve the cdf collaboration as required .some of the modifications are simple upgrades of components with more capable ( faster , more capacious ) replacements .other modifications will affect the architecture of the system and quite likely will embrace distributed processing and the grid in some way .these developments will allow cdf to continue to process and analyze data through the end of the life of the experiment .
the data production farm for the cdf experiment is designed and constructed to meet the needs of run ii data collection at a maximum rate of 20 mbyte/sec during the run. the system is composed of a large cluster of personal computers (pcs) with a high-speed network interconnect and a custom-designed control system for the flow of data and the scheduling of tasks on this pc farm. the farm explores and exploits advances in computing and communication technology. the data processing has achieved a stable production rate of approximately 2 tbyte per day. the software and hardware of the cdf production farms have been successful in providing large computing and data throughput capacity to the experiment. pacs: 07.05-t. keywords: computer system; data processing
the _ sp theory of intelligence _ , and its realisation in the _ sp computer model _ , is a unique attempt to simplify and integrate observations and concepts across artificial intelligence , mainstream computing , mathematics , and human perception and cognition , with information compression as a unifying theme .this paper , which derives from with revisions and updates , describes how abstract structures and processes in the sp theory may be realised in terms of neurons , their interconnections , and the transmission of impulses between neurons .this part of the sp theory called _ sp - neural_may be seen as a tentative and partial theory of the representation and processing of knowledge in the brain . as such, it may prove useful as a source of ideas for theoretical and empirical investigations in the future .for the sake of clarity , the abstract parts of the theory , excluding sp - neural , will be referred to as `` sp - abstract '' .it is envisaged that sp - neural will be further developed in the form of a computer model . as with the existing computer model of sp - abstract, the development of this new computer model will help to guard against vagueness in the theory , it will serve as a means of testing ideas to see whether or not they work as anticipated , and it will be a means of demonstrating what the model can do , and validating it against empirical data. the next section says something about the theoretical orientation of this research .then sp - abstract will be described briefly as a foundation for the several sections that follow which describe aspects of sp - neural and associated issues .cosmologist john barrow has written that `` science is , at root , just the search for compression in the world '' , an idea which may be seen to be equivalent to occam s razor a good theory should combine conceptual _ simplicity _ with descriptive or explanatory _power_. this is because compression of any given body of information , * i * , may be seen as a process of reducing ` redundancy ' of information in * i * and thus increasing its ` simplicity ' , whilst retaining as much as possible of its non - redundant descriptive and explanatory ` power ' .this works best when * i * is large . but this has not always been observed in practice : newell urged researchers in psychology to address `` a genuine slab of human behaviour '' ; and mccorduck ( * ? ? ?* and 424 ) has described how research in artificial intelligence became fragmented into many narrow sub - fields . in the light of these observations , and in the spirit of research on `` unified theories of cognition '' and `` artificial general intelligence , '' , retrieved 2016 - 01 - 19 . ]the sp programme of research has attempted to simplify and integrate observations and concepts across a broad canvass , resisting the temptation to concentrate only on one small area . in connection with these ideas ,the name `` sp '' may be seen to be short for _ simplicity _ and _ power_. this is partly because , as we shall see , the sp theory , despite its relative simplicity , has quite a lot to say about a wide range of observations and concepts .but more importantly it is because information compression lies at the heart of how the sp system works .as a basis for the description of sp - neural , this section provides a brief informal account of sp - abstract . 
the theory is described most fully in and quite fully but more briefly in .details of other publications in the sp programme , many of them with download links , are shown on the website of cognitionresearch.org ( http://bit.ly/1mss5xt[bit.ly/1mss5xt ] ) .the origins of sp theory are mainly in a body of research by attneave , barlow and others suggesting that much of the workings of brains and nervous systems may be understood as compression of information , and my own research ( summarised in ) suggesting that , to a large extent , language learning may be understood in the same terms .there is more about the foundations of the theory in . in sp - abstract , all kinds of knowledgeare represented with _patterns _ , where a pattern is an array of atomic _ symbols _ in one or two dimensions . at present , the sp computer model .this version of the computer model is very similar to sp70 , described in ( * ? ? ?* sections 3.9.2 and 9.2 ) . ] works only with 1d patterns but it is envisaged that the model will be generalised to work with 2d patterns . in this connection , a ` symbol ' is simply a ` mark ' that can make a yes / no match with any other symbol no other result is permitted . in most of the examples shown in this paper ,symbols are shown as alphanumeric characters or short strings of characters but , when the sp system is used to model biological structures and processes , such representations may be interpreted as low - level elements of perception such as formants or formant ratios in the case of speech or lines and junctions between lines in the case of vision ( see also section [ sp - n_sensory_data_receptor_array_section ] ) . to help cut through mathematical complexities associated with information compression , the sp system sp - abstract and its realisation in the sp computer model is founded on a simple , ` primitive ' idea : that information may be compressed by finding full or partial matches between patterns and merging or ` unifying ' the parts that are the same .this principle``information compression via the matching and unification of patterns '' ( icmup)provides the foundation for a powerful concept of _ multiple alignment _ , borrowed and adapted from bioinformatics .the multiple alignment concept , outlined in section [ sp - a_multiple_alignment_section ] , below , is itself central in the workings of sp - abstract and is the key to versatility and adaptability of the sp system .it has the potential to be as significant for the understanding of ` intelligence ' in a broad sense as is dna for biological sciences . in themselves , sp patterns are not very expressive .but in the multiple alignment framework ( section [ sp - a_multiple_alignment_section ] ) they become a very versatile medium for the representation of diverse forms of knowledge . 
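as a toy illustration of icmup, the sketch below uses python's standard difflib to find matching segments between two sequences of symbols and to count the symbols saved by unifying them. it is illustrative only: the sp computer model relies on its own heuristic matching and scoring, not on this library, and the two phrases are chosen arbitrarily.

```python
# Toy illustration of "information compression via the matching and unification
# of patterns" (ICMUP); illustrative only, not the SP model's matching algorithm.
from difflib import SequenceMatcher

a = "t h e b l a c k c a t".split()
b = "t h e w h i t e c a t".split()

blocks = [m for m in SequenceMatcher(None, a, b, autojunk=False).get_matching_blocks()
          if m.size > 0]
merged = sum(m.size for m in blocks)                 # symbols that can be unified

print("matching segments:", [" ".join(a[m.a:m.a + m.size]) for m in blocks])
print(f"symbols before unification: {len(a) + len(b)}, after: {len(a) + len(b) - merged}")
```

running the sketch finds the shared segments 't h e' and 'c a t' and shows the saving obtained by storing each of them once rather than twice, which is the essence of the icmup principle.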
andthe building of multiple alignments , together with processes for unsupervised learning ( sections [ sp - a_early_learning_section ] and [ sp - a_later_learning_section ] ) , has proved to be a powerful means of modelling diverse kinds of processing .the two things together sp patterns and multiple alignment have the potential to be a `` universal framework for the representation and processing of diverse kinds of knowledge '' ( ufk ) , as discussed in .an implication of these ideas is that there would not , for example , be any difference between the representation and processing of non - syntactic cognitive knowledge and the representation and processing of the syntactic forms of natural language .a framework that can accommodate both kinds of knowledge is likely to facilitate their seamless integration , as discussed in section [ sp - a_evaluation_systems_section ] .the sp theory is conceived as a brain - like system that receives _ new _ patterns via its ` senses ' and stores some or all of them , in compressed form , as _ old _ patterns . in broad terms , this is how the system learns . in the sp system , all learning is ` unsupervised , ' , retrieved 2016 - 03 - 17 ] meaning that it does not depend on assistance by a ` teacher ' , the grading of learning materials from simple to complex , or the provision of ` negative ' examples of concepts to be learned meaning examples that are marked as ` wrong ' ( _ cf_. ) . notwithstandingthe importance of schools and colleges , it appears that most human learning is unsupervised .other kinds of learning , such as ` supervised ' learning ( learning from labelled examples ) , , retrieved 2016 - 03 - 17 . ] or ` reinforcement ' learning ( learning with carrots and sticks ) , , retrieved 2016 - 03 - 17 . ]may be seen as special cases of unsupervised learning ( * ? ? ?* section v ) . at the beginning of processing by the system , when the repository of old patterns is empty ,new patterns are stored as they are received but with the addition of system - generated ` i d ' symbols at the beginning and end . for example , a new pattern like `` t h e b i g h o u s e ` ' would be stored as an old pattern like `` a 1 t h e b i g h o u s e # a ` ' . here, the lower - case letters are atomic symbols that may represent actual letters but could represent basic elements of speech ( such as formant ratios or formant transitions ) , or basic elements of vision ( such as lines or corners ) , and likewise with other sensory data . later , when some old patterns have been stored , the system may start to recognise full or partial matches between new and old patterns .if a new pattern is exactly the same as an old pattern ( excluding the id - symbols ) , then frequency measures for that pattern and its constituent symbols are incremented .these measures , which are continually updated at all stages of processing , have an important role to play in calculating probabilities of structures and inferences and in guiding the processes of building multiple alignments ( section [ sp - a_multiple_alignment_section ] ) and unsupervised learning . with partial matches , the system will form multiple alignments like the one shown in figure [ partial_match_figure ] , with a new pattern in row 0 and an old pattern in row 1 . 
from a partial match like this ,the system creates old patterns from the parts that match each other and from the parts that do nt .each newly - created old pattern will be given system - generated id - symbols .the result in this case would be patterns like these : `` b 1 t h e # b ` ' , `` c 1 h o u s e # c ` ' , `` d 1 s m a l l # d ` ' , `` d 2 b i g # d ` ' .in addition , the system forms an abstract pattern like this : `` e 1 b # b d # d c # c # e ` ' which records the sequence [ `` b 1 t h e # b ` ' , ( `` d 1 s m a l l # d ` ' or `` d 2 b i g # d ` ' ) , `` c 1 h o u s e # c ` ' ] in terms the id - symbols of the constituent patterns . notice how `` s m a l l ` ' and `` b i g ` ' have both been given the id - symbol `` d ` ' at their beginnings and the id - symbol `` # d ` ' at their ends .these additions , coupled with the use of the same two id - symbols in the abstract pattern `` e 1 b # b d # d c # c # e ` ' has the effect of assigning `` s ma l l ` ' and `` b i g ` ' to the same syntactic category , which looks like the beginnings of the ` adjective ' part of speech . the overall result in this example is a collection of sp patterns that functions as a simple grammar to describe the phrases _ the small house _ and _ the big house_. in practice , the sp computer model may form many other multiple alignments , patterns and grammars which are much less tidy than the ones shown .but , as outlined in sections [ sp - a_multiple_alignment_section ] and [ sp - a_later_learning_section ] , the system is able to home in on structures that are ` good ' in terms of information compression . as we shall see ( sections [ sp - a_multiple_alignment_section ] , [ sp - a_evaluation_theory_section ] , and [ sp - n_non - syntactic_knowledge_section ] ) , sp patterns , within the sp system , are remarkably versatile and expressive , with at least the power of context - sensitive grammars ( * ? ?* chapter 5 ) .the multiple alignment shown in figure [ partial_match_figure ] is unusually simple because it contains only two patterns .more commonly , the system forms ` good ' multiple alignments like the one shown in figure [ fortune_brave_multiple_alignment_figure ] , with one new pattern ( in row 0 ) and several old patterns ( one in each of several other rows ) . as a matter of convention , the new pattern is always shown in row 0 , but the order of the old patterns across the other rows is not significant . a multiple alignment like the one shown in figure [ fortune_brave_multiple_alignment_figure ] is built in stages , using heuristic search at each stage to weed out structures that are ` bad ' in terms of information compression and retaining those that are ` good ' .problems of computational complexity are reduced or eliminated by a scaling back of ambition : instead of searching for theoretically - ideal solutions , one merely searches for solutions that are `` good enough '' . 
in this example , multiple alignment achieves the effect of parsing the sentence into parts and sub - parts , such as a sentence ( ` s ' ) defined by the pattern in row 6 , one kind of noun phrase ( ` np ' ) defined by the pattern that appears in row 5 , and another kind of noun phrase shown in row 8 , a verb phrase ( ` vp ' ) defined by the pattern in row 3 , nouns ( ` n ' ) defined by the patterns in rows 4 and 7 , and so on . but there is much more than this to the multiple alignment concept as it has been developed in the sp programme . it turns out to be a remarkably versatile framework for the representation and processing of diverse kinds of knowledge : non - verbal patterns and pattern recognition , logical and probabilistic kinds of ` rules ' and several kinds of reasoning , and more ( sections [ sp - a_evaluation_theory_section ] and [ sp - n_non - syntactic_knowledge_section ] ) . a point worth mentioning here is that , although the multiple alignment concept is entirely non - hierarchical , it can model several kinds of hierarchy and heterarchy ( section [ sp - a_evaluation_theory_section ] ) , as illustrated by the parsing example in figure [ fortune_brave_multiple_alignment_figure ] . and such hierarchies may not always be ` strict ' hierarchies because any pattern may be aligned with any other pattern and , within one multiple alignment , any pattern may be aligned with two or more other patterns . from a multiple alignment like the one shown in figure [ fortune_brave_multiple_alignment_figure ] , the sp system may derive a _ code pattern _ , a compressed encoding of the sentence , as follows : scan the multiple alignment from left to right , identifying the id - symbols that are _ not _ matched with any other symbol , and create an sp pattern from the sequence of such symbols ( a sketch of this procedure appears after the list below ) . in this case , the result is the pattern `` s 0 2 4 3 7 6 1 5 # s ` ' . this code pattern has several existing or potential uses including : * it provides a basis for calculating a ` compression score ' for the old patterns in the multiple alignment , meaning their effectiveness as a means of compressing the new pattern . compression scores of this kind have a role in sifting out one or more ` good ' grammars for any given set of new patterns . * if the code pattern is treated as a new pattern then , with the same old patterns as when the code pattern was produced , the sp system can recreate the original sentence , as described in section [ sp - n_output_section ] . * when sp - abstract is developed to take account of meanings as well as syntax , it is likely that each id - symbol in the code pattern will take on a dual role : representing each syntactic form ( word or other grammatical structure ) and representing the meaning of the given syntactic form . * it is envisaged that , with further development of the sp computer model , code patterns will enter into the learning process , as outlined in section [ sp - a_later_learning_section ] , next .
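the procedure described above for deriving a code pattern can also be sketched in a few lines of python . the sketch reduces a multiple alignment to an ordered list of ( symbol , is - id - symbol , matched ) triples , which is of course a drastic simplification of the real data structure ; it is offered only to make the left - to - right scan explicit .

# toy sketch of deriving a code pattern from a multiple alignment .
# the alignment is reduced to an ordered list of triples ; real
# multiple alignments are much richer than this .

def derive_code_pattern(alignment_symbols):
    """collect, in left-to-right order, the id-symbols that are not
    matched with any other symbol in the multiple alignment."""
    return [sym for sym, is_id, matched in alignment_symbols
            if is_id and not matched]

# a hand-made stand-in for a small alignment; only a few symbols shown
example = [
    ("s", True, False), ("0", True, False),
    ("np", True, True), ("2", True, False),
    ("t", False, True), ("h", False, True), ("e", False, True),
    ("#np", True, True),
    ("#s", True, False),
]
print(" ".join(derive_code_pattern(example)))   # -> s 0 2 #s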
as we saw in section [ sp - a_early_learning_section ] , the earliest stage of learning in sp - neural when the repository of old patterns is empty or nearly so is largely a matter of absorbing new information directly with little modification except for the addition of system - generated id - symbols .later , when there are more old patterns in store , the system begins to create old patterns from partial matches between new and old patterns .part of this process is the creation of abstract patterns that describe sequences of lower - level patterns .as the system begins to create abstract patterns , it will also begin to form multiple alignments like the one shown in figure [ fortune_brave_multiple_alignment_figure ] . and ,as it begins to form multiple alignments like that , it will also begin to form code patterns , as described in section [ sp - a_deriving_code_pattern_section ] . at all stages of learning , but most prominent in the later stages , is a process of inferring one or more _ grammars _ that are ` good ' in terms of their ability to encode economically all the new patterns that have been presented to the system . here ,a ` grammar ' is simply a collection of sp patterns .the term ` grammar ' has been adopted partly because of the origins of the sp system in research on the learning of natural language and partly because the term has come to be used in areas outside computational linguistics , such as pattern recognition .inferring grammars that are good in terms of information compression is , like the building multiple alignments , a stage - by - stage process of heuristic search through the vast abstract space of alternatives , discarding ` bad ' alternatives at each stage , and retaining a few that are ` good ' . as with the building of multiple alignments , the search aims to find solutions that are `` good enough '' , and not necessarily perfect .it is envisaged that the sp computer model will be developed so that , in this later phase of learning , learning processes will be applied to code patterns as well as to new patterns .it is anticipated that this may overcome two weaknesses in the sp computer model as it is now : that , while it forms abstract patterns at the highest level , it does not form abstract patterns at intermediate levels ; and that it does not recognise discontinuous dependencies in knowledge ( * ? ? ?* section 3.3 ) . in (* chapter 9 ) , there is a much fuller account of unsupervised learning in the sp computer model .the sp theory in its abstract form may be evaluated in terms of ` simplicity ' and ` power ' of the theory itself ( discussed in section [ sp - a_evaluation_theory_section ] next ) , in terms its potential to promote simplification and integration of structures and functions in natural or artificial systems that conform to the theory ( section [ sp - a_evaluation_systems_section ] below ) , and in comparison with other ai - related systems . in terms of the principles outlined in section [ theoretical_orientation_section ] , the sp system , with multiple alignment centre stage ,scores well .one relatively simple framework has strengths and potential in the representation of several different kinds of knowledge , in several different aspects of ai , and it has several potential benefits and applications : * _ representation and processing of diverse kinds of knowledge_. 
the sp system ( sp - abstract ) has strengths and potential in the representation and processing of : class hierarchies and heterarchies , part - whole hierarchies and heterarchies , networks and trees , relational knowledge , rules used in several kinds of reasoning , patterns with pattern recognition , images with the processing of images , structures in planning and problem solving , structures in three dimensions ( * ? ? ? * section 6 ) , knowledge of sequential and parallel procedures ( * ? ? ? * section iv - h ) . it may also provide an interpretive framework for structures and processes in mathematics ( * ? ? ? * section 10 ) . + there is a fuller summary in ( * ? ? ? * section iii - b ) and much more detail in . * _ strengths and potential in ai_. the sp theory has things to say about several different aspects of ai , as described most fully in and more briefly in . in addition to its capabilities in parsing , described above , the sp system has strengths and potential in the production of natural language , the representation and processing of diverse kinds of semantic structures , the integration of syntax and semantics , pattern recognition , computer vision and modelling aspects of natural vision , information retrieval , planning , problem solving , and several kinds of reasoning ( one - step ` deductive ' reasoning ; abductive reasoning ; reasoning with probabilistic decision networks and decision trees ; reasoning with ` rules ' ; nonmonotonic reasoning and reasoning with default values ; reasoning in bayesian networks , including ` explaining away ' ; causal diagnosis ; reasoning which is not supported by evidence ; and inheritance of attributes in an object - oriented class hierarchy or heterarchy ) . there is also potential for spatial reasoning ( * ? ? ? * section iv - f.1 ) and what - if reasoning ( * ? ? ? * section iv - f.2 ) . the system also has strengths and potential in unsupervised learning ( * ? ? ? * chapter 9 ) . * _ many potential benefits and applications_. potential benefits and applications of the sp system include : helping to solve nine problems associated with big data ; the development of intelligence in autonomous robots , with potential for gains in computational efficiency ; the development of computer vision ; it may serve as a versatile database management system , with intelligence ; it may serve as an aid in medical diagnosis ; and there are several other potential benefits and applications , some of which are described in . in short , the sp theory , in accordance with occam s razor , demonstrates a favourable combination of simplicity and power across a broad canvas . as in other areas of science , this should increase our confidence in the generality of the theory . closely related to simplicity and power in the sp theory are two potential benefits arising from the use of one simple format ( sp patterns ) for all kinds of knowledge and one relatively simple framework ( chiefly multiple alignment ) for the processing of all kinds of knowledge : * _ simplification_. those two features ( one simple format for knowledge and one simple framework for processing it ) can mean substantial simplification of natural systems ( brains ) and artificial systems ( computers ) for processing information . the general idea is that one relatively simple system can serve many different functions . in natural systems , there is a potential advantage in terms of natural selection , and in artificial systems there are potential advantages in terms of costs . * _ integration_.
the same two features are likely to facilitate the seamless integration of diverse kinds of knowledge and diverse aspects of intelligence pattern recognition , several kinds of reasoning , unsupervised learning , and so on in any combination , in both natural and artificial systems .it appears that that kind of seamless integration is a key part of the versatility and adaptability of human intelligence and that it will be essential if we are to achieve human - like versatility and adaptability of intelligence in artificial systems . with regard to the seamless integration of diverse kinds of knowledge ,this is clearly needed in the understanding and production of natural language . to understandwhat someone is saying or writing , we obviously need to be able to connect words and syntactic structures with their non - syntactic meanings , and likewise , in reverse , when we write or speak to convey some meaning .this has not yet been explored in any depth with the sp - abstract conceptual framework but preliminary trials with the sp computer model suggest that it is indeed possible to define syntactic - semantic structures in a set of sp patterns and then , with those patterns playing the role of old patterns , to analyse a sample sentence and to derive its meanings ( * ? ? ?* section 5.7 , figure 5.18 ) , and , in a separate exercise with the same set of old patterns , to derive the same sentence from a representation of its meanings ( _ ibid ._ , figure 5.19 ) . in several publications , such as , potential benefits and applications of the sp system have been described .more recently , it has seemed appropriate to say what distinguishes the sp system from other ai - related systems and , more importantly , to describe advantages of the sp system compared ai - related alternatives .those points have now been set out in some detail in _ the sp theory of intelligence : its distinctive features and advantages _ .it is pertinent to mention that section v of that paper discusses , in some detail , problems with ` deep learning in neural networks ' and shows how , in the sp system , they are overcome .since many ai - related systems may also be seen as models of cognitive structures and processes in brains , this paper may also be seen to demonstrate the relative strength of the sp system in modelling aspects of human perception and cognition .as we have seen in section [ sp - abstract_outline_section ] , sp - abstract is a relatively simple system with descriptive and explanatory power across a wide range of observation and phenomena in artificial intelligence , mainstream computing , mathematics , and human perception and cognition .how can such a system have anything useful to say about the extraordinary complexity of brains and nervous systems , both in their structure and in their workings ?an answer in brief is that sp - neural a realisation of sp - abstract in terms of neurons , their interconnections , and the transmission of impulses between neurons may help us to interpret neural structures and processes in terms of the relatively simple concepts in sp - abstract . to the extentthat this is successful , it may like any good theory in any field help us to understand empirical phenomena in our area of interest , it may help us to make predictions , and it may suggest lines of investigation. 
it is anticipated that sp - neural will work in broadly the same way as sp - abstract , but the characteristics of neurons and their interconnections raise some issues that do not arise in sp - abstract and its realisation in the sp computer model . these issues will be discussed at appropriate points in this and subsequent sections . this section introduces sp - neural in outline , and sections that follow describe aspects of the theory in more detail , drawing where necessary on aspects of sp - abstract that have been omitted from or only sketched in section [ sp - abstract_outline_section ] . figure [ the_brave_neural_figure ] shows in outline how a portion of the multiple alignment shown in figure [ fortune_brave_multiple_alignment_figure ] , with associated patterns and symbols , may be realised in sp - neural . [ figure [ the_brave_neural_figure ] : how a portion of the multiple alignment shown in figure [ fortune_brave_multiple_alignment_figure ] , with associated patterns and symbols , may be expressed in sp - neural as neurons and their inter - connections . the meanings of the conventions in the figure , and some complexities that are not shown in the figure , are explained in this main section and ones that follow . ] in the figure , ` sensory data ' at the bottom means the visual , auditory or tactile data entering the system which , in this case , corresponds with the phrase ` t h e b r a v e ' . in a more realistic illustration , the sensory data would be some kind of analogue signal . here , the letters are intended to suggest the kinds of low - level perceptual primitives outlined below . it is envisaged that , with most sensory modalities , the receptor array would be located in the primary sensory cortex . of course , a lot of processing goes on in the sense organs and elsewhere between the sense organs and the primary sensory cortices . but it seems that most of this early processing is concerned with the identification of the perceptual primitives just mentioned . as with sp - abstract , it is anticipated that sp - neural will , at some stage , be generalised to accommodate patterns in two dimensions , such as visual images , and then the sensory data may be received in two dimensions , as in the human eye . between the sensory data and the _ receptor array _ ( above it in the figure ) , there would be , first , cells that are specialised to receive particular kinds of input ( auditory , visual , tactile etc ) . these send signals to neurons that encode the sensory data as _ neural symbols _ , the neural equivalents of ` symbols ' in sp - abstract . in the receptor array , each letter enclosed in a solid ellipse represents a neural symbol , expressed as a single neuron or , more likely , a small cluster of neurons . as we shall see ( section [ sp - n_encoding_in_receptor_array_section ] ) , the reality is more complex , at least in some cases . in vision , neural symbols in the receptor array would represent such low - level features as lines , corners , colours , and the like , while in speech perception , they would represent such things as formants , formant ratios and transitions , plosive and fricative sounds , and so on . whether or how the sp concepts can be applied in the discovery or identification of features like these is an open question ( * ? ? ? * section 3.3 ) . for now , we shall assume that they can be identified and can be used in the creation and use of higher - level structures .
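to make the idea of a receptor array of neural symbols slightly more concrete , the following python sketch models it , very roughly , as a one - dimensional array of locations , each of which can be set to one symbol from a small alphabet of perceptual primitives ; this anticipates the ` explicit alternatives ' option discussed in section [ sp - n_encoding_in_receptor_array_section ] . the class and the alphabet are invented for illustration and are not part of sp - neural .

# toy model of a receptor array : a row of locations , each of which can
# be set ( 'excited' ) to one symbol from an alphabet of primitives .
# invented for illustration ; not part of the sp computer model .

ALPHABET = set("abcdefghijklmnopqrstuvwxyz")   # stand-ins for primitives

class ReceptorArray:
    def __init__(self, size):
        self.cells = [None] * size             # None = no symbol excited

    def receive(self, sensory_data):
        """encode incoming data by exciting one symbol per location."""
        for i, sym in enumerate(sensory_data[:len(self.cells)]):
            if sym in ALPHABET:
                self.cells[i] = sym

    def excited_symbols(self):
        return [(i, s) for i, s in enumerate(self.cells) if s is not None]

array = ReceptorArray(size=12)
array.receive(list("thebrave"))
print(array.excited_symbols())

a two - dimensional version , as envisaged for vision , would differ only in having a grid of locations rather than a single row .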
in the rest of figure [ the_brave_neural_figure ] , each broken - line rectangle with rounded corners represents a _ pattern assembly _ , corresponding to a ` pattern ' in sp - abstract . the word ` assembly ' has been adopted within the expression ` pattern assembly ' because the concept is quite similar to hebb s concept of a ` cell assembly ' , a cluster of neurons representing a concept or other coherent mental entity . differences between hebb s concept of a cell assembly and the sp concept of a pattern assembly are described in appendix [ cell_pattern_assemblies_appendix ] . within each pattern assembly , as represented in the figure , each character or group of characters enclosed in a solid - line ellipse represents a _ neural symbol _ which , as already mentioned , corresponds to a ` symbol ' in sp - abstract . as with neural symbols in the receptor array , it is envisaged that each neural symbol would comprise a single neuron or , more likely , a small cluster of neurons . it is supposed that , within each pattern assembly , there are lateral connections between neural symbols but these are not shown in the figure . it is envisaged that most pattern assemblies would represent knowledge that is learned and not inborn , and would be located mainly outside the primary sensory areas of the cortex , in other parts of the sensory cortices . pattern assemblies that integrate two or more sensory modalities may be located in the ` association ' areas of the cortex . research with fmri recordings from volunteers has revealed `` semantic maps '' that `` show that semantic information is represented in rich patterns that are distributed across several broad regions of cortex . furthermore , each of these regions contains many distinct areas that are selective for particular types of semantic information , such as people , numbers , visual properties , or places . we also found that these cortical maps are quite similar across people , even down to relatively small details . '' of course , this research says nothing about whether or not the knowledge is represented with pattern assemblies and their interconnections . but it does apparently confirm that knowledge is stored in several regions of the cortex and throws light on how it is organised . although most parts of the mammalian cerebral cortex have six layers and many convolutions , the cortex may be seen , topologically , as a sheet which is very much broader and wider than it is thick . correspondingly , it is envisaged that 1d and 2d pattern assemblies will be largely ` flat ' structures , rather like writing or pictures on a sheet of paper . that said , it is quite possible , indeed likely , that pattern assemblies would take advantage of two or more layers of the cortex , not just one . incidentally , since 2d sp patterns may provide a basis for 3d models , as described in ( * ? ? ? * sections 6.1 and 6.2 ) , flat neural structures in the cortex may serve to represent 3d concepts . in figure [ the_brave_neural_figure ] , the solid or broken lines that connect with neural symbols represent axons , with arrows representing the direction of travel of neural impulses .
where two or more connections converge on a neural symbol , we may suppose that , contrary to the simplified way in which the convergence is shown in the figure , there would be a separate dendrite for each connection . axons represented with solid lines are ones that would be active when the multiple alignment in figure [ fortune_brave_multiple_alignment_figure ] is in the process of being identified . broken - line connections show a few of the many other possible connections . as mentioned in section [ sp - n_pattern_assemblies_section ] , it is envisaged that there would be one or more neural connections between neighbouring neural symbols within each pattern assembly but these are not marked in the figure . compared with what is shown in the figure , it is likely that , in reality , there would be more ` levels ' between basic neural symbols in the receptor array and id - neural - symbols representing pattern assemblies for relatively complex entities like the words ` one ' , ` brave ' , ` the ' , and ` table ' , as shown in the figure . in this connection , it is perhaps worth emphasising that , as with the modelling of hierarchical structures in multiple alignments ( section [ sp - a_multiple_alignment_section ] ) , while pattern assemblies may form ` strict ' hierarchies , this is not an essential feature of the concept , and it is likely that many neural structures formed from pattern assemblies may be only loosely hierarchical or not hierarchical at all . given the foregoing account of how knowledge may be represented in the brain , a question that arises is `` are there enough neurons in the brain to store what a typical person knows ? '' this is a difficult question to answer with any precision but an attempt at an answer , described in ( * ? ? ? * section 11.4.9 ) , reaches the tentative conclusion that there are . in brief ( the arithmetic is repeated in the short sketch after this list ) : * given that published estimates of the number of neurons in the human brain span a wide range , we may estimate , via calculations given in ( * ? ? ? * section 11.4.9 ) , that the ` raw ' storage capacity of the brain is between approximately 1000 mb and 10,000 mb . * given a conservative estimate that , using sp compression mechanisms , compression by a factor of 3 may be achieved across all kinds of knowledge , our estimates of the storage capacity of the brain will range from about 3000 mb up to about 30,000 mb . * assuming : 1 ) that the average person knows only a relatively small proportion of what is contained in the _ encyclopaedia britannica _ ( eb ) ; 2 ) that the average person knows lots of ` everyday ' things that are _ not _ in the eb ; 3 ) that the ` everyday ' things that we _ do _ know are roughly equal to the things in the eb that we _ do not _ know ; then ( 4 ) , we may conclude that the size of the eb provides a rough estimate of the volume of information that the average person knows . * the eb can be stored on two cds in compressed form . assuming that most of the space is filled , this equates to 1300 mb of compressed information or approximately 4000 mb of information in uncompressed form . * this 4000 mb estimate of what the average person knows is the same order of magnitude as our range of estimates ( 3000 mb to 30,000 mb ) of what the human brain can store . * even if the brain stores two or three copies of its compressed knowledge to guard against the risk of losing it , or to speed up processing , or both , our estimate of what needs to be stored is still within the 3000 mb to 30,000 mb range of estimates of what the brain can store .
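for readers who like to see the arithmetic spelled out , the following short python sketch repeats the calculation in the list above , using only the figures quoted there ( the raw - capacity range , the factor - of - 3 compression , and the two - cd estimate for the eb ) ; the variable names are , of course , invented .

# the back-of-envelope storage estimate from the list above , using
# only the figures quoted there . names are invented for illustration .

raw_capacity_mb = (1_000, 10_000)        # 'raw' storage capacity of the brain
compression_factor = 3                   # conservative SP compression estimate
effective_mb = tuple(x * compression_factor for x in raw_capacity_mb)
print(effective_mb)                      # (3000, 30000)

eb_compressed_mb = 1_300                 # Encyclopaedia Britannica on two CDs
eb_uncompressed_mb = eb_compressed_mb * compression_factor
print(eb_uncompressed_mb)                # 3900, i.e. roughly 4000 mb

# even with two or three stored copies , the total stays within range
for copies in (2, 3):
    total = eb_uncompressed_mb * copies
    print(copies, total, effective_mb[0] <= total <= effective_mb[1])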
in broad terms, it is envisaged that , for a task like the parsing of natural language or pattern recognition : 1 .sp - neural will work firstly by receiving sensory data and interpreting it as neural symbols in the receptor array with excitation of the neural symbols that have been identified .+ excitatory signals would be sent from those excited neural symbols to pattern assemblies that can receive signals from them directly . in figure[ the_brave_neural_figure ] , these would be all the pattern assemblies except the topmost pattern assembly .+ within each pattern assembly , excitatory signals will spread laterally via the connections between neighbouring neural symbols .+ pattern assemblies would become excited , roughly in proportion to the number of excitatory signals they receive .2 . at this stage, there would be a process of selecting amongst pattern assemblies to identify one or two that are most excited .3 . from those pattern assemblies more specifically , the neural id - symbols at the beginnings and ends of those pattern assemblies excitatory signals would be sent onwards to other pattern assemblies that may receive them . in figure [ the_brave_neural_figure ] , this would be the topmost pattern assembly ( that would be reached immediately after the first pass through stages 2 and 3 ) . + as in stage 1 , the level of excitation of any pattern assembly would depend on the number of excitatory signals it receives , but building up from stage to stage so that the highest - level pattern assemblies are likely to be most excited .4 . repeat stages 2 and 3 until there are no more pattern assemblies that can be sent excitatory signals .the ` winning ' pattern assembly or pattern assemblies , together with the structures below them that have , directly or indirectly , sent excitatory signals to them , may be seen as neural analogues of multiple alignments ( namas ) , and we may guess that they provide the best interpretations of a given portion of the sensory data . if the whole sentence , `` f o r t un e f a v o u r s t h e b r a v e ` ' , is processed by sp - neural with pattern assemblies that are analogues of the sp patterns provided for the example shown in figure [ fortune_brave_multiple_alignment_figure ] , we may anticipate that the overall result would be a pattern of neural excitation that is an analogue of the multiple alignment shown in that figure .when a neural symbol or pattern assembly has been ` recognised ' by participating in a winning ( neural ) multiple alignment , we may suppose that some biochemical or physiological aspect of that structure is increased as an at least approximate measure of the frequency of occurrence of the structure , in accordance with the way in which sp - abstract keeps track of the frequency of occurrence of symbols and patterns ( section [ sp - a_early_learning_section ] ) .some further possibilities are discussed in sections [ sp - n_more_detail_section ] and [ sp - n_inhibition_section ] .the bare - bones description of sp - neural in section [ sp - neural_intro_section ] is probably inaccurate in some respects and is certainly too simple to work effectively .this section and the ones that follow describe some other features which are likely to figure in a mature version of sp - neural , drawing on relevant empirical evidence where it is available . with regard to the encoding of information in the receptor array, it seems that the main possibilities are these : 1 . _explicit alternatives_. 
for the receptor array to work as described in section [ sp - neural_intro_section ] , it should be possible to encode sensory inputs with an ` alphabet ' of alternative values at each location in the array , in much the same way that each binary digit ( bit ) in a conventional computer may be set to have the value 0 or 1 , or how a typist may enter any one of an alphabet of characters at any one location on the page . at each location in the receptor array , each option may be provided in the form of a neuron or small cluster of neurons . here , there seem to be two main options : a . _ horizontal distribution of alternatives_. the several alternatives may be distributed ` horizontally ' , in a plane that is parallel to the surface of the cortex . b . _ vertical distribution of alternatives_. the several alternatives may be distributed ` vertically ' between the outer and inner surfaces of the cortex , and perpendicular to those surfaces . 2 . _ implicit alternatives_. at each location there may be a neuron or small cluster of neurons that , via some kind of biochemical or neurophysiological process , may be ` set ' to any one of the alphabet of alternative values . 3 . _ rate codes_. something like the intensity of a stimulus may be encoded via `` an interaction between [ the ] firing rates and the number of neurons [ that are ] activated by [ the ] stimulus . '' 4 . _ temporal codes_. a stimulus that varies with time may be encoded via `` the time - varying pattern of activity in small groups of receptors and central neurons . '' ( _ ibid . _ ) . in support of option 1.a , there is evidence that neurons in the visual cortex ( of cats ) are arranged in columns perpendicular to the surface of the cortex , where , for example , all the neurons in a given column respond most strongly to a line at one particular angle in the field of view , that within a ` hypercolumn ' containing several columns the preferred angle increases progressively from column to column , and that there are many hypercolumns across the primary visual cortex . `` hubel and wiesel point out that the organization their results reveal means that each small region , about at the surface , contains a complete sequence of ocular dominance and a complete sequence of orientation preference . '' . leaving out the results for ocular dominance , these observations are summarised schematically in figure [ receptor_array_detail_figure ] . in terms of this scheme , the way in which the receptor array is shown in figure [ the_brave_neural_figure ] is a considerable simplification : each neural symbol in the receptor array in that figure should really be replaced by a hypercolumn . with something like the intensity of a stimulus , it seems that , at least in some cases : `` ... activity in one particular population of somatosensory neurons ... leads the cns to interpret it as painful stimulus .... '' , while `` an entirely separate population of neurons ... would signal light pressure .
''since it is likely that relevant receptors appear repeatedly across one s skin , this appears to be another example of option 1.a .there seems to be little evidence of encoding via option 1.b .indeed , since the concept of a cortical column is , in effect , defined by the fact that all the neurons in any one column have the same kind of receptive field , this seems to rule out the 1.b option ( see also section [ why_multiple_neurons_section ] ) .but , with respect to option 2 , it appears that in some cases , as noted above , the intensity of a stimulus may be encoded via the rates of firing of neurons , together with the numbers of neurons that are activated ( option 3 ) . and, since we can perceive and remember time - varying stimuli such as the stroking of a finger across one s skin , or the rising or falling pitch of a note , some kind of temporal encoding must be available ( option 4 ) . here , it must be acknowledged that options 3 and 4 appear superficially to be outside the scope of the sp theory , in view of the emphasis in many examples on discrete atomic symbols .but , as we know from the success of digital recording , or indeed digital computing , any continuum may be encoded digitally , in keeping with the digital nature of the sp theory . how the sp theory may be applied to the digital encoding and processing of continua has been discussed elsewhere in relation to vision and the development of autonomous robots .as we have seen ( section [ sp - n_encoding_in_receptor_array_section ] ) , some aspects of vision are mediated via columns of neurons in the primary visual cortex in which each column contains many neurons with receptive fields that are all the same , all of them responding , for example , to a line in the visual field with a particular orientation .why , at each of several locations across the visual cortex , should there be many neurons with the same receptive field , not just one ? there seem to be two possible answers to this question ( and they are not necessarily mutually exclusive ) : * _ encoding of sensory patterns_. if , in the receptor array , we wish to encode two or more patterns such as `` m e t ` ' and `` h e m ` ' , they need to be independent of each other , with repetition of the `` e ` ' neural symbol , otherwise there will be the possibly unwanted implication that such things as `` m e m ` ' or `` h e t ` ' are valid patterns . *_ error - reducing redundancy_. at any given location in the receptor array , multiple instances of a given neural symbol may help to guard against the problems that may arise if there is only neural symbol at that location and if , for any reason , it becomes partially or fully disabled .with regard to the first point , the receptor array may have a useful role to play , _inter alia _ , as a short - term memory for many sensory patterns pending their longer - term storage ( section [ sp - n_speed_expressiveness_section ] ) . in vision, for example , the receptor array may store many short glimpses of a scene , as outlined in section [ sp - n_we_see_less_than_we_think_section ] , until such time as further processing may be applied to weld the many glimpses into a coherent structure ( _ ibid . 
_ ) and to transfer that structure to longer - term memory . section [ sp - n_neural_processing_section ] suggests that normally , at some early stage in sensory processing , raw sensory data is encoded in terms of the excitation of neural symbols in a receptor array , then excited neural symbols send excitatory signals to appropriate neural symbols within pattern assemblies , and pattern assemblies that are sufficiently excited send excitatory signals on to other pattern assemblies , and so on . as we shall see ( section [ sp - n_inhibition_section ] ) , it is likely that , in this processing , there will also be a role for inhibitory processes . at first sight , it may be thought that , in the same way that each location in the receptor array should provide an alphabet of alternative encodings ( section [ sp - n_encoding_in_receptor_array_section ] ) , the same should be true for the location of each neural symbol within each pattern assembly . but if a neural symbol in a pattern assembly ( let us call it ` ns1 ' ) receives signals only from neural symbols in the receptor array that represent a given feature , let us say , ` a ' , then , in accordance with the ` labelled line ' principle , ns1 also represents ` a ' . for most sensory modalities , this principle applies all the way from each sense organ , through the thalamus , to the corresponding part of the primary sensory cortex . [ footnote : `` ... are preserved so that there is separation of neurons providing touch information from the arm versus from the leg and of neurons responding to low versus high sound frequencies .... '' also , `` nuclei in the central pathways often contain multiple maps . '' but `` the functional significance of multiple maps in general , however , remains to be clarified . '' ( _ ibid . _ ) ] it seems reasonable to suppose that the same principle will apply onwards from each primary sensory cortex into non - primary sensory cortices and non - sensory association areas . in sp - neural , as in sp - abstract and the sp computer model , the process of matching one pattern with another should respect the orderings of symbols . for example , `` a b c d ` ' matched with `` a b c d ` ' should be rated more highly in terms of information compression than , for example , `` a b c d ` ' matched with `` c a d b ` ' .
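the point about order - respecting matching can be illustrated with a very small python sketch , which scores a candidate pairing of two symbol sequences by the length of their longest common subsequence ; this is only a stand - in for the more elaborate scoring used in the sp computer model .

# order-respecting matching : score two symbol sequences by the length
# of their longest common subsequence . a stand-in for the SP model's
# own scoring , used here only to illustrate the point about ordering .

def order_respecting_score(xs, ys):
    # classic dynamic-programming longest-common-subsequence length
    rows = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs, 1):
        for j, y in enumerate(ys, 1):
            rows[i][j] = (rows[i - 1][j - 1] + 1 if x == y
                          else max(rows[i - 1][j], rows[i][j - 1]))
    return rows[-1][-1]

print(order_respecting_score("abcd", "abcd"))   # 4 : all symbols match in order
print(order_respecting_score("abcd", "cadb"))   # 2 : only two symbols can be aligned in order

how such order - respecting matching might be achieved with neurons is a separate question , which is taken up immediately below .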
it appears that this problem may be solved by the adoption , within sp - neural , of the following feature of natural sensory systems : `` receptors within [ the retina and body surface ] communicate with ganglion cells and those ganglion cells with central neurons in a strictly ordered fashion , such that relationships with neighbours are maintained throughout . this type of pattern , in which neurons positioned side by side in one region communicate with neurons positioned side - by - side in the next region , is called a _ topographic pattern_. '' ( emphasis in the original ) . a prominent feature of human visual perception is that we can recognise any given entity over a wide range of viewing distances , with correspondingly wide variations in the size , on the retina , of the image of that entity . for any model of human visual perception that is based on a simplistic or naive process for the matching of patterns , this aspect of visual perception would be hard to reproduce or to explain . but the sp system is different : 1 ) knowledge of entities that we may recognise is always stored in a compressed form ; 2 ) the process of recognition is a process of compressing the incoming data ; 3 ) the overall effect is that an image of a thing to be recognised can be matched with stored knowledge of that entity , regardless of the original size of the image .
as an example , consider how the concept of an equilateral triangle ( as white space bounded by three black lines all of the same length ) may be stored and how an image of such a triangle may be recognised .regarding storage , there are three main redundancies in any image of that kind of triangle : 1 ) the white space in the middle may be seen as repeated instances of the symbol ` white ' ; 2 ) each of the three sides of the triangle may be seen as repeated instances of the symbol ` black ' or ` point ' ; and 3 ) there is redundancy in that the three sides of the triangle are the same .all three sources of redundancy may be encoded recursively as suggested in figure [ triangle_multiple_alignment_figure ] , which shows a multiple alignment modelling the recognition of a one - dimensional analogue of a triangle .column 0 shows information about the triangle to be recognised , comprising three `` corners ` ' and three sides of the triangle , each one represented by just two `` points ` ' .the pattern `` ln ln1 point ln # ln # ln ` ' in columns 1 and 2 is a self - referential and thus recursive definition of a line as a sequence of `` points ` ' .it is self - referential because , within the body of the pattern , it contains a reference to itself via the symbols at the beginning and end of the pattern : `` ln # ln ` ' . because there is no limit to this recursion, it may represent a line containing any number of points . in a similar way, a second side is encoded via the same pattern in columns 6 and 7 , and , again with the same pattern , the third line is encoded in columns 12 and 12 . in columns 4 , 9 and 15 in the figure , the pattern `` sg sg1 cr # cr ln # ln # sg ` ' shows one of the three elements of a triangle as a corner ( `` cr # cr ` ' ) followed by a line ( `` ln # ln ` ' ) . andthe recursion to encode multiple instances of that structure is in self - referential occurrences of the pattern `` tr tr1 sg # sg tr # tr # tr ` ' in columns 5 , 10 and 22 . strictly speaking ,the encoding is for a polygon , not a triangle , because there is nothing to stop the recursive repetition of `` sg sg1 cr # cr ln # ln # sg ` ' . and , in terms of the problem ,as described above , the representation is incomplete because there is nothing to show that the three sides of the triangle are the same .these encodings account for the redundancy in the repetition of points along a line and also the redundancy in the repetition of three sides of a triangle . in a 2d version ,they would also account for the redundancy in the white space within the body of the triangle , because they would allow most of the white space to be eliminated via shrinkage of the representation to the minimum needed to express the concept of a triangle .most people with normal vision have a powerful sense that their eyes are a window on to a kind of cinema screen that shows what we are looking at with great detail from left to right and from top to bottom .but research shows otherwise : * in the phenomenon of _ inattentional blindness _ , people may fail to notice salient things in their visual fields when they are looking for something else , even if they are trained observers . in a recent demonstration , radiologists were asked to search for lung - nodules in chest x - rays but many of them ( 83% ) failed to notice the image of a gorilla , 48 times the size of the average nodule , that was inserted into one of the radiographs . 
* in the phenomenon of _ change blindness _ , people often fail to notice large changes to visual scenes . for example , if a conversation between two people ( the investigator and the experimental subject ) is interrupted by a door being carried between them , the experimental subject may fail to notice , when the door has gone by , that the person they are speaking to is different from the person they were speaking to before . * although each of our eyes has a blind spot , we do not notice it , even when we are viewing things with one eye . apparently , our brains interpolate what is likely to be in the blind part of our visual field . it seems that part of the reason for this failure to see things is that photoreceptors are concentrated at the fovea , and cones are only found in that region ( _ ibid . _ ) , so that , with two eyes , we are , to a large extent , looking at the world through a keyhole composed of two circumscribed and largely overlapping views , one from each eye . it seems that our sense that the world is displayed to us on a wide and deep cinema screen is partly because our perception of any given scene draws heavily on our memories of similar scenes and partly because we can piece together what will normally be a partial view of what we are looking at from many short glimpses through the ` keyhole ' as we move our gaze around the scene . the sp theory provides an interpretation for these things as follows : * the theory provides an account in some detail of how new ( sensory ) information may be related to old ( stored ) information and how an interpretation of the new information may be built up via the creation of multiple alignments . * the theory provides an account of how we can piece together a picture of something , or indeed a 3d model of something , from many small but partially - overlapping views , in much the same way that : 1 ) with digital photography , it is possible to create a panoramic picture from several partially - overlapping images ; 2 ) the views in google s streetview are built up from many partially - overlapping pictures ; 3 ) a 3d digital image of an object may be created from partially - overlapping images of the object , taken from viewpoints around it . these things are discussed in ( * ? ? ? * sections 5.4 and 6.1 ) . with regard to the second point , it should perhaps be said that partial overlap between ` keyhole ' views is not an essential part of building up a big picture from smaller views . but if two or more views do overlap , it is useful if they can be stitched together , thus removing the overlap . and partial overlap may be helpful in establishing the relative positions of two or more views . as we have seen ( section [ sp - n_encoding_in_receptor_array_section ] ) , each hypercolumn in the primary visual cortex of cats occupies only a small area at the surface of the cortex , and it seems likely that each such hypercolumn provides a means of encoding one out of an alphabet of perceptual primitives , such as a line at a particular angle . assuming that this interpretation is correct , and if we view the primary visual cortex as if it was film in an old - style camera or the image sensor in a digital camera , it may seem that the encoding of perceptual primitives , with a whole hypercolumn for each one , is remarkably crude .
how could such a system , with the area of the primary visual cortex corresponding to the area of our field of view , create that powerful sense that , through our eyes , we see a detailed ` cinema screen ' view of the world ( section [ sp - n_we_see_less_than_we_think_section ] ) ? part of the answer is probably that we see much less than we think we see ( section [ sp - n_we_see_less_than_we_think_section ] ) . but it seems likely that another part of the answer is to reject the assumption that the area of the primary visual cortex corresponds to the area of our field of view . in the light of the remarks in section [ sp - n_we_see_less_than_we_think_section ] , it seems likely that , normally , in each of the previously - mentioned glimpses of a scene , most of the primary visual cortex is applied in the assimilation and processing of information captured by the fovea and , perhaps , parts of the retina that are very close to the fovea . in that case , what appears superficially to be a rather coarse - grained recording and analysis of visual data may actually be very much more detailed . as described in section [ sp - n_we_see_less_than_we_think_section ] , it seems likely that our view of any scene is built up partly from memories and partly from many small snapshots or glimpses of the scene . in terms of concepts that have been debated about how knowledge may be represented in the brain , the id - neural - symbols for any pattern assembly are very much like the concept of a _ grandmother cell _ , a cell or small cluster of cells in one s brain that represents one s grandmother so that , if the cell or cells were to be lost , one would lose the ability to recognise one s grandmother . it seems that the weight of observational and experimental evidence favours the belief that such cells do exist . this is consistent with the observation that people who have suffered a stroke or are suffering from dementia may lose the ability to recognise members of their close family . since sp - neural , like hebb s theory of cell assemblies , proposes that concepts are represented by coherent groups of neurons in the brain , it is very much a ` localist ' type of theory . as such , it is quite distinct from ` distributed ' types of theory that propose that concepts are encoded in widely - distributed configurations of neurons , without any identifiable location or centre . however , just to confuse matters , sp - neural does _ not _ propose that all one s knowledge about one s grandmother would reside in a pattern assembly for that lady . probably , any such pattern assembly would , in the manner of object - oriented design as discussed in section [ sp - n_non - syntactic_knowledge_section ] and illustrated in figure [ class_hierarchy_figure ] , be connected to and inherit features from a pattern assembly representing grandmothers in general , and from more general pattern assemblies such as pattern assemblies for such concepts as ` person ' and ` woman ' .
andagain , a pattern assembly for ` person ' would not be the sole repository of all one s knowledge about people .that pattern assembly would , in effect , contain ` references ' to pattern assemblies describing the parts of a person , their physiology , their social and political life , and so on .thus , while sp - neural is unambiguously localist , it proposes that knowledge of any entity or concept is likely to be encoded not merely in one pattern assembly for that entity or concept but also in many other pattern assemblies in many parts of the cortex , and perhaps elsewhere . with something simple like a touch on the skin , or a pin prick , it is not too difficult to see how the sensation may be transmitted to the brain via any one of many relevant receptors located in many different areas of the skin . but with something more complex , like an image on the retina of a table , a house , or a tree , and so on , it is less straightforward to understand how we might recognise such a thing in any part of our visual field .for each entity to be recognised , it seems necessary at first sight to provide connections , directly or indirectly , from every part of the receptor array to the relevant pattern assembly . in terms of the schematic representation shown in figure [ the_brave_neural_figure ] ,it would mean repeating the connections for `` t h e ` ' and `` b r a v e ` ' in each of many parts of the receptor array .bearing in mind the very large number of different things we may recognise , the number of necessary connections would become very large , perhaps prohibitively so .however , things may be considerably simplified via either or both of two provisions : 1 . for reasons outlined in section [ sp - n_we_see_less_than_we_think_section ], it seems likely that , with vision , we build up our perception of a scene , partly from memories of similar scenes and partly via many relatively narrow ` keyhole ' views of what is in front of us .if that is correct , and if , as suggested in section [ sp - n_resolution_problem_section ] , most of the primary visual cortex is devoted to analysing information received via the fovea and , perhaps , via parts of the retina that are very close to the fovea , then the need to provide for any given pattern in many parts of the receptor array may be greatly reduced . since , by moving our eyes , we may view any part of a scene , it is possible that any given entity would need only one or two sets of connections between the receptor array and the pattern assembly for that entity .as noted in section [ sp - n_connections_between_pattern_assemblies_section ] , it seems likely that , with regard to figure [ the_brave_neural_figure ] , there would , in a more realistic example , be several levels of structure between neural symbols in the receptor array and relatively complex structures like words .at the first level above the receptor array there would be pattern assemblies for relatively small recurrent structures , and the variety of such structures would be relatively small .this should ease any possible problems in connecting the receptor array to pattern assemblies . 
if it turns out that the number of necessary connections is indeed too large to be practical , or if there is empirical evidence against such numbers , then a possible alternative to what has been sketched in this paper is some kind of dynamic system for the making and breaking of connections between the receptor array and pattern assemblies . it seems likely that permanent or semi - permanent connections would be very much more efficient , and the balance of probabilities seems to favour such a scheme . in connection with positional invariance , it is relevant to note that `` ... lack of localization is quite common in higher - level neurons : receptive fields become larger as the features they represent become increasingly complex . thus , for instance , neurons that respond to faces typically have receptive fields that cover most of the visual space . for these cells , large receptive fields have a distinct advantage : the preferred stimulus can be identified no matter where it is located on the retina . '' a tentative and partial explanation of this observation is that the repetition of neurons that are sensitive to each of several categories of low - level feature , both in the receptor array and as id - neural - symbols for ` low - level ' pattern assemblies , is what allows positional invariance to develop at higher levels . as was emphasised in section [ sp - abstract_outline_section ] , the sp system ( sp - abstract ) has strengths and potential in the representation and processing of several different kinds of knowledge , not just the syntax of natural language . that versatility has been achieved using the mechanisms in sp - abstract that were outlined in that section . if those mechanisms can be modelled in sp - neural , it seems likely that the several kinds of knowledge that may be represented and processed in sp - abstract may also be represented and processed in sp - neural . as an illustration , figure [ class_hierarchy_figure ] ( which , compared with multiple alignments shown earlier such as figure [ fortune_brave_multiple_alignment_figure ] , has been rotated ; the choice between these alternative presentations of multiple alignments depends entirely on what fits best on the page ) shows a simple example of how , via multiple alignment , the sp computer model may recognise an unknown creature at several different levels of abstraction , and figure [ class_hierarchy_neural_figure ] suggests how part of the multiple alignment , with associated patterns , may be realised in terms of pattern assemblies and their inter - connections . figure [ class_hierarchy_figure ] shows the best multiple alignment found by the sp computer model with four symbols representing attributes of an unknown creature ( shown in column 0 ) and a collection of old patterns representing different creatures and classes of creature , some of which are shown in columns 1 to 4 , one pattern per column . in a more detailed and realistic example , symbols like `` eats ` ' , `` retractile - claws ` ' , and `` breathes ` ' would be represented as patterns , each with its own structure . from this multiple alignment , we can see that the unknown creature has been identified as an animal ( column 4 ) , as a mammal ( column 3 ) , as a cat ( column 2 ) and as a specific cat , ` tibs ' ( column 1 ) . it is just an accident of how the sp computer model has worked in this case that the order of the patterns across columns 1 to 4 of the multiple alignment corresponds with the level of abstraction of the classifications .
in general , the order of patterns in columns above 0 is entirely arbitrary , with no significance . [ figure [ class_hierarchy_neural_figure ] : how part of the multiple alignment in figure [ class_hierarchy_figure ] may be realised in sp - neural , showing two of the attributes from column 0 in the multiple alignment , with ` animal ' and ` mammal ' pattern assemblies corresponding to patterns from columns 4 and 3 , and with an associated pattern assembly for ` reptile ' . the conventions are the same as in figure [ the_brave_neural_figure ] . ] figure [ class_hierarchy_neural_figure ] shows how part of the multiple alignment from figure [ class_hierarchy_figure ] may be realised in sp - neural . the figure contains pattern assemblies for ` animal ' and ` mammal ' , corresponding to patterns from columns 4 and 3 of the multiple alignment . notice that the left - right order of the pattern assemblies is different from the order of the patterns in the multiple alignment , in accordance with the remarks , above , about the workings of the sp computer model , and also because there is no reason to believe that pattern assemblies are represented in any particular order . neural connections amongst the things that have been mentioned so far are very much the same as the alignments between symbols in figure [ class_hierarchy_figure ] : ` eats ' on the left connects with ` eats ' in the ` animal ' pattern assembly ; ` furry ' connects with ` furry ' in the ` mammal ' pattern assembly , and the ` a ' and ` # a ' connections for those two pattern assemblies correspond with the alignments of symbols in the multiple alignment . as in figure [ the_brave_neural_figure ] , some neural connections are shown with broken lines to suggest that they would be relatively inactive during the neural processing which identifies one or more ` good ' namas . and as before , it is envisaged that there would be one or more neural connections between each neural symbol and its immediate neighbours within each pattern assembly , but these are not marked in the figure . the inclusion of a pattern assembly for ` reptile ' in figure [ class_hierarchy_neural_figure ] , with some of its neural connections , is intended to suggest some of the processing involved in identifying one or more winning namas . in the same way that the pattern for ` mammal ' is receiving excitatory signals from the pattern for ` animal ' , one would expect excitatory signals to flow to pattern assemblies for the other main groups of animals , including reptiles . ultimately , ` reptile ' would fail to feature in any winning nama because of evidence from the neural symbols ` furry ' , ` purrs ' and ` white - bib ' .
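the excitation - and - selection cycle outlined in section [ sp - n_neural_processing_section ] can be illustrated , very roughly , with the class - hierarchy example just described . in the following python sketch , each pattern assembly is reduced to a bag of neural symbols , excitation is simply a count of the incoming symbols that a pattern assembly contains , and the most excited assemblies win ; all names are invented and nothing here is claimed to be neurally realistic .

# very rough sketch of the excitation-and-selection idea : pattern
# assemblies are bags of neural symbols , excitation counts matching
# incoming symbols , and the most excited assemblies 'win' .
# invented for illustration ; not the SP computer model or real neurons .

pattern_assemblies = {
    "animal":  {"eats", "breathes", "has-senses"},
    "mammal":  {"furry", "warm-blooded"},
    "reptile": {"scaly", "cold-blooded"},
    "cat":     {"purrs", "retractile-claws"},
    "tibs":    {"white-bib", "tabby"},
}

incoming = {"eats", "furry", "purrs", "white-bib"}   # the unknown creature

def excitation(assembly_symbols, incoming_symbols):
    return len(assembly_symbols & incoming_symbols)

scores = {name: excitation(syms, incoming)
          for name, syms in pattern_assemblies.items()}
winners = [name for name, s in sorted(scores.items(),
                                      key=lambda kv: kv[1],
                                      reverse=True) if s > 0]
print(scores)    # 'reptile' receives no excitation ...
print(winners)   # ... so it cannot feature in a winning structure

in a fuller version , excitation would of course build up over successive stages , from lower - level to higher - level pattern assemblies , as described in section [ sp - n_neural_processing_section ] .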
in sp - abstract ,this apparent difficulty has been overcome by saying that each sp pattern in a multiple alignment is an _ appearance _ of the pattern , not the pattern itself which allows us to have multiple instances of a pattern in a multiple alignment without breaking the rule that the repository of old patterns should contain only one copy of each pattern .but in sp - neural , it is not obvious how to create an ` appearance ' of a pattern assembly that is not also a physical structure of neurons and their interconnections but the speed with which we can understand natural language seems to rule out what appears to be the relatively slow growth of new neurons and their interconnections .how we can create new mental structures quickly arises again in other connections , as discussed in section [ sp - n_speed_expressiveness_section ] .if we duck these questions for the time being and return to parsing , it may be argued that with something like _ jack sprat could eat no fat , his wife could eat no lean _ , the first instance of _ could _ is represented only for the duration of the word by the stored pattern for _ could _ , so that the same pattern can be used again to represent the second instance of _ could_and likewise for _ eat _ and _ no_. but it appears that this line of reasoning does not work with a recursive structure like _ the very very very fast car_. native speakers of english know that with a phrase like _ the very very very fast car _ , the word _ very _ may in principle be repeated any number of times .this observation , coupled with the observation that recursive structures are widespread in english and other natural languages , suggests strongly that the most appropriate parsing of the phrase is something like the multiple alignment shown in figure [ ma_recursion_figure ] . here, the repetition of _ very _ is represented via three appearances of the pattern `` ri ri1 ri # ri i # i # ri ` ' , a pattern which is self - referential because the inner pair of symbols `` ri # ri ` ' can be matched with the same two symbols , one at the beginning of the pattern and one at the end . because the recursion depends on at least two instances of `` ri ri1 ri # ri i # i # ri ` ' being ` live ' at the same time , it seems necessary for sp - neural to be able to model multiple appearances of any pattern .that conclusion , coupled with the above - mentioned arguments from the speed at which we can speak , and the speed with which we can imagine new things , argues strongly that sp - neural and any other neural theory of cognition must have a means of creating new mental structures quickly .it seems unlikely that these things could be done via the growth of new neurons and their interconnections .the tentative answer suggested here is that , in processes like parsing or pattern recognition , including examples with recursion like that shown in figure [ ma_recursion_figure ] , virtual copies of pattern assemblies may be created and destroyed very quickly via the switching on and switching off of synapses ( see section [ sp - n_speed_expressiveness_section ] ) . 
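the flavour of a self - referential pattern like `` ri ri1 ri # ri i # i # ri ` ' can be suggested with a short sketch . this is not the sp model's alignment - building search , and for simplicity the rule is written with right recursion rather than in the exact form shown in figure [ ma_recursion_figure ] ; only the names ` ri ' and ` i ' and the example phrase are taken from the text , and the function names are illustrative assumptions .

\begin{verbatim}
# a toy, self-referential rule in the spirit of ``ri ri1 ri #ri i #i #ri`':
# an intensifier phrase RI consists of one 'very' (the pattern ``i i1 very #i`')
# followed, optionally, by another appearance of RI.

def match_I(words, pos):
    """match one instance of the 'very' pattern."""
    if pos < len(words) and words[pos] == "very":
        return pos + 1
    return None

def match_RI(words, pos):
    """match the self-referential pattern: one 'very' followed,
    optionally, by another appearance of RI."""
    after_I = match_I(words, pos)
    if after_I is None:
        return None
    after_RI = match_RI(words, after_I)   # the self-referential step
    return after_RI if after_RI is not None else after_I

sentence = "the very very very fast car".split()
end = match_RI(sentence, 1)               # start matching after 'the'
print(sentence[1:end])                    # -> ['very', 'very', 'very']
\end{verbatim}

the point of the sketch is simply that a single stored rule , by referring to itself , can account for any number of repetitions of _ very _ , which is the property that , in sp - neural , seems to require several simultaneous ` appearances ' of the corresponding pattern assembly .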
clearly , more detail is needed for a fully satisfactory answer .pending that better answer , figure [ neural_recursion_figure ] shows tentatively how recursion may be modelled in sp - neural , with neural symbols and pattern assemblies corresponding to selected symbols and patterns in figure [ ma_recursion_figure ] .on the left of that figure , we can see how the neural symbol `` very ` ' connects with a matching neural symbol in the pattern assembly `` i i1 very # i ` ' .further right , we can see how the first and last neural symbols in `` i i1 very # i ` ' connect with matching neural symbols in the pattern assembly `` ri ri1 ri # ri i # i # ri ` ' . in the figure , the self - referential nature of the pattern assembly `` ri ri1 ri # ri i # i # ri ` ' can be seen in the neural connection between `` ri ` ' at the beginning of that pattern assembly and the matching neural symbol in the body of the same pattern assembly , and likewise for `` # ri ` ' at the end of the pattern assembly .although it is unclear how this recursion may achieve the effect of repeated appearances of the pattern assembly at the speed with which we understand or produce speech , the analysis appears to be sounder than what is described in ( * ? ? ?* section 11.4.2 ) , especially figure 11.10 in that section .an inspection of figure [ the_brave_neural_figure]showing how , in sp - neural , a small portion of natural language may be analysed by pattern assemblies and their interconnections may suggest that if we wish to reverse the process to create language instead of analysing it then the innervation would need to be reversed : we may guess that two - way neural connections would be needed to support the production of speech or writing as well as their interpretation . buta neat feature of sp - abstract is that one set of old patterns , together with the processes for building multiple alignments , will support both the analysis and the production of language .so it is reasonable to suppose that if sp - neural works at all , similar duality will apply to pattern assemblies and their interconnections , without the need for two - way connections amongst pattern assemblies and neural symbols ( but see section [ sp - n_efferent_projections_section ] ) . of course , speaking or writing would need peripheral motor processes that are different from the peripheral sensory processes required for listening or reading , but , more centrally , the processes for analysing language or producing it may use the same mechanisms .the reason that sp - abstract , as expressed in the sp computer model , can work in ` reverse ' so to speak , is that , from a multiple alignment like the one shown in figure [ fortune_brave_multiple_alignment_figure ] , a code pattern like `` s 0 2 4 3 7 6 1 8 5 # s ` ' may be derived , as outlined in section [ sp - a_deriving_code_pattern_section ] .then , if that code pattern is presented to the sp system as a new pattern , the system can recreate the original sentence , `` f o r t u n e f a v o u r s t h e b r a v e ` ' , as shown in figure [ output_figure ] .it is likely that , in a more fully - developed account , code patterns would represent connections between syntax and semantics so that they may provide the means of generating sentences from meanings .as noted in section [ sp - a_evaluation_systems_section ] , preliminary trials show how , with the sp computer model , sentences may be generated from meanings and _ vice versa_. 
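the idea that one store of old patterns can be driven in both directions can be illustrated with a few lines of python . this is a deliberately crude caricature , not the sp system : here each stored word is simply given an id - symbol , and the particular id - to - word assignments are invented rather than being those of the code pattern shown in the figure .

\begin{verbatim}
# a much-simplified illustration of how one store of 'old patterns' can
# support both encoding (parsing/compression) and decoding (production).
OLD = {"0": "fortune", "2": "favours", "4": "the", "3": "brave"}
WORD_TO_ID = {word: id_ for id_, word in OLD.items()}

def encode(sentence):
    """replace each recognised word by its id-symbol, framed by 's ... #s'."""
    return ["s"] + [WORD_TO_ID[w] for w in sentence.split()] + ["#s"]

def decode(code):
    """given only the code pattern, recreate the sentence from the same store."""
    return " ".join(OLD[sym] for sym in code[1:-1])

code = encode("fortune favours the brave")
print(code)          # -> ['s', '0', '2', '4', '3', '#s']
print(decode(code))  # -> 'fortune favours the brave'
\end{verbatim}

in the sp system itself , of course , the encoding and decoding are both achieved by building multiple alignments ; the sketch only shows the ` one store , two directions ' idea , not the mechanism .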
that the sp system should be able to reconstruct a sentence that was originally compressed by means of the same system ( section [ sp - n_output_section ] ) may seem paradoxical .how is it that a system that is dedicated to information compression should be able , so to speak , to drive compression in reverse ? a resolution of this apparent paradoxis described in ( * ? ? ?* section 3.8 ) . in brief , the key to the conjuring trick is to ensure that , after the sentence has been compressed , there is enough residual redundancy in the code pattern to allow further compression , and to ensure that this further compression will achieve the effect of reconstructing the sentence . of course , parsing a sentence ( as shown in section [ sp - a_multiple_alignment_section ] ) or constructing a sentence from a code pattern ( as shown in section [ sp - n_output_section ] ) are very artificial applications with natural language. normally , when we read some text or listen to someone speaking , we aim to derive meaning from the writing or the speech . andwhen we write or speak , it seems , intuitively , that the patterns of words that we are creating are derived from some kind of underlying meaning that we are trying to express .it is envisaged that , in future development of sp - abstract and the sp computer model , the id - symbols in code patterns will provide some kind of bridge between syntactic forms and representations of meanings , thus facilitating the processes of understanding the meanings of written or spoken sentences and of creating sentences to express particular meanings . as noted at the end of section [ sp - a_evaluation_systems_section ] ,there are preliminary examples of how , with the sp computer model , a sentence may be analysed for its meaning ( * ? ? 
?* section 5.7 , figure 5.18 ) , and how the same sentence may be derived from a representation of its meaning ( _ ibid ._ , figure 5.19 ) . although , as we have seen earlier in section [ sp - n_output_section ] , sp - neural , via principles established in sp - abstract , provides for the creation of language , and other kinds of knowledge , without the need for efferent connections from the cortex back along the path of afferent nerves , there is evidence that such connections do exist : `` neurons of the cerebral cortex send axons to subcortical regions .... subcortical projections are to those nuclei in the thalamus and brainstem that provide ascending sensory information . by far the most prominent of these is to the thalamus : the neurons of a primary sensory cortex project back to the same thalamic nucleus that provides input to the cortex . this system of descending connections is truly impressive because the number of descending corticothalamic axons greatly exceeds the number of ascending thalamocortical axons . these connections permit a particular sensory cortex to control the activity of the very neurons that relay information to it .
'' but the descending nerves described in this quotation may have a function that is quite different from the creation of sentences or other patterns of activity . one possible role for such nerves may be `` the focussing of activity so that relay neurons most activated by a sensory stimulus are more strongly driven and those in surrounding less well activated regions are further suppressed . '' a familiar observation is that , if something like a fan is switched on near us , we notice the noise for a while and then come to ignore it . and if , later , the fan is switched off , we notice the relative quiet for a while and then cease to be aware of it . in general , it seems that we are relatively sensitive to changes in our environment and relatively insensitive to things that remain constant . it has been accepted for some time that the way we adapt to constant stimuli is due to inhibitory neural structures and processes in our brains and nervous systems , that inhibitory structures and processes are widespread in the animal kingdom , and that they have a role in reducing the amount of information that we need to process . regarding the last point , it is clearly inefficient for anyone to be constantly registering , second - by - second , the noise of a nearby fan : `` noise , noise , noise , noise , noise , ... ` ' and likewise for the state of relative quietness when the fan is switched off . in terms of information theory , there is _ redundancy _ in the second - by - second recurrence of the noise ( or quietness ) , and we can eliminate most or all of the redundancy and thus compress the information by simply recording that the noise is ` on ' and that it is continuing ( and likewise , _ mutatis mutandis _ , for quiet ) . this is the ` run - length encoding ' technique for compression of information . it is essentially what adaptation does , and , in neural tissue , it appears to be mediated largely by ` lateral ' inhibition .
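the run - length encoding idea mentioned above can be made concrete in a few lines of python . this is generic run - length encoding , not a model of neural adaptation or of the sp system , and the ` noise ' / ` quiet ' signal is an invented example .

\begin{verbatim}
from itertools import groupby

def run_length_encode(signal):
    """collapse runs of repeated values into (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(signal)]

# second-by-second samples while a fan is switched on and then off
signal = ["noise"] * 5 + ["quiet"] * 3
print(run_length_encode(signal))   # -> [('noise', 5), ('quiet', 3)]
\end{verbatim}

the compressed form records each value once , together with how long it persists , which is essentially the ` on and continuing ' record described above .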
with lateral inhibition in sensory neurons , there are inhibitory connections between neighbouring neurons so that , when they are both stimulated , they tend to inhibit each other , and thus reduce their rates of firing where there is strong uniform stimulation .but inhibition is reduced where strong stimulation gives way to weaker stimulation , leading to a local swing in the rate of firing ( ; see also ( * ? ? ?* section 2.3.1 ) ; there is more about lateral inhibition in ) .there are similar effects in the time dimension .again , barlow says , in connection with neurons in the mammalian cortex that receive inputs from both eyes , `` ... it is now clear that input from one eye can , and frequently does , inhibit the effects of input from the other eye , ... '' ( p. 147 ) . taking these observations together ,we may abstract a general rule : _ when , in neural processing , two or more signals are the same , they tend to inhibit each other , and when they are different , they do nt ._ the overall effect should be to detect redundancy in information and to reduce it , whilst retaining non - redundant information , in accordance with the central principle in the sp theory that much of computing and cognition may , to a large extent , be understood as information compression . in a similar vein: `` lateral inhibition represents the classic example of a general principle : most neurons in sensory systems are best adapted for detecting changes in the external environment . ... as a rule , it is change which has the most significance for an animal ... this principle can also be explained in terms of information processing . given a world that is filled with constants with uniform objects , with objects that move only rarely it is most efficient to respond only to changes . '' . in view of the widespread occurrence of inhibitory mechanisms in the brain ,interneurons constitute approximately 15 to 30% of the total population of cortical neurons , and they appear to be mostly gabaergic , representing the main components of inhibitory cortical circuits .... '' ; `` synaptic inhibition in the mammalian brain is mediated principally by gaba receptors . ''_ , p. 169 ) ; `` one of the great mysteries of synaptic integration is why there are so many different types of inhibitory interneurons .... more than 20 different types of inhibitory interneuron have been described in the ca1 region of the hippocampus alone . ''_ , p. 249 ) . ] andin view of their apparent importance for the compression of information , and thus for selective advantage ( * ? ? ?* section 4 ) , it is pertinent to ask what role or roles they may play in sp - neural . here are some possibilities : * _ low - level sensory features_. at relatively ` low ' levels in sensory processing , it appears that , as noted above , lateral inhibition has a role in identifying such things as boundaries between uniform areas , meaning lines .it may also have a role in identifying other kinds of low - level sensory features mentioned in section [ sp - n_sensory_data_receptor_array_section ] .* _ information compression via the matching and unification of patterns ( icmup)_. as noted in section [ elements_of_sp - a_section ] , sp - abstract , and the sp computer model , is founded on the principle that information compression may be achieved by the matching and unification of patterns ( icmup ) . 
here, there appear to be two possible roles for inhibition : * * as we have seen , lateral inhibition can have the effect of inhibiting signals from neighbouring sensory neurons when they are receiving stimulation which is the same of nearly so .this may be seen as an example of icmup . * * in accordance with the rule suggested above , inhibitory processes may serve as a means of detecting redundancy between a new pattern assembly like `` t a b l e ` ' and an old pattern assembly like ``n 9 t a b l e # n ` ' : * * * we may suppose that there are inhibitory links between neighbouring neural symbols in the old pattern assembly so that , if all of the neural symbols in the body of the pattern assembly ( ie , `` t a b l e ` ' ) are stimulated , or most of them , then mutual inhibition amongst those neural symbols will suppress their response . and , as with lateral inhibition in sensory neural tissue , inhibition in one area can mean enhanced responses at the boundaries with neighbouring areas , which , in this case , would be the id - symbols `` n ` ' and `` 9 ` ' on the left , and `` # n ` ' on the right .then , excitatory signals from `` n ` ' and `` # n ` ' may go on to higher - level patterns that contain nouns , as suggested by the broken - line links from those two neural symbols in figure [ the_brave_neural_figure ] . since there is no link to export excitatory signals from `` 9 ` ' , no such signals would be sent . * * * alternatively , we may suppose that a stored pattern assembly like `` n 9 t a b l e # n ` ' has a background rate of firing and that , when matching stimulation is received for the neural symbols `` t a b l e ` ' , the background rate of firing in the corresponding neural symbols in `` n 9 t a b l e # n ` ' is reduced , with an associated upswing in the rates of firing of the neural symbols `` n ` ' and `` 9 ` ' and `` # n ` ' , as before . * _ sharpening choices amongst alternatives_. as mentioned in section [ sp - n_neural_processing_section ] , the process of forming neural analogues of multiple alignments ( namas ) means identifying one or two of the most excited pattern assemblies , with structures below them that feed excitation to them . here , inhibition may play a part by enhancing the status of the most excited pattern assemblies and suppressing the rest . how inhibition may achieve that kind of effect is discussed quite fully by von bksy ( * ? ? ?* chapters ii and v ) , and also in .more information and discussion about the possible roles of inhibition in the cerebral cortex may be found in .this section considers how the learning processes in sp - abstract , which are outlined in sections [ sp - a_early_learning_section ] and [ sp - a_later_learning_section ] , may be realised in sp - neural .it seems likely that neural structures for the detection of ` low level ' features like lines and corners in vision , or formant ratios and transitions in hearing , are largely inborn , although `` it is a curious paradox that , while [ hubel and wiesel ] have consistently argued for a high degree of ontogenetic determination of structure and function in the visual system , they are also the authors of the best example of plasticity in response to changed visual experience . '' , and `` it has ... been shown convincingly that the orientation preference of cells can be modified , ... '' ( _ ibid . 
_ ) .also , `` in the somatosensory system , if input from a restricted area of the body surface is removed by severing a nerve or by amputation of a digit , the portion of the cortex that was previously responsive to that region of the body surface becomes responsive to neighbouring regions .... '' .but it is clear that most of what we learn in life is at a ` higher ' level which , in sp - neural , will be acquired via the the creation and destruction of pattern assemblies , as discussed in the following subsections . let us suppose that a young child hears the speech equivalent of `` t h e b i g h o u s e ` ' in accordance with the example in section [ sp - a_early_learning_section ] .as we have seen , when the repository of old patterns is empty or nearly so , new patterns are stored directly as old patterns , somewhat like a recording machine , but with the addition of id - symbols at their beginnings and ends .it seems unlikely that a young child would grow new neurons to store a newly - created old pattern assembly like `` a 1 t h e b i g h o u s e # a ` ' , as discussed in section [ sp - a_early_learning_section ] .it seems much more likely that a pattern assembly like that would be created by some kind of modification of pre - existing neural tissue comprising sequences or areas of unassigned neural symbols with lateral connections between them as suggested in section [ sp - n_pattern_assemblies_section ] .pattern assemblies would be created by the switching on and off of synapses at appropriate points , in a manner that is more like a tailor cutting up pre - woven cloth than someone knitting or crocheting each item from scratch . in accordance with the labelled line principle ( section [ sp - n_labelled_line_principle_section ] ) , the meaning of each symbol in a newly - created pattern assemblywould be determined by what it is connected to , as described in section [ sp - n_creating_neural_connections_section ] .similar principles would apply when old patterns are created from partial matches between patterns , as described in section [ sp - a_early_learning_section ] . as with the laying down of newly - created old patterns ( section [ sp - n_creating_old_pattern_assemblies_section ] ), it seems unlikely that connections between pattern assemblies , like those shown in figure [ the_brave_neural_figure ] , would be created by growing new axons or dendrites .it seems much more likely that such connections would be established by switching on synapses between each of the two neurons to be connected and pre - existing axons or dendrites , somewhat like the making of connections in a telephone exchange ( see section [ sp - n_speed_expressiveness_section ] ) .this idea , together with the suggestions in section [ sp - n_creating_old_pattern_assemblies_section ] about how old pattern assemblies may be created , is somewhat like the way in which an ` uncommitted logic array ' ( ula ) , retrieved 2016 - 03 - 20 . ]may , via small modifications , be made to function like any one of a wide variety of ` application - specific integrated circuits ' ( asics ) , , retrieved 2016 - 03 - 20 .] or how a ` field - programmable gate array ' ( fpga ) , retrieved 2016 - 03 - 20 .] may be programmed to function like any one of a wide variety of integrated circuits . 
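the idea that pattern assemblies and their inter - connections might be created by switching synapses on and off within pre - existing neural tissue , rather than by growing new neurons , can be caricatured in code as follows . the sketch is only an analogy : a fixed pool of ` unassigned ' units is ` cut up ' into assemblies by toggling boolean flags , somewhat as a ula or fpga is configured ; the class and method names , and the absence of any bookkeeping for running out of units , are assumptions made for brevity , not part of sp - neural .

\begin{verbatim}
class NeuralFabric:
    """a fixed pool of pre-existing units ('neural symbols') with latent
    lateral connections; assemblies are made by switching connections on."""

    def __init__(self, size):
        self.symbol_of = [None] * size      # which symbol each unit represents
        self.lateral_on = [False] * size    # synapse to the next unit switched on?
        self.next_free = 0

    def create_pattern_assembly(self, symbols):
        """assign a run of unassigned units to the given symbols and switch on
        the lateral connections between neighbours (the tailor cutting cloth)."""
        start = self.next_free
        for offset, symbol in enumerate(symbols):
            self.symbol_of[start + offset] = symbol
            if offset > 0:
                self.lateral_on[start + offset - 1] = True
        self.next_free = start + len(symbols)
        return range(start, self.next_free)

    def destroy_pattern_assembly(self, units):
        """switch the connections off again; the units themselves survive."""
        for u in units:
            self.symbol_of[u] = None
            self.lateral_on[u] = False

fabric = NeuralFabric(size=100)
assembly = fabric.create_pattern_assembly("a 1 t h e b i g h o u s e # a".split())
\end{verbatim}

the point of the sketch is only the division of labour : the units and their latent connections are fixed in advance , while assemblies come and go by reconfiguration .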
in the sp theory , patterns and pattern assemblies are never modified : they are either created or destroyed . the latter process occurs mainly in the process of searching for ` good ' grammars to describe a given set of new patterns , as outlined in section [ sp - a_later_learning_section ] . at each stage , when a few ` good ' grammars are retained in the system , the rest are discarded . this means that any pattern assembly in one or more of the ` bad ' grammars that is not also in one or more of the ` good ' grammars may be destroyed . it seems likely that , in a process that may be seen as a reversal of the way in which pattern assemblies and their interconnections are created , the destruction of a pattern assembly does not mean the physical destruction of its neurons . it seems more likely that all neural connections from the pattern assembly are broken by switching off relevant synapses ( sections [ sp - n_destroying_neural_connections_section ] and [ sp - n_speed_expressiveness_section ] ) and that its constituent neurons are retained for later use in other pattern assemblies . it must be admitted that , apart from the remarks in foregoing subsections about the creation and destruction of pattern assemblies and their inter - connections , it is unclear how , in sp - neural , one may achieve anything equivalent to the process of searching the abstract space of possible grammars that has been implemented in the sp computer model . one possibility is to simplify things as follows . instead of evaluating whole grammars , as in the sp computer model , it may be possible to achieve roughly the same effect by evaluating pattern assemblies in terms of their effectiveness or otherwise for the economical encoding of new information and , periodically , to discard those pattern assemblies that do badly . readers familiar with issues in ai or neuroscience may wonder what place , if any , there may be in sp - neural for the concept of ` hebbian ' learning . this idea , proposed by hebb , is that : `` when an axon of cell a is near enough to excite a cell b and repeatedly or persistently takes part in firing it , some growth process or metabolic change takes place in one or both cells such that a's efficiency , as one of the cells firing b , is increased . '' ( p. 62 ) . variants of this idea are widely used in versions of ` deep learning ' in artificial neural networks and have contributed to success with such systems .
( see , for example , `` google unveils neural network with ` superhuman ' ability to determine the location of almost any image '' , _ mit technology review _ , http://bit.ly/1p5qmse[bit.ly/1p5qmse ] , 2016 - 02 - 24 . ) but in ( * ? ?* section v - d ) i have argued that : * the gradual strengthening of neural connections which is a central feature of hebbian learning ( and deep learning ) does not account for the way that people can , very effectively , learn from a single occurrence or experience ( sometimes called ` one - trial ' learning ) . * hebb was aware that his theory of learning with cell assemblies would not account for one - trial learning and he proposed a ` reverberatory ' theory for that kind of learning . but , as noted in ( * ? ? ?* section v - d ) , milner has pointed out that it is difficult to understand how this kind of mechanism could explain our ability to assimilate a previously - unseen telephone number : for each digit in the number , its pre - established cell assembly may reverberate , but this does not explain memory for the _ sequence _ of digits in the number . and it is unclear how the proposed mechanism would encode a phone number in which one or more of the digits is repeated . * one - trial learning is consistent with the sp theory because the direct intake and storage of sensory information is bedrock in how the system learns ( section [ sp - a_early_learning_section ] ) . * the sp theory can also account for the relatively slow learning of complex skills such as how to talk or how to play tennis at a high standard because of the complexity of the abstract space of possible solutions that needs to be searched . does this mean that hebbian learning is dead ? probably not : * in some forms , the phenomena of ` long - term potentiation ' ( ltp ) in neural functioning seem to be linked to hebbian types of learning . * gradual strengthening of neural connections may have a role to play in sp - neural because some such mechanism is needed to record , at least approximately , the frequency of occurrence of neural symbols and pattern assemblies ( sections [ sp - a_early_learning_section ] and [ sp - n_neural_processing_section ] ) . a general issue for any neural theory of the representation and processing of knowledge is how to account for the speed with which we can create neural structures , bearing in mind that such structures must be sufficiently versatile to accommodate the representation and processing of a wide range of different kinds of knowledge . this issue arises mainly in the following connections : * _ one - trial learning_. in keeping with the remarks above about one - trial learning ( section [ sp - n_hebbian_learning_section ] ) , it is a familiar feature of everyday life that we can see and hear something happening ( a football match , a play , a conversation , and so on ) and then , immediately or some time later , give a description of the event . this implies that we can lay down relevant memories at speed . * _ the learning of complex knowledge and skills_. if we accept the view of unsupervised learning which is outlined in sections [ sp - a_early_learning_section ] , [ sp - a_later_learning_section ] , and [ sp - n_unsupervised_learning_section ] , then it seems necessary to suppose that pattern assemblies are created and destroyed during the search for one or two grammars that provide a ` good ' description of the knowledge or skills that is being learned , and it seems likely that the creation and destruction of pattern assemblies would be fast .
*_ the interpretation of sensory data_. in processes like the parsing of natural language or , more generally , understanding natural language , and in processes like pattern recognition , reasoning , and more , it seems necessary to create intermediate structures like those shown in figure [ fortune_brave_multiple_alignment_figure ] , and for those structures to be created at speed .* _ speech and action_. in a similar way , it seems necessary for us to create mental structures fast in any kind of activity that requires thought , such as speaking in a way that is meaningful and comprehensible , most kinds of sport , most kinds of games , and so on . * _imagination_. most people have little difficulty in imagining things they are unlikely ever to have seen such as a cat with a coat made of grass instead of fur , or a cow with two tails .we can create such ideas fast and , if we like them well enough , we may remember them for many years .one possible solution , which is radically different from sp - neural , is to suppose that our knowledge is stored in some chemical form such as dna , and that the kinds of mental processes mentioned above might be mediated via the creation and modification of such chemicals .another possibility is that learning is mediated by epigenetic mechanisms , as outlined in ( * ? ? ?* section 7.4 ) . without wishing to prejudge what the primary mechanism of learning may be , orwhether perhaps there are several such mechanisms , this paper focusses on sp - neural and how it may combine speed with expressiveness , as seems to be required for the kinds of functions outlined above . at first sight , the problem of speed in the creation of pattern assemblies and their interconnections is solved via the long - established idea that we can remember things for a few seconds via a ` short - term memory ' , retrieved 2016 - 04 - 04 . ] that is distinct from ` long - term memory ' , retrieved 2016 - 04 - 04 . ] and ` working memory . ' , retrieved 2016 - 04 - 04 . ] but there is some uncertainty about the extent to which these three kinds of memory may be distinguished , one from another , and there is considerable uncertainty about how they might work , and how information may be transferred from one kind of memory to another . as a proffered contribution to discussions in this area ,the suggestion here is that , in any or all of short - term memory , working memory , and long - term memory , sp - neural may achieve the necessary speed in the creation of new structures , combined with versatility in the representation and processing of diverse kinds of knowledge , by the creation of pattern assemblies and their interconnections via the switching on and off of synapses in pre - established neural structures and their inter - connections somewhat like the making and breaking of connections in a telephone exchange or the creation of electronic circuits in ulas and fpgas , as outlined in section [ sp - n_creating_neural_connections_section ] .with regard to possible mechanisms for the switching on and off of synapses : * it appears that , in the entorhinal cortex between the hippocampus and the neocortex , there are neurons that can be switched `` on '' and `` off '' in an all - or - nothing manner , and we may suppose that synapses have a role to play in this behaviour .* `` the efficacy of a synapse can be potentiated through at least six mechanisms '' ( * ? ? 
?* caption to figure 47.10 ) and it is possible that at least one them has the necessary speed , especially since `` [ long - term potentiation ] is defined as a persistent increase in synaptic strength ... that can be induced _ rapidly _ by a brief burst of spike activity in the presynaptic afferents . ''( emphasis added ) . * `` [ long - term depression ] is believed by many to be ... a process whereby [ long - term potentiation ] could be reversed in the hippocampus and neocortex .... '' .* `` ... it is now evident that [ long - term potentiation ] , at least in the dentate gyrus , can either be ... stable , lasting months or longer . ''* abstract ) , although there appears to be little or no evidence with a bearing on whether or not there might be an upper limit to the duration of long - term potentiation .* there is evidence that the protein kinase m ( pkm ) may provide a means of turning synapses on and off , and thus perhaps storing long - term memories . with all these possible mechanisms ,key questions are : do they act fast enough to account for the speed of the phenomena described above ; and can they provide the basis for memories that can last for 50 years or more .a prominent feature of human perception is that we have a robust ability to recognise things despite disturbances of various kinds .we can , for example , recognise a car when it is partially obscured by the leaves and branches of a tree , or by falling snow or rain .one of the strengths of sp - abstract and its realisation in the sp computer model is that , in a similar way , recognition of a new pattern or patterns is not unduly disturbed by errors of omission , commission , and substitution in those data ( ( * ? ? ?* chapter 6 ) , ( * ? ? ?* section 4.2.2 ) ) .this is because of the way the sp computer model searches for a global optimum in the building of multiple alignments , so that it does not depend on the presence or absence of any particular feature or combination of features in the new information that is being analysed . in its overall structure, sp - neural seems to lend itself to that kind of robustness in recognition in the face of errors in data .but the devil is in the detail . in further development of the theory , and in the development of a computer model of sp - neural, it will be necessary to clarify the details of how that kind of robustness may be achieved . in shaping this aspect of sp - neural , the principles that have been developed in sp - abstractare likely to prove useful and , with empirical evidence from brains and nervous systems , they may serve as a touchstone of success .as was mentioned in the introduction , sp - neural is a tentative and partial theory .that said , the close relationship between sp - neural and sp - abstract , the incorporation into sp - abstract of many insights from research on human perception and cognition , strengths of sp - abstract in terms of simplicity and power ( section [ sp - a_evaluation_theory_section ] ) , and advantages of sp - abstract compared with other ai - related systems ( section [ sp - a_distinctive_features_advantages_section])lend support to sp - neural as it is now as a conceptual model of the representation and processing of knowledge in the brain , and a promising basis for further research . naturally , we may have more confidence in some parts of the theory than others .arguably , the parts that inspire most confidence are these : * _ neural symbols and pattern assemblies_. 
_ all _ knowledge is represented in the cerebral cortex with _ pattern assemblies _ , the neural equivalent of patterns in sp - abstract .each such pattern assembly is an array of _ neural symbols _ , each of which is a single neuron or a small cluster of neurons the neural equivalent of a symbol in sp - abstract .topologically , each array has one or two dimensions , perhaps parallel to the surface of the cortex . * _ information compression via the matching and unification of patterns_. as in sp - abstract , sp - neural is governed by the overarching principle that many aspects of perception and cognition may be understood in terms of information compression via the matching and unification of patterns . * _ information compression via multiple alignment_. more specifically , sp - neural is governed by the overarching principle that many aspects of perception and cognition may be understood via a neural equivalent of the powerful concept of _ multiple alignment_. * _ unsupervised learning_. as in sp - abstract , unsupervised learning in sp - neural is the foundation for other kinds of learning supervised learning , reinforcement learning , learning by imitation , learning by being told , and so on . and as in sp - abstract , unsupervised learning in sp - neural is achieved via a search through alternative grammars to find one or two that score best in terms of the compression of sensory information . as noted in section [ sp - n_hebbian_learning_section ] , this is quite different from the kinds of ` hebbian ' learning that are popular in artificial neural networks . * _ problems of speed and expressiveness in the creation of pattern assemblies and their interconnections_. to account for the speed with which we can assimilate new information , and the speed of other mental processes ( section [ sp - n_speed_expressiveness_section ] ) , it seems necessary to suppose that pattern assemblies and their interconnections may be created from pre - existing neural structures by the making and breaking of synaptic connections , somewhat like the making and breaking of connections in a telephone exchange , or the creation of a bespoke electronic system from an ` uncommitted logic array ' ( ula ) or a ` field - programmable gate array ' ( fpga ) . as with sp - abstract , areas of uncertainty in sp - neuralmay be clarified by casting the theory in the form of a computer model and testing it to see whether or not it works as anticipated .it is envisaged that this would be part of a proposed facility for the development of the sp machine , a means for researchers everywhere to explore what can be done with the sp machine and to create new versions of it . at all stages in its development, the theory may suggest possible investigations of the workings of brains and nervous systems . 
andany neurophysiological evidence may have a bearing on the perceived validity of the theory and whether or how it may need to be modified .the main differences between hebb s concept of a ` cell assembly ' and the sp - neural concept of a ` pattern assembly ' are : * the concept of a pattern assembly has had the benefit of computer modelling of sp - abstract reducing vagueness in the theory and testing whether or not proposed mechanisms actually work as anticipated .these things would have been difficult or impossible for hebb to do in 1949 .* cell assemblies were seen largely as a vehicle for recognition , whereas , as neural realisations of sp ` patterns ' , pattern assemblies should be able to mediate several aspects of intelligence , including recognition . *anatomically , pattern assemblies are seen as largely flat groupings of neurons in the cerebral cortex ( section [ sp - n_pattern_assemblies_section ] ) , whereas cell assemblies are seen as structures in three dimensions . *as described below , a fourth difference between cell assemblies and pattern assemblies is in how structures may be shared . in literal sharing ,structures b and c in the figure both contain structure a. in sharing by copying , structures b and c each contains a copy of structure a. while in sharing by reference , structures b and c each contains a reference to structure a , in much the same way that a paper like this one contains references to other publications .by contrast , with the concept of a pattern assembly in the sp - neural , sharing is almost always achieved by means of neural ` references ' between structures .for example , a noun like ` table ' is likely to have neural connections to the many grammatical contexts in which it may occur , as suggested by the two broken - line connections from each of ` n ' and ` # n ' in the pattern assembly for ` table ' shown in figure [ the_brave_neural_figure ] .notice that , in this example , the putative direction of travel of nerve impulses is not relevant it is the neural connection that counts . in the sp system , it is intended that literal sharing should be impossible and that sharing by copying may only occur on the relatively rare occasions when the system has failed to detect the corresponding redundancy , and not always then .h. b. barlow .sensory mechanisms , the reduction of redundancy , and intelligence . in hmso ,editor , _ the mechanisation of thought processes _ , pages 535559 .her majesty s stationery office , london , 1959 .a. newell .you ca nt play 20 questions with nature and win : projective comments on the papers in this symposium . in w.g. chase , editor , _ visual information processing _ ,pages 283308 . academic press , new york , 1973 .j. g. wolff . learning syntax and meanings through optimization and distributional analysis . in y.levy , i. m. schlesinger , and m. d. s. braine , editors , _ categories and processes in language acquisition _ ,pages 179215 .lawrence erlbaum , hillsdale , nj , 1988 .http://bit.ly/zigjyc[bit.ly/zigjyc ] .j. g. wolff .medical diagnosis as pattern recognition in a framework of information compression by multiple alignment , unification and search ., 42:608625 , 2006 .http://bit.ly/xe7prg[bit.ly/xe7prg ] , arxiv:1409.8053 [ cs.ai ] .j. g. wolff . .cognitionresearch.org , menai bridge , 2006 .s : 0 - 9550726 - 0 - 3 ( ebook edition ) , 0 - 9550726 - 1 - 1 ( print edition ) .distributors , including amazon.com , are detailed on http://bit.ly/wmb1rs[bit.ly/wmb1rs ] .j. g. 
wolff .big data and the sp theory of intelligence . , 2:301315 , 2014 .http://bit.ly/1jgwxdh[bit.ly/1jgwxdh ] .this article , with minor revisions , is to be reproduced in fei hu ( ed . ) , _ big data : storage , sharing , and security ( 3s ) _ , taylor & francis llc , crc press , 2016 , pp .143170 .j. g. wolff .information compression , intelligence , computing , and mathematics .technical report , cognitionresearch.org , 2014 .http://bit.ly/1jeoech[bit.ly/1jeoech ] , arxiv:1310.8599 [ cs.ai ] . submitted for publication . j. g. wolff and v. palade .proposal for the creation of a research facility for the development of the sp machine .technical report , cognitionresearch.org , 2015 .http://bit.ly/1zzjjis[bit.ly/1zzjjis ] , arxiv:1508.04570 [ cs.ai ] .
the _ sp theory of intelligence _ , with its realisation in the _ sp computer model _ , aims to simplify and integrate observations and concepts across artificial intelligence , mainstream computing , mathematics , and human perception and cognition , with information compression as a unifying theme . this paper describes how abstract structures and processes in the theory may be realised in terms of neurons , their interconnections , and the transmission of signals between neurons . this part of the sp theory_sp - neural_is a tentative and partial model for the representation and processing of knowledge in the brain . empirical support for the sp theory outlined in the paper provides indirect support for sp - neural . in the sp theory ( apart from sp - neural ) , all kinds of knowledge are represented with _ patterns _ , where a pattern is an array of atomic symbols in one or two dimensions . in sp - neural , the concept of a ` pattern ' is realised as an array of neurons called a _ pattern assembly _ , similar to hebb s concept of a ` cell assembly ' but with important differences . central to the processing of information in the sp system is the powerful concept of _ multiple alignment _ , borrowed and adapted from bioinformatics . processes such as pattern recognition , reasoning and problem solving are achieved via the building of multiple alignments , while unsupervised learning is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another . it is envisaged that , in sp - neural , short - lived neural structures equivalent to multiple alignments will be created via an inter - play of excitatory and inhibitory neural signals . it is also envisaged that unsupervised learning will be achieved by the creation of pattern assemblies from sensory information and from the neural equivalents of multiple alignments , much as in the non - neural sp theory and significantly different from the ` hebbian ' kinds of learning which are widely used in the kinds of artificial neural network that are popular in computer science . the paper discusses several associated issues , with relevant empirical evidence . _ keywords : _ multiple alignment , cell assembly , information compression , unsupervised learning , artificial intelligence .
in a deformable porous material , deformation of the solid skeleton is mechanically coupled to flow of the interstitial fluid(s ) .this poromechanical coupling has relevance to problems as diverse as cell and tissue mechanics ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , magma / mantle dynamics ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , and hydrogeology ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?these problems are notoriously difficult due to the inherently two - way nature of poromechanical coupling , where deformation drives flow and vice - versa . in poroelasticity , the mechanics of the solid skeleton are described by elasticity theory .the theory of poroelasticity has a rich and interdisciplinary history . provides an excellent discussion of the historical roots of linear poroelasticy , which models poroelastic loading under infinitesimal deformations .major touchstones in the development of linear poroelasticity include the works of m. a. biot ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , who formalized the linear theory and provided a variety of analytical solutions through analogy with thermoelasticity , as well as that of , who reformulated the linear theory in terms of more tangible material parameters and derived solutions to several model problems .when the deformation of the skeleton is not infinitesimal , poroelasticity must be cast in the framework of large - deformation elasticity ( _ e.g. _ , * ? ? ?* ; * ? ? ?large - deformation poroelasticity has found extensive application over the past few decades in computational biomechanics for the study of soft tissues , which are porous , fluid - saturated , and can accommodate large deformations reversibly .much of this effort has been directed at capturing the complex and varied structure and constitutive behavior of biological materials ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?large - deformation poroelasticity has also been applied in computational geomechanics for the study of soils and other geomaterials .soils typically accommodate large deformations through plasticity ( ductile failure or yielding ) due to their granular and weakly cemented microstructure , and much of the effort has been directed at the challenges of large - deformation elastoplasticity ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?large deformations can also occur through swelling ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ) , which has attracted interest recently in the context of gels ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?large - deformation ( poro)elasticity is traditionally approached almost exclusively with computational tools , and these are based almost exclusively on the finite - element method and in a lagrangian framework ( _ e.g. , _ * ? ? ? * ; * ? ? ?a thorough treatment of the lagrangian approach to large - deformation poroelasticity can be found in ref . .although powerful , these tools can be cumbersome from the perspective of developing physical insight .they are also poorly suited for studying nontrivial flow and solute transport through the pore structure . here, we instead consider the general theory of large - deformation poroelasticity in an eulerian framework .although the eulerian approach is well known ( _ e.g. _ , * ? 
?* ) , it has very rarely been applied to practical problems and a clear and unified presentation is lacking .this approach is useful in the present context to emphasize the physics of poromechanical coupling . in the first part of this paper , we review and discuss the well - known theory ( [ s : poroelasticity][s : linearporoelasticity ] ) .we first consider the exact description of the kinematics of flow and deformation , adopting a simple but nonlinear elastic response in the solid skeleton ( [ s : poroelasticity ] ) .we then show how this theory reduces to linear poroelasticity in the limit of infinitesimal deformations ( [ s : linearporoelasticity ] ) .in the second part of this paper , we compare the linear and large - deformation theories in the context of two uniaxial model problems ( [ s : rect][s : flow_rect ] ) .the primary goals of this comparison are to study the role of kinematic nonlinearity in large deformations , and to examine the resulting error in the linear theory .the two model problems are : ( i ) compression driven by an applied load ( the consolidation problem ) and ( ii ) compression driven by a net fluid throughflow . in the former ,the evolution of the deformation is controlled by the rate at which fluid is squeezed through the material and out at the boundaries ; as a result , fluid flow is central to the rate of deformation but plays no role in the steady state . in the latter ,fluid flow is also central to the steady state since this is set by the steady balance between the gradient in fluid pressure and the gradient in stress within the solid skeleton .we show that , in both cases , the error in linear poroelasticity due to the missing kinematic nonlinearities plays a surprisingly important role in the dynamics of the deformation , and that this error is amplified by nonlinear constitutive behavior such as deformation - dependent permeability .poroelasticity is a multiphase theory in that it describes the coexistence and interaction of multiple immiscible constituent materials , or phases ( _ e.g. _ , * ? ? ?, we restrict ourselves to two phases : a solid and a fluid .the solid phase is arranged into a porous and deformable macroscopic structure , `` the solid skeleton '' or `` the skeleton '' , and the pore space of the solid skeleton is saturated with a single interstitial fluid .deformation of the solid skeleton leads to rearrangement of its pore structure , with corresponding changes in the local volume fractions ( see [ ss : volumefractions ] ) . 
throughout, we use the terms `` the solid grains '' or `` the solid '' to refer to the solid phase and `` the interstitial fluid '' or `` the fluid '' to refer to the fluid phase .although the term `` grain '' is inappropriate in the context of porous materials with fibrous microstructure , such as textiles , polymeric gels , or tissues , we use it generically for convenience .we assume here that the two constituent materials are incompressible , meaning that the mass densities of the fluid , , and of the solid , , are constant .note that this does not prohibit compression of the solid skeleton variations in the macroscopic mass density of the porous material are enabled by changes in its pore volume .the assumption of incompressible constituents is standard in soil mechanics and biomechanics , where fluid pressures and solid stresses are typically very small compared to the bulk moduli of the constituent materials .however , some caution is merited in the context of deep aquifers where , at a depth of a few kilometers , the hydrostatic pressure and lithostatic stress themselves approach a few percent of the bulk moduli of water and mineral grains . in these cases, it may be appropriate to allow for fluid compressibility while retaining the incompressibility of the solid grains , in which case much of the theory discussed here still applies .the theory of poroelasticity can be generalized to allow for compressible constituents ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , but this is beyond the scope of this paper .what follows is , in essence , a brief and superficial introduction to continuum mechanics in the context of a porous material .we have minimized the mathematical complexity where possible for the sake of clarity , and to preserve an emphasis on the fundamental physics .for the more mathematically inclined reader , excellent resources are available for further study ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?a core concept in continuum mechanics is the distinction between a lagrangian reference frame ( fixed to the material ) and an eulerian one ( fixed in space ) .these two perspectives are rigorously equivalent , so the choice is purely a matter of convention and convenience .a lagrangian frame is the natural and traditional choice in solid mechanics , where displacements are typically small and where the current state of stress is always tied to the reference ( undeformed ) configuration of the material through the current state of strain ( displacement gradients ) .an eulerian frame is the natural and traditional choice in fluid mechanics , where displacements are typically large and where the current state of stress depends only on the instantaneous rate of strain ( velocity gradients ) .more complex materials such as viscoelastic solids and non - newtonian fluids can have elements of both , with a dependence on both strain and strain rate ( _ e.g. _ , * ? ? ?* ; * ? ? ?problems involving fluid - solid coupling lead to a clear conflict between the eulerian and lagrangian approaches . in classical fluid - structure interaction problems , such as air flow around a flapping flag or blood flow through an artery ,the fluid and the solid exist in separate domains that are coupled along a shared moving boundary . in a porous material , in contrast , the fluid and the solid coexist in the same domain and are coupled through bulk conservation laws . 
as a result ,the entire problem must be posed either in a fixed eulerian frame or in a lagrangian frame attached to the solid skeleton .one major advantage of the latter approach is that it eliminates moving ( solid ) boundaries since the skeleton is stationary relative to a lagrangian coordinate system ; this feature is particularly powerful and convenient in the context of computation .however , the eulerian approach leads to a simpler and more intuitive mathematical model in the context of fluid flow and transport , and it is straightforward and even advantageous when boundary motion is absent or geometrically simple .this conflict can be avoided altogether when the deformation of the skeleton is small , such that the distinction between an eulerian frame and a lagrangian one can be ignored , which is a core assumption of linear ( poro)elasticity . in the present context ,we consider two model problems where the fluid and the skeleton are tightly coupled , the deformation of the skeleton is large , and there is a moving boundary .we pose these problems fully in an eulerian frame and write all quantities as functions of an eulerian coordinate , fixed in the laboratory frame .accordingly , we adopt the notation where the are the components of the eulerian coordinate system , , with the associated unit vectors , and adopting the einstein summation convention . for reference and comparison ,we summarize the key aspects of the lagrangian framework in appendix [ app : s : lagrangian ] .we denote the local volume fractions of fluid and solid by ( the porosity , fluid fraction , or void fraction ) and ( the solid fraction ) , respectively .these are the _ true _ volume fractions in the sense that they measure the current phase volume per unit current total volume , such that .as such , the true porosity is the relevant quantity for calculating flow and transport through the pore structure .however , changes in at a spatial point reflect both deformation and motion of the underlying skeleton , so the relevant state of stress must be calculated with some care .alternatively , it is possible to define _ nominal _ volume fractions that measure the current phase volume per unit reference total volume .these nominal quantities are convenient in a lagrangian frame where , if the solid phase is incompressible , the nominal solid fraction is constant by definition and the nominal porosity is linearly related to the local volumetric strain .however , the nominal porosity is not directly relevant to flow and transport .in addition , the nominal volume fractions do not sum to unity ; rather , they must sum to the jacobian determinant ( see [ ss : kinematics_solid ] ) .here , we avoid these nominal quantities and work strictly with the true porosity . note that denotes the true porosity ( `` eulerian porosity '' ) by and the nominal porosity ( `` lagrangian porosity '' ) by , whereas we denote the true porosity by and the nominal porosity by ( see appendix [ app : s : lagrangian ] ) .the most primitive quantity for calculating deformation is the displacement field , which is a map between the current configuration of the solid skeleton and its reference configuration .in other words , the displacement field measures the displacement of material points from their reference positions . in an eulerian frame , the solid displacement field is given by where is the reference position of the material point that sits at position at time ( _ i.e. 
_, it is the Lagrangian coordinate in our Eulerian frame). We adopt the convention that the reference configuration coincides with the initial configuration, such that $\mathbf{u}_s(\mathbf{x},0)=\mathbf{0}$; this is not required, but it simplifies the analysis. The displacement field is not directly a measure of deformation because it includes rigid-body motions. The deformation-gradient tensor $\mathbf{F}$, which is the Jacobian of the deformation field, excludes translations by considering the spatial gradient of the displacement field. In an Eulerian frame, $\mathbf{F}$ is most readily defined through its inverse,
$$\mathbf{F}^{-1}=\mathbf{I}-\boldsymbol{\nabla}\mathbf{u}_s,$$
where $(\cdot)^{-1}$ denotes the inverse and $\mathbf{I}$ is the identity tensor. The deformation-gradient tensor still includes rigid-body rotations, but these can be excluded by multiplying $\mathbf{F}$ by its transpose. Hence, measures of strain are ultimately derived from the right Cauchy-Green deformation tensor $\mathbf{C}=\mathbf{F}^{\mathsf{T}}\mathbf{F}$ or the left Cauchy-Green deformation tensor $\mathbf{B}=\mathbf{F}\mathbf{F}^{\mathsf{T}}$, where $(\cdot)^{\mathsf{T}}$ denotes the transpose. The eigenvalues of $\mathbf{C}$ (or, equivalently, of $\mathbf{B}$) are the squares of the principal stretches, $\lambda_i^2$ with $i=1,2,3$. The stretches measure the amount of elongation along the principal axes of the deformation, which are themselves related to the eigenvectors of $\mathbf{C}$ and $\mathbf{B}$: in the reference configuration, they are the normalized eigenvectors of $\mathbf{C}$; in the current configuration, they are the normalized eigenvectors of $\mathbf{B}$. The Jacobian determinant $J=\det(\mathbf{F})$ measures the amount of local volume change during the deformation, where $\det(\cdot)$ denotes the determinant. The Jacobian determinant is precisely the ratio of the current volume of the material at point $\mathbf{x}$ to its reference volume. For an incompressible solid skeleton, $J\equiv1$. For a compressible solid skeleton made up of incompressible solid grains, as considered here, deformation occurs strictly through rearrangement of the pore structure. The Jacobian determinant is then connected directly to the porosity,
$$J=\frac{1-\phi_{f,0}}{1-\phi_f},$$
where $\phi_{f,0}$ is the reference porosity field. In an Eulerian frame, the reference porosity field depends on $\mathbf{x}$ and $t$ because it refers to the initial porosity of the material that is currently located at $\mathbf{x}$ but that was originally located at $\mathbf{X}(\mathbf{x},t)$. Note that $\phi_{f,0}(\mathbf{X}(\mathbf{x},t))\neq\phi_{f,0}(\mathbf{x})$ unless $\phi_{f,0}$ is spatially uniform, in which case $\phi_{f,0}$ is simply a constant and this distinction is unimportant. Lastly, local continuity for the incompressible solid phase is written
$$\frac{\partial}{\partial t}(1-\phi_f)+\boldsymbol{\nabla}\cdot\big[(1-\phi_f)\,\mathbf{v}_s\big]=0,$$
where $\mathbf{v}_s$ is the solid velocity field. The solid velocity is the material derivative of the solid displacement,
$$\mathbf{v}_s=\frac{\partial\mathbf{u}_s}{\partial t}+(\mathbf{v}_s\cdot\boldsymbol{\nabla})\,\mathbf{u}_s.$$
Together, these relations provide an exact kinematic description of the deformation of the solid skeleton, assuming only that the solid phase is incompressible. This description is valid for arbitrarily large deformations and, because it is simply a geometric description of the changing pore space, it _makes no assumptions about the fluid that occupies the pore space_. This description remains rigorously valid when the fluid phase is compressible, and in the presence of multiple fluid phases. Further, this description _makes no additional assumptions about the constitutive behavior of the solid skeleton_: it remains rigorously valid for any elasticity law, and in the presence of viscous dissipation or plasticity. We assume that the pore space of the solid skeleton is saturated with a single fluid phase.
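Before moving on to the fluid phase, it may help to see these kinematic quantities assembled numerically. The following sketch is our own illustration, not part of the original development: the deformation gradient and the reference porosity are made-up example values, and the porosity relation used is the one quoted above for incompressible grains.

\begin{verbatim}
import numpy as np

# Illustrative (made-up) deformation gradient: ~20% compression along x
# combined with a small simple shear.
F = np.array([[0.8, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

C = F.T @ F            # right Cauchy-Green deformation tensor
B = F @ F.T            # left Cauchy-Green deformation tensor
J = np.linalg.det(F)   # Jacobian determinant (local volume change)

# Eigenvalues of C (or B) are the squares of the principal stretches.
stretches = np.sqrt(np.linalg.eigvalsh(C))

# For incompressible grains, volume change occurs only through the pore
# space:  J = (1 - phi_f0)/(1 - phi_f)  =>  phi_f = 1 - (1 - phi_f0)/J.
phi_f0 = 0.5                        # assumed reference porosity
phi_f = 1.0 - (1.0 - phi_f0) / J    # current true porosity

# Nominal (Lagrangian) fluid fraction: current fluid volume per unit
# reference total volume; nominal fractions sum to J rather than to 1.
Phi_f_nominal = J * phi_f

print(J, stretches, phi_f, Phi_f_nominal)
\end{verbatim}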
for a compressible fluid phase , local continuityis written where is the fluid velocity field .this expression remains valid for multiple fluid phases if and are calculated as fluid - phase - averaged quantities , in which case eq .must also be supplemented by a conservation law for each of the individual fluid phases .for simplicity , we focus here on the case of a single , incompressible fluid phase , for which we have there is no need to introduce a fluid displacement field since we assume below that the fluid is newtonian .the constitutive law for a newtonian fluid , and also for many non - newtonian fluids , depends only on the fluid velocity .we assume that the fluid flows relative to the solid skeleton according to darcy s law . for a single newtonian fluid , darcy s lawcan be written where is the permeability of the solid skeleton , which we have taken to be an isotropic function of porosity only , is the dynamic viscosity of the fluid , is the body force per unit mass due to gravity , and we have neglected body forces other than gravity .darcy s law is an implicit statement about the continuum - scale form of the mechanical interactions between the fluid and solid ( _ e.g. _ , 3.3.1 of * ? ? ?we simply adopt it here as a phenomenological model for flow of a single fluid through a porous material . in the presence of multiple fluid phases , eq .can be replaced by the classical multiphase extension of darcy s law ( _ e.g. _ , * ? ? ?for a single but non - newtonian fluid phase ,eq . must be modified accordingly ( _ e.g. _ , * ? ? ?generally , the permeability will change with the pore structure as the skeleton deforms , although this dependence is neglected in linear poroelasticity , where it is assumed that deformations are infinitesimal .the simplest representation of this dependence is to take the permeability to be a function of the porosity , as we have done above , and here again the true porosity is the relevant quantity .a common choice is the kozeny - carman formula , one form of which is where is the typical pore or grain size .although derived from experimental measurements in beds of close - packed spheres , this formula is commonly used for a wide range of materials .one reason for this wide use is that the kozeny - carman formula respects two physical limits that are important for poromechanics : the permeability vanishes as the porosity vanishes , and diverges as the porosity approaches unity .the former requirement ensures that fluid flow can not drive the porosity below zero , and the latter prevents the flow from driving the porosity above unity .we use a normalized kozeny - carman formula here , where is the relaxed or undeformed permeability .this expression preserves the qualitative characteristics of the original relationship while allowing the initial permeability and the initial porosity to be imposed independently .clearly , it is straightforward to design other permeability laws that have the same characteristics .note that the particular choice of permeability law will dominate the flow and mechanics in the limit of vanishing permeability since the pressure gradient , which is coupled with the solid mechanics , is inversely proportional to the permeability . 
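As a concrete illustration of the deformation-dependent permeability just described, a minimal sketch of one natural reading of the normalized Kozeny-Carman law is given below. The normalization shown (chosen so that $k(\phi_{f,0})=k_0$, with $k_0$ the relaxed permeability) is our assumption of the intended form and should be checked against the source; the parameter values are illustrative only.

\begin{verbatim}
import numpy as np

def kozeny_carman_normalized(phi_f, phi_f0, k0):
    """Normalized Kozeny-Carman permeability (sketch).

    Preserves the limits described in the text: k -> 0 as phi_f -> 0 and
    k -> infinity as phi_f -> 1, while returning k = k0 at phi_f = phi_f0.
    """
    shape = lambda phi: phi**3 / (1.0 - phi)**2
    return k0 * shape(phi_f) / shape(phi_f0)

# Example: permeability drop under compression from phi_f0 = 0.5 to phi_f = 0.4
print(kozeny_carman_normalized(0.4, 0.5, k0=1e-12))  # roughly 3.6e-13 m^2
\end{verbatim}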
since porosity is strictly volumetric ,writing neglects the impacts of rotation and shear .this form of deformation dependence is overly simplistic for materials with inherently anisotropic permeability fields , the axes of which would rotate under rigid - body rotation and would be distorted in shear .it is also possible that permeability anisotropy could emerge through anisotropic deformations , or through other effects creating orthotropic structure .we neglect these effects here for simplicity . one convenient way of combining eqs . , , and is by defining the total volume flux as this measures the total volume flow per unit total cross - sectional area per unit time .the total flux can also be viewed as a phase - averaged , composite , or bulk velocity . from this , it is then straightforward to derive [ eq : nonlinearadeeulerian ] = 0~,\label{eq : nonlinearadeeulerian_pde } \\\textrm{and}\qquad & \boldsymbol{\nabla}\cdot\mathbf{q } = 0~,\label{eq : nonlinearadeeulerian_divq } \end{aligned}\ ] ] with [ eq : vf_vs_nonlinear ] equations and embody darcy s law and the kinematics of the deformation , describing the coupled relative motion of the fluid and the solid skeleton .it remains to enforce mechanical equilibrium , and to provide a constitutive relation between stress and deformation within the solid skeleton .mechanical equilibrium requires that the fluid and the solid skeleton must jointly support the local mechanical load , and this provides the fundamental poromechanical coupling .the total stress is the total force supported by the two - phase system per unit area , and can be written where and are the solid stress and the fluid stress , respectively .the solid stress is the force supported by the solid per unit solid area , and is then the force supported by the solid per unit total area .similarly , the fluid stress is the force supported by the fluid per unit fluid area , and is then the force supported by the fluid per unit total area . note that it is implicitly assumed here and elsewhere that the phase area fractions are equivalent to the phase volume fractions .any stress tensor can be decomposed into isotropic ( volumetric ) and deviatoric ( shear ) components without loss of generality . for a fluid within a poroussolid , it can be shown that the shear component of the stress is negligible relative to the volumetric component at the continuum scale ( _ e.g. _ , 3.3.1 of * ? ? ?* ) , so that , where is the fluid pressure with the trace .note that we have adopted the sign convention from solid mechanics that tension is positive and compression negative .the opposite convention is usually used in soil mechanics , rock mechanics , and geomechanics since geomaterials are almost always in compression . because the fluid permeates the solid skeleton, the solid stress must include an isotropic and compressive component in response to the fluid pressure .this component is present even when the fluid is at rest , and/or when the skeleton carries no external load , but this component can not contribute to deformation unless the solid grains are compressible . subtracting this component from the solid stress leads to terzaghi s effective stress , which is the force per unit total area supported by the solid skeleton through deformation ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?we can then rewrite eq . in its more familiar form , which can be modified to allow for compressibility of the solid grains ( _ e.g. _ , * ? ?? * ; * ? ? ?* ) . 
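For reference, the flux and stress decompositions described in this passage, together with the quasi-static equilibrium statement discussed in the next paragraph, can be collected as follows. The symbols are our reconstruction rather than a quotation of the source equations: $\mathbf{v}_f$ and $\mathbf{v}_s$ denote the fluid and solid velocities, $\boldsymbol{\sigma}_s$ and $\boldsymbol{\sigma}_f$ the per-phase stresses, and $\rho=\phi_f\rho_f+(1-\phi_f)\rho_s$ the phase-averaged density.
\[
\begin{aligned}
\mathbf{q} &= \phi_f\,\mathbf{v}_f + (1-\phi_f)\,\mathbf{v}_s ,\\
\boldsymbol{\sigma} &= (1-\phi_f)\,\boldsymbol{\sigma}_s + \phi_f\,\boldsymbol{\sigma}_f ,
\qquad \boldsymbol{\sigma}_f \approx -p\,\mathbf{I},\\
\boldsymbol{\sigma}' &\equiv (1-\phi_f)\left(\boldsymbol{\sigma}_s + p\,\mathbf{I}\right)
\quad\Longrightarrow\quad
\boldsymbol{\sigma} = \boldsymbol{\sigma}' - p\,\mathbf{I},\\
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \rho\,\mathbf{g} &= \mathbf{0}
\quad\Longleftrightarrow\quad
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}' - \boldsymbol{\nabla}p + \rho\,\mathbf{g} = \mathbf{0}.
\end{aligned}
\]
The last form makes explicit the loose interpretation mentioned below, in which the fluid pressure gradient acts as a body force on the solid skeleton.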
neglecting inertia , and in the absence of body forces other than gravity , mechanical equilibrium then requires that where is the phase - averaged , composite , or bulk density . a useful but nonrigorous physical interpretation of eq .is that the fluid pressure gradient acts as a body force within the solid skeleton .the stress tensors in eq . are cauchy or _true _ stresses .these are eulerian quantities , and eq . is an eulerian statement : _ the current forces on current areas in the current configuration must balance . _we assume that the solid skeleton is an elastic material , for which the state of stress depends on the displacement of material points from a relaxed reference state .this behavior distinguishes the theory of poroelasticity from , for example , the poroviscous framework traditionally used in magma / mantle dynamics , where the skeleton is assumed to behave as a viscous fluid over geophysical time scales ( _ e.g. _ , * ? ? ?this assumption greatly simplifies the mathematical framework for large deformations by eliminating any dependence on displacement , but we can not take advantage of it here .the constitutive law for an elastic solid skeleton typically links the effective stress to the solid displacement via an appropriate measure of strain or strain energy . for large deformations ,elastic behavior is nonlinear for two reasons .first , the kinematics is inherently nonlinear because the geometry of the body evolves with the deformation ( kinematic nonlinearity ) .second , most materials harden or soften under large strains as their internal microstructure evolves that is , the material properties change with the deformation ( material or constitutive nonlinearity ) . to capture the kinematic nonlinearities introduced by the evolving geometry , relevant measures of finite strain are typically derived from one of the cauchy - green deformation tensors .a wide variety of finite - strain measures exist , each of which is paired with an appropriate measure of stress through a stress - strain constitutive relation that includes at least two elastic parameters . 
in modern hyperelasticity theory ,this constitutive relation takes the form of a strain - energy density function .selection of an appropriate constitutive law and subsequent tuning of the elastic parameters can ultimately match a huge variety of material behaviors , but our focus here is simply on capturing kinematic nonlinearity .for this purpose , we consider a simple hyperelastic model known as hencky elasticity .the key idea in hencky elasticity is to retain the classical strain - energy density function of linear elasticity , but replacing the infinitesimal strain with the hencky strain .hencky strain , also known `` natural strain '' or `` true strain '' , is an extension to three dimensions of the one - dimensional concept of logarithmic strain .hencky elasticity is a generic model in that it does not account for material - specific constitutive nonlinearity , but it captures the full geometric nonlinearity of large deformations and thus provides a good model for the elastic behavior of a wide variety of materials under moderate to large deformations .it is also very commonly used in large - deformation plasticity .hencky strain has some computational disadvantages , but these are not relevant here .hencky elasticity can be written [ eq : hencky_elasticity ] where is the hencky strain tensor and the on the left - hand side of eq .accounts for volume change during the deformation .hencky elasticity reduces to linear elasticity for small strains and , conveniently , it uses the same elastic parameters as linear elasticity ( see [ s : linearporoelasticity ] ) . for compactness, we work in terms of the oedometric or -wave modulus and lam s first parameter , where and are the bulk modulus and shear modulus of the solid skeleton , respectively .note that lam s first parameter is often denoted , but we use here to avoid confusion with the principal stretches .all of these elastic moduli are `` drained '' properties , meaning that they are mechanical properties of the solid skeleton alone and must be measured under quasistatic conditions where the fluid is allowed to drain ( leave ) or enter freely . poromechanics describes flow and deformation within a porous material , so the boundaries of the spatial domain typically coincide with the boundaries of the solid skeleton .these boundaries may move as the skeleton deforms ; in an eulerian framework , this constitutes a moving - boundary problem .this feature is the primary disadvantage of working in an eulerian framework as it can be analytically and numerically inconvenient .one noteworthy exception is in infinite or semi - infinite domains , in which case suitable far - field conditions are applied ; this situation is common in geophysical problems , which are often spatially extensive . to close the model presented above, we require kinematic and dynamic boundary conditions for the fluid and the skeleton .kinematic conditions are straightforward : for the fluid , the most common kinematic conditions are constraints on the flux through the boundaries ; for the solid , kinematic conditions typically enforce that the boundaries of the domain are _ material _ boundaries , meaning that they move with the skeleton .the simplest dynamic conditions are an imposed total stress , an imposed effective stress , or an imposed fluid pressure . at a permeable boundary , any two of these three quantities can be imposed . 
at an unconstrained permeable boundary ,for example , the normal component of the total stress will come from the fluid pressure and the shear component must vanish ; this then implies that both the normal and shear components of the effective stress must vanish . at an impermeable boundary , in contrast, only the total stress can be imposed the decomposition of the load into fluid pressure and effective stress within the domain will arise naturally through the solution of the problem ( although imposed shear stress can only be supported by the solid skeleton , via effective stress ) .some care is required with more complex dynamic conditions that provide coupling with a non - darcy external flow ( _ e.g. , _ * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , but this is beyond the scope of this paper .we now briefly derive the theory of linear poroelasticity by considering the limit of infinitesimal deformations . for a deformation characterized by typical displacements of size varying over spatial scales of size ,the characteristic strain is of size .the assumption of infinitesimal deformations requires that .we develop the well - known linear theory by retaining terms to first order in , neglecting terms of order and higher .note that the deformation itself enters at first order by definition .we have from eqs . and that this result motivates rewriting eq . in terms of the normalized change in porosity , , =0,\ ] ] where we have taken the initial porosity field to be uniform .we then eliminate in favor of using eq . , =0.\ ] ]equation implies that , and therefore that . simplifying eq .accordingly , we arrive at one form of the well - known linear poroelastic flow equation : \approx{}0,\ ] ] where is the relaxed / undeformed permeability , and where we have reverted from to . comparing eq . with eqs .highlights the fact that exact kinematics render the model nonlinear and also introduce a fundamentally different mathematical character : equation can be written as a linear diffusion equation after introducing linear elasticity in the solid skeleton , whereas eqs .feature an additional , advection - like term related to the divergence - free total flux .it is straightforward to show that hencky elasticity ( [ ss : henckyelasticity ] ) reduces to classical linear elasticity at leading order in , as do many other ( but not all ) finite - deformation elasticity laws .linear elasticity can be written as [ eq : linear_elasticity ] , \end{aligned}\ ] ] where is the infinitesimal ( `` small '' ) strain tensor . by linearizing the strain in the displacement ( ) and the stress in the strain ( ), linear elasticity neglects both kinematic nonlinearity and constitutive nonlinearity as well as the distinction between the deformed configuration and the reference configuration .a closed linear theory is provided by combining the linear flow equation ( eq . [ eq : lineardiffusioneulerian ] ) with mechanical equilibrium ( eq .[ eq : equilibrium ] ) , linear elasticity ( eqs .[ eq : linear_elasticity ] ) , and the linearized statement of volumetric compatibility ( eq . [ eq : phi_to_u_linear ] ) .the resulting model is valid to first order in .a discussion of the various forms of the linear theory commonly used in hydrology , hydrogeology , and petroleum engineering can be found in ref . , and reviews of numerous classical results in linear poroelasticity can be found in refs . and .note that variations in permeability do not enter at this order because eqs . 
and together imply that pressure and stress gradients are themselves first order in the characteristic strain, so that the first-order perturbation to the permeability contributes only at second order. This latter scaling should also be viewed as a constraint: imposing pressure or stress gradients large enough that the characteristic strain approaches unity will drive a deformation that violates the assumption of infinitesimal deformations, invalidating the linear theory.

The linear theory can alternatively be derived from a Lagrangian perspective (ref. and appendix [app:s:lagrangian]). This must necessarily result in the same model, but in terms of the Lagrangian coordinate $\mathbf{X}$ instead of the Eulerian coordinate $\mathbf{x}$. These coordinates themselves differ at first order, since $\mathbf{x}-\mathbf{X}=\mathbf{u}_s$ is itself first order, but all quantities related to the deformation are also first order; this implies, for example, that evaluating them at $\mathbf{x}$ rather than at $\mathbf{X}$ introduces only a second-order error. As a result, replacing $\mathbf{x}$ with $\mathbf{X}$ in the linearized equations will result in a Lagrangian interpretation of the linear model that is still valid to first order. These two models are equivalent in the limit of vanishing strain, but they will always differ at second order and diverge from each other as the deformation grows. This conceptual ambiguity is one awkward aspect of linear (poro)elasticity (see appendix [app:s:strainambiguity]). Here, our interest is in the behavior of the linear theory as the deformation becomes non-negligible. We next consider two model problems involving uniaxial flow and deformation, using these as a convenient setting for comparing the predictions of linear poroelasticity with the large-deformation theory.

We now consider the uniaxial deformation of a deformable porous material, as shown schematically in fig. [fig:rect]. In that figure, the right edge of the material is attached to a rigid permeable barrier (thick dashed black line), but the rest is free to move; the instantaneous position of the left edge is marked by the dashed orange line, taking its initial position as the origin. The material can be compressed against the barrier by (b) an applied effective stress (dark gray arrows), in which case the rate of deformation is set by the rate of fluid outflow (wiggly blue arrows), and/or by (c) an applied fluid pressure drop, in which case the deformation is driven by a net flow from left to right (straight blue arrows).

Provided that the material properties are uniform in the lateral directions, both the flow and the deformation will be restricted to one spatial dimension. As a result, the analysis is tractable even when the deformation is large, which allows for the exploration of a variety of complex material models and loading scenarios, including mechanical compression, forced infiltration, and spontaneous imbibition. Here, we consider two canonical problems: mechanical compression (the consolidation problem) and fluid-driven compression. These differ only in the boundary conditions, so we develop a single model that applies to both cases. We assume that gravity is unimportant. For the solid, a one-dimensional displacement field implies that the material is either laterally confined or laterally infinite; otherwise, the Poisson effect would lead to lateral expansion or contraction. Our model and results are independent of the shape and size of the cross section normal to the flow as long as the lateral boundaries are rigid, frictionless, and impermeable. For example, the material could be a rectangular slab within a duct or a cylinder within a tube. Although we focus here on compression, our models and solutions remain valid if we reverse the sign of the effective stress and/or the pressure gradient; this will reverse the direction of the displacement and/or the flow, stretching the skeleton to the left in a state of tension.

Poromechanical phenomena are highly coupled. In order to highlight the nonlinear interactions between the various physical mechanisms at play, as well as the qualitative and quantitative behavior of the error introduced by linearizing these mechanisms, we consider five different models below: a fully linear model ([s:linearporoelasticity]), two fully nonlinear models, and two intermediate models. The nonlinear models combine rigorous large-deformation kinematics with Hencky elasticity ([ss:henckyelasticity]) and one of two permeability laws: constant ($k=k_0$) or deformation-dependent ($k=k(\phi_f)$) via the normalized Kozeny-Carman formula, eq. . The intermediate models are the same as the nonlinear models, but replace Hencky elasticity with linear elasticity ([ss:linearelasticity]) while retaining all other nonlinearity. We refer to these models as:
1. ``linear'': linear poroelasticity;
2. ``nonlinear-$k_0$'': nonlinear kinematics with Hencky elasticity and constant permeability;
3. ``nonlinear-$k(\phi_f)$'': nonlinear kinematics with Hencky elasticity and deformation-dependent permeability;
4. ``intermediate-$k_0$'': nonlinear kinematics with linear elasticity and constant permeability; and
5. ``intermediate-$k(\phi_f)$'': nonlinear kinematics with linear elasticity and deformation-dependent permeability.
Note that although the intermediate approach retains most of the kinematic nonlinearity of the fully nonlinear model, it is not kinematically rigorous because the nonlinearity of Hencky elasticity is also kinematic in origin. The intermediate approach should also be considered with caution because it is asymptotically mixed, which can lead to nonphysical behavior at large deformations. However, it is useful for illustration. We derive and discuss below the fully nonlinear models ([ss:rect_large]) and the linear model ([ss:rect_linear]), but we present results from all five models. We adopt the shorthand names given above for conciseness.

We first consider the exact kinematics of flow and deformation with a Hencky-elastic response in the solid skeleton. The results from this section provide the nonlinear-$k_0$ and nonlinear-$k(\phi_f)$ models by introducing the appropriate permeability function, and can be readily modified to provide the intermediate-$k_0$ and intermediate-$k(\phi_f)$ models by replacing Hencky elasticity with linear elasticity in any steps involving the elasticity law. We assume that the porosity in the initial state is spatially uniform and given by $\phi_{f,0}$, where $\phi_{f,0}$ is a known constant, thereby giving $J=(1-\phi_{f,0})/(1-\phi_f)$. The deformation-gradient tensor can be written as (_c.f._, eq. [eq:fdef])
$$\mathbf{F}^{-1}=\left(1-\frac{\partial u_s}{\partial x}\right)\mathbf{e}_x\otimes\mathbf{e}_x+\mathbf{e}_y\otimes\mathbf{e}_y+\mathbf{e}_z\otimes\mathbf{e}_z,$$
where the Jacobian determinant is
$$J=\left(1-\frac{\partial u_s}{\partial x}\right)^{-1}.$$
The displacement field is linked to the porosity field via eq. ,
$$\frac{\partial u_s}{\partial x}=\frac{\phi_f-\phi_{f,0}}{1-\phi_{f,0}}.$$
For uniaxial flow, eqs. , , and become
$$\frac{\partial\phi_f}{\partial t}+\frac{\partial}{\partial x}\left[\phi_f\,q-\frac{k(\phi_f)}{\mu}\,(1-\phi_f)\,\frac{\partial p}{\partial x}\right]=0,$$
with $\partial q/\partial x=0$, and where the total volume flux $q$ is a function of time only. These equations constitute a kinematically exact model for any constitutive behavior in the solid skeleton. This model has been derived previously, e.g., as eq.
( 44 ) of ref .we take the constitutive response of the solid skeleton to be hencky elastic , in which case the associated effective stress is .\ ] ] although the displacement and the strain are uniaxial , the stress has three nontrivial components due to the poisson effect under lateral confinement .if the material were laterally unconfined , the stress would be uniaxial and the strain would have three nontrivial components .we link the mechanics of the skeleton with those of the fluid by combining eq . with eqs . and to obtain \\ & = \frac{\partial}{\partial{x}}\left [ \mathcal{m}\,\left(\frac{1-\phi_f}{1-\phi_{f,0}}\right)\ln\left(\frac{1-\phi_{f,0}}{1-\phi_f}\right)\right ] .\end{split}\ ] ] with appropriate boundary conditions , eqs . provide a closed model for the evolution of the porosity .for uniaxial deformation , the hencky stress and strain depend only on and can therefore be written directly in terms of .in fact , this is the case for any constitutive law since itself depends only on is , the deformation can be completely characterized by the local change in porosity .this is a special feature of uniaxial deformation : the effective stress can be written exclusively as a function of porosity , , _ for any constitutive law_. as a result , the framework of large - deformation elasticity can be avoided in a uniaxial setting by simply positing or measuring the function ( _e.g. _ , * ? ? ?this approach is simple and appealing , but has the obvious disadvantage that it can not be readily generalized to more complicated loading scenarios .it also has the more subtle disadvantage that even in the uniaxial case it is unable to provide answers to basic questions about the 3d state of stress within the material .for example : how much stress does the material apply to the lateral confining walls ? what is the maximum shear stress within the material ?the left and right boundaries of the solid skeleton are located at and , respectively , and we take without loss of generality ( fig . [fig : rect ] ) .we then have four kinematic boundary conditions for the skeleton from the fact that the left and right edges are material boundaries : two on displacement , and two on velocity , we use the former pair in calculating the displacement field from the porosity field , and the latter pair in deriving boundary conditions for porosity .we take the pressure drop across the material to be imposed and equal to , and without loss of generality we then write we further assume that a mechanical load is applied to the left edge in the form of an imposed effective stress . the effective stress at the right edge can then be derived by integrating eq . from to to arrive at , which is simply a statement of macroscopic force balance in the absence of inertia or body forces . from result and the pressures at and , we then have that since the effective stress is directly related to the porosity in this geometry ( see [ ss : mech_rect_large ] ) , eqs .provide and and constitute dirichlet conditions . for hencky elasticity, these values can be readily calculated from where denotes the lambert w function ( solves ) .when the pressure drop is imposed , the volume flux through the material will vary in time and this appears explicitly in eqs . .one approach to deriving an expression for is to rearrange and integrate eq ._ , eqs .( 21)(23 ) of * ? ? ?* ) , but this is awkward in practice since it requires explicit calculation of from via eq . , which is otherwise unnecessaryalternatively , we can evaluate eq . 
at to obtain \bigg|_{x = l } = -\left[\frac{k(\phi_f)}{\mu}\,\frac{\mathrm{d}\sigma^\prime_{xx}}{\mathrm{d}\phi_f}\,\frac{\partial{\phi_f}}{\partial{x}}\right]\bigg|_{x = l},\ ] ] which we supplement with the dirichlet condition above on .equation is straightforward to implement . for fluid - driven deformation, an imposed pressure drop will eventually lead to a steady state in which the solid is stationary , the fluid flow is steady , and the volume flux is constant , .imposing instead this same flux from the outset and allowing the pressure drop to vary must eventually lead to precisely the same steady state , in which . as a result ,the only difference between these two conditions is in the dynamic approach to steady state .we focus on the pressure - driven case below , but we provide analytical and numerical solutions that are valid for both cases and we explore the relationship between and at steady state . note that , for an imposed flux , the pressure at is unknown and the dirichlet condition at must be replaced by the neumann condition that . for an incompressible solid skeleton , conservation of solid volumerequires that and it is straightforward to confirm that this is identically satisfied by eqs . and .if any of these relationships are approximated , the resulting model will no longer be volume conservative .conservation of mass or volume is typically not a primary concern in solid mechanics because most engineering materials are only slightly compressible and typically experience very small deformations .it becomes more important in poromechanics because porous materials are much more compressible than nonporous ones since the skeleton can deform through rearrangement of the solid grains .this rearrangement allows for large volume changes through large changes in the pore volume , which are then strongly coupled to the fluid mechanics .we now derive the linear model .we do this by linearizing the nonlinear model above , so we write the results in terms of the eulerian coordinate . as described in [ ss : lpe_discussion ] , however , the spatial coordinate in the linear model is ambiguous : simply replacing the eulerian coordinate with the lagrangian coordinate in the expressions below will result in a model that is still accurate to leading order in .whereas the eulerian interpretation of this model ( with ) will satisfy the boundary conditions at only at first order , the resulting lagrangian interpretation ( with ) will satisfy them exactly at .however , the eulerian interpretation will respect the relationship between porosity and displacement exactly since this relationship is linear in the eulerian coordinate ( _ c.f ._ , eq . [ eq : phi_to_ux_linear ] ) , whereas the lagrangian interpretation will respect this relationship only at first order . adopting the assumption of infinitesimal deformations and linearizing in the strain , eqbecomes note that this is identical to eq . , and is therefore exact .this is another special feature of uniaxial deformation : the exact relationship between and is linear .this does not hold for even simple biaxial deformations . from eqs . and, we further have [ eq : fluidmech_rect_linear ] &\approx{}0 , \label{eq : fluidmech_rect_pde_linear}\\ \frac{\partial{p}}{\partial{x}}&=\frac{\partial{\sigma^\prime_{xx}}}{\partial{x}}. \label{eq : equilibrium_rect_linear } \end{aligned}\ ] ] comparing eq . 
with eq .again highlights the fundamentally different mathematical character of the linear model as compared to the nonlinear model .we take the constitutive response of the solid skeleton to be linear elastic , in which case the associated effective stress tensor is .\ ] ] combining this with eqs . and , we obtain = \frac{\partial}{\partial{x } } \left[\mathcal{m}\,\left(\frac{\phi_f-\phi_{f,0}}{1-\phi_{f,0}}\right)\right].\ ] ] with appropriate boundary conditions , eqs . provide a closed linear model for the evolution of the porosity .the kinematic conditions on the solid displacement ( eqs .[ eq : u_bc ] ) become where the distinction between and does not enter at first order .the latter condition is used when calculating the displacement field from the porosity field , and the former then provides an expression for .neither is necessary when solving for the porosity field itself . for an imposed pressure drop ,the dynamic conditions on the pressure and the stress become and these again provide dirichlet conditions on the porosity via the elasticity law , with these conditions on porosity , the linear model is fully specified .it is not necessary to calculate the total flux because it does not appear explicitly in the linear conservation law , but the flux can be calculated at any time from when the flux is imposed instead of the pressure drop , eq .can be rearranged to provide a neumann condition at that replaces the dirichlet condition above .the pressure drop is then unknown , and must be calculated by rearranging eqs . and .we consider the natural scaling where the characteristic permeability is and the classical poroelastic time scale is .the problem is then controlled by one of two dimensionless groups that measure the strength of the driving stresses relative to the stiffness of the skeleton : for deformation driven by an applied mechanical load , or for deformation driven by fluid flow with a constant pressure drop . for an imposed flux ,the relevant dimensionless group is instead .the problem also depends on the initial porosity .when the permeability is constant , can be scaled out by working instead with the normalized change in porosity , when the permeability is allowed to vary , the initial value can not be eliminated because the permeability must depend on the current porosity rather than on the change in porosity .the discussion below uses dimensional quantities for expository clarity , but we present the results in terms of dimensionless parameter combinations to emphasize this scaling .each of the models described above can ultimately be written as a single parabolic conservation law for ; this will be linear and diffusive for the linear model , and nonlinear and advective diffusive for the intermediate and nonlinear models .the boundary condition at the left is a dirichlet condition for all of the cases considered here , and the boundary condition at the right is either dirichlet for flow driven by a imposed pressure drop or neumann for flow driven by an imposed fluid flux . 
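The uniaxial constitutive relation and the characteristic scales described above lend themselves to a compact numerical summary. The sketch below is our own illustration under stated assumptions: it takes the uniaxial Hencky relation in the form $\sigma'_{xx}=\mathcal{M}\,J^{-1}\ln J$ with $J=(1-\phi_{f,0})/(1-\phi_f)$ (which follows from the general Hencky law, written up to notational differences as $J\boldsymbol{\sigma}'=\Lambda\,\mathrm{tr}(\boldsymbol{\varepsilon}_H)\mathbf{I}+2G\,\boldsymbol{\varepsilon}_H$ with $\boldsymbol{\varepsilon}_H=\tfrac12\ln\mathbf{B}$ and $\mathcal{M}=\Lambda+2G$), inverts it with the Lambert W function as indicated in the text, and evaluates the poroelastic time scale assuming $t_\mathrm{pe}=\mu L^2/(k_0\mathcal{M})$. The parameter values are illustrative only, and the exact prefactors should be checked against the source.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def J_of_phi(phi_f, phi_f0):
    """Jacobian determinant from the true porosity (incompressible grains)."""
    return (1.0 - phi_f0) / (1.0 - phi_f)

def hencky_stress_uniaxial(phi_f, phi_f0, M):
    """Uniaxial effective stress for Hencky elasticity: sigma'_xx = M ln(J)/J."""
    J = J_of_phi(phi_f, phi_f0)
    return M * np.log(J) / J

def phi_from_stress_hencky(sigma, phi_f0, M):
    """Invert sigma'_xx = M ln(J)/J for phi_f via the Lambert W function.

    Writing u = ln J, the relation reads u*exp(-u) = sigma/M; the relevant
    (compressive, J < 1) branch gives u = -W(-sigma/M).
    """
    J = np.exp(-np.real(lambertw(-sigma / M)))
    return 1.0 - (1.0 - phi_f0) / J

# Characteristic scales (illustrative values only)
mu, k0, M, L = 1e-3, 1e-12, 1e4, 1e-2   # Pa*s, m^2, Pa, m
t_pe = mu * L**2 / (k0 * M)             # assumed poroelastic time scale
sigma_star = -0.25 * M                  # applied effective stress (compression)
dp_star = 0.25 * M                      # applied pressure drop

print(t_pe, sigma_star / M, dp_star / M)
print(phi_from_stress_hencky(sigma_star, 0.5, M))
\end{verbatim}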
for the nonlinear and intermediate models, we must also solve for the unknown position of the free left boundary .below , we study these models dynamically and at steady state in the context of two model problems .we now consider the uniaxial mechanical compression of a porous material ( fig .[ fig : rect]b ) , in which an effective stress is suddenly applied to the left edge of the material at and the fluid pressure at both edges is held constant and equal to the ambient pressure , .the process by which the material relaxes under this load , squeezing out fluid as the pore volume decreases , is known as _consolidation_. the consolidation problem is a classical one , with direct application to the engineering of foundations ; it has been studied extensively in that context and others ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?force balance requires that the total stress everywhere in the material must immediately support the applied load , for .however , the effective stress can only contribute through strain in the solid skeleton , and the solid skeleton can only deform by displacing fluid , and this is not instantaneous . as a result , the fluid pressure must immediately jump to support the entire load : . in soil and rockmechanics , this is known as an _ undrained _ response : the mechanical response of a fluid - solid mixture under conditions where the fluid content is fixed . over time , this high pressure relaxes as fluid flows out at the boundaries , and the effective stress supports an increasing fraction of the load as the material is compressed .when the process is finished , the effective stress will support the entire load and the fluid pressure will have returned to its ambient value .this is classical consolidation theory .when the consolidation process is finished , the solid and fluid are both stationary , , and the fluid pressure is uniform , . as a result , the steady state is determined entirely by the boundary conditions and the elastic response of the skeleton ; the fluid plays no role . in soil and rock mechanics ,this is known as the _ drained _ response of the material . without a fluid pressure gradient, mechanical equilibrium implies that the effective stress and the porosity must be uniform , and ( eqs .[ eq : p_to_phi_large_rect ] and [ eq : p_to_phi_small_rect ] ) .since the fluid plays no role , the nonlinear- and nonlinear- models are identical at steady state . for both of these, we have that [ eq : mech_rect_large ] where the jacobian determinant is found by inverting with the aid of eq .[ eq : sig_to_phi ] , and the deflection is the change in length per unit reference length , usually known as the `` engineering '' or _ nominal_ strain . for the linear model , we instead have that [ eq : mech_rect_small ] we compare these results in fig .[ fig : mech_rect_ss_v_x ] , showing the linear model ( lagrangian interpretation ) , the nonlinear- model , and the nonlinear- model ( see [ ss : models ] ) .we include the latter for completeness but , as mentioned above , it is identical to the nonlinear- model at steady state since there is no flow . 
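To make the comparison that follows concrete, here is a minimal sketch of the steady-state (drained) consolidation response under an applied effective stress $\sigma^{\star\prime}<0$, using our reading of the expressions above: the linear model gives a uniform nominal strain of magnitude $|\sigma^{\star\prime}|/\mathcal{M}$, while the Hencky model gives a uniform Jacobian $J^\star$ from the Lambert-W inversion quoted earlier, with deflection $\Delta^\star/L=1-J^\star$. These formulas are assumptions to be checked against the source equations.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def consolidation_deflection(sigma_star, M):
    """Steady-state deflection per unit reference length, Delta*/L (sketch).

    sigma_star < 0 is the applied compressive effective stress and M is the
    oedometric modulus. Returns (linear model, Hencky model).
    """
    linear = -sigma_star / M                               # uniform strain
    J_star = np.exp(-np.real(lambertw(-sigma_star / M)))   # M ln(J)/J = sigma*
    hencky = 1.0 - J_star
    return linear, hencky

for load in (0.1, 0.5, 1.0):          # |sigma*'| / M
    lin, hen = consolidation_deflection(-load, 1.0)
    print(f"|sigma*'|/M = {load:.1f}:  linear {lin:.3f},  Hencky {hen:.3f}")
\end{verbatim}

Consistent with the discussion below, the Hencky (nonlinear) deflection is substantially smaller than the linear prediction, and increasingly so as the load grows.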
in all cases ,the only nontrivial component of the deformation is the displacement , and this is simply linear in .the difference between the models lies in the amount of deformation that results from a given load : the nonlinear and intermediate models deform much less than the linear model , and increasingly so for larger compressive loads ( fig .[ fig : mech_rect_ss_v_sig ] ) .the relative error between the linear and nonlinear models is of size , which is consistent with the assumptions of linear ( poro)elasticity . to highlight the origin of this error , we further compare these two models with the intermediate model , in which we replace hencky elasticity with linear elasticity in the nonlinear kinematic framework ( see [ ss : models ] ; fig .[ fig : mech_rect_ss_v_sig ] ) .this comparison illustrates the fact that the majority of the error associated with the linear model results in this case from the kinematics of the deformation , and not from nonlinearity in the elasticity law .one source of kinematic nonlinearity at steady state is the cumulative nature of strain , where increments of displacement correspond to increasingly larger increments of strain as the material is compressed because the overall length decreases .the opposite occurs in tension : the nonlinear model deforms much more than the linear model because increments of displacement correspond to increasingly smaller increments of strain as the material is stretched .another source of kinematic nonlinearity is the moving boundary , since the linear model satisfies the boundary conditions there only at leading order in .the nonlinear model implies that the material can support an arbitrarily large compressive stress , with approaching unity ( _ i.e. _ , the length of the deformed solid approaching zero ) as the compressive stress diverges .closer inspection reveals that the porosity will vanish when the deflection reaches , which occurs at a finite compressive stress .one would expect the stiffness of the skeleton to change relatively sharply across the transition from compressing pore space to compressing solid grains , and significant microstructural damage would likely occur _ en route _ ( _ e.g. _ , grain crushing)a material - specific constitutive model would be necessary to capture this behavior .this behavior is also important and problematic from the perspective of the fluid mechanics , which can become nonphysical unless the permeability law accounts appropriately for the changing porosity ( see [ ss : darcy ] above ) . to explore the dynamics of consolidation ,we solve the nonlinear and intermediate models numerically using a finite - volume method with an adaptive grid ( appendix [ app : s : fv_rect ] and ) , and we solve the linear model analytically via separation of variables . the well - known analytical solution can be written [ eq : consolidation_solution ] ^{-\frac{(n\pi)^2t}{t_\mathrm{pe}}}\sin\left(\frac{n\pi{}x}{l}\right ) \quad\mathrm{and } \\ \frac{u_s(x , t)}{l } & = -\left(\frac{\phi_f^\star-\phi_{f,0}}{1-\phi_{f,0}}\right)\left\{1-\frac{x}{l}+\sum_{n=1}^\infty \frac{2}{(n\pi)^2}\big[1+(-1)^{n+1}\big]e^{-\frac{(n\pi)^2t}{t_\mathrm{pe}}}\left[(-1)^n-\cos\left(\frac{n\pi{}x}{l}\right)\right ] \right\ } , \end{aligned}\ ] ] where , as in eq . 
, and all other quantities of interestcan readily be calculated from the porosity and displacement fields .note that , as in the steady state , the eulerian interpretation of eqs .( as written ) satisfies the boundary conditions at the moving boundary only to leading order in .the lagrangian interpretation ( replacing with ) rigorously satisfies the boundary conditions at , but at the expense of exact conservation of mass ( eq .[ eq : phi_to_ux_linear ] ) . however , both interpretations predict the same deflection , which is often the quantity of primary interest in engineering applications ( eulerian : ; lagrangian : ) . in fig .[ fig : mech_rect_dy_v_x ] , we compare the dynamics of consolidation for the linear model ( lagrangian interpretation ) , the nonlinear- model , and the nonlinear- model . in all cases, the skeleton is initially relaxed in the middle and very strongly deformed at the edges , from which the fluid can easily escape .the deformation propagates inward toward the middle from both ends over time , and the pressure decays as the skeleton supports an increasing fraction of the total stress .the nonlinear- model exhibits a more rounded deformation profile than either the linear model or the nonlinear- model , which is a result of the fact that the reduced permeability in the compressed outer regions slows and spreads the relaxation of the pressure field .the two nonlinear models ultimately arrive at the same steady state , which is determined strictly by the elasticity law ( _ c.f ._ , fig .[ fig : mech_rect_ss_v_x ] ) .the nonlinear models deform much less than the linear model overall .we examine the rate of deformation in fig .[ fig : mech_rect_dy_v_t ] .all three models relax exponentially toward their respective steady states , but the rate of relaxation depends very strongly on the magnitude of the applied effective stress and on the nonlinearities of the model .specifically , the nonlinear- model relaxes much faster than the linear model , whereas the nonlinear- model relaxes much more slowly than the linear model .the relaxation time scale , which is the characteristic time associated with the decaying exponentials shown in fig .[ fig : mech_rect_dy_v_t ] , is constant for the linear model , but decreases with for the nonlinear- model and increases strongly with for the nonlinear- model ( fig .[ fig : mech_rect_dy_v_sig ] ) .the time scales of the nonlinear models differ from that of the linear model by severalfold for moderate strain .we now consider the uniaxial deformation of a porous material driven by a net fluid flow through the material from left to right ( fig .[ fig : rect]c ) , which compresses the material against the rigid right boundary .this problem has attracted interest since the 1970s for applications in filtration and the manufacturing of composites ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , in tissue mechanics ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ) , and as a convenient model problem in poroelasticity ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? 
?we assume that a pressure drop is suddenly applied across the material at , and we write this as and without loss of generality .we also assume for simplicity that the left edge is unconstrained , , but our models and solutions do not require this .force balance then leads to , implying that the right edge of the skeleton is compressed against the right boundary .as in the consolidation problem , the deformation will evolve toward a state in which the solid is stationary .unlike in the consolidation problem , fluid flow is central to this steady state because the flow drives the deformation .the resulting deformation field is highly nonuniform because it must balance the internal pressure gradient . as discussed in [ sss : rect_large_bcs ] above , the same steady state can be achieved when the flow is instead driven by an imposed fluid flux ; we focus on the case of an applied pressure drop here for simplicity , but our models and solutions are general and can also be used for the case of an imposed flux .the deformation will eventually reach a state in which the flow is steady ( and ) and the solid is stationary ( and ) .we present in appendix [ app : s : proc_steady_rect ] a general procedure for constructing steady - state solutions to the kinematically exact model for arbitrary elasticity and permeability laws , and we provide the key results for the two nonlinear models and the two intermediate models in appendix [ app : s : ss_integrals ] .below , we discuss the results for the nonlinear- model and the linear model . for the nonlinear- model , the pressure and effective stress fieldscan be calculated by integrating eq . or with , since the permeability is constant , the pressure drops linearly from to . the effective stress must then also vary linearly in , rising in magnitude from to .the total stress is uniform and equal to , and this is supported entirely by the fluid at the left and entirely by the skeleton at the right .the unknown flux can be calculated directly from ( see appendix [ app : s : proc_steady_rect ] ) where is the jacobian determinant at , which is readily calculated by inverting using the elasticity law ( eq . [ eq : sig_to_phi ] ) . for an imposed flux , eq .should instead be solved for , which will then provide .the unknown deflection can then be calculated by evaluating the pressure at or the effective stress at , both of which lead to we can then calculate the jacobian determinant field from the effective stress field using eq . , ^{-1}\mathrm{w}\left[-\frac{\mu{}q^\star{}l}{k_0\mathcal{m}}\left(\frac{x}{l}-\frac{\delta^\star}{l}\right)\right],\ ] ] where is again the lambert w function .the porosity field is again given by , and , finally , the displacement field is .\end{split}\ ] ] the linear model is , of course , much simpler .the pressure and effective stress fields are similar to those for the nonlinear- model , evaluating the pressure at or the effective stress at immediately provides the relationship between the flux and the pressure drop , the porosity field is calculated from the effective stress field and linear elasticity , and the displacement field is calculated by integrating the porosity field , since the stress and the strain increase linearly from left to right , the displacement is quadratic .finally , the deflection is simply given by , we compare these predictions qualitatively in fig .[ fig : flow_rect_ss_v_x ] , including also the results for the nonlinear- model . 
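As a concrete check on the linear-model expressions just described, the sketch below evaluates the steady state of the flow-driven problem in the linear theory. We read the text as implying a linear pressure profile $p=\Delta p\,(1-x/L)$, a linear effective stress $\sigma'_{xx}=-\Delta p\,x/L$, a flux $q^\star=k_0\Delta p/\mu L$, a quadratic displacement profile, and a deflection $\Delta^\star/L=\Delta p/2\mathcal{M}$; these specific formulas are our inference and should be verified against the source. The nonlinear counterparts involve the Lambert W function, as noted above, and are not reproduced here.

\begin{verbatim}
import numpy as np

def linear_flow_driven_steady_state(dp, M, k0, mu, L, phi_f0, n=101):
    """Steady state of flow-driven compression in the linear model (sketch)."""
    x = np.linspace(0.0, L, n)
    p = dp * (1.0 - x / L)                     # fluid pressure: dp at left, 0 at right
    sigma = -dp * x / L                        # effective stress: 0 at left, -dp at right
    q = k0 * dp / (mu * L)                     # steady flux from Darcy's law
    phi = phi_f0 + (1.0 - phi_f0) * sigma / M  # porosity from linear elasticity
    u = (dp / (2.0 * M)) * (L - x**2 / L)      # quadratic displacement, u(L) = 0
    deflection = dp * L / (2.0 * M)            # u(0), the left-edge displacement
    return x, p, sigma, q, phi, u, deflection
\end{verbatim}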
as with consolidation ,the nonlinear models deform less than the linear model in all cases .unlike with consolidation , the permeability law has a strong impact on the steady state : the nonlinear- model deforms less than the nonlinear- model , and exhibits more strongly nonlinear behavior .we compare the predictions for the final deflection and the resulting flux in fig .[ fig : flow_rect_ss_v_dp ] , including also the two intermediate models .although all of the nonlinear and intermediate models predict a much smaller deflection than the linear model , the nonlinear- and intermediate- models predict a larger steady - state flux than the linear model , whereas the nonlinear- and intermediate- models predict a much smaller steady - state flux .this occurs because the steady - state flux results from two competing physical effects .as the driving pressure drop increases , we expect the deflection to increase . as the deflection increases , the overall length of the skeleton decreases and , since the pressure drop is fixed , the pressure gradient across the material increases . as a result, we expect from darcy s law that the flux will scale like . for constant permeability, we then expect the flux to increase faster than linearly with , and this is indeed what we see for the nonlinear- and intermediate- cases .the changing length is a kinematic nonlinearity that is neglected in the linear model , so is simply proportional to despite the fact that is actually larger than the nonlinear or intermediate predictions .however , these models ignore the fact that the porosity decreases as the deformation increases .when the permeability is deformation dependent , this decreases very strongly with the porosity and overwhelms the effect of the changing length , leading to a strongly slower - than - linear growth of with , and this is indeed what we see for the nonlinear- and intermediate- models .we next focus on the dynamic evolution of the deformation .we again solve the nonlinear and intermediate models numerically ( appendix [ app : s : fv_rect ] and ) , and we again solve the linear model analytically via separation of variables . 
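Before quoting that analytical solution, we note that the same linear problem can also be integrated numerically. Per the discussion above, the linear model for both problems reduces to a dimensionless diffusion equation for the normalized porosity change, $\partial\theta/\partial\tau=\partial^2\theta/\partial\xi^2$ with $\tau=t/t_\mathrm{pe}$, $\xi=x/L$, and $\theta=(\phi_f-\phi_{f,0})/(1-\phi_{f,0})=\sigma'_{xx}/\mathcal{M}$; this reduction and the boundary values below are our paraphrase and should be checked against the source. A minimal explicit finite-difference sketch is:

\begin{verbatim}
import numpy as np

def solve_linear_uniaxial(theta_left, theta_right, n=201, tau_end=0.5):
    """Explicit FTCS solver for the dimensionless linear flow equation.

    d(theta)/d(tau) = d^2(theta)/d(xi)^2 on 0 < xi < 1, theta(xi, 0) = 0,
    with Dirichlet values theta_left and theta_right at the two edges.
    """
    xi = np.linspace(0.0, 1.0, n)
    dxi = xi[1] - xi[0]
    dtau = 0.4 * dxi**2            # explicit stability limit is 0.5 * dxi^2
    theta = np.zeros(n)
    theta[0], theta[-1] = theta_left, theta_right
    tau = 0.0
    while tau < tau_end:
        theta[1:-1] += dtau * (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dxi**2
        tau += dtau
    return xi, theta

# Consolidation: sigma*'/M = -0.25 applied at both (permeable) edges.
xi, theta_cons = solve_linear_uniaxial(-0.25, -0.25)
# Flow-driven compression: relaxed at the left, sigma' = -dp at the right.
xi, theta_flow = solve_linear_uniaxial(0.0, -0.25)
\end{verbatim}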
the analytical solution can be written ^{-\frac{(n\pi)^2t}{t_\mathrm{pe}}}\sin\left(\frac{n\pi{}x}{l}\right)\right\ } , \\\frac{u_s(x , t)}{l } & = -\left(\frac{\phi_f^\star-\phi_{f,0}}{1-\phi_{f,0}}\right)\left\{\frac{1}{2}\left[1-\left(\frac{x}{l}\right)^2\right]-\sum_{n=1}^\infty \frac{2}{(n\pi)^2}\big[(-1)^n\big]e^{-\frac{(n\pi)^2t}{t_\mathrm{pe}}}\left[(-1)^n-\cos\left(\frac{n\pi{}x}{l}\right)\right ] \right\ } , \end{aligned}\ ] ] where and .we compare these solutions qualitatively in fig .[ fig : flow_rect_dy_v_x ] , including also the results for the nonlinear- case .note once again that the spatial coordinate in the linear model is ambiguous , and we again adopt a lagrangian interpretation .when the flow starts , the fluid and the solid initially travel together to the right .the pressure remains uniform throughout most of the skeleton since there is no net flux of fluid _ through _ the skeleton , but there is a very sharp pressure gradient at the right edge where the solid is necessarily stationary .the motion of the solid toward the right boundary gradually compresses the right edge of the skeleton against the boundary , and this motion slows over time as the effective stress builds from right to left .the motion of the solid eventually stops and the deformation reaches steady state when the strain in the skeleton is such that the gradient in effective stress balances the gradient in pressure . in this steady state ,the skeleton remains completely relaxed at the left edge and is the most compressed at the right edge . from left to right , there is a gradual increase in deformation and magnitude of effective stress , and a gradual decrease in pressure and porosity .both here and in the consolidation problem , the deformation evolves with a classic boundary - layer structure that may be susceptible to a matched asymptotic approach with the small parameter .the prospect of more accurately capturing the kinematic nonlinearity while retaining some degree of analytical tractability is a promising one for future work . to examine the time scale of the deformation , we plot the evolution of the deflection toward its final value as a proxy for the global approach to steady state ( fig . [fig : flow_rect_dy_v_t ] ) . as for consolidation , we find that the deflection approaches steady state exponentially in all cases , and that the nonlinear- and nonlinear- models evolve more quickly and more slowly than the linear model , respectively .we also investigate the impact of on the time scale ( fig .[ fig : flow_rect_dy_v_dp ] ) .we find that the general trend is the same as in consolidation ( _ c.f ._ , fig .[ fig : mech_rect_dy_v_sig ] ) , but that the magnitude of the effect is smaller that is , the time scale during fluid - driven deformation depends less strongly on than the time scale during consolidation depends on .this is most likely due to the fact that the steady state is uniform in consolidation , with potentially large compression throughout the entire material , whereas the steady state in fluid - driven deformation is highly nonuniform , with completely relaxed material at the left and highly compressed material at the right .we have provided an overview and discussion of a complete eulerian framework for the arbitrarily large deformation of a porous material . 
in doing so ,our main goals were to ( a ) elucidate the key aspects of the rigorous model , ( b ) provide physical insight into the subtleties of poromechanical coupling , and ( c ) investigate the qualitative and quantitative nature of the error introduced by linearizing this model .these points are often obscured by the powerful mathematical and computational machinery that is typically brought to bear on these problems .we intend that our approach here can serve as a concise , coherent , and approachable introduction to a large body of work in classical continuum and poromechanics .we believe that this overview now provides a rostrum to facilitate further theoretical advances and new applications in soil mechanics , hydrogeology , biophysics , and biomedical engineering .we have also applied this theory to two canonical model problems in uniaxial deformation , one in which deformation drives fluid flow and one in which fluid flow drives deformation . in the former ,the consolidation problem , an applied effective stress squeezes fluid from a porous material .although the steady state is simple and controlled entirely by the solid mechanics , the evolution of the deformation is controlled by the rate at which fluid can flow through the material and out at the boundaries ; we showed that the resulting rate of relaxation is impacted strongly by kinematic nonlinearity and even more strongly by deformation - dependent permeability . in the latter problem , fluid - driven deformation, a net throughflow compresses the material against a rigid permeable boundary .the steady state is highly nonuniform , controlled by the steady balance between the gradient in pressure and the gradient in stress .we showed that both the evolution of the deformation and the deflection and fluid flux at steady state are impacted strongly by kinematic nonlinearity and , again , even more strongly by deformation - dependent permeability .in the interest of emphasizing the nonlinear kinematics of large deformations , we have avoided complex , material - specific constitutive models .hencky elasticity captures the full kinematic nonlinearity of large deformations in a very simple form , and we believe that it provides a reasonable compromise between rigor and complexity for moderate deformations .however , real materials will always behave in a complex , material - specific way when subject to sufficiently large strains , and the framework considered here is fully compatible with other constitutive models .similarly , we have considered one specific case of deformation - dependent permeability : the normalized kozeny - carman formula .we have shown that this typically amplifies the importance of kinematic nonlinearity and has striking qualitative and quantitative impacts on poromechanical behavior .although this example captures the key qualitative features of the coupling between deformation and permeability , material - specific relationships will be needed to provide quantitative predictions for real materials . in describing the kinematics of the solid skeleton , we have adopted the single assumption that the constituent material is incompressible .this has clear relevance to soil mechanics , biophysics , and any other situation where the pressure and stress are small compared to the bulk modulus of the solid grains ( _ e.g. _ , about 3040 gpa for quartz sand ) .this assumption can be relaxed , although doing so substantially complicates the large - deformation theory ( _ e.g. , _ * ? ? ?* ; * ? ? ?* ; * ? ? 
?we have also focused on the case of a single , incompressible pore fluid , but the theory is readily generalized to a compressible or multiphase fluid system ( _ e.g. _ , * ? ? ?* ; * ? ? ?uniaxial deformations have provided a convenient testbed for our purposes here , but they are unusual in several respects that do not readily generalize to multiaxial scenarios .first , a uniaxial deformation can be fully characterized by the change in porosity , ; this simplifies the analysis , but it is not the case for even simple biaxial deformations .second , the cross section normal to the flow does not deform or rotate , which greatly simplifies the nonlinearity of poromechanical coupling .finally , the exact relationship between displacement and porosity is linear ; this is again not the case for even simple biaxial deformations .we expect kinematic nonlinearity to play an even stronger role for multiaxial deformations .the authors gratefully acknowledge support from the yale climate & energy institute .erd acknowledges support from the national science foundation ( grant no .cbet-1236086 ) .jsw acknowledges support from yale university , the swedish research council ( vetenskapsrdet grant 638 - 2013 - 9243 ) , and a royal society wolfson research merit award .here we briefly summarize the lagrangian approach to large - deformation poroelasticity , a thorough discussion and derivation of which is provided by . in a lagrangian frame ,it is natural to work with so - called _ nominal _ quantities , which measure the current stresses , fluxes , _ etc . _acting on or through the reference areas or volumes .for example , the nominal porosity measures the current fluid volume per unit reference total volume , and is related to the true porosity via .we denote the gradient and divergence operators in the lagrangian coordinate system by and , respectively , to distinguish them from the corresponding operators in the eulerian coordinate system .the lagrangian displacement field is where is the lagrangian ( material ) coordinate and is the current position of the skeleton that was initially at position .the corresponding deformation - gradient tensor is the jacobian determinant is then related to by where is the reference porosity field , which we again take to be undeformed .continuity requires that where is the nominal flux of fluid through the solid skeleton .the nominal flux is related to the pressure gradient via darcy s law , where the portion of the prefactor converts the true flux to the nominal flux , and the remaining factor of converts the eulerian gradient to the lagrangian one .mechanical equilibrium requires that where is the nominal total stress , which is related to the true total stress via the nominal effective stress is then given by combining eqs . , we finally have [ eq : lagrangian ] & = 0 \quad\mathrm{and } \\\mathrm{div}(\mathbf{s}^\prime ) & = \mathrm{div}(j\mathbf{f}^{-\mathsf{t}}p ) . \end{aligned}\ ] ] supplemented with a constitutive law for the solid skeleton ( relating to ) and appropriate boundary conditions , eqs .constitute a complete formulation of poroelasticity in a lagrangian framework .the lagrangian formulation is more suitable for computation than the eulerian formulation since the domain is fixed , but the underlying physical structure is substantially more opaque . note that the permeability must remain a function of the true porosity , .linearizing eqs . 
in the strain and reverting from the nominal porosity to the true porosity leads to & \approx{}0 \quad\mathrm{and} \\ \mathrm{div}(\mathbf{\sigma}^\prime) & \approx{}\mathrm{grad}(p), \end{aligned}\] which coincide with eqs. and , respectively, but replacing with . note that the nominal porosity and the true porosity differ _at leading order_: where the reference fields are always precisely equivalent, , because they must necessarily refer to the same reference state. the eulerian (eulerian-almansi) and lagrangian (green-lagrange) finite-strain tensors are and , respectively. linear elasticity, as described above, is effectively a linearized eulerian constitutive law, in which the stress is linear in the eulerian infinitesimal strain \mathbf{i}-\frac{1}{2}\big(\mathbf{f}^{-1}+\mathbf{f}^{-\mathsf{t}}\big). however, it is equally valid to write a linearized lagrangian constitutive law, in which the stress is linear in the lagrangian infinitesimal strain \frac{1}{2}\big(\mathbf{f}+\mathbf{f}^{\mathsf{t}}\big)-\mathbf{i}. the former quantity is nonlinear in a lagrangian frame, whereas the latter is nonlinear in an eulerian frame. we used the linearized eulerian law above, but in a lagrangian frame it would be more appropriate to use the linearized lagrangian law. the results are equivalent at leading order in the strain ( ), but they diverge as strains become non-negligible. to solve eq. numerically, we formulate a finite-volume method on an adaptive grid. we provide a reference implementation in the supplemental material. at any time, the domain extends from to . we divide this domain into cells of equal width .
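as a rough illustration of this kind of scheme (not the reference implementation mentioned above, which uses an adaptive grid and the full nonlinear model), the following sketch evolves a simplified porosity-diffusion equation on a fixed, uniform grid, with a normalized kozeny-carman factor entering the diffusivity. the governing equation, the diffusivity law, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

def kozeny_carman(phi, phi0):
    """Normalized Kozeny-Carman factor k(phi)/k(phi0)."""
    return (phi**3 / (1.0 - phi)**2) / (phi0**3 / (1.0 - phi0)**2)

def step(phi, dx, dt, phi0, D0=1.0):
    """One explicit finite-volume update with zero-flux boundaries."""
    D = D0 * kozeny_carman(phi, phi0)             # cell-centred diffusivity
    D_face = 0.5 * (D[:-1] + D[1:])               # interior face values
    flux = -D_face * np.diff(phi) / dx            # Darcy-like face fluxes
    flux = np.concatenate(([0.0], flux, [0.0]))   # no flow through either end
    return phi - dt * np.diff(flux) / dx

# example usage: relax an initially non-uniform porosity profile
N, L, phi0 = 100, 1.0, 0.5
x = (np.arange(N) + 0.5) * L / N
phi = phi0 + 0.1 * np.exp(-((x - 0.3 * L) / 0.05) ** 2)
dx = L / N
dt = 0.2 * dx**2 / kozeny_carman(phi.max(), phi0)  # explicit stability margin
for _ in range(20000):
    phi = step(phi, dx, dt, phi0)
print("mean porosity (conserved by the zero-flux scheme):", phi.mean())
```

with zero-flux boundaries the update conserves the mean porosity, which gives a quick sanity check on the discretization before any moving-boundary or adaptive-grid machinery is added.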
compressing a porous material will decrease the volume of the pore space , driving fluid out . similarly , injecting fluid into a porous material can expand the pore space , distorting the solid skeleton . this poromechanical coupling has applications ranging from cell and tissue mechanics to geomechanics and hydrogeology . the classical theory of linear poroelasticity captures this coupling by combining darcy s law with terzaghi s effective stress and linear elasticity in a linearized kinematic framework . linear poroelasticity is a good model for very small deformations , but it becomes increasingly inappropriate for moderate to large deformations , which are common in the context of phenomena such as swelling and damage , and for soft materials such as gels and tissues . the well - known theory of large - deformation poroelasticity combines darcy s law with terzaghi s effective stress and nonlinear elasticity in a rigorous kinematic framework . this theory has been used extensively in biomechanics to model large elastic deformations in soft tissues , and in geomechanics to model large elastoplastic deformations in soils . here , we first provide an overview and discussion of this theory with an emphasis on the physics of poromechanical coupling . we present the large - deformation theory in an eulerian framework to minimize the mathematical complexity , and we show how this nonlinear theory simplifies to linear poroelasticity under the assumption of small strain . we then compare the predictions of linear poroelasticity with those of large - deformation poroelasticity in the context of two uniaxial model problems : fluid outflow driven by an applied mechanical load ( the consolidation problem ) and compression driven by a steady fluid throughflow . we explore the steady and dynamical errors associated with the linear model in both situations , as well as the impact of introducing a deformation - dependent permeability . we show that the error in linear poroelasticity is due primarily to kinematic nonlinearity , and that this error ( i ) plays a surprisingly important role in the dynamics of the deformation and ( ii ) is amplified by nonlinear constitutive behavior , such as deformation - dependent permeability .
to decrease the power loss , we often design the shortest paths to connect with each components in electric integrated circuits . not only the electric loss but also any kinds of the cost are demanded to be minimized as far as possible in industrial products .such problems can be formulated into a more generic task to minimize or maximize a real single - valued function of multivariables , which is called optimization problems .several solvers for optimization problems have been invented in the context of the dynamical process in statistical physics . in these methods ,the system is driven to be trapped at the global minimum of the complicated valley structure . in the present study, we approach this issue in a non - standard way with the recent progress in statistical mechanics .we propose a method to measure the difference between the approximate result given by solvers of optimization problems and the true answer . in practice, we do not always know the ground state energy , however the minimum value of the entropy is known trivially .therefore the entropy can be an indicator for the deviation of the approximate solution from the accurate answer .to goal of our study , we extend the identity proposed by adib , which is useful to estimate the entropy difference in changing the hamiltonian in artificial dynamics .this identity is inspired by the jarzynski equality , which plays a key role to connect equilibrium states at beginning and end with a nonequilibrium process .however the original formulation given by adib is considered only for the isoenergy process , on which the energy is fixed to be a constant value , while the hamiltonian changing .we extend the identity of the entropy difference in the isoenergy process to an energy - controlled process .in order to arbitrarily control the energy , we introduce an artificial field to the hamilton dynamics . the equations of the modified dynamics are the above describes a point on the phase space .the energy follows an arbitrary function of time , while the hamiltonian changing from to , if we choose the functional form of as here is an arbitrary vector on the phase space satisfying .> from equations ( [ eq : hamilton dynamics ] ) and ( [ eq : energy reservoir ] ) , we can easily confirm that the energy is accurately controlled as since we have under the special dynamics ( [ eq : hamilton dynamics ] ) , the ensemble density evolves following the liouville equation : where equation ( [ eq : lambda bar ] ) is the time average of the phase space compression factor " along the trajectory that connects to .this factor appearing in the dynamics of the ensemble density will play the central role to estimate the entropy as shown below . similarly to the original jarzynski equality , the system is assumed to be in an equilibrium state at the initial time .the distribution at the initial time is set to be the microcanonical distribution at : where is the number of states at .let us consider the average of over all possible realizations from to : in the second transformation , equations ( [ eq : liouville eq ] ) and ( [ eq : inital ] ) have been used . 
note that we control the energy of the system as , and we thus find for any $ ] .therefore in the dirac delta function of equation ( [ average ] ) can be replaced by .equation ( [ average ] ) can be reduced to above is the entropy difference between the equilibrium states at the different energy values and .therefore we can obtain the entropy difference after the nonequilibrium procedure following the schedule of the energy .if we estimate the entropy difference with such a naive method as directly calculating the entropy at the initial and final times , we have to investigate the entropy twice . on the other hand , our formula ( [ eq : myje ] )gives the entropy difference by taking just a single average of .this means that we can examine the entropy difference by the direct use of the resulting ensemble after the nonequilibrium process .we here show that our equality can be used for a quantitative estimation , which indicates how much an approximate solution differs from the true solution .let us consider an arbitrary potential energy with continuous variables which has no degeneracy at the ground state , and assume that a solution with the energy has been obtained .we consider the following hamiltonian at the first stage of the dynamics , the system can visit all the locations since the potential energy is small . by increasing the coefficient of the potential energy , the particle can recognize the energy barriers and be trapped into local minima .let us consider the schedule of the energy from to , where .the entropy at the initial time can be calculated easily since the system equals to non - interacting particles with the mass at . on the other hand, the entropy at the final time can be estimated by calculating from our identity .the detailed protocol is described below .first , we randomly choose an initial condition from the set , . note that the initial hamiltonian depends only on .we obtain the path toward a phase point with the lower energy following the energy - controlled dynamics ( [ eq : hamilton dynamics ] ) under the given initial condition .some initial conditions result in the divergence of the factor appearing on the right - hand side of equation ( [ eq : energy reservoir ] ) when the system is trapped in a local minimum with .such a divergence means that the energy can not decrease toward and the system evolving from such initial conditions are unable to reach any points on the phase space at . therefore such samples should be excluded in taking the average of .finally , we take the average of only in the case of the absence of such divergences , and we obtain appearing on the right - hand side of our equality ( [ eq : myje ] ) . in the case of optimization problems ,the value of the energy at the ground state can not be known in advance . on the other hand ,the minimum value of the entropy is trivial as in the case of classical systems .therefore implies that the obtained solutions after the nonequilibrium process for the given potential energy are close to the minimum point . 
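as a schematic illustration of the procedure described above, the following sketch integrates an energy-controlled dynamics for a single particle in a double-well potential, discards trajectories for which the control term diverges, and averages the exponentiated, time-integrated phase-space compression factor over a crude microcanonical sample of initial conditions. the choice of control field (a term along the gradient of the hamiltonian), the potential, the schedules, and the precise form of the estimator are assumptions for illustration only, since the corresponding formulas are not reproduced in the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(q):  return (q**2 - 1.0)**2          # assumed double-well potential
def dV(q): return 4.0 * q * (q**2 - 1.0)

def schedules(t, tau, c_i=0.1, c_f=5.0, E_i=2.0, E_f=0.5):
    """Assumed linear schedules: the potential coefficient grows while the
    controlled energy decreases."""
    s = t / tau
    return c_i + (c_f - c_i) * s, (c_f - c_i) / tau, E_i + (E_f - E_i) * s, (E_f - E_i) / tau

def control_field(q, p, t, tau):
    """Non-Hamiltonian part of the velocity field, assumed to point along grad H,
    with the multiplier fixed so that dH/dt follows the energy schedule."""
    c, cdot, _, Edot = schedules(t, tau)
    gq, gp = c * dV(q), p                       # gradient of H
    lam = (Edot - cdot * V(q)) / (gq**2 + gp**2 + 1e-300)
    return lam * gq, lam * gp, gq**2 + gp**2

def run_trajectory(q, p, tau=10.0, dt=2e-3, h=1e-5):
    """Integrate the energy-controlled dynamics and return exp(integral of the
    phase-space compression factor), or None if the control diverges."""
    work = 0.0
    for k in range(int(tau / dt)):
        t = k * dt
        c = schedules(t, tau)[0]
        wq, wp, g2 = control_field(q, p, t, tau)
        if g2 < 1e-8 or not (np.isfinite(q) and np.isfinite(p)):
            return None                          # trapped sample: multiplier blows up
        # compression factor = divergence of the control field (central differences)
        div = ((control_field(q + h, p, t, tau)[0] - control_field(q - h, p, t, tau)[0])
               + (control_field(q, p + h, t, tau)[1] - control_field(q, p - h, t, tau)[1])) / (2 * h)
        work += div * dt
        q += dt * (p + wq)                       # dq/dt =  dH/dp + control
        p += dt * (-c * dV(q) + wp)              # dp/dt = -dH/dq + control
    return np.exp(work)

def sample_shell(E_i=2.0, c_i=0.1, tol=0.02, n=100):
    """Crude microcanonical sampler: uniform points kept in a thin energy shell."""
    out = []
    while len(out) < n:
        q, p = rng.uniform(-3, 3), rng.uniform(-3, 3)
        if abs(0.5 * p**2 + c_i * V(q) - E_i) < tol:
            out.append((q, p))
    return out

vals = [run_trajectory(q, p) for q, p in sample_shell()]
vals = [v for v in vals if v is not None]
# assumed form of the estimator: the entropy difference is read off from the log of
# the average of the exponentiated compression factor over the surviving realizations
print("estimated entropy difference:", np.log(np.mean(vals)), "  kept samples:", len(vals))
```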
estimating the entropy difference can therefore serve as an indicator of how much the approximate solution differs from the actual minimum. a detailed estimation using a simple instance is given in the reference. let us here emphasize another advantage of using the entropy difference to measure the deviation from the true solution. as described in figure 1, even if takes a common value in both cases, the entropy gap can be quite different, since the entropy decreases rapidly when the system breaks through local minima of the potential energy. the energy alone cannot give us any information on the difference coming from the structure of the energy landscape. therefore the entropy difference can be a good measure of the deviation from the true solution in optimization problems. we have extended the adib identity, inspired by the jarzynski equality, to an energy-controlled system, and shown its potential as a measure of the deviation of an approximate solution from the true answer. however, our study leaves several points open, since efficient dynamics to search for the ground state have not been considered. in addition, we face a difficulty in numerical computation, since the entropy of an approximate solution is compared with a divergent value in our protocol. to solve these problems, we might establish the quantum version of our identity. in the case of a quantum system, the entropy at the ground state is equal to , which is clearly more suitable for quantitative estimation than that of classical systems. furthermore, the tunneling effect can be useful, as in quantum annealing, for searching for the global minimum of a complicated energy landscape such as that of spin glasses, to which most optimization problems are closely related. we also remark that the quantum version of our formulation opens the possibility of implementation on a quantum computer. by use of quantum degrees of freedom, we could implement our identity in a quantum computation. the same techniques as in the literature would be available to construct an algorithm that estimates the entropy as well as the free energy.
garey m r and johnson d s _computers and intractability: a guide to the theory of np-completeness_ (san francisco: freeman)
hartmann a k and weigt m 2005 _phase transitions in combinatorial optimization problems: basics, algorithms and statistical mechanics_ (weinheim: wiley-vch)
kirkpatrick s, gelatt c d and vecchi m p 1983 _science_ *220* 671
kadowaki t and nishimori h 1998 _phys. rev. e_ *58* 5355
adib a b _phys. rev. e_ *71*
jarzynski c _phys. rev. lett._ *78*
katsuda h and ohzeki m 2011 _preprint_ cond-mat/1101.3826
evans d j and morriss g p _statistical mechanics of nonequilibrium liquids_ (london: academic)
nishimori h _statistical physics of spin glasses and information processing: an introduction_ (new york: oxford university press) chapter
ohzeki m _phys. rev. lett._ *105*
we extend the jarzynski equality, an exact identity relating equilibrium and nonequilibrium averages, so that it can be used to compute the entropy difference associated with a change of the hamiltonian. to derive our result, we introduce artificial dynamics in which the instantaneous value of the energy can be controlled arbitrarily during a nonequilibrium process. for such a process we establish an exact identity analogous to the jarzynski equality. we suggest that our formulation is valuable in practical applications such as optimization problems.
the main purpose of this paper is to study an extension of the dirac theory of constraints ( ) and the gotay - nester theory ( ) for the case in which the equation of motion is a dirac dynamical system on a manifold , where is a given _integrable _ dirac structure on , which gives a foliation of by presymplectic leaves , and is a given function on , called the _ energy_.one of the features of systems like ( [ 11drracds ] ) is that they can represent some constrained systems where the constraints appear even if the lagrangian is nonsingular , as it happens in the case of integrable nonholonomic systems .we will work under explicit hypotheses about regularity , locality and integrability , although some of our results can be applied in more general cases , as indicated in the paragraph _hypotheses _ , below in this section . in the gotay - nester theorythe starting point is an equation of the form where is a presymplectic form on .in fact , in the gotay - nester theory the more general case where is replaced by an arbitrary closed 1-form is considered , but this will not be relevant here .[ [ example - euler - lagrange - and - hamiltons - equations . ] ] example : euler - lagrange and hamilton s equations .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be a lagrangian , degenerate or not .let and let be the presymplectic form on the pontryagin bundle .then the euler - lagrange equations are written equivalently in the form of equation ( [ 11gn ] ) with .in fact , we have , in local coordinates , using equations ( [ e - l])([e - l2 ] ) we can easily see that ( [ 11gn ] ) becomes which is clearly equivalent to the euler - lagrange equations . the case , where is a given hamiltonian on of course gives hamilton s equations .the idea of using the pontryagin bundle to write such fundamental equations of physics appears in. + equation ( [ 11gn ] ) is equivalent to equation ( [ 11drracds ] ) in the case in which the dirac structure on is the one naturally associated to , denoted , in which case the foliation of has obviously only one presymplectic leaf . in general , equation ( [ 11drracds ] ) may be considered as a collection of equations of the type ( [ 11gn ] ) , one for each presymplectic leaf of the dirac structure .however , in order to study the initial condition problem this approach might not be appropriate , because different initial conditions might belong to different presymplectic leaves and therefore correspond to different equations of the type ( [ 11gn ] ) , which is not the usual situation .the algorithm of the gotay - nester theory generates a finite sequence of secondary constraint submanifolds .the final constraint submanifold has the property that every solution curve of ( [ 11gn ] ) is contained in ( in fact, it is the smallest submanifold having that property ) . equations of motion are given by restricting the variable in equation ( [ 11gn ] ) to , and existence of solution curves for a given initial condition is guaranteed , under the condition that the kernel of has locally constant dimension .an important point in the gotay - nester approach is that the equations defining the secondary constraint submanifolds are written in terms of ( see formula ) , which makes the whole algorithm invariant under a group of transformations preserving . 
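as a small symbolic check of the euler-lagrange example above, one can verify in coordinates that the presymplectic equation on the pontryagin bundle reproduces the legendre condition, the second-order condition and the euler-lagrange equation. the coordinate expressions used below (the generalized energy p v - l(q, v) and the form dq wedge dp) are the standard ones and are assumed here, since the displayed formulas are not reproduced in the text.

```python
import sympy as sp

q, v, p, qdot, pdot = sp.symbols('q v p qdot pdot')
m = sp.symbols('m', positive=True)
Vq = sp.Function('V')(q)

# assumed coordinate expressions (the displayed formulas are not reproduced above):
L = sp.Rational(1, 2) * m * v**2 - Vq       # a mechanical Lagrangian on TQ
E = p * v - L                               # generalized energy on TQ (+) T*Q

# omega = dq ^ dp (pulled back to the Pontryagin bundle), so
# omega(xdot, .) = qdot dp - pdot dq; match coefficients of dq, dv, dp against dE.
lhs = {'dq': -pdot, 'dv': sp.Integer(0), 'dp': qdot}
rhs = {'dq': sp.diff(E, q), 'dv': sp.diff(E, v), 'dp': sp.diff(E, p)}

for k in ('dq', 'dv', 'dp'):
    print(k, ':', sp.Eq(lhs[k], rhs[k]))
# dq :  -pdot = dV/dq     i.e.  pdot = dL/dq   (Euler-Lagrange equation)
# dv :      0 = p - m*v   i.e.  p = dL/dv      (Legendre condition)
# dp :   qdot = v                              (second-order condition)
```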
in order to solve, we will develop a constraint algorithm that we call cad ( _ constraint algorithm for dirac dynamical systems _ ) .the cad extends the gotay - nester algorithm and gives a sequence of secondary foliated constraint submanifolds .the secondary foliated constraint submanifolds can be written in terms of the dirac structure using formula , which generalizes .let be an embedding of into a symplectic manifold , in such a way that the presymplectic leaves of are presymplectic submanifolds of and let , by slight abuse of notation , denote an arbitrary extension of the given energy on .this kind of framework can be naturally constructed in many examples .then one can consider , at least locally , that is a primary foliated constraint submanifold defined by equating certain constraints to , which generalizes the situation of the dirac theory of constraints .one of the results of this paper establishes that there exists a dirac bracket whose symplectic leaves are _ adapted _ to the foliations of the primary and final foliated constraint submanifolds .we also prove that the equations of motion on the final foliated constraint submanifold can be nicely written in hamiltonian form with respect to this dirac bracket and the _ abridged total energy _ , which we introduce in section [ subsectionequationsof motion ] .our extension of the gotay - nester and dirac theories has especially simple and interesting features when applied to lc circuits .we show that the algorithm stops either at or , and concrete physically meaningful formulas for the first three constraint submanifolds and for the evolution equations can be given for this case .other geometric treatments of lc circuits can be found , for instance , in .systems of the type ( [ 11drracds ] ) are important also in the not necessarily integrable case since they represent , for instance , the lagrange - dalembert equations of nonholonomic mechanics .more references will be given in section [ subsectiondiracdynamicalsystems ] .we remark that even though the main focus of this paper is the case where the dirac structure is integrable , many results , most notably the constraint algorithm cad and the treatment of nonholonomic systems in section [ sectionconstalgforrdirac ] , are proven for a not necessarily integrable dirac structure .+ gotay - nester s theory is more geometric than dirac s and provides a certain notion of duality or correspondence between the two , in the spirit of the dualities between submanifolds and the equations defining them , manifolds and their ring of functions , hamiltonian vector fields and their hamiltonian or poisson brackets and the collection of their symplectic leaves .emphasizing this point of view in a common geometric language and showing that a combination of results from both theories may have advantages in a given example , rather than using one or the other , is an aspect of the paper . 
from this point of view , the notion of second class constraints is related to the notion of second class constraint submanifold , the latter being determined in this paper as a submanifold that is tangent to a second class subbundle of a certain tangent bundle .they are symplectic submanifolds ( ) .the first class constraint submanifolds are coisotropic submanifolds ( ) .the presence of primary and secondary first class constraints , which in dirac s work is connected to the principle of determinacy and to the notion of physical variables , is explicitly related here to certain manifolds and in a commutative diagram , in theorem [ lemmmma ] .the diagonal arrow in this diagram , which is a submersion , is dual to the diagonal arrow in the diagram in lemma [ lemalema ] , which is a surjective poisson algebra homomorphism . in some examples , instead of applying the dirac method to write equations of motion , it may be simpler to proceed in two stages .first apply the gotay - nester algorithm to find the final constraint submanifold , which , as proven in the gotay - nester theory , coincides with the submanifold defined by equating the final constraints obtained in the dirac algorithm to . then switch to the dirac viewpoint andwrite equations of motion using the dirac bracket . also , in order to calculate the dirac bracket at points of the final constraint submanifold , using the symplectic leaf that contains that submanifold may be easier than using dirac s expression .[ [ the - point - of - view - of - ides . ] ] the point of view of ides .+ + + + + + + + + + + + + + + + + + + + + + + + + + in order to study the problem of finding solution curves of general dirac dynamical systems ( [ 11drracds ] ) the more general theory of _ implicit differential equations _ may be useful .an implicit differential equation ( ide ) on a manifold is written as a solution of ( [ defsolide ] ) _ at a point _ is a vector satisfying ( [ defsolide ] ) .solution curve _ , say , , must satisfy , by definition , that is a solution at for all .basic questions such as existence , uniqueness or extension of solutions for a given initial condition are not completely answered yet , although many partial results have been established for certain classes of ide .one approach is to use a _ constraint algorithm _ , which consists of a decreasing sequence of constraint submanifolds defined as follows , with .this algorithm , which obviously uses only the differentiable structure regardless of any other structure which may be present , like presymplectic , poisson or dirac structures , represents a differential geometric aspect underlying the algorithms of gotay - nester , dirac or cad . to ensure that each is a submanifold and that the algorithm stops after a finite number of steps one may choose to assume certain conditions of the type `` locally constant rank '' conditions . then the original ide is reduced to an equivalent ode depending on parameters on the _ final constraint submanifold _ . in fact , by construction , is characterized by the property that it is the smallest submanifold that contains all solutions curves of the given ide . 
therefore , if is empty , there are no solution curves .dirac s original work has a wide perspective from the physical point of view , with connections to classical , quantum and relativistic mechanics .however , from the point of view of abstract ide theory , and in a very concise way , we may say that a combination of the dirac and the gotay - nester methods shows how to transform a given gotay - nester equation ( [ 11gn ] ) into an equivalent ode depending on parameters ( [ eq_dirac_evolution_thm ] ) on a final constraint submanifold , while what we show in this paper is how to transform a given dirac dynamical system ( [ 11drracds ] ) into an equivalent ode ( [ eq : vector_field_foliated ] ) depending on parameters on a final foliated constraint submanifold .some more comments on the connection between dirac s and gotay - nester ideas and ide are in order .we can compare , to see how the idea of applying a constraint algorithm works in different contexts . in ,one works in the realm of subanalytic sets ; in and one works with presymplectic manifolds ; in one works with complex algebraic manifolds ; uses poisson brackets ; in some degree of differentiability of the basic data is assumed , and , besides , some constant rank hypotheses are added , essentially to ensure applicability of a certain constant rank theorem. some relevant references for general ides connected to physics or control theory , which show a diversity of geometric or analytic methods or a combination of both are .[ [ hypotheses . ] ] hypotheses .+ + + + + + + + + + + as for regularity , we will work in the category . throughout the paperwe assume that the subsets appearing in sequences of the type that are generated by some constraint algorithm , are submanifolds regularly defined by equating some functions ( constraints ) to , and that the sequence stops .more regularity conditions like assumptions [ k1 ] , [ k2 ] , [ lambda ] , [ 2s_constant ] , [ assumptionfoliation ] , and some others will be introduced when needed along the paper .our results will be of a local character , but for some of them , like the notion of second class submanifolds , it is indicated how to define them globally .the usage of local coordinates is almost entirely avoided and basic facts in symplectic geometry or poisson algebra arguments are used instead .the condition of integrability of the dirac structure appearing in ( [ 11drracds ] ) has its own interest .however , the cad does not assumes integrability and can be applied for instance to general nonholonomic systems . on the other hand , in the non - integrable case certain brackets that we study would not satisfy the jacobi identity , andwill not be studied in this paper .[ [ structure - of - the - paper . ] ] structure of the paper .+ + + + + + + + + + + + + + + + + + + + + + + the first part of the paper , which includes sections [ section7 ] and [ mainresultsofdiracandgotaynester ] , contains a review of the dirac and gotay - nester methods . in sections [ sectiondiracstructures ] to [ sectionanextensionofetc ]we develop our main results , extending the gotay - nester and dirac theories .section [ sectiondiracstructures ] is devoted to a review of basic facts on dirac structures and dirac dynamical systems ( ) .the notion of dirac structure ( ) gives a new possibility of understanding and extending the theory of constraints , which is the main purpose of the present paper . 
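before turning to that review, the abstract constraint algorithm described above can be made concrete in the simplest possible setting: for a linear implicit equation with a constant (possibly degenerate) skew form and a quadratic energy, the constraint submanifolds are linear subspaces and the algorithm is pure linear algebra. the sketch below (with invented matrices, and with covectors identified with vectors via the euclidean inner product) iterates the inclusion condition until the dimension stabilizes.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (columns) of the null space of M."""
    if M.size == 0:
        return np.eye(M.shape[1])
    u, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T

def constraint_algorithm(Omega, S, tol=1e-10):
    """For the linear system Omega @ xdot = S @ x, iterate
    M_{k+1} = { x in M_k : S x lies in Omega(M_k) } starting from M_0 = R^n,
    and return a basis of the final constraint subspace."""
    n = Omega.shape[0]
    B = np.eye(n)                                   # basis of the current subspace
    while True:
        N = null_space((Omega @ B).T)               # directions annihilating Omega(M_k)
        C = null_space(N.T @ S @ B, tol)            # admissible coefficient directions
        B_new = B @ C
        if B_new.shape[1] == B.shape[1]:
            return B                                # dimension stabilized
        B = B_new

# toy data: a degenerate constant "presymplectic" form on R^4 and a quadratic energy
Omega = np.array([[0., 1., 0., 0.],
                  [-1., 0., 0., 0.],
                  [0., 0., 0., 0.],
                  [0., 0., 0., 0.]])
S = np.diag([1., 1., 1., 2.])                       # dE(x) = S x
B_final = constraint_algorithm(Omega, S)
print("final constraint subspace dimension:", B_final.shape[1])
```

in this toy case the algorithm stops after one step, and on the final subspace the restricted form is nondegenerate, so the reduced equation becomes an ordinary differential equation there.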
in section [ sectionconstalgforrdirac ]we develop our cad algorithm .we do it for the general case of a not necessarily integrable dirac structure so that one can apply cad to general noholonomic systems . in section [ sectionexamples ]we study the example of integrable nonholonomic systems , and lc circuits are viewed as a particular case . in section [ sectionanextensionofetc ]we show how to extend the dirac theory for the case of a dirac dynamical system ( [ 11drracds ] ) .in this section we review , without proof , basic facts belonging to the dirac and the gotay - nester theories .the dirac method starts with a given submanifold , called the primary constraint submanifold , of a symplectic manifold defined by equating the primary constraints to , and a given energy . in the original work of dirac , is the canonical symplectic form , is the image of the legendre transformation and is the hamiltonian defined by dirac .however , locally , any symplectic manifold is an open subset of a cotangent bundle , therefore the original dirac formalism can be applied to the , seemingly more general , case described above , at least for the local theory .+ an interesting variant of the original situation considered by dirac is the following .consider the canonical symplectic manifold with the canonical symplectic form , and let the primary constraint be , canonically embedded in via the map given in local coordinates of by .in particular , is defined regularly by the equation .if is the presymplectic form on obtained by pulling back the canonical symplectic form of using the natural projection , then one can show that .the embedding is globally defined ( see appendix [ pontryagin_embedding ] for details ) .the number is a well - defined function on and it can be naturally extended to a function on a chart with coordinates , but this does not define a global function on consistently . on the other hand , it can be extended to a smooth function on using partitions of unity and any such extension will give consistent equations of motion . in this paperwe will not consider global aspects . in any case , for a given lagrangian we can take .for a given presymplectic manifold one can always find an embedding into a symplectic manifold such that .moreover , this embedding can also be chosen such that it is coisotropic , meaning that is a coisotropic submanifold of ( see ) .however , we should mention that the embedding given above is not coisotropic .the dirac and the gotay - nester algorithms can be studied independently . on the other hand, they are related as follows . for a given system ( [ 11gn ] ) choose a symplectic manifold in such a way that is a presymplectic submanifold and ( using a slight abuse of notation ) is an arbitrary extension of to .moreover , assume that is defined regularly by a finite set of equations , , where each is a_ primary constraint_. 
the dirac algorithm gives a sequence of secondary constraints , , which defines regularly a sequence of secondary constraint submanifolds , , by equations , , which coincide with the ones given in the gotay - nester algorithm .dirac s theory of constraints has been extensively studied from many different points of view and extended in several directions .part of those developments in the spirit of geometric mechanics is contained in the following references , but the list is far from being complete , .as we have explained above we will work in a general context of a given symplectic manifold , and , where the primary constraint submanifold is regularly defined by equations , on .the dirac constraint algorithm goes as follows .one defines the _ total energy _the preservation of the primary constraints is written , , , or then is defined by the condition that if and only if there exists such that the system of equations , , , is satisfied . the submanifold is defined by equations , , where each is a _ secondary constraint _ , by definition . by proceeding iteratively one obtains a sequence .then there are _ final constraints _ , say , , defining a submanifold by equations , , called the _ final constraint submanifold _ , and the following condition is satisfied : for each there exists such that for each the space of solutions of the linear system of equations ( [ equationequationforlambda ] ) in the unknowns is an affine subspace of , called , whose dimension is a locally constant function .one can locally choose unknowns as being free parameters and the rest will depend affinely on them. then the solutions of ( [ equationequationforlambda ] ) form an affine bundle over .after replacing in the expression of the total energy , the corresponding hamiltonian vector field , , which will depend on the free unknowns , will be tangent to .its integral curves , for an arbitrary choice of a time dependence of the free unknowns , will be solutions of the equations of motion , which is the main property of the final constraint submanifold from the point of view of classical mechanics .the lack of uniqueness of solution for a given initial condition in , given by the presence of free parameters , indicates , according to dirac , the nonphysical character of some of the variables . in our context the physical variables will be given a geometric meaning .dirac introduces the notion of weak equality for functions on .two such functions are _ weakly equal_ , denoted , if .then , for instance . if then , for some functions on and conversely .since we use the notion of a constraint submanifold , in particular the final constraint submanifold , we prefer not to use the notation .now let us make some comments on the notions of first class and second class constraints .the rank of the skew - symmetric matrix is necessarily even , say , , and it is assumed to be constant on , as part of our regularity conditions ; for our results we will assume later a stronger condition ( assumption [ 2s_constant ] ) .one can choose , among the , , functions , , and replace the rest of the by with appropriate functions in such a way that , , define regularly and , besides , , , , for , and .the are linear combinations with smooth coefficients of the and , and conversely .the functions , , are called _ second class constraints _ and the functions , , are called _ first class constraints_. 
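the consistency conditions above form, at each point of the constraint surface, a finite linear system for the multipliers; the following sketch (with invented numerical data standing in for the brackets evaluated at such a point) solves that system, counts the second-class constraints from the rank of the bracket matrix, and exhibits the undetermined multipliers associated with the primary first-class constraints.

```python
import numpy as np

def classify_and_solve(C, b, tol=1e-10):
    """C[a, b] = {phi_a, phi_b} and b[a] = {phi_a, E}, both evaluated at a point of
    the constraint surface.  Solve C @ lam = -b for the multipliers, count the
    second-class constraints from rank(C) = 2s, and return the undetermined
    multiplier directions (the null space of C)."""
    u, s, vh = np.linalg.svd(C)
    rank = int((s > tol).sum())
    lam, *_ = np.linalg.lstsq(C, -b, rcond=None)
    free_dirs = vh[rank:].T
    consistent = np.allclose(C @ lam, -b, atol=1e-8)
    return lam, free_dirs, rank, C.shape[0] - rank, consistent

# invented numerical data standing in for the brackets at one point
C = np.array([[0., 1., 0., 0.],
              [-1., 0., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
b = np.array([0.3, -0.2, 0.0, 0.0])

lam, free_dirs, two_s, n_first, ok = classify_and_solve(C, b)
print("second-class constraints (2s):", two_s)
print("primary first-class directions:", n_first)
print("particular multipliers:", lam, " consistent:", ok)
print("undetermined multiplier directions:\n", free_dirs)
# had the last two entries of b been nonzero, the system would be inconsistent and
# those conditions would have to be imposed as secondary constraints instead.
```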
more generally , any function on satisfying , , , is a first class constraint with respect to the submanifold , by definition .any function on satisfying , , is a _first class function _ , by definition .for instance , the total energy is a first class function .now define the energy in terms of , , , , as the preservation of the constraints for the evolution generated by can be rewritten as , which is equivalent to for all , and , for all .the latter is equivalent to for all , which determines the as well - defined functions on .then the solutions form an affine bundle with base and whose fiber , parametrized by the free parameters , has dimension .any section of this bundle determines as a first class function .this means that , for each , and therefore a solution curve of is contained in provided that the initial condition belongs to .the function is essentially the _ extended hamiltonian _ defined by dirac .dirac defines an interesting bracket , now called the _ dirac bracket _ , which is defined on an open set in containing , where , which by definition is the inverse matrix of , is defined .the dirac bracket is a poisson bracket and has the important property that for _ any _ function on , the condition , , is satisfied on a neighborhood of , which implies that , for any function .besides , , , on .because of this , one may say that , with respect to the dirac bracket , all the constraints , and , , behave like first class constraints , with respect to .we will need just a few basic facts from the gotay - nester theory . in order to find solution curves to ( [ 11gn ] )we can apply the general algorithm for ides described in the introduction , and the final ide can be written let be the pullback of to . if is symplectic , one obtains the simpler equivalent equation on , which is in hamiltonian form .however , one must be aware that if is degenerate then ( [ eq : ham_form_sympl_omegac ] ) is not equivalent in general to ( [ gngnfinal ] ) .the equation on , where , defines an affine distribution on , more precisely , one has an affine bundle with base whose fiber at a given point is , by definition , the equivalence between the dirac and the gotay - nester algorithms can be made explicit as an isomorphism of affine bundles , as follows .the affine bundles and are isomorphic over the identity on , more precisely , the isomorphism is given by , using equation ( [ x_dirac_classical ] ) . in particular , the rank of is .[ [ describing - the - secondary - constraints - using - omega . ] ] describing the secondary constraints using .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the constraint manifolds defined by the algorithm can be described by _equations written in terms of the presymplectic form _ , which is a simple but important idea , because those equations will obviously be invariant under changes of coordinates preserving . depending on the nature of onemay obtain analytic , smooth , linear , etc. 
, equations , which may simplify matters in given examples .the context of reflexive banach manifolds is used in and .the condition defining the subsets , namely , is equivalent to or , since , this section we will describe some results on first class and second class constraints and constraint submanifolds and also equations of motion which will be useful for our extension of those notions , to be developed in section [ sectionanextensionofetc ] .the following results about linear symplectic geometry are an essential part of many of the arguments that we use in the paper , since under our strong regularity assumptions many of them are essentially of a linear character .[ lemma6.1 ] let be a symplectic vector space of dimension , a given subspace .for a given basis , of , let .then the rank of the matrix ] are linearly independent and the rest are zero , we must have , for , and , are arbitrary .this means that is generated by , .[ cordim ] .immediate from lemma [ lemma6.2 ] .let be the pullback of to via the inclusion .then is a presymplectic space . in what follows ,the and operators are taken with respect to unless specified otherwise .[ lemmaperp ] . iff iff .this is equivalent to .[ lemma6.5 ]let be a given basis of and let .let be given .then the following conditions are equivalent .* . *the linear system has solution .let us show that ( [ linsyst ] ) has solution iff the system has solution , where and is a basis satisfying the conditions of lemma [ lemma6.1 ] . since and are both bases of there is an invertible matrix ] be the inverse of ] in lemma [ lemma6.1 ] , the proof that ( [ linsyst2 ] ) has solution iff is easy and is left to the reader .[ lemma6.6 ] consider the hypotheses in lemma [ lemma6.5 ] .then the solutions to ( if any ) are precisely , where is a solution to ( [ linsyst ] ) .a solution to ( [ presymplecticsharp ] ) exists if and only if .if is symplectic then ( [ linsyst ] ) and ( [ presymplecticsharp ] ) have a unique solution and if , in addition , , then , , and coincides with defined by ( [ presymplecticsharp ] ) . since , form a basis of we have that is a solution to ( [ linsyst ] ) iff iff iff .now , let , where satisfies ( [ linsyst ] ) .then we have as we have just seen and we also have since generate .we have proven that is a solution to ( [ presymplecticsharp ] ) . to prove that every solution to ( [ presymplecticsharp ] ) can be written as before, we can reverse the previous argument . using this ,it is clear that if is symplectic then ( [ linsyst ] ) has unique solution , in particular , we have that . if , in addition , then .since , is a basis of , using lemma [ lemmaperp ] and the fact that we get that for .[ corollary6.7 ] let .then . , which has dimension from corollary [ cordim ] and lemma [ lemmaperp ] . on the other handthe dimension of the subspace of satisfying ( [ linsyst ] ) is clearly also , since the coefficient matrix has rank .we shall start with the constrained hamiltonian system , where is a symplectic manifold , is the energy and is the primary constraint submanifold .the equation to be solved , according to the gotay - nester algorithm , is where and , being _ the final constraint ._ let be the pullback of via the inclusion of in .since is presymplectic , is an involutive distribution . 
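the linear statements collected above can also be checked numerically; the sketch below takes the standard symplectic form on r^6 and an explicit four-dimensional subspace, and verifies that the dimensions of the omega-orthogonal complement, of the kernel of the pulled-back form, and of their intersection behave as asserted. the subspace is chosen for illustration only.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (columns) of the null space of M, via SVD."""
    u, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T

# standard symplectic form on R^(2n):  omega(u, v) = u^T J v
n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# an explicit 4-dimensional subspace W = span{q1, q2, q3, p1} (columns are a basis)
W = np.eye(2 * n)[:, [0, 1, 2, 3]]
k = W.shape[1]

# omega-orthogonal complement  W_perp = { v : omega(w, v) = 0 for all w in W }
W_perp = null_space(W.T @ J)

# matrix of the pulled-back form, [ omega(v_i, v_j) ], and the kernel of omega|_W
omega_W = W.T @ J @ W
ker_dim = k - np.linalg.matrix_rank(omega_W)

inter_dim = k + W_perp.shape[1] - np.linalg.matrix_rank(np.hstack([W, W_perp]))
print("dim W + dim W_perp     =", k + W_perp.shape[1], "(should be", 2 * n, ")")
print("dim ker(omega|_W)      =", ker_dim)
print("dim of W meet W_perp   =", inter_dim, "(should equal the kernel dimension)")
```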
from now on we will assume the following .[ k1 ] the distribution has constant rank and defines a regular foliation , that is , the natural map , where is a submersion .[ lemma7.1 ] the following assertions hold : * ( a ) * there is a uniquely defined symplectic form on such that . *( b ) * let be a given vector field on . then there is a vector field on that is -related to . * ( c ) * let .then there exists a vector field on such that is -related to , and for any such vector field the equality holds for all . *( d ) * let . then one can choose the function and the vector field in * ( c ) * in such a way that . *( a ) * by definition , the leaves of the foliation are connected submanifolds of , that is , each , , is connected . for ,let such that . for , as is a submersion , there are such that .we define . to prove that this is a good definition observe first that it is a consistent definition for fixed , which is easy to prove , using the fact that .now choose a darboux chart centered at , say , such that , in this chart , and , where and are independent of .this shows that is well defined on the chart . using this and the fact that one can cover the connected submanifold with chartsas explained above , one can deduce by a simple argument that is well defined . *( b ) * let be a riemannian metric on . then for each there is a uniquely determined such that is orthogonal to and , for all .this defines a vector field on which is -related to .* ( c ) * given and using the result of * ( b ) * we see that there is a vector field on that is -related to . then , for every and every , * ( d ) * one can proceed as in * ( b ) * and * ( c ) * , choosing such that and , besides , the metric such that is perpendicular to .[ definitionfirstclassconstr ] * ( a ) * for any subspace define the distribution by . *( b ) * the space of * _ first class functions _ * is defined as in other words , is the largest subset of satisfying .dirac was interested in classical mechanics , where states are represented by points in phase space , as well as in quantum mechanics where this is not the case .from the point of view of classical mechanics , among the constraint submanifolds and constraints the only ones that seem to play an important role are , and the constraints , defining them by equations , , respectively .[ lemmai6 ] * ( a ) * is a poisson subalgebra of . *( b ) * is an integral submanifold of .moreover , for any vector field on that is -related to a vector field on there exists a function such that and .in particular , any vector field on satisfying for all -related to the vector field on the symplectic manifold , which is associated to the function , therefore there exists a function , which satisfies , such that , . 
*( a ) * let .then and are both tangent to at points of which implies that (x) ] it is easy to prove that the iterated poisson brackets of the give functions whose hamiltonian vector fields are tangent to the leaves of .each one of these functions , let us call it , satisfies that for all , but not necessarily is a linear combination of the , on a neighborhood of , so it might not be a primary first - class constraint .one can prove that such a function is zero on , but since for all , it can not be included in a set of constraint functions defining regularly and containing the , in other words , it is not an _ independent _ secondary first class constraint .there are examples in the literature where the poisson bracket of two primary first class constraints is an independent secondary first class constraint , for instance , that is the case in ; for that example assumption [ k2 ] ( a ) does not hold ( but [ k2 ] ( b ) does ) . according to , pages 2324 , primary first class constraints andtheir iterated brackets represent transformations that do not change the physical state of the system , which in this case is implied directly from the requirement of determinism .we have seen that under our regularity conditions , those iterated brackets do not really add any new such transformations besides the ones given by the primary first class constraints .in addition , note that is also the dimension of the kernel of the presymplectic form on ( see theorem [ mainlemma1section7 ] ( a ) ) , which is the number of first class constraints , and also the dimension of the leaves of ( see assumption [ k1 ] ) .since the hamiltonian vector fields corresponding to the first class constraints generate the integrable distribution associated to , then the transformations that preserve the leaves of are those which preserve the physical state .so , points of would represent exactly the physical states . introduces the notion of extended hamiltonian . in our context, we should define the notion of * _ extended energy _ * , where the represent the secondary first class constraints .both and give the same dynamics on the quotient manifold .if is symplectic , there are only second class constraints among the , , and conversely . in this case ,the rest of the paper is devoted to generalizing the dirac and gotay - nester theories of constraints . in this sectionwe review a few basic facts related to dirac structures and dirac dynamical systems following and , which are of direct interest for the present paper .we introduce the flat and the orthogonal operators with respect to a given dirac structure , which is important to describe the algorithm , in the next section .in fact , we obtain a technique which imitates the gotay - nester technique , by replacing by .[ [ dirac - structures - on - vector - spaces . ] ] dirac structures on vector spaces .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be an -dimensional vector space and let be its dual space . 
define the _ symmetric pairing _ on by for where is the natural pairing between and .a _ linear dirac structure _ on , ( sometimes called simply a _ dirac structure _ on ) , is a subspace such that , where is the orthogonal of relative to the pairing .one can easily check the following result .[ defdiracstr ] a vector subspace is a dirac structure on if and only if it is maximally isotropic with respect to the symmetric pairing .a further equivalent condition is given by and for all .one of the main examples is the dirac structure naturally defined on a presymplectic vector space by here the definition of is the standard one , namely , is defined by , for all .we also recall the definition of orthogonality on associated to a given presymplectic form . for a given subset define the -orthogonal complement by we recall some standard facts in the following lemma , omitting the proof .[ circperp ] let be a presymplectic form on a vector space and let be any vector subspace. then , where the right hand side denotes the annihilator of .the following proposition is a direct consequence of propositions 1.1.4 and 1.1.5 in .[ diracpresymplectic ] given a dirac structure define the subspace to be the projection of on . also , define the 2-form on by , where .( one checks that this definition of is independent of the choice of ) .then , is a skew form on .conversely , given a vector space , a subspace and a skew form on , one sees that is the unique dirac structure on such that and . is the dirac structure associated to a presymplectic form on , as explained before , if and only if and . [ [ the - operators - dflat - and - d . ] ] the operators and .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + there is a natural extension , which is important in the present paper , of the previous definition of for the case of a general dirac structure . for a given dirac structure and given define the _ set _ , sometimes denoted for short , by note that is the dirac structure associated to a presymplectic form on , that is , , if and only if and for each , the set has a single element , more precisely , . in this sense, generalizes . for a given subset define , also denoted , by if is a subspace , then is a subspace of .it is straightforward to check that for all , the notion of orthogonal complement with respect to can be generalized as follows . for any subset define by clearly , it is easy to check that for any subspace one has we recall also that , since is a presymplectic form on , one has , according to lemma [ circperp ] and with a self - explanatory notation , the following proposition generalizes lemma [ circperp ] and is one of the ingredients of the constraint algorithm described in section [ sectionconstalgforrdirac ] .[ circperpd]let be a given dirac structure and let be a given subspace . then .we first show that .let , say for some .since for all by definition of , it follows that . to prove the converse inclusion, we first observe that it is equivalent to prove that .let , then , for all , which is an immediate consequence of the definitions .but , also by definition , this implies that .[ [ dirac - structures - on - manifolds . ] ] dirac structures on manifolds .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we will give the definition and some basic properties of dirac manifolds , following and , using the notation of the latter . 
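before passing to the manifold case, the linear definitions above admit a quick numerical check: the graph of a presymplectic form is isotropic for the symmetric pairing and has the maximal dimension, hence is a dirac structure. the degenerate form used below is invented for illustration.

```python
import numpy as np

n = 4
# an invented degenerate (presymplectic) skew form on R^4
Omega = np.array([[0., 1., 0., 0.],
                  [-1., 0., 0., 0.],
                  [0., 0., 0., 0.],
                  [0., 0., 0., 0.]])

# graph Dirac structure D = { (v, Omega v) : v in R^n };
# columns of D are basis vectors of D inside R^n (+) (R^n)*
D = np.vstack([np.eye(n), Omega])

def pairing(x, y):
    """Symmetric pairing <<(v, a), (w, b)>> = a(w) + b(v)."""
    v, a, w, b = x[:n], x[n:], y[:n], y[n:]
    return a @ w + b @ v

G = np.array([[pairing(D[:, i], D[:, j]) for j in range(n)] for i in range(n)])
print("isotropic for the pairing:", np.allclose(G, 0.0))
print("dim D =", np.linalg.matrix_rank(D), " (maximal isotropy needs dim = n =", n, ")")
# for this graph structure, the set of covectors paired with a vector v is the single
# element Omega @ v, recovering the presymplectic case discussed above.
```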
a _ dirac structure _ on a manifold is a subbundle of the whitney sum such that for each , is a dirac structure on the vector space . a _ dirac manifold _ is a manifold with a dirac structure on it . from proposition [ diracpresymplectic ] we deduce that a dirac structure on yields a distribution whose dimension is not necessarily constant , carrying a presymplectic form , for all . we can also deduce the following theorem , whose detailed proof appears in . [ diracpresymplecticmanifold ] let be a manifold and let be a 2-form on . given a distribution on , define the skew - symmetric bilinear form on by restricting to , for each . then is a dirac structure on . it is the only dirac structure on satisfying and , for all . using lemma [ circperp ] and proposition [ circperpd ] we can easily deduce the following proposition . [ circperpmanifold ] let be a dirac structure on and let be a subspace of for each ; then , with a self - explanatory notation , the following equalities hold for each : a dirac structure on is called _ integrable _ if the condition is satisfied for all pairs of vector fields and 1-forms , , that take values in and where denotes the lie derivative along the vector field on . this definition encompasses the notion of closedness for presymplectic forms and the jacobi identity for brackets . the following fundamental theorem was proven in . [ presymplfoliation ] let be an integrable dirac structure on a manifold . then the distribution is involutive . if , moreover , the hypotheses of the stefan - sussmann theorem ( ) are satisfied , for each there exists a uniquely determined embedded submanifold of such that and for all . in other words , is an integral submanifold of . each integral submanifold carries a presymplectic form defined by , for each . lagrangian and hamiltonian mechanics has been developed in interaction with differential , symplectic , and poisson geometry ; a few references are . dirac dynamical systems in the integrable case represent a synthesis and a generalization of both . nonholonomic mechanics represents a generalization of lagrangian and hamiltonian mechanics and is a long - standing branch of mechanics , engineering and mathematics . some references and historical accounts on the subject are . some references that are more closely related to this paper are . dirac dynamical systems ( [ 11drracds ] ) in the not necessarily integrable case may be viewed as a synthesis and a generalization of nonholonomic mechanics from the lagrangian and the hamiltonian points of view . they can be written equivalently as a collection of systems of the type . a related approach has been studied in . as was shown in , one can write the equations of nonholonomic mechanics nicely , on the lagrangian side , using the dirac differential . we will show how this is related to system ( [ 11drracds ] ) . it was also shown in and references therein how to use dirac structures in lc circuit theory , on the lagrangian side . on the hamiltonian side , poisson brackets for lc circuits were written in , see also .
in the hamiltonian structure for nonlinear lc circuits in the framework of dirac structures is investigated and simple and effective formulas are described .in a further unification , including reduction , is presented , which is consistent with mechanics on lie algebroids ( ) .some results in this section , in particular the cad and examples of nonholonomic mechanics , are proven for not necessarily integrable dirac structures .first , we shall briefly consider the case of an integrable dirac structure .this includes the case of a constant dirac structure on a vector space , that is , one that is invariant under translations , which includes the one used for lc circuits .later on we will consider the general case .[ [ the - case - of - an - integrable - dirac - manifold . ] ] the case of an integrable dirac manifold .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + assume that is an integrable dirac structure on a manifold .each solution curve of the system ( [ 11drracds ] ) must satisfy for all , which implies , according to theorem [ presymplfoliation ] and equation ( [ classicalm122 ] ) , that it must be a solution curve to the equation on a presymplectic leaf , where is the inclusion , which can be solved using the gotay - nester algorithm .such a procedure to solve ( [ 11drracds ] ) might be useful in those cases where the presymplectic leaves can be found easily .this occurs for instance if is a vector space and is a constant ( i.e. translation - invariant ) dirac structure , as we will show next .however , this procedure has the drawback that in order to find a solution of ( [ 11drracds ] ) for a given initial condition , one must first find the leaf containing that initial condition and then solve .for an initial condition on a different leaf , one has to repeat the constraint algorithm for a different corresponding equation . because of this , even in these simple cases , working directly with the dirac structure , using the constraint algorithm to be developed in this section , rather than with the associated presymplectic form on a presymplectic leaf , is not only possible but also convenient , since this leads to obtaining a single equation on a final foliated constraint submanifold , as we will see .[ [ the - case - of - a - constant - dirac - structure . ] ] the case of a constant dirac structure .let be a vector space and a given linear dirac structure .then we have the presymplectic form on and the associated linear map .we consider the dirac structure on the _ manifold _ defined as , where we have used the natural identification .this dirac structure is integrable and constant , that is , invariant under translations in a natural sense , as we will show next . for each the presymplectic leaf containing ( in the sense of theorem [ presymplfoliation ] )is . 
for each , induces the constant presymplectic form given by where , so represents any point in the symplectic leaf , or , equivalently , consider the system this system , for a given initial condition , is equivalent to the following equation on the presymplectic leaf that contains keeping fixed and writing , the system on the presymplectic leaf becomes therefore equation is equivalent to the following equation on the subspace with initial condition .equation ( [ equation3 ] ) can be solved by the algorithm described in and , and sketched in section [ the gotay - nester algorithm ] .notice that , since the presymplectic form on the vector space determines naturally a translation - invariant dirac structure on the same space considered as a manifold , equation ( [ equation3 ] ) is also a dirac dynamical system in the same way ( [ equation ] ) is , but with the dirac structure given by the presymplectic form considered as a constant form on the manifold . because of this and also because , equation ( [ equation3 ] ) is , in essence , simpler than system ( [ equation ] ) .[ [ the - case - of - a - general - dirac - manifold . ] ] the case of a general dirac manifold .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + now let be a dirac structure on that needs not be integrable .in order to explain our algorithm for dirac manifolds we need the following auxiliary result , involving a given subspace of , which is easy to prove using results from section [ sectiondiracstructures ] .[ lemmaequivalence ] for each we have the following equivalent conditions , where is a given subspace of .* there exists such that ( [ 11drracds ] ) is satisfied .* there exists such that . * . * .* there exists such that .* .* .now we can describe the constraint algorithm for dirac dynamical systems , called _cad_. following the same idea of the gotay - nester algorithm described in section [ the gotay - nester algorithm ] we should construct a sequence of constraint submanifolds . to define the first constraint submanifold we may use either one of the equivalent conditions of lemma [ lemmaequivalence ] , with .we want to emphasize the role of the two equivalent conditions ( iv ) and ( vii ) , as they represent a formal analogy between the cad and the gotay - nester algorithm. 
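to make the analogy with the gotay - nester algorithm concrete before describing the recursion in general , the following sketch ( ours , not from the text ) iterates the constraint subspaces of a linear system in the special case where the dirac structure is the graph of a constant presymplectic form , so that the step of the cad reduces to the gotay - nester step of lemma [ lemmaequivalence ] . the matrices and the stopping criterion are illustrative assumptions .

```python
import numpy as np
from scipy.linalg import null_space

def gotay_nester_linear(omega, A):
    """sketch (ours) of the gotay-nester iteration for the linear ide
    omega @ xdot = A @ x with a constant presymplectic matrix omega.
    each subspace M_k is stored through its defining equations C @ x = 0, and
    M_{k+1} = { x in M_k : A @ x lies in the image omega(M_k) }."""
    n = omega.shape[0]
    C = np.zeros((0, n))                                    # M_0 = R^n: no equations yet
    dims = [n]
    while True:
        B = null_space(C) if C.shape[0] else np.eye(n)      # columns span M_k
        ann = null_space((omega @ B).T).T                   # covectors annihilating omega(M_k)
        if ann.size:
            C = np.vstack([C, ann @ A])                     # impose ann @ A @ x = 0 on M_{k+1}
        d = n - np.linalg.matrix_rank(C) if C.shape[0] else n
        if d == dims[-1]:                                   # sequence stabilized: final constraint
            return dims
        dims.append(d)

# toy data: rank-2 omega on R^4 and energy E(x) = |x|^2 / 2, i.e. dE(x) = A x with A = I
omega = np.array([[0., 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(gotay_nester_linear(omega, np.eye(4)))                # -> [4, 2]: stops after one step
```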
of course , the gotay - nester algorithm , by definition , corresponds to the case .define let us assume that is a submanifold .then we define the second constraint submanifold by either of the following equivalent conditions , in agreement with ( iv ) and ( vii ) of lemma [ lemmaequivalence ] , with , more generally we define recursively for , by either of the conditions with .the algorithm stops and the final constraint submanifold is determined by the condition .the solutions are in .we have proven that solution curves of ( [ 11drracds ] ) are exactly solution curves of in some examples , it is sometimes easier to write the constraint submanifolds using condition ( vi ) in lemma [ lemmaequivalence ] , that is , with , , where .[ importantremark ] formula ( [ constraintsubmf3 ] ) has a special meaning in the case of an integrable dirac structure .in fact , let be an integral leaf of the distribution ; then by applying the gotay - nester algorithm , encoded in the recursion formula ( [ equationalgorithgn ] ) , to the system ( [ equation15 ] ) one obtains a sequence of secondary constraints given by the recursion formula but it is clear that equation ( [ sequationalgorithgn ] ) coincides with equation ( [ constraintsubmf3 ] ) since , and the presymplectic form on is defined by for all .we are assuming regularity conditions that ensure that the gotay - nester algorithm applied for each stops after a number of steps which does not depend on and which is at the same time the number of steps after which the cad stops .as a conclusion , the final constraint submanifold of the cad is foliated by leaves , where varies on the set of integral leaves of the distribution .we may say that in the case of an integrable dirac structure the cad is equivalent to a collection of gotay - nester algorithms , one for each leaf of the distribution .the final equation given by ( [ gotaygen1ccc])([gotaygen1ccc2 ] ) becomes which is equivalent to the collection of equations in section [ sectionanextensionofetc ] we will extend the dirac theory of constraints .for that purpose , we will use an embedding of in a symplectic manifold such that the presymplectic leaves of are presymplectic submanifolds of .the submanifold plays the role of a primary _ foliated _ constraint submanifold .the case in which there is only one leaf gives the dirac theory .[ [ solving - the - equation . ] ] solving the equation .+ + + + + + + + + + + + + + + + + + + + + the constraint algorithm cad gives a method to solve the ide ( [ 11drracds ] ) which generalizes the gotay - nester method .assume that the final constraint submanifold has been determined , and consider , for each , the affine space which is nonempty if is nonempty , a condition that will be assumed from now on .let be the dimension of .the following theorem is one of the ingredients of our main results , and generalizes the gotay - nester algorithm for the case of dirac dynamical systems rather than gotay - nester systems ( [ 11gn ] ) .its proof is not difficult , using the previous lemma , and is left to the reader .[ maintheorem1 ] let be a given manifold , a given dirac structure on and a given energy function on , and consider the dirac dynamical system assume that for each , the subset of defined recursively by the formulas ( [ constraintsubmf1])([constraintsubmf3 ] ) is a submanifold , called the * _-constraint submanifold_*. the decreasing sequence stops , say ( which implies , for all ) , and call the * _ final constraint submanifold_*. 
then the following hold : _ ( a ) _ for each , there exists such that ( [ againdirac ] ) is satisfied .the dirac dynamical system ( [ againdirac ] ) is equivalent to the equation that is , both equations have the same solution curves ._ ( b ) _ for each , equals the dimension of . _if is a locally constant function of on then is an affine bundle with base .each section of is a vector field on having the property that , for all .solution curves to such vector fields are solutions to the dirac dynamical system ( [ 11drracds ] ) .more generally , one can choose arbitrarily a time - dependent section , then solution curves of will be also solutions to ( [ 11drracds ] ) and those are the only solutions of ( [ 11drracds ] ) .solution curves to ( [ 11drracds ] ) are unique for any given initial condition if and only if , for all . [ [ local - representation - for - sc . ] ] local representation for .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + one can find a local representation for by just choosing a local parametrization of .let , where is an open set and is the dimension of , be such a local parametrization .then substitute this expression for in ( [ 11drracds ] ) to obtain an ide in , namely the local representation of is given by the trivial affine bundle [ [ some - results - concerning - uniqueness - of - solution . ] ] some results concerning uniqueness of solution .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + under assumption [ k2 ] ( b ) , we can prove the following lemma .[ lemmalemma ] ( a ) existence and uniqueness of a solution curve of ( [ redagaindirac ] ) for any initial condition , and therefore also of the dirac dynamical system ( [ againdirac ] ) , is equivalent to any of the conditions \(i ) , for each , \(ii ) , for each , \(iii ) , for each . + ( b ) if , or , equivalently , , is symplectic for each then there is existence and uniqueness of solution of ( [ redagaindirac ] ) for any initial condition , and therefore also of the dirac dynamical system ( [ againdirac ] ) .one also has that is symplectic for each if and only if any one of the following conditions is satisfied : \(i ) , \(ii ) .we first recall the argument from theorem [ maintheorem1 ] .uniqueness of a solution of equation ( [ redagaindirac ] ) for any given initial condition ( which , as we know from the cad , must satisfy and for all ) holds if and only if for each there is a uniquely determined such that in other words , if equation ( [ redagaindirac ] ) defines a vector field on .in fact , if this is the case , by the general theory of ode on manifolds we have existence and uniqueness of solution .we also recall that under assumption [ k2 ] ( b ) , the equation ( [ redagaindirac ] ) defines a family of vector fields on , which defines an affine distribution of constant rank whose space at the point is the affine space of all solutions as indicated above . using thiswe can deduce that uniqueness of solution of ( [ redagaindirac ] ) for any initial condition , is equivalent to the affine distribution defined above having dimension .now we shall prove the equivalence between uniqueness of solution and _( a)_. let satisfying equation ( [ redagaindirac ] ) , that is , let be such that . then also satisfies ( [ redagaindirac ] ) , which shows that the affine distribution described above has dimension greater than . 
using this , the proof of the equivalence between uniqueness of solution and _ ( a ) ( i ) _ follows easily . the rest of the proof of the equivalence with _ ( a ) _ follows from the fact that for each , which can be proved directly using the definitions . note that the equivalence between _ ( i ) , ( ii ) , ( iii ) _ of _ ( a ) _ holds for _ any _ subspace . now we shall prove _ ( b ) _ . first of all , if satisfies ( [ redagaindirac ] ) then , since , it clearly also satisfies . we can conclude that if , which as we know coincides with , is symplectic , then equation ( [ redagaindirac ] ) defines a vector field on , which is , in fact , given by equation ( [ equationequation111 ] ) . then existence and uniqueness of solution of ( [ redagaindirac ] ) for any initial condition is guaranteed . finally , the equivalence between symplecticity of and _ ( b ) ( i ) _ and _ ( b ) ( ii ) _ is easy to prove using basic linear symplectic geometry . in fact , it is easy to prove that symplecticity of is equivalent to . using lemma [ lemmaperp ] we can deduce that , from which we obtain . with the method just described , one can deal with many examples of interest , such as nonholonomic systems and circuits , provided that one chooses the manifold and the dirac structure properly . we show in the next section how a nonholonomic system given by a distribution on the configuration space can be described by a dirac dynamical system on the pontryagin bundle and how one can apply the constraint algorithm cad to this example , although we will not perform a detailed calculation of the sequence of constraint submanifolds . we also show how lc circuits can be treated with the same formalism as nonholonomic systems . the main point for doing this is , again , to choose the manifold as being the pontryagin bundle , where , this time , is the charge space , and a canonically constructed dirac structure on , where , this time , represents kirchhoff 's current law . we also show how this approach using is related to the approach used in . in this section we deal with two examples , namely nonholonomic systems and lc circuits , showing that dirac dynamical systems give a unified treatment for them . we will perform the detailed calculation of the constraint submanifolds for the case of lc circuits only . [ [ nonholonomic - systems . ] ] nonholonomic systems . + + + + + + + + + + + + + + + + + + + + + recall that a nonholonomic system is given by a configuration space , a distribution , called the nonholonomic constraint , and a lagrangian . equations of motion are given by the lagrange - d'alembert principle . inspired by the hamilton - poincaré principle given in , we can write a convenient equivalent form of the lagrange - d'alembert principle as , where is defined by , and with the restriction on variations , for , along with the kinematic restriction . the resulting equations are . we are going to show that equations ( [ nonholoeq])([nonholoeq3 ] ) can be written in the form ( [ 11drracds ] ) . for this purpose we must construct an appropriate dirac structure associated to the nonholonomic constraint . inspired by several results in and by the hamilton - poincaré point of view , we define a dirac structure on associated to a given distribution on a manifold by the local expression . * note . * we shall accept both equivalent notations for an element of .
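before verifying that the local expression above indeed defines a dirac structure , a concrete toy model may help . the following sketch is ours ( it is not the example treated in the text ) : it integrates the lagrange - d'alembert equations for a free particle subject to the linear nonholonomic constraint that the third velocity component equal the second coordinate times the first velocity component , after eliminating the multiplier , and checks numerically that the constraint is preserved along the motion . all names and numerical values are illustrative .

```python
import numpy as np

# sketch (ours): lagrange-d'alembert for a free particle with lagrangian
# L = (xd**2 + yd**2 + zd**2)/2 and nonholonomic constraint zd = y*xd.
# eliminating the multiplier lam gives
#   xdd = -y*lam,  ydd = 0,  zdd = lam,  with  lam = xd*yd/(1 + y**2).
def rhs(state):
    x, y, z, xd, yd, zd = state
    lam = xd * yd / (1.0 + y**2)          # lagrange multiplier
    return np.array([xd, yd, zd, -y * lam, 0.0, lam])

def rk4_step(state, h):
    k1 = rhs(state); k2 = rhs(state + h/2*k1)
    k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
    return state + h/6*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 1.0])   # satisfies zd = y*xd at t = 0
h = 1e-3
for _ in range(5000):
    state = rk4_step(state, h)
x, y, z, xd, yd, zd = state
print("constraint violation:", zd - y*xd)           # stays close to zero: the flow preserves it
```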
by checking that , that is , and that for all we can conclude using lemma [ defdiracstr ] that is a dirac structure on .we should now prove that is well defined globally , in other words , that it does not depend on the choice of a local chart .let and be the natural maps that in local coordinates are given by and . for a given distribution consider the distribution and also the 2-form , on the manifold , where is the canonical 2-form on .we have the local expressions and .now we can apply theorem [ diracpresymplecticmanifold ] replacing by , by and by and then we can easily check that the dirac structure coincides with . in other words , by using theorem [ diracpresymplecticmanifold ] we have obtained a coordinate - independent description of given in terms of , and , which proves in particular that it is well defined globally .it is straightforward to check that the condition where , is equivalent to which is clearly equivalent to equations ( [ nonholoeq])([nonholoeq3 ] ) .now that we have written the equations of motion as a dirac dynamical system , we can proceed to apply the cad .first , we can easily prove the following formulas using the definitions and proposition [ circperpd ] : then we have we could continue applying the algorithm as explained in general in section [ sectionconstalgforrdirac ] , and we would obtain specific formulas for , etc .[ [ relationship - with - implicit - lagrangian - systems . ] ] relationship with implicit lagrangian systems . in description of a nonholonomic system as an _ implicit lagrangian system _ was introduced and equations of motion were shown to be a _ partial vector field _ written in terms of the _ dirac differential_. there is a close and simple relationship between this approach and the one of the present paper , which we shall explain next .first , we should recall the notion of an implicit lagrangian system .let be a given lagrangian ; then is a -form on , , which is locally expressed by define a differential operator , called the _ dirac differential _ of , by where is the diffeomorphism defined by here is the canonical -form on and is the canonical isomorphism which is given in a local chart by .we have the local expression we must now recall the definition and properties of the dirac structure on the manifold , where is a given distribution on , studied in propositions 5.1 and 5.2 of . let be the canonical projection. 
then is defined by the dirac structure is the one given by theorem [ diracpresymplecticmanifold ] , with , the canonical symplectic form on and , and it is described by now we will define the notion of an implicit lagrangian system .let be a given lagrangian ( possibly degenerate ) and let be a given regular _ constraint distribution _ on a configuration manifold .denote by the induced dirac structure on that is given by the equation ( [ eqmahi ] ) and write for the dirac differential of .let be the image of under the legendre transformation .an _ implicit lagrangian system _ is a triple where represents a vector field defined at points of , together with the condition in other words , for each point let and then , by definition , must satisfy a _ solution curve _ of an implicit lagrangian system is a curve , , which is an integral curve of where .the following proposition is essentially proposition 6.3 of .[ idemahi ] the condition defining an implicit lagrangian system is given locally by the equalities it is clear that equations ( [ nonholomahi])([nonholomahic ] ) are equivalent to ( [ nonholoeq])([nonholoeq3 ] ) and also to ( [ nnhhooll1])([nnhhooll4 ] ) .this leads immediately to a precise link between the approach to nonholonomic systems given in and the one in the present paper , which is given in the next proposition whose proof in local coordinates is easy and will be omitted .recall that is the first constraint manifold given by ( [ constraintnonholo1 ] ) .[ link ] let be a given lagrangian , a given distribution and let be given by . then the following assertions hold .1 . .let be a solution to ( [ nonholodirac ] ) ; then in particular one has , and let , that is , .then is a solution to ( [ diracdiffeq ] ) .a curve is a solution curve of ( [ nonholodirac ] ) if and only if is a solution curve of the implicit lagrangian system given by ( [ diracdiffeq ] ) .[ [ lc - circuits . ] ] lc circuits .+ + + + + + + + + + + + this is a case of a constant dirac structure of the type explained in section [ sectionconstalgforrdirac ] .our approach is closely related and equivalent to the one described in .there are many relevant references where the structure of kirchhoff s laws has been studied from different points of view ; some of them emphasize the geometry behind the equations , see .we first briefly recall the description of lc circuits given in .let be a vector space representing _ the charge space _ ; then is the _ current space _ and is the _ flux linkage space ._ there is a constant distribution ( that is , is invariant under translations ) , which represents the _ kirchhoff s current law _ ( kcl ) .the subbundle represents the _kirchhoff s voltage law _ ( kvl ) .one has a dirac structure on the cotangent bundle , given by which clearly does not depend on , in other words it is a constant dirac structure , so for each base point we have where dynamics of the system is given by a lagrangian .this lagrangian is given by a quadratic form on representing the difference of the energies in the inductors and the capacitors , say , where is the number of branches of the circuit .of course , some of the terms in the previous sum may be zero , corresponding to the absence of an inductor or a capacitor in the corresponding branch .+ one approach would be to use the time evolution of the circuit in terms of the dirac differential , given by equation ( [ diracdiffeq ] ) .this shows that dirac structures provide a unified treatment for nonholonomic systems and lc circuits . 
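to fix ideas before applying the algorithm , and using our own notation rather than the text 's symbols , for a circuit with $b$ branches , branch charges $q_i$ , inductances $\ell_i$ and capacitances $c_i$ , the quadratic lagrangian described above ( magnetic minus electric energy ) can be written as $$\mathcal{L}(q,\dot q)\;=\;\sum_{i=1}^{b}\frac{\ell_i\,\dot q_i^{\,2}}{2}\;-\;\sum_{i=1}^{b}\frac{q_i^{\,2}}{2c_i},$$ where a branch without an inductor has $\ell_i=0$ and a branch without a capacitor is modelled by an infinite capacitance , that is , a vanishing coefficient $1/c_i$ ; kirchhoff 's current law singles out the subspace of admissible branch currents , and its annihilator encodes kirchhoff 's voltage law .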
in other wordsthe equation is with and , where represents the dirac differential of .+ however , we will follow the philosophy of this paper and choose to work on rather than working with the dirac differential .[ [ applying - the - cad - to - lc - circuits . ] ] applying the cad to lc circuits .it should be now clear how we can apply the constraint algorithm cad developed in the present paper to deal with lc circuits exactly as we did with the case of nonholonomic systems .more precisely , it should now be clear that an lc circuit can be described by the methods described in the first part of the paragraph _ nonholonomic systems _ ( beginning of section [ sectionexamples ] ) , by taking , and defining and as indicated in that paragraph .as we have already said , in the case of circuits the dirac structure is constant and therefore integrable .however , we prefer not to work with the system restricted to a presymplectic leaf , as explained before , but to apply directly the algorithm for a general dirac structure , as we will explain next .+ define the linear maps and by if there is no capacitor in a circuit branch , the corresponding component of will be zero ( infinite capacitance ) , while zero capacitance is ruled out since it would represent an electrically open circuit branch .the physically relevant case corresponds to , , for .however , the constraint algorithm applies to the general case where capacitances and inductances can be negative .the evolution equations ( [ nnhhooll1])([nnhhooll4 ] ) for a general nonholonomic system become now we will apply the cad . first , note that the first constraint submanifold is calculated as follows , taking into account the general expression ( [ e_d2 ] ) , now we simply continue applying the algorithm .let then , denoted simply as , is and therefore in the same way we can calculate . in fact , then where we have called now we shall calculate . we will show later that for all lc circuits with either a positive inductance or a non - infinite positive capacitance on every branch ( lemma [ lemmafordeltaoneequaldeltatwo ] and theorem [ thm_positive_nu ] ) .first we compute thus , one finds that and hence similarly we calculate as follows . let ; then and we can rewrite as then we have from this , one sees how to recursively define . for all define where by definition .we have the following expressions for the constraint submanifolds . in order to solve the system we may apply the method to solve the general equation ( [ gotaygen122 ] ) explained before .it is clear that the parametrization , where is the dimension of the final constraint , can be chosen to be a linear map and the ide ( [ gotaygen122 ] ) will be a linear system .+ this system is of course still an ide .one of the purposes of this paper is to write hamilton s equations of motion , using an extension of the dirac procedure extended to primary foliated constraint submanifolds , developed in section [ sectionanextensionofetc ] . for this purpose, we will need to consider as being the primary foliated constraint embedded naturally in the symplectic manifold .[ [ physical - interpretation - of - the - constraint - equations - for - lc - circuits . ] ] physical interpretation of the constraint equations for lc circuits .the equations defining the constraint submanifolds have an interesting interpretation in circuit theory terms. 
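the following toy computation ( ours , not part of the text ) illustrates the circuit - theoretic objects that appear in this interpretation : branch currents satisfying kirchhoff 's current law form the kernel of the node equations , loop currents generate that kernel , and a branch - voltage assignment is compatible with kirchhoff 's voltage law exactly when it annihilates every loop current . the three - branch parallel circuit used here is an assumption made only for illustration .

```python
import numpy as np
from scipy.linalg import null_space

# sketch (ours): kirchhoff's laws for a toy circuit with three branches in
# parallel between two nodes (all branches oriented the same way).
K = np.array([[1., 1., 1.]])                 # the single independent kcl (node) equation
loop1 = np.array([1., -1., 0.])              # loop current around branches 1-2
loop2 = np.array([0., 1., -1.])              # loop current around branches 2-3

kcl_subspace = null_space(K)                 # admissible branch currents
print(kcl_subspace.shape[1])                 # -> 2: the loop currents generate ker K

# kvl: a branch-voltage assignment v is admissible iff it annihilates every loop current
v = np.array([5., 5., 5.])                   # equal voltages on the parallel branches
print(loop1 @ v, loop2 @ v)                  # -> 0.0 0.0
```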
from now on we will assume the physically meaningful situation where , , .we will show that in this case , the algorithm stops either at or at .a circuit has an underlying directed graph , since each branch has a direction in which the current flow will be regarded as positive .a _ loop _ is a closed sequence of different adjacent branches .it also has a direction , which does not have to be compatible with the directions of the branches involved .the set of loop currents is a set of generators for the kcl subspace .we shall call a circuit branch _ inductive _ ( resp ._ capacitive _ ) if there is an inductor ( resp .capacitor ) present on that branch . a loop will be called _ inductive _ ( resp ._ capacitive _ ) if it has an inductor ( resp .capacitor ) on at least one of its branches . a branch or loop that is capacitive but not inductive will be called _purely capacitive_. a branch or loop that is neither inductive nor capacitive will be called _ empty _ , since no capacitors or inductors are present .thus , a non - inductive branch or loop can be either empty or purely capacitive , and a purely capacitive loop must have at least one capacitor , no inductors , and possibly some empty branches . the first one of the equations defining is , which is , .the quantity is the voltage corresponding to an inductor through which the current is .then can be interpreted as a time integral of the voltage on branch due to the inductor in that branch ( the _ flux linkage _ of the inductor ) .the second equation , , is just kcl for the currents .the next constraint submanifold incorporates the equation , which represents the kvl equations for the purely capacitive loops , as explained in the next theorem .[ thm_kvlpc ] the subspace represents kirchhoff s voltage law for non - inductive loops ( kvlni for short ) .that is , its elements are precisely those branch voltage assignments on the circuit that satisfy the subset of kvl equations corresponding to the non - inductive loops .the equation represents the condition that on every purely capacitive loop , the branch voltages of the corresponding capacitors satisfy kvl .we call these the kvlpc equations ( kvl for purely capacitive loops ) , and they form a subset of the kvlni equations .let be the basis of where is associated to branch of the circuit , and let be its dual basis .recall that the branch voltage assignments on the circuit are elements of , and represents kirchhoff s voltage law in the sense that each element of is a branch voltage assignment satisfying kvl .the kvl equations are linear equations on the branch voltages and are therefore represented by elements of .in fact , they are precisely the elements of . consider a maximal set of independent , non - inductive oriented loops on the circuit , labeled .each loop gives rise to a linear equation on the corresponding branch voltages according to kvl .these kvl equations are represented by , and are a subset of the full kvl equations for the circuit .that is , these linear equations hold on any branch voltage assignment that is compatible with kvl , and therefore .note that each does not involve any inductive branches ; that is , if then . 
also , the kvlni equations can be seen as the kvl equations for the circuit that is obtained by removing the inductive branches from the given circuit .let us call this _ the non - inductive subcircuit _ , whose charge space is a subspace of in a natural way .also , is generated by where ranges over all non - inductive branches .let us denote the kcl distribution for the non - inductive subcircuit .note that , since any kcl - compatible current assignment on the non - inductive subcircuit corresponds to a kcl - compatible current assignment on the original circuit for which no current flows through the inductive branches .reciprocally , elements of that are zero on the inductive branches can be regarded as elements of .note that are loop currents for the non - inductive subcircuit , and they form a basis of . for any , .then for each , .this is true in particular for , so .therefore .in other words , each , seen as a linear equation on , holds on .let us see that is precisely the vector subspace of defined by the equations .let be another equation that holds on . then .since all inductances are nonnegative , this implies that the components of corresponding to the inductive branches must be zero .also , , therefore .this means that is a linear combination of .the equation means that for .only those corresponding to purely capacitive loops give a condition on .the remaining correspond to empty loops , so amounts to the equation .the third constraint submanifold is obtained from by incorporating the equation , which means that the currents on the branches of the purely capacitive loops satisfy the same kvlpc equations as the charges .it is clear that it must hold when we consider the dynamics , which includes the equation .suppose that there exists at least one purely capacitive loop so that the algorithm does not stop at .then it is possible to show that . in order to do that ,consider a purely capacitive loop , which by definition must involve at least one capacitor , and recall that all the capacitances are positive .the corresponding kvlpc equation is represented by , whose components are or . as in the proof of the previous theorem , ,so it can be interpreted as a nonzero loop current , that is , a current that is on all the branches of that loop ( depending on the relative orientation of the branches with respect to the loop ) and zero on the remaining branches .note that does not satisfy kvlpc , since in particular , where the last sum is over the branches in the chosen loop .therefore and .+ now we will prove the interesting fact that since inductances are greater than or equal to zero and capacitances are positive numbers or the algorithm always stops at ( or earlier ) .we shall start by proving the following lemma .[ lemmafordeltaoneequaldeltatwo ] * ( a ) * is equivalent to the condition that there exists such that .+ * ( b ) * the condition is equivalent to the following condition : + * condition * for any there exist such that the following hold : equivalently , we can say that for each there exists such that .+ * ( c ) * condition implies that if then .the proof of * ( a ) * is immediate taking into account the definitions . to prove * ( b ) * take any given , then it satisfies the condition , where . by condition there exists such that , and we can conclude that , which shows that . the converse can be easily verified .now we shall prove * ( c)*. 
since the inclusion is immediate by construction , we will only prove the converse . let ; then , in particular , there exists such that . by condition there exists such that , which implies that , that is , , from which we can deduce that . the following theorem gives a sufficient condition under which condition in the previous lemma holds ; it considers the standard physically meaningful case . also , we assume for simplicity that there are no empty branches . we should mention that it can be proven that even with empty branches , the algorithm stops at or . [ thm_positive_nu ] condition in lemma [ lemmafordeltaoneequaldeltatwo ] holds if in every branch there is either a positive inductance or a non - infinite positive capacitance . condition can be reformulated as follows : for any there exists such that the following equalities are satisfied for all . this formulation has the advantage that is nonsingular . the basis , of the charge space defines naturally a euclidean metric by the condition , and an identification by the condition , ; in particular one obtains . the linear map is self - adjoint and positive definite , while is self - adjoint and positive semi - definite . system ( [ lemmadeltasuboneaprime])([lemmadeltasubonebprime ] ) can be written in the form , where is the orthogonal projection on . one can choose an orthonormal basis of and then the system of equations ( [ lemmadeltasuboneaprime])([lemmadeltasubonebprime ] ) can be written in matrix form . moreover , we have an orthogonal decomposition , where and are the image and the kernel of . each vector is decomposed as . we can choose the basis in such a way that the block decomposition of the matrix representing has the form , where and is a diagonal matrix with positive eigenvalues . the map also has a block decomposition , and since it is self - adjoint and positive definite , the blocks and are symmetric and positive definite matrices . we obtain the following system of equations , which is equivalent to ( [ lemmadeltasuboneaprime])([lemmadeltasubonebprime ] ) : $$\begin{aligned} \left[\begin{array}{c} \delta_1^{(1)}\\ \delta_1^{(2)} \end{array}\right] &= \left[\begin{array}{cc} \varphi^{(1,1)} & \varphi^{(1,2)}\\ \varphi^{(2,1)} & \varphi^{(2,2)} \end{array}\right] \left[\begin{array}{c} \delta^{(1)}\\ \delta^{(2)} \end{array}\right]\\ \left[\begin{array}{cc} \nu^{(1,1)} & \nu^{(1,2)}\\ \nu^{(2,1)} & \nu^{(2,2)} \end{array}\right] \left[\begin{array}{c} \delta_1^{(1)}\\ \delta_1^{(2)} \end{array}\right] - \left[\begin{array}{cc} \varphi^{(1,1)} & \varphi^{(1,2)}\\ \varphi^{(2,1)} & \varphi^{(2,2)} \end{array}\right] \left[\begin{array}{c} \delta_0^{(1)}\\ \delta_0^{(2)} \end{array}\right] &= \left[\begin{array}{cc} \varphi^{(1,1)} & \varphi^{(1,2)}\\ \varphi^{(2,1)} & \varphi^{(2,2)} \end{array}\right] \left[\begin{array}{c} \delta^{(1)}\\ \delta^{(2)} \end{array}\right] \end{aligned}$$ for given , equation ( [ positiveinductancesandcapacities1 ] ) fixes and imposes no condition on . using the fact that and are invertible , we can find and to satisfy ( [ positiveinductancesandcapacities2 ] ) . as a conclusion , if ( and only if ) there are no purely capacitive loops in the circuit , then and the final constraint submanifold is , defined by the conditions that ( where some might be zero ) and satisfies kcl . otherwise , the final constraint submanifold is , defined by the conditions that , satisfies kcl , and and satisfy kvlpc .
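as a small illustration of this dichotomy ( ours , with made - up branch data ) , the sketch below classifies the loops of a toy circuit and reports whether , according to the discussion above , the constraint algorithm is expected to stop at the second or at the third constraint submanifold , written here m_2 and m_3 .

```python
# sketch (ours): detect purely capacitive loops in a toy circuit description.
branches = {                                   # branch name -> (has_inductor, has_capacitor)
    "b1": (True,  False),
    "b2": (False, True),
    "b3": (False, True),
}
loops = [["b1", "b2"], ["b2", "b3"]]           # independent loops, as lists of branch names

def classify(loop):
    inductive = any(branches[b][0] for b in loop)
    capacitive = any(branches[b][1] for b in loop)
    if inductive:
        return "inductive"
    return "purely capacitive" if capacitive else "empty"

kinds = [classify(loop) for loop in loops]
print(kinds)                                   # -> ['inductive', 'purely capacitive']
stop = 3 if "purely capacitive" in kinds else 2
print("constraint algorithm expected to stop at M_%d" % stop)
```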
recall that is the voltage of the capacitor in branch , and the absence of a capacitor on branch means .the rest of the kvl equations , which involve the branch voltages on the inductors , will appear when considering the dynamics . in this sense , the kvlpc equations can be regarded as `` static '' kvl equations . the hypothesis that no capacitances or inductances are negative , besides being physically meaningful , is crucial to ensure that the algorithm stops at ( or ) .for example , for a circuit with one inductor , one capacitor and a negative capacitor , all of them in parallel , it stops at .[ [ geometry - of - the - final - constraint - submanifold . ] ] geometry of the final constraint submanifold .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we know that the solution curves will have tangent vectors in , where or depending on the case .assume the physically meaningful situation where , , .the distribution is constant and therefore integrable , and its integral leaves are preserved by the flow .denote the leaf through by , so these leaves can be written as , where .each integral leaf is a symplectic manifold with the pullback of the presymplectic form , if and only if there are no empty loops . as mentioned before , if there are no purely capacitive loops , then , otherwise .for the first case , using , we have then is a constant and therefore integrable distribution whose rank is .write for the pullback of the presymplectic form to each integral leaf .let us compute .consider , where are arbitrary .assume , that is , for any .this means that , but since they also belong to then .that is , is defined by , .the subspace is generated by the loop currents on the non - inductive loops ( theorem [ thm_kvlpc ] ) .since there are no purely capacitive loops , then it is generated by the loop currents on the empty loops . as a conclusion, is symplectic if and only if there are no empty loops on the circuit .if there is at least one purely capacitive loop , then the final constraint submanifold is .then again , is a constant and therefore integrable distribution whose rank is .write for the pullback of the presymplectic form to each integral leaf , and let us compute .consider , where are arbitrary .assume , that is , for any .this means that . if there is an empty loop , represented by , then , and , , so is a nonzero element of and is not symplectic .if there are no empty loops , we apply the following argument to , and the same conclusion holds for . since , then , which means that the components of corresponding to the inductive branches are zero , as we reasoned in the proof of theorem [ thm_kvlpc ] .also , , so , that is , it can be regarded as a branch current assignment on the non - inductive subcircuit , satisfying kcl .in addition , satisfy kvlni , which means that satisfy kvl for the non - inductive subcircuit .now we will apply the fact that the dynamics of a circuit with no inductors and no empty loops consists only of equilibrium points .indeed , kvl for such a circuit is a homogeneous linear system on the charges of the capacitors , so it has a solution . setting and , we have that satisfies kcl trivially .if there was a nonzero loop current , then it would produce a change in the charges of the capacitor on that loop .then they would no longer satisfy kvl for that loop , since .then the currents must therefore be zero , that is , .of course , this is not true if there are empty loops , which can have arbitrary loop currents . 
by the same reasoning , , so and is symplectic . let us now write equations of motion in hamiltonian form , by working on a symplectic leaf . this means that the initial conditions must belong to that particular leaf . this is related to the comments on equations ( [ equation15 ] ) and ( [ sequationalgorithgn ] ) made before . an equation of motion in poisson form that is valid for all initial conditions in will be given in theorem [ thm321extended ] by the equations and . first , let us represent the kcl equations by , in the sense that represents the kcl equation corresponding to node . the number is the number of independent nodes ( usually the number of nodes minus one ) . as before , denote the kvlpc equations by , where is the number of independent purely capacitive loops . seeing each as a row vector and each as a column vector , write , $$\gamma=\left[\begin{array}{ccc} \eta_1 & \dots & \eta_b \end{array}\right].$$ the space with the presymplectic form is embedded in with the canonical symplectic form ( see appendix [ pontryagin_embedding ] ) , where the variables are . the integral leaf , where or as before , and , is defined as a subspace of regularly by the equations and . the matrix of poisson brackets of these constraints is , in block form , , where and stand for the diagonal matrices with and along their diagonals , respectively . [ [ a - concrete - example . ] ] a concrete example . + + + + + + + + + + + + + + + + + + + we shall illustrate our method with the simple lc circuit studied in , which is shown in figure [ circuitfigure ] . it is a -port lc circuit where the configuration space is . we shall use the notation , , and . the lagrangian of the lc circuit is . we will assume the physically meaningful case where and , . then , by theorem [ thm_positive_nu ] , the cad algorithm will stop at most at . also , theorem [ thm_kvlpc ] gives a direct description of the specific equations that define , and . this circuit has a purely capacitive loop , so . however , the algorithm could also be applied to the case where negative values for or are allowed . for this particular circuit , the algorithm stops at regardless of the signs of the inductance and capacitances . the kcl constraints for the current are . therefore the constraint kcl space is defined , for each , by , where . on the other hand , the annihilator of is the constraint kvl space , defined , for each , by . note that is a basis of . taking into account that the dirac structure on associated to the space is , for each , given by , and that the corresponding energy is given by , we can easily verify that the dirac dynamical system is equivalent to the ide system that is given in coordinates by ( [ cnnhhooll1])([cnnhhooll4 ] ) , which gives . we now apply the constraint algorithm cad for dirac dynamical systems . we calculate the expressions of to get . from now on we will consider the constant distribution as a subspace of .
as we have seen before , so using ( [ kclconstcoord1])([kclconstcoord2 ] ) and the expression for we get we could calculate using the expression ( [ mdos ] ) .however , according to theorem [ thm_kvlpc ] , adds one more constraint , corresponding to the kvl for the purely capacitive loop that involves capacitors and .then as explained right after the proof of theorem [ thm_kvlpc ] , the final constraint submanifold adds the constraint : to solve the system we can simply parametrize , which has dimension , for instance by taking some appropriate of the variables as being independent parameters and then replace in equations ( [ idecircuiti])([idecircuitiii ] ) to obtain an ode equivalent to equations of motion .for instance , if we choose , , , , as independent variables we get the ode we know that solutions should be tangent to where , according to ( [ e_d1 ] ) , is defined by the conditions , that is , therefore is defined by ( [ deltaequation ] ) together with the equations defining which are obtained by differentiating with respect to time the equations ( [ mtresej ] ) defining , that is , then is a constant and therefore integrable distribution of rank 2 .its integral leaves are preserved by the flow .one can check that the pullback of the presymplectic form to each leaf is nondegenerate , so they are symplectic manifolds .denote , so these leaves can be written as . for a given ,say we have that elements of the symplectic leaf are characterized by the conditions defining plus the condition obtained by integrating with respect to time the conditions ( [ kclconstcoord1 ] ) and ( [ kclconstcoord2 ] ) , applied to , that is for simplicity , we will assume from now on that the conditions are satisfied . while this is not the most general case , it is enough to illustrate the procedure .+ we can conclude that has dimension and in fact the projection is simply the subspace of defined by the conditions and .therefore one can use the variables to parametrize , and , in fact , we obtain let be defined by , , where we have chosen we can calculate the matrix as , \ ] ] which is invertible , with inverse , then .this means that is a second class submanifold , or equivalently , that all the constraints are second class .in particular , is a symplectic submanifold , and we will call the symplectic form .then we have two ways of writing equations of motion on this submanifold .on one hand , we can write the equations of motion on the symplectic manifold in the form , for this we are going to use the previous parametrization of with coordinates , , so . 
since the lagrangian is given by in terms of the parametrization the energy restricted to then the vector field is given in coordinates , by with , for short .we remark that the previous equations can easily be deduced by elementary rules of circuit theory .namely , one can use the formulas for capacitors in parallel and series and replace the three capacitors by a single one .the resulting circuit is very easy to solve , and the corresponding system is equivalent to equations ( [ simplest1 ] ) and ( [ simplest2 ] ) .however , we must remark that this simplified system no longer accounts for the currents and voltages on the original capacitors .+ on the other hand , we can find the evolution of all the variables using the extended energy ( [ extendedenergy ] ) rather than the total energy , since is symplectic and using the last paragraph of remark [ physicalstate ] .namely , we will calculate the vector field associated to the extended energy .we have , , where the column vector ] in , page 42 . as we will show next, we can extend this equation to the foliated case by using adapted constraints and the abridged total energy . in the definition of and .then we can apply the procedure of section [ subsectionequationsof motion ] with and , and obtain second class constraints among , adapted to and also we can choose some primary constraints among , say w.l.o.g . , .observe that here has the same meaning as in section [ subsectionequationsof motion ] , this time considering the constraints .then the equations of motion on can be written in the form ( [ eq : poisson_dirac_evolution_thm ] ) .now , for any small enough , which defines , one can readily see that are second class constraints adapted to where are the components of corresponding to the choice .this implies immediately that the dirac bracket for does not depend on . on the other hand ,each , , is a primary constraint for . using the previous facts and the fact that the are arbitrary ( even time - dependent ) parameters ,as it happens with the in equation ( [ eq : poisson_dirac_evolution_thm ] ) , one can conclude that the equation of motion on each is given by sum over .this means in particular that each is preserved by the motion , or that the hamiltonian vector fields defined on , are tangent to the .note that is the abridged total energy for and as defined in ( [ abridged_total_energy ] ) .in conclusion , using adapted constraints , the abridged total energy and the dirac bracket yields a very simple procedure for writing the equations of motion , because one only needs to write the abridged total energy for one leaf , say , and gives the equation of motion for all nearby leaves .we have proven the following theorem , which extends theorem [ theorem321 ] .[ thm321extended ] let be a symplectic manifold and let be given primary and final foliated constraint submanifolds , with leaves and , respectively , as described by ( [ defofc])([cals ] ) .assume that all the hypotheses of theorem [ theorem321 ] are satisfied for the submanifolds .in addition , suppose that the number appearing in assumption [ lambda ] is the same for all for close enough to .from the validity of assumption [ 2s_constant ] for , it is immediate to see that the number is also the same for all for close enough to . 
choose second class constraints adapted to and also , , as in theorem [ theorem321 ] . let be an energy function , and consider the dirac dynamical system . define the abridged total energy for and as in theorem [ theorem321 ] . then , each has an open neighborhood such that the vector field ( equation ) defined on , when restricted to the final foliated constraint submanifold , represents equations of motion of the dirac dynamical system on . moreover , the evolution of the system preserves the leaves . in addition , equation gives equations of motion on each . since is tangent to , the evolution of a function on is given by , for any such that . also , each is contained in a unique symplectic leaf of the dirac bracket , defined by . , wrote the equation which represents the dynamics on the final constraint submanifold in terms of the total hamiltonian and the dirac bracket . this equation does not give the correct dynamics for the case of foliated constraint submanifolds . as we have seen , for the foliated case one has instead a similar equation of motion ( [ eq : poisson_dirac_evolution_thmfoliated ] ) in terms of the dirac bracket and the abridged total energy . [ [ the - concrete - example - of - an - lc - circuit - of - section - sectionexamples - revisited . ] ] the concrete example of an lc circuit of section [ sectionexamples ] revisited . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + consider the symplectic leaf of defined parametrically by equations ( [ parametrizationofsymplleaf1])([parametrizationofsymplleaf5 ] ) . the parameter space carries the canonical symplectic form , which coincides with the pullback of the canonical symplectic form on to the parameter space . since the matrix is invertible , all constraints are second class constraints . the number in equation ( [ eq : poisson_dirac_evolution_thmfoliated ] ) is , so the abridged total energy is simply the energy . then it is easy to calculate the dirac bracket as the canonical bracket on the parameter space of the functions ( [ parametrizationofsymplleaf1])([parametrizationofsymplleaf5 ] ) . let ; then the matrix of dirac brackets of these variables can be written in block form as , where the rows corresponding to $q_{c_1}$ , $q_{c_2}$ and $q_{c_3}$ are $$\left[\begin{array}{ccccc} \frac{c_1}{(c_1 + c_3)l} & \frac{c_1^2}{(c_1 + c_3)^2 l} & \frac{c_1}{(c_1 + c_3)l} & \frac{c_1 c_3}{(c_1 + c_3)^2 l} & \frac{c_1}{c_1 + c_3}\\[6pt] \frac{1}{l} & \frac{c_1}{(c_1 + c_3)l} & \frac{1}{l} & \frac{c_3}{(c_1 + c_3)l} & 1\\[6pt] \frac{c_3}{(c_1 + c_3)l} & \frac{c_1 c_3}{(c_1 + c_3)^2 l} & \frac{c_3}{(c_1 + c_3)l} & \frac{c_3^2}{(c_1 + c_3)^2 l} & \frac{c_3}{c_1 + c_3} \end{array}\right].$$ the evolution equations for the vector , with initial condition belonging to , that is , satisfying ( [ mtresej ] ) : , are given by the product of the matrix by the vector . we have shown that dirac 's work on constrained systems can be extended to cases where the primary constraint has a given foliation . this extends the applicability of the theory to dirac dynamical systems , like lc circuits , where the constraints may come from singularities of the lagrangian and , besides , from kirchhoff 's current law .
throughout the paper we combine ideas of an algebraic character from dirac 's theory with some more geometrically inspired ideas from the gotay - nester work . in particular , we use dirac brackets adapted to the foliation as well as a constraint algorithm for dirac dynamical systems ( cad ) , which is an extension of the gotay - nester algorithm . the results were proven locally and under regularity conditions . it is our purpose to study in the future the globalization of the results of the paper as well as the singular cases , using ide techniques , with applications . this paper has been inspired by jerry marsden 's invaluable participation in part of it . we thank the following institutions for providing us with means to work on this paper : universidad nacional del sur ( projects pgi 24/l075 and pgi 24/zl06 ) ; universidad nacional de la plata ; agencia nacional de promoción científica y tecnológica , argentina ( projects pict 2006 - 2219 and pict 2010 - 2746 ) ; conicet , argentina ( projects pip 2010 - 2012 11220090101018 and x553 ecuaciones diferenciales implícitas ) ; european community , fp7 ( project irses `` geomech '' 246981 ) . there is a canonical inclusion . in addition , consider the canonical two - forms and on and respectively , the canonical projection , and define the presymplectic two - form on . then the inclusion preserves the corresponding two - forms , that is , . if and are the tangent projections , we can consider the dual tangent rhombic $$\xymatrix{ & ttq \ar[dl]_{\tau_{tq}} \ar[dr]^{t\tau_q} & \\ tq \ar[dr]_{\tau_q} & & tq \ar[dl]^{\tau_q} \\ & q & }$$ define by , for . here denotes an element in the pontryagin bundle over the point . note that the following diagram commutes . $$\xymatrix{ tq \oplus t^*q \ar[r]^{\varphi} \ar[d]_{\operatorname{pr}_{tq}} & t^*tq \ar[d]^{\pi_{tq}} \\ tq \ar[r]^{\operatorname{id}_{tq}} & tq }$$ let us see that is an injective vector bundle map from the bundle to the cotangent bundle , over the identity of . the last part of this assertion follows from the commutative diagram above . first , if then both sides are in the same fiber , so . also , for all we have or . since is a submersion , we have and is injective . for the second part of the lemma , let us recall the definition of the canonical one - form on . for , is an element of such that for , where is the cotangent bundle projection . with a similar notation , the canonical one - form is given by . pulling back these forms to the pontryagin bundle by and the projection , we obtain the same one - form . indeed , for , we get on one hand and on the other hand . however , the following diagram commutes $$\xymatrix{ tq \oplus t^*q \ar[rr]^{\varphi} \ar[d]_{\operatorname{pr}_{t^*q}} \ar[drr]_{\operatorname{pr}_{tq}} & & t^*tq \ar[d]^{\pi_{tq}} \\ t^*q \ar[dr]_{\pi_q} & & tq \ar[dl]^{\tau_q} \\ & q & }$$ so , and therefore . taking ( minus ) the differential of this identity , we obtain . ralph abraham and jerrold e. marsden . _ foundations of mechanics _ . benjamin / cummings publishing co. inc . advanced book program , reading , mass . isbn 0 - 8053 - 0102-x . second edition , revised and enlarged , with the assistance of tudor ratiu and richard cushman . vladimir i. arnold . _ mathematical methods of classical mechanics _ , volume 60 of _ graduate texts in mathematics _ . springer - verlag , new york , second edition , 1989 . isbn 0 - 387 - 96890 - 3 . translated from the russian by k. vogtmann and a. weinstein . p. balseiro , m. de león , j. c. marrero , and d. martín de diego . the ubiquity of the symplectic hamiltonian equations in mechanics . _ j.
geom ._ , 10 ( 1):0 134 , 2009 .issn 1941 - 4889 .doi : 10.3934/jgm.2009.1.1 .url http://dx.doi.org/10.3934/jgm.2009.1.1 .mara barbero - lin and miguel c. muoz - lecanda .constraint algorithm for extremals in optimal control problems .methods mod ._ , 60 ( 7):0 12211233 , 2009 .issn 0219 - 8878 .doi : 10.1142/s0219887809004193 .url http://dx.doi.org/10.1142/s0219887809004193 .guido blankenstein and tudor s. ratiu . singular reduction of implicit hamiltonian systems ._ , 530 ( 2):0 211260 , 2004 .issn 0034 - 4877 .doi : 10.1016/s0034 - 4877(04)90013 - 4 .url http://dx.doi.org/10.1016/s0034-4877(04)90013-4 . anthony m. bloch ._ nonholonomic mechanics and control _ , volume 24 of _ interdisciplinary applied mathematics_. springer - verlag , new york , 2003 .isbn 0 - 387 - 95535 - 6 . with the collaboration of j. baillieul , p. crouch and j. marsden , with scientific input from p. s. krishnaprasad , r. m. murray and d. zenkov , systems and control .anthony m. bloch and peter e. crouch .representations of dirac structures on vector spaces and nonlinear l - c circuits . in _ differential geometry and control ( boulder ,co , 1997 ) _ , volume 64 of _ proc . sympos .pure math ._ , pages 103117 .soc . , providence , ri , 1999 .a. v. borisov and i. s. mamaev . on the history of the development of the nonholonomic dynamics .. chaotic dyn ._ , 70 ( 1):0 4347 , 2002. issn 1560 - 3547 .doi : 10.1070/rd2002v007n01abeh000194 .url http://dx.doi.org/10.1070/rd2002v007n01abeh000194 .henrique bursztyn .dirac structures and applications .la reconqute de la dynamique par la gomtrie aprs lagrange .conference at ihes , march 2010 .url http://www.ihes.fr/document?id=2396&id_attribute=48 . retrieved july 2012henrique bursztyn and marius crainic .dirac geometry , quasi - poisson actions and -valued moment maps ._ j. differential geom ._ , 820 ( 3):0 501566 , 2009 .issn 0022 - 040x .url http://projecteuclid.org/getrecord?id=euclid.jdg/1251122545 .f. cantrijn , j. f. cariena , m. crampin , and l. a. ibort . reduction of degenerate lagrangian systems ._ j. geom ._ , 30 ( 3):0 353400 , 1986 .issn 0393 - 0440 .doi : 10.1016/0393 - 0440(86)90014 - 8 .url http://dx.doi.org/10.1016/0393-0440(86)90014-8 .j. f. cariena , j. gomis , l. a. ibort , and n. romn .canonical transformations theory for presymplectic systems ._ j. math ._ , 260 ( 8):0 19611969 , 1985 .issn 0022 - 2488 .doi : 10.1063/1.526864 .url http://dx.doi.org/10.1063/1.526864 .jos f. cariena and manuel f. raada .blow - up regularization of singular lagrangians . _ j. math ._ , 250 ( 8):0 24302435 , 1984 .issn 0022 - 2488 .doi : 10.1063/1.526450 .url http://dx.doi.org/10.1063/1.526450 .jos f. cariena and manuel f. raada .lagrangian systems with constraints : a geometric approach to the method of lagrange multipliers ._ j. phys .a _ , 260 ( 6):0 13351351 , 1993 .issn 0305 - 4470 .url http://stacks.iop.org/0305-4470/26/1335 .jos f. cariena and manuel f. raada .comments on the presymplectic formalism and the theory of regular lagrangians with constraints ._ j. phys .a _ , 280 ( 3):0 l91l97 , 1995 .issn 0305 - 4470 .url http://stacks.iop.org/0305-4470/28/l91 . jos f. cariena , carlos lpez , and narciso romn - roy .origin of the lagrangian constraints and their relation with the hamiltonian formulation ._ j. math ._ , 290 ( 5):0 11431149 , 1988 .issn 0022 - 2488 .doi : 10.1063/1.527955 .url http://dx.doi.org/10.1063/1.527955 .hernn cendra and mara etchechoury .desingularization of implicit analytic differential equations ._ j. 
phys .a _ , 390 ( 35):0 1097511001 , 2006 .issn 0305 - 4470 .doi : 10.1088/0305 - 4470/39/35/003 .url http://dx.doi.org/10.1088/0305-4470/39/35/003 .hernn cendra , jerrold e. marsden , and tudor s. ratiu .geometric mechanics , lagrangian reduction , and nonholonomic systems . in _mathematics unlimited2001 and beyond _ , pages 221273 .springer , berlin , 2001 .hernn cendra , jerrold e. marsden , sergey pekarsky , and tudor s. ratiu .variational principles for lie - poisson and hamilton - poincar equations .j. _ , 30 ( 3):0 833867 , 11971198 , 2003 .issn 1609 - 3321 .hernn cendra , jerrold e. marsden , sergey pekarsky , and tudor s. ratiu .variational principles for lie - poisson and hamilton - poincar equations .j. _ , 30 ( 3):0 833867 , 11971198 , 2003 .issn 1609 - 3321 .\{dedicated to vladimir igorevich arnold on the occasion of his 65th birthday}. hernn cendra , alberto ibort , manuel de len , and david martn de diego .a generalization of chetaev s principle for a class of higher order nonholonomic constraints . _ j. math ._ , 450 ( 7):0 27852801 , 2004 .issn 0022 - 2488 .doi : 10.1063/1.1763245 .url http://dx.doi.org/10.1063/1.1763245 .leon o. chua and j. d. mcpherson . explicit topological formulation of lagrangian and hamiltonian equations for nonlinear networks ._ ieee trans .circuits and systems _ , cas-21:0 277286 , 1974 .issn 0098 - 4094 .luis a. cordero , c. t. j. dodson , and manuel de len ._ differential geometry of frame bundles _ , volume 47 of _ mathematics and its applications_. kluwer academic publishers group , dordrecht , 1989 .isbn 0 - 7923 - 0012 - 2 .jorge corts , manuel de len , juan c. marrero , d. martn de diego , and eduardo martnez .a survey of lagrangian mechanics and control on lie algebroids and groupoids .methods mod ._ , 30 ( 3):0 509558 , 2006 .issn 0219 - 8878 .doi : 10.1142/s0219887806001211 .url http://dx.doi.org/10.1142/s0219887806001211 .jorge corts , manuel de len , juan carlos marrero , and eduardo martnez .nonholonomic lagrangian systems on lie algebroids ._ discrete contin ._ , 240 ( 2):0 213271 , 2009 .issn 1078 - 0947 .doi : 10.3934/dcds.2009.24.213 .url http://dx.doi.org/10.3934/dcds.2009.24.213 .ted courant and alan weinstein . beyond poisson structures . in _action hamiltoniennes de groupes .troisime thorme de lie ( lyon , 1986 ) _ , volume 27 of _ travaux en cours _ ,pages 3949 .hermann , paris , 1988 .mike crampin and tom mestdag .the cartan form for constrained lagrangian systems and the nonholonomic noether theorem .methods mod ._ , 80 ( 4):0 897923 , 2011 .issn 0219 - 8878 .doi : 10.1142/s0219887811005452 .url http://dx.doi.org/10.1142/s0219887811005452 .mike crampin and tom mestdag .reduction of invariant constrained systems using anholonomic frames ._ j. geom ._ , 30 ( 1):0 2340 , 2011 .issn 1941 - 4889 .doi : 10.3934/jgm.2011.3.23 .url http://dx.doi.org/10.3934/jgm.2011.3.23 .manuel de len and david m. de diego . on the geometry of non - holonomic lagrangian systems . _ j. math ._ , 370 ( 7):0 33893414 , 1996 .issn 0022 - 2488 .doi : 10.1063/1.531571 .url http://dx.doi.org/10.1063/1.531571 .manuel de len and paulo r. rodrigues ._ methods of differential geometry in analytical mechanics _ , volume 158 of _ north - holland mathematics studies_. north - holland publishing co. , amsterdam , 1989 .isbn 0 - 444 - 88017 - 8 .manuel de len , david martin de diego , and paulo pitanga . a new look at degenerate lagrangian dynamics from the viewpoint of almost - product structures . 
_a _ , 280 ( 17):0 49514971 , 1995 .issn 0305 - 4470 .url http://stacks.iop.org/0305-4470/28/4951 .m. delgado - tllez and a. ibort . on the geometry and topology of singular optimal control problems and their solutions ._ discrete contin ._ , 0 ( suppl.):0 223233 , 2003 .issn 1078 - 0947 .dynamical systems and differential equations ( wilmington , nc , 2002 ) .mark j. gotay and james m. nester .presymplectic lagrangian systems .i. the constraint algorithm and the equivalence theorem . _h. poincar sect .a ( n.s . ) _ , 300 ( 2):0 129142 , 1979 .issn 0020 - 2339 .mark j. gotay and james m. nester .generalized constraint algorithm and special presymplectic manifolds . in _ geometric methods in mathematical physicsnsf - cbms conf .lowell , lowell , mass . , 1979 ) _ , volume 775 of _ lecture notes in math ._ , pages 78104 .springer , berlin , 1980 .mark j. gotay and jdrzej niatycki . on the quantization of presymplectic dynamical systems via coisotropic imbeddings ._ , 820 ( 3):0 377389 , 1981/82 .issn 0010 - 3616 .url http://projecteuclid.org/getrecord?id=euclid.cmp/1103920596 .mark j. gotay , james m. nester , and george hinds .presymplectic manifolds and the dirac - bergmann theory of constraints . _ j. math ._ , 190 ( 11):0 23882399 , 1978 .issn 0022 - 2488 .doi : 10.1063/1.523597 .url http://dx.doi.org/10.1063/1.523597 .j. grabowski , m. de len , j. c. marrero , and d. martn de diego .nonholonomic constraints : a new viewpoint ._ j. math . phys ._ , 500 ( 1):0 013520 , 17 , 2009 .issn 0022 - 2488 .doi : 10.1063/1.3049752 .url http://dx.doi.org/10.1063/1.3049752 .xavier grcia and josep m. pons .constrained systems : a unified geometric approach ._ internat .j. theoret ._ , 300 ( 4):0 511516 , 1991 .issn 0020 - 7748 .doi : 10.1007/bf00672895 .url http://dx.doi.org/10.1007/bf00672895 .xavier grcia and josep m. pons . a generalized geometric framework for constrained systems ._ differential geom ._ , 20 ( 3):0 223247 , 1992 .issn 0926 - 2245 .doi : 10.1016/0926 - 2245(92)90012-c .url http://dx.doi.org/10.1016/0926-2245(92)90012-c .alberto ibort , manuel de len , giuseppe marmo , and david martn de diego . non - holonomic constrained systems as implicit differential equations .sem . mat .torino _ , 540 ( 3):0 295317 , 1996 .issn 0373 - 1243 .geometrical structures for physical theories , i ( vietri , 1996 ) .alberto ibort , manuel de len , juan c. marrero , and david martn de diego .dirac brackets in constrained dynamics ._ fortschr ._ , 470 ( 5):0 459492 , 1999 .issn 0015 - 8208 .doi : 10.1002/(sici)1521 - 3978(199906)47:5<459::aid - prop459>3.0.co;2-e .url http://dx.doi.org/10.1002/(sici)1521-3978(199906)47:5<459::aid-prop459>3.0.co;2-e[http://dx.doi.org/10.1002/(sici)1521-3978(199906)47:5<459::aid-prop459>3.0.co;2-e ] .david iglesias , juan carlos marrero , david martn de diego , eduardo martnez , and edith padrn .reduction of symplectic lie algebroids by a lie subalgebroid and a symmetry lie group ._ sigma symmetry integrability geom .methods appl ._ , 3:0 paper 049 , 28 , 2007 .issn 1815 - 0659 .doi : 10.3842/sigma.2007.049 .url http://dx.doi.org/10.3842/sigma.2007.049 .olga krupkov . a geometric setting for higher - order dirac - bergmann theory of constraints . _ j. math ._ , 350 ( 12):0 65576576 , 1994 .issn 0022 - 2488 .doi : 10.1063/1.530691 .url http://dx.doi.org/10.1063/1.530691 .giuseppe marmo , giovanna mendella , and wodzimierz m. tulczyjew . constrained hamiltonian systems as implicit differential equations . 
_a _ , 300 ( 1):0 277293 , 1997 .issn 0305 - 4470 .doi : 10.1088/0305 - 4470/30/1/020 .url http://dx.doi.org/10.1088/0305-4470/30/1/020 .eduardo martnez .lie algebroids in classical mechanics and optimal control ._ sigma symmetry integrability geom .methods appl . _ , 3:0 paper 050 , 17 pp .( electronic ) , 2007 . issn 1815 - 0659 .doi : 10.3842/sigma.2007.050 .url http://dx.doi.org/10.3842/sigma.2007.050 .eduardo martnez .variational calculus on lie algebroids ._ esaim control optim ._ , 140 ( 2):0 356380 , 2008 .issn 1292 - 8119 .doi : 10.1051/cocv:2007056 .url http://dx.doi.org/10.1051/cocv:2007056 .b. m. maschke , a. j. van der schaft , and p. c. breedveld . an intrinsic hamiltonian formulation of network dynamics : nonstandard poisson structures and gyrators ._ j. franklin inst ._ , 3290 ( 5):0 923966 , 1992 .issn 0016 - 0032 .doi : 10.1016/s0016 - 0032(92)90049-m .url http://dx.doi.org/10.1016/s0016-0032(92)90049-m .b. m. maschke , a. j. van der schaft , and p. c. breedveld . an intrinsic hamiltonian formulation of the dynamics of lc - circuits ._ ieee trans .circuits systems i fund .theory appl ._ , 420 ( 2):0 7382 , 1995 .issn 1057 - 7122 .doi : 10.1109/81.372847 .url http://dx.doi.org/10.1109/81.372847 .giovanna mendella , giuseppe marmo , and wodzimierz m. tulczyjew .integrability of implicit differential equations ._ j. phys ., 280 ( 1):0 149163 , 1995 .issn 0305 - 4470 .url http://stacks.iop.org/0305-4470/28/149 .luc moreau and dirk aeyels . a novel variational method for deriving lagrangian and hamiltonian models of inductor - capacitor circuits ._ siam rev ._ , 460 ( 1):0 5984 ( electronic ) , 2004 .issn 0036 - 1445 .doi : 10.1137/s0036144502409020 .url http://dx.doi.org/10.1137/s0036144502409020 . f. leon pritchard . on implicit systems of differential equations ._ j. differential equations _ , 1940 ( 2):0 328363 , 2003 .issn 0022 - 0396 .doi : 10.1016/s0022 - 0396(03)00191 - 8 .url http://dx.doi.org/10.1016/s0022-0396(03)00191-8 .patrick j. rabier and werner c. rheinboldt . a geometric treatment of implicit differential - algebraic equations ._ j. differential equations _ , 1090 ( 1):0 110146 , 1994 .issn 0022 - 0396 .doi : 10.1006/jdeq.1994.1046 .url http://dx.doi.org/10.1006/jdeq.1994.1046 .ray skinner and raymond rusk .generalized hamiltonian dynamics .i. formulation on ._ j. math ._ , 240 ( 11):0 25892594 , 1983 .issn 0022 - 2488 .doi : 10.1063/1.525654 .url http://dx.doi.org/10.1063/1.525654 .ray skinner and raymond rusk .generalized hamiltonian dynamics .ii . gauge transformations . _ j. math ._ , 240 ( 11):0 25952601 , 1983 .issn 0022 - 2488 .doi : 10.1063/1.525655 .url http://dx.doi.org/10.1063/1.525655 .a. j. van der schaft .implicit hamiltonian systems with symmetry . _ rep . math ._ , 410 ( 2):0 203221 , 1998 .issn 0034 - 4877 .doi : 10.1016/s0034 - 4877(98)80176 - 6. url http://dx.doi.org/10.1016/s0034-4877(98)80176-6 .a. j. van der schaft and b. m. maschke . on the hamiltonian formulation of nonholonomic mechanical systems ._ , 340 ( 2):0 225233 , 1994 .issn 0034 - 4877 .doi : 10.1016/0034 - 4877(94)90038 - 8 .url http://dx.doi.org/10.1016/0034-4877(94)90038-8 .hiroaki yoshimura and jerrold e. marsden .dirac structures in lagrangian mechanics .i. implicit lagrangian systems ._ j. geom ._ , 570 ( 1):0 133156 , 2006 .issn 0393 - 0440 .doi : 10.1016/j.geomphys.2006.02.009 .url http://dx.doi.org/10.1016/j.geomphys.2006.02.009 .hiroaki yoshimura and jerrold e. marsden .dirac structures in lagrangian mechanics .ii . variational structures ._ j. 
geom ._ , 570 ( 1):0 209250 , 2006 .issn 0393 - 0440 .doi : 10.1016/j.geomphys.2006.02.012 .url http://dx.doi.org/10.1016/j.geomphys.2006.02.012 .
this paper extends the gotay - nester and the dirac theories of constrained systems in order to deal with dirac dynamical systems in the integrable case . integrable dirac dynamical systems are viewed as constrained systems where the constraint submanifolds are foliated , the situation considered in gotay - nester theory being the particular case where the foliation has only one leaf . a constraint algorithm for dirac dynamical systems ( cad ) , which extends the gotay - nester algorithm , is developed . evolution equations are written using a dirac bracket adapted to the foliation and an abridged total energy which coincides with the total hamiltonian in the particular case considered by dirac . the interesting example of lc circuits is developed in detail . the paper emphasizes the point of view that dirac and gotay - nester theories are dual and that using a combination of results from both theories may have advantages in dealing with a given example , rather than systematically using one or the other .
free boundary problems for incompressible , inviscid flows and for active scalars are mathematically challenging and physically interesting. moreover , their applications are really spread , from geothermal reservoirs ( see ) to tumor growth ( see ) , passing through weather forecasting ( see ) .in particular , the evolution of a fluid in a porous medium is important in the applied sciences and engineering ( see ) but also in mathematics ( see , for instance , ) .the effect of the medium has important consequences and the usual equations for the conservation of momentum , _ i.e. _ the euler or navier - stokes equations , must be replaced with an empirical law : darcy s law where is the dynamic viscosity of the fluid , is the permeability of the porous medium , is the acceleration due to gravity , is the velocity of the fluid , is the density and is the pressure ( see ) . in our favourite units, we can assume a very important part of the theory of flow in porous media studies the coexistence of two immiscible fluids with different qualities in the same volume .the case of two immiscible and incompressible fluids is known as the muskat o muskat - leverett problem ( see and also ) . in this casethe density is given by where is the interface between both phases .this interface is an unknown in the evolution . if , the system is in the so - called _ stable _ ( or rayleigh - taylor stable ) regime .given the depth of the porous medium , , we define the following dimensionless parameter ( see and references therein ) if the porous medium has infinite depth ( ) , the equation for the interface is where denotes principal value .this situation is known as the _ deep water regime_. this case has been extensively studied ( see and references therein ) .if the initial data verifies , in the presence of impervious boundaries ( see figure [ bandafig ] ) , the system is in the regime .this is known as the _ confined muskat problem_. the equation for the interface corresponding to this situation when is }d\eta .\label{eq0.1}\end{gathered}\ ] ] this case has been studied in .notice that the second kernel becomes singular when reaches the boundaries . in the strip .] it has been proved that plays an important role on the evolution of .for instance , if , decays slower than in the deep water regime .moreover , to ensure that for every time , one needs to impose conditions on the amplitude and the slope of the initial data related to the depth ( see ) . notice that in the deep water case , the condition is only on ( see ) .finally , we also mention that , using a computer assisted proof , the authors in proved that there exists curves , , solutions of of the confined muskat problem ( ) corresponding to the initial data , such that for a sufficiently small time ( _ i.e. _ the wave _ breaks _ ) .the same initial curve when plugged into the infinitely deep muskat problem ( ) verifies ( _ i.e. _ the wave becomes a smooth graph ) .the last case corresponds to .this case is known as the _ large amplitude regime_. in this situation , the initial interface reach the impervious walls at least in one point .let s write for the square root of the laplacian . in ,the authors proposed the problem as a model of the dynamics of an interface in the two phase , deep water muskat problem .if we define and we take the derivative to equation , we get this latter equation will be helpful because it has divergence form . 
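both the deep water equation above and the models discussed below involve the hilbert transform and the operator given by the square root of the laplacian, which on the torus act as the fourier multipliers -i sgn(k) and |k| respectively; a minimal spectral evaluation of these operators in python (the grid size and the test function are arbitrary, and the flat-at-infinity case would require a different discretization) reads as follows.

import numpy as np

def hilbert_transform(f):
    # periodic hilbert transform, fourier multiplier -i*sgn(k)
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)      # integer wavenumbers
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)))

def sqrt_laplacian(f):
    # Lambda = (-d^2/dx^2)^{1/2} = H d/dx, fourier multiplier |k|
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(f)))

# sanity check on f(x) = cos(x): H f = sin(x) and Lambda f = cos(x)
x = 2.0 * np.pi * np.arange(256) / 256
f = np.cos(x)
assert np.allclose(hilbert_transform(f), np.sin(x))
assert np.allclose(sqrt_laplacian(f), f)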
in this paper , to study the effect of the boundaries in the large amplitude regime , we propose and study a model of .assume and for some . then , the second term in reduces to where denotes the hilbert transform . therefore , using , and the diffusion degenerates .to capture this crucial fact when , we introduce the equation as a model of in the large amplitude regime .let s point out that , if , formally , we recover .notice that guarantees that the model is in the so - called stable regime .in other words , if holds , the model has a nonlocal , non - degenerate diffusion .notice that the set of functions verifying is not empty . to bound the stability condition ,we define using this function , the stability condition is equivalent to .let us mention that are steady solutions in the unstable case .we write for the hilbert transform of the function and for the zygmund operator , _i.e. _ where denotes the usual fourier transform .we write for our spatial domain . from this pointonwards , we consider either or , in particular our domain is always onedimensional .we write for the usual -based sobolev spaces with norm we denote the depth and the fractional -based sobolev spaces , , are with norm notice that , , , reduces to the usual hlder continuous space .we define this is the constant appearing in the linear problem . in this sectionwe collect the statement of the results concerning the equation .we start with local well - posedness of classical solutions and a continuation criterion when the initial data is in the stable regime : [ teo1 ] let , , , and , be an initial data satisfying the stability condition . then there exists a unique solution to such that ,h^3)\cap l^2([0,t(f_0)],h^{3.5}).\ ] ] moreover , if is the maximum lifespan of the solutions , then , or notice that the continuation criteria , as is written in the previous result , does nt deal with the possibility of reaching the unstable regime .however , in proposition [ prop2 ] below we address this question .we also prove that there is a unique local smooth solution even when the rayleigh - taylor condition is not satisfied but our initial data is analytic .we prove this result complexifying the equation and using a cauchy - kowalevski theorem ( see and ) .we define the complex strip and , for .we consider the hardy - sobolev spaces ( see ) with norm these spaces form a banach scale with respect to the parameter . in the same way we define we also have , for , the complex extension of the equation can be written as where notice that the variable is a real number : .given a positive , we define (\gamma)=\frac{1}{\tau+l^2-\left({\text{re}}f(\gamma)\right)^2 } , d_2[f](\gamma)=\frac{1}{\tau-\left({\text{im}}{\partial_x}f(\gamma)\right)^2},\ ] ] given , we define the open set <r,0<d_2[f]<r , \|f\|_{h^3({\mathcal s}_r)}<r\}.\ ] ] we remark that in this set we have [ teock]let for some , be the initial data for. then , there exists and a unique solution ,h^3({\mathbb r})) ] , we obtain where is a bound for .recall that in proposition [ prop1 ] gives us that , in the case ( the interface is close to the boundary ) and our decay estimate degenerates .this fact has been observed for equation in .moreover , it has also been observed in the numerical simulations in section [ secdec ] ( see figure [ deccase1 ] ) .notice that there is not a maximum principle , but we can use backward bootstrapping to bound the norm once that we now a bound for . 
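before moving on, a small numerical illustration of two quantities that appear repeatedly in the statements above may help fix ideas: the fractional sobolev norms, and the pointwise quantity (pi/2)^2 - f^2 - (d_x f)^2 whose positivity is what the energy estimates below keep under control. the normalisation of the norm and the identification of this quantity with the stability condition (with the depth normalised to pi/2) are assumptions of this sketch, made only for illustration.

import numpy as np

def hs_norm(f, s):
    # discrete H^s norm of a periodic sample, up to an inessential constant
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)
    fhat = np.fft.fft(f) / f.size
    return np.sqrt(np.sum((1.0 + k ** 2) ** s * np.abs(fhat) ** 2))

def stability_margin(f, depth=np.pi / 2):
    # min over x of depth^2 - f^2 - (f_x)^2; it has to stay positive for
    # the quantity d[f] used in the proofs below to remain bounded
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)
    fx = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
    return np.min(depth ** 2 - f ** 2 - fx ** 2)

x = 2.0 * np.pi * np.arange(512) / 512
f0 = 0.3 * np.sin(x)                      # small, odd initial datum
print(hs_norm(f0, 3), stability_margin(f0))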
we prove that if the initial data is small , then there exists a global - in - time solution .furthermore , we obtain some decay estimates in a lower norm .thus , these results complement the decay rates proved in proposition [ prop1 ] .we will use the approach in .notice that , given , there exists a time of existence and the solution is on the stable regime . for any ,we define the total norm where are banach spaces .the function as and gives us the decay in the lower order norm .using duhamel s principle we write the expression for the mild solution where .\ ] ] [ teo3 ] let be an odd initial data for equation in the stable regime .then , there exist , such that if the corresponding solution is global in time and the solution verifies the oddness assumption is related to the decay estimate .we know that the odd character of the initial data propagates , so the solution will have zero mean and then the equilibrium solution is .however , as the mean is not preserved , it is not clear , and in general it is not true , that the mean will propagate for general initial data with zero mean .there are several results with limited regularity for ( see ) .in particular , the authors in this paper proved the global existence of smooth solution corresponding to initial data with small derivative in the wiener algebra .we prove that also captures these features .in particular , we study the equation when the initial data is only and we prove local existence for small initial data in both spatial domains , the real line and the torus .[ teo4 ] let , , be the initial data for equation in the stable regime .we assume that for a small enough .then , there exists at least one local solution ,h^2(\omega))\cap l^2([0,t(f_0)],h^{2.5}(\omega)).\ ] ] notice that the solution is classical , but if the initial data is only the well - posedness for arbitrary data can not be achieved by standard energy methods . in the case where the initial data is odd and periodic , we can improve the previous local - in - time result : [ teo5 ] let be an odd initial data for equation in the stable regime .then , there exist , such that if there exists at least one global in time solution .this solution verifies the two main possibilities for finite time blow up seem to be 1 . to reach the unstable regime , 2 . a blow up of the curvature for the case . 
to reachthe unstable regime is similar to the _ turning _ singularities presents for and in and .we discard this situation for .in particular we prove that , if the solution reaches the unstable case , the blows up first .the second source of singularity , a blow up of the curvature when the initial data reaches the boundaries may take two different forms : a corner - type singularity ( blow up of the second derivative while the first derivative is bounded ) and a cusp - type singularity ( blow up of the first and second derivatives ) .we prove that , if the second derivative blows up , then the norm blows up first .notice that , as a consequence of our proof , we get that if the initial data reaches the boundary , then the solution corresponding to this initial data reaches the boundary as long as it remains smooth .we collect these two results in the next proposition : [ prop2 ] let be the initial data for equation and be an arbitrary parameter .we assume that the corresponding solution is .then , * if is in the stable regime and is the first time where the solution leaves the stable regime , then * if is analytic and there exists such that , then and consequently , the curvature can not blow up for a solution .there are three main questions that remain open for this model : an existence theory for initial data in , a proof of finite time singularities where the curvature blows up and the existence of a geometry ( instead of a flat strip ) that enhances the similarities between the muskat problem and the model introduced in this paper .we obtain a new energy balance for .to do that , we consider the evolution of the entropy [ prop3 ] given , then the solution of verifies the following energy balance furthermore , under a positiveness hypothesis for , we can use this energy balance to obtain global existence of weak solutions with rough initial data .this energy balance fully exploits the diffusive character of the equation .notice also that , due to the positiveness of , we can not recover a smooth , periodic from this .now we define our notion of weak solution : [ definition 1 ] is a global weak solution of if ,l^\infty({\mathbb t}))\cap l^2([0,t],\dot{h}^{0.5}({\mathbb t}))\ ] ] and holds in the sense of distributions : for any , periodic in space and with compact support in time , for every .we state now our result : [ teoweak ] let be a positive initial data for equation .then , there exist at least one global weak solution ,l^\infty({\mathbb t}))\cap l^2([0,t],\dot{h}^{0.5}({\mathbb t})),\ ] ] satisfying the bounds and notice that if we study the evolution of the energy under , we find we thank the anonymous referee for pointing out this energy .this energy balance can be used to extend theorem [ teoweak ] to arbitrary ( non necessarily positive ) .the structure of the paper is as follows : in section [ secmodelodiego ] , we prove the energy balance for the solutions of equation and we use it to prove global existence of weak solutions of .the results concerning are contained from section [ secmodel ] to section [ seclarge ] . in section [ secmodel ]we obtain well - posedness in sobolev spaces and in an analytical framework for equation . in section [ secqual ]we study the qualitative properties of the solutions and we get some maximum principles for different lower order norms . in this section , using the same scheme as in , we also prove a global existence and decay in for the mild solution corresponding to small initial data in . 
in section [ seclim ] we obtain existence and decay in for the mild solution corresponding to initial data small in . in section [ secdec ]we present some numerics comparing the solutions to equations and .we present these simulations for the sake of completeness and to bring into comparison with the simulations corresponding to equation . in section [ secnum ]we present some numerics for equation .in particular we compute the evolution of a family of initial data reaching the boundary . in the last section we study analytically some properties of the solutions when the initial data reaches the boundary . noticethat these solutions exist due to the well - posedness result in the analytical framework .now we show a new energy balance for the derivative of .we consider the equation for the derivative of the interface evolving in the infinite depth regime .now consider the evolution of the following quantity , since .let s look at the first term in the right hand side , the second term , putting this back together into , now we can symmetrize the _ extra _ term in , in the periodic case and which is negative if we assume that . this observation will allow us to gain half a derivative from this energy identity .we fix to simplify and we consider .consequently .we also assume .in particular this implies that we use the previous energy identity to get compactness and to construct weak solutions : we define the approximate problems where is a standard mollifier .multiplying equation by , and integrating over the torus , we obtain , to estimates the remaining terms , we will use the following inequalities which are a direct consequence of gagliardo - nirenberg , these inequalities are valid for zero mean , periodic functions , but as the norm of our solution propagates with the evolution , we can adapt the argument straightforwardly . using this into our estimate , choosing , we absorb the second derivative into the left side , and integrating in time we obtain , since the -norm of is uniformly bounded , we have a global estimate for the norm of for every .we study the evolution of .we find this implies the uniform - in- bound banach - alaoglu theorem implies ,l^\infty)\cap l^2([0,t],h^{0.5}) ] .this compactness implies the convergence of the weak formulations .first , we prove local well - posedness in the stable regime : we proof the case being the other cases analogous .we define the energy \|_{l^\infty}+\|d[f]\|_{l^\infty},\ ] ] where =\frac{1}{\left(\frac{\pi}{2}\right)^2-\left(f(x)\right)^2-({\partial_x}f(x))^2},\ ] ] and =\frac{1}{\left(\frac{\pi}{2}\right)^2-\left(f(x)\right)^2}.\ ] ] the quantity ] , then , as long as the energy remains bounded , >0 ] ensures that we do nt leave the set . * estimates for : * by the basic properties of the hilbert transform and the sobolev embedding , we get and * estimates for ] : * in the same way , \|_{l^\infty}\leq ce^2\left(e+1\right)^4.\ ] ] * estimates for the higher order terms : * the higher order terms are notice that , due to , for sufficiently small time . 
to estimate we use the pointwise inequality this inequality andthe self - adjointness of the operator allow us _ to integrate by parts _ in the stable regime ( which is guaranteed for a short time by ) .we get with \right\|_{h^{0.6}}\nonumber\\ & \leq & \left(\left\|\frac{2f{\partial_x}f}{(1+\left(\frac{\pi}{2}\right)^2-\left(f(x)\right)^2)}\right\|_{h^{1}}+\left\|\frac{2{\partial_x}f { \partial_x}^2f}{(1+\left({\partial_x}f(x)\right)^2)}\right\|_{h^{1}}\right)\nonumber\\ & & \times c\|f\|^2_{h^3},\end{aligned}\ ] ] and the term can be bounded as in . with the same ideas , we can handle the lower order terms .we conclude * obtaining uniform estimates : * collecting the estimates ( see , , , ) , we get using gronwall s inequality , we obtain with this a priori estimate we can obtain the local existence of smooth solutions using the standard arguments ( see ) . moreover , and give us integrating in time , we conclude . *uniqueness : * let s assume that there exists , two different solutions corresponding to the same initial data and denote . then , with the same ideas , we get and using gronwall inequality , we conclude the uniqueness .* continuation criterion : * we use lemma [ lemaaux ] ( ) in to get \leq c(l)\|f\|_{w^{0.6,\infty}}^2+\frac{2f(x)}{(1+l^2-(f(x))^2)^2}\lambda f(x),\ ] ] \leq c\|{\partial_x}f\|_{w^{0.6,\infty}}^2+\frac{2{\partial_x}f(x)}{(1+({\partial_x}f(x))^2)^2}\lambda { \partial_x}f(x).\ ] ] from here we conclude the result .we start with a useful lemma : [ ckprop ] consider and the set .then , for , the spatial operator in , is continuous .moreover , the following inequalities holds : 1 .\|_{h^3({\mathcal s}_{r'})}\leq\frac{c^\tau_r}{r - r'}\|f\|_{h^3({\mathcal s}_r)}, ] for the sake of brevity , we only prove the first part .the second one is analogous .notice that by definition : \|^2_{h^3({\mathcal s}_r')}=i+ii\ ] ] where to estimate , we use hlder and the fact that we re working in the open set to get and , as a consequence , a similar bound holds for the term . hence , we have : in we compute the third derivative . the terms involving , and derivativescan be bounded using the previous ideas , the open set definition and the banach scale property . for the terms involving derivatives we use . in particular concludes the proof . the former lemma is used in the proof of theorem [ teock ]the proof follows the ideas in ( see also .we fix and and we consider the following picard s iteration scheme .\ ] ] by induction hypothesis we have for . using lemma [ ckprop ] and the ideas in we can find such that , consequently, we need to find such that <r,\;0<d_2[f^n]<r.\ ] ] we obtain for , being similar .we have by taking small enough .we define and we conclude .we assume without losing generality .* step 1 : * the proof of this part is straightforward .* step 2 : * we denote and . using rademacher theorem as in and , we obtain using the kernel representations for the flat at infinity case and for the periodic case , respectively .we conclude . with the same approachwe get .* step 3 : * the solution remains odd , so , as in , we have thus , we have we conclude therefore * step 4 : * we test the equation against , integrate in space and use the self - adjointness . 
recalling the definition of , we have assume again that the solution does nt leave the stable regime up to time , then testing equation against , we get and integrating in time , * step 5 : * using poincar inequality and recalling that fractional derivatives have zero mean , we get using gronwall inequality we conclude the result . * step 6 : * taking such that the solution is in the stable regime and using a previous step , we have using rademacher theorem and the decay of the amplitude , we obtain thus , for we conclude {\left(\frac{\frac{1-\mathcal{a}}{\mathcal{a}}}{1+\left(\frac{\pi}{2}\right)^2}\right)\frac{3t}{\mathfrak{c}(f_0,t)}+\frac{1}{\|f_0\|^3_{l^\infty({\mathbb r})}}}}\ ] ] in this case we have , and .we use the estimate and get using sobolev embedding , we have and .\end{gathered}\ ] ] recalling the following inequalities and using , we get putting all the estimates together , we conclude the following estimate inserting this estimate in , we obtain with the energy estimates in theorem [ teo1 ] and the definition of , we get for a polynomial with powers bigger than one . integrating in time and collecting all the estimates together ,we conclude where is a polynomial with high powers . from this latter inequality , by a standard continuation argument , we obtain the global existence if is small enough . moreover ,if we take small enough , we can ensure that explain how to obtain the good bounds , then , using mollifiers for the initial data the result follows .* step 1 : case * given , we define the energy we define as in proposition [ prop1 ] .then , using rademacher s theorem , we have thus , using proposition [ prop1 ] , if due to the form of the energy , there exist a time such that . at this step in the proof , this time may depend on the regularization parameter , but we are going to bound it uniformly .consequently , and , if we obtain in the same way we obtain reverse inequality for .so , from this decay we obtain that the solution relies in the stable regime .recalling the definition , we compute ({\partial_x}^2f(x))^2dx\\ & & + \sigma(t)\|f(t)\|^2_{\dot{h}^{2.5}}.\end{aligned}\ ] ] we use interpolation to obtain ({\partial_x}^2f(x))^2dx.\end{aligned}\ ] ] we use and .thus , using the -boundedness of the singular integral operators , where we have used the continuous embedding , and young s inequality with and . noticethat , if is small enough , let be a fixed number . inserting the latter bound we get thus , taking small enough and large enough, we obtain putting all together , we obtain this bound does nt depends on the regularization parameter , so using gronwall inequality , we obtain a time where the solution remain in a ball with radius in .moreover , due to the evolution of and the proposition [ prop1 ] , the solution does nt leave the stable regime .this concludes the result in the periodic case .* step 2 : case * given , we define such that . then we define as before . then , using rademacher s theorem , we have and , if is taken small enough , integrating in time and using proposition [ prop1 ] , we have thus , and with the same ideas we obtain the appropriate bound for and we get with this bound and proposition [ prop1 ] we conclude the existence of a time such that the solution does nt leave the stable regime . 
we define the energy with the same ideas as in the periodic case we get a second time such that the solution remains in the ball with center the origin and radius in .we take the time of existence and we conclude the result .if we add a symmetry hypothesis for the initial data we can improve theorem [ teo4 ] : we have with norm and . since theorem [ teo4 ] ,there exist a local solution on the interval ] we obtain for all .we compute if we evaluate at and we use the fact that is the maximum , we obtain from this ode we conclude the result .we provide a bound for the acting on the composition of two functions : we prove this result for . for the torus, the proof follows the same ideas .we define notice that , using taylor theorem , then , given , if we have consequently , for the outer part we have putting all together and taking the limit , we conclude the result .[ lemma:2.1 ] let be three banach spaces such that with continuous embedding and such that are reflexive and the injection is compact .let be a finite number and let be two finite numbers such that .then the space ,x_0),\ ; \partial_t u\in l^{\alpha_1}([0,t],x_1)\}\ ] ] is compactly embedded in ,x)$ ] .
in this paper we study a model of an interface between two fluids in a porous medium . for this model we prove several local and global well - posedness results and study some of its qualitative properties . we also provide numerics . * keywords * : muskat problem , porous medium , one - dimensional model . * msc ( 2010 ) * : 35b50 , 35b65 , 35q35 . * acknowledgments * : rgb and gn thank prof . garving k. luli for fruitful discussions . rgb and gn gratefully acknowledge the support of the department of mathematics at uc davis where this research was performed . rgb is partially supported by the grant mtm2011 - 26696 from the former ministerio de ciencia e innovacin ( micinn , spain ) .
a polarimeter is an instrument that measures the polarization of light to gain additional information compared to what simple intensity measurements reveal . by measuring how the polarization of light is altered after being reflected from a surface ,the technique is often referred to as ellipsometry .the need for fast broadband mueller matrix ellipsometers and stokes polarimeters result in challenging design problems when using active polarization modulators that are strongly dispersive . although designs based on _e.g. _ the fresnel bi - prism and alike are nearly achromatic , these are not well suited for neither imaging application nor high speed . in the case of polarimeters and mueller matrix ellipsometers based on liquid crystal modulators ,the direct search space may become huge and standard optimization methods will evidently result in local minima far away from the optimum . an efficient genetic algorithm ( ga ) computer codewas recently developed in order to design and re - optimize complete broadband stokes polarimeters and mueller matrix ellipsometers ( mme ) .this code is now used to search systems generating and analyzing optimally selected polarization states , in order to reduce noise propagation to the measured mueller matrix .although the ga code was initially motivated by the challenging task of searching the components , states and azimuthal orientations for optimally conditioned broadband liquid crystal based polarimeters , the software is written in a versatile manner in order to handle general polarimeters based on any polarization changing components .for small scale production , we also propose that the ga algorithm can be used to re - optimize the design due to imperfect polarization components , _e.g. _ due to small deviations in thickness of retarders from manufacturer .finally , it is noted that the addition of any additional non - trivial polarization altering components in the polarimeter , such as mirrors and prisms also require a re - optimization , which can easily be handled by the ga algorithm , as long as the dispersive properties of such components have been characterized in advance .we have chosen to use a classical ga to optimize the designs .we present here the optimization of a polarimeter based on ferroelectric liquid crystals ( flc ) .such a system was first proposed by gandorfer and jensen and peterson , and has the advantage of being fast and having no moving parts , which is an advantage for imaging applications . 
a multichannel spectroscopic mueller matrix ellipsometer based on this technologyis also commercially available ( _ mm16 , horiba yvon jobin _ ) , where this latter system was originally designed for the 430 - 850 nm wavelength range .the flc system is based on optical components which are well described , but the overall performance of the polarimeter depends on these simple components in a complex manner and traditional optimization routines will be hampered by local minima and the large search space .a genetic optimization algorithm will move out of local minima and might find better solutions , resulting in a polarimeter design with less noise amplification on a broader spectral range .we here propose several designs , but have as an example here restricted ourselves to only implement small modifications to the commercial mm16 system .furthermore , we demonstrate how the ga algorithm may be used in small scale production , where we may simply re - optimize the design in the case of an off - specification component .a schematic drawing of a polarimeter , ( a ) shows a general polarimeter where the polarization state of incident light is analyzed by the polarization state analyzer and a light intensity detector . in ( b )the components of a polarization state analyzer is exemplified through a combination of two or three flcs and waveplates ( wp ) and a linear polarizer . ]the complete polarization state of light , including partially polarized states , can be expressed concisely using the stokes vector , which completely describes the polarization state with four real elements : where the notation denotes time average over the in general quadratic time dependent orthogonal electric field components ( and ) and phase ( ) .the change of a polarization state can be described by a real - valued transformation matrix called a mueller matrix , connecting an incoming stokes vector to an outgoing stokes vector , the mueller matrix can describe the effect of any linear interaction of light with a sample or an optical element .polarization effects contained in a mueller matrix could be diattenuation ( different amplitude transmittance or reflectance for different polarization modes ) , retardance ( _ i.e. _ changing ) , and depolarization ( which increases the random component of the electric field ) .a stokes polarimeter consists of a polarization state analyzer ( psa ) capable of measuring the stokes vector of a polarization state , by performing at least four intensity measurements at different projection states . for a given state , the polarization altering properties of the psa can be described by its mueller matrix , which can be found as the matrix product of the mueller matrices of all the optical components in the psa .these components are a linear polarizer , and a number of phase retarders ( _ e.g. _ flcs and waveplates ) .an flc is a phase retarder which can be electronically switched between two states .the difference between the states corresponds to a rotation of the fast axis by ( and ) . by using a linear polarizer and two flcs as a psa, one can generate different projection states , by using three flcs one can generate states , _ etc_. 
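the mueller matrices of the elements that make up such a psa are standard, and a short sketch of them may be useful (one common sign convention for the retarder is used; the azimuths and retardances below are arbitrary illustration values); the first row of the product of the stack is the analysis vector that enters the measurement algebra of the next paragraphs.

import numpy as np

def rotator(theta):
    # mueller rotation matrix for azimuth theta (radians)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, 1.0]])

def linear_polarizer(theta=0.0):
    # ideal linear polarizer at azimuth theta
    M = 0.5 * np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0.0]])
    return rotator(theta) @ M @ rotator(-theta)

def linear_retarder(delta, theta=0.0):
    # linear retarder with retardance delta and fast axis at azimuth theta
    c, s = np.cos(delta), np.sin(delta)
    M = np.array([[1.0, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, c, s],
                  [0, 0, -s, c]])
    return rotator(theta) @ M @ rotator(-theta)

# a two-retarder analyzer (e.g. an flc and a waveplate followed by a
# polarizer); light traverses the stack from right to left in the product
M_psa = linear_polarizer(0.0) @ linear_retarder(np.pi, np.deg2rad(15.0)) \
        @ linear_retarder(np.pi / 2, np.deg2rad(70.0))
analysis_vector = M_psa[0, :]     # the row that enters the system matrix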
if an unknown polarization state with stokes vector passes through the psa , for a given projection state , as given in equation the detector will measure an intensity depending only on the first row of , and can be considered to be a projection of along a stokes vector equal to , where denotes the transpose .these stokes vectors are organized as rows in the system matrix , which when operated on a stokes vector gives where is a vector containing the intensity measurements at the different projection states .the unknown stokes vector can then be found by inverting , .the noise in the intensity measurements will be amplified by the condition number ( ) of in the inversion to find .therefore of a polarimeter should be as small as possible , which correspond to doing as independent measurements as possible ( _ i.e. _ to use projection states that are as orthogonal as possible ) . the condition number of is given as , which for the 2-norm is equal to the ratio of the largest to the smallest singular value of the matrix .the best condition number that can be achieved for a polarimeter is .if four optimal states can be achieved , no advantage is found by doing a larger number of measurements with different states , compared to repeated measurements with the four optimal states .if , however , these optimal states can not be produced ( ) , the condition number , and hence the error , can be reduced by measuring more than four states . for an flc based polarimeterthis can be done by using three flcs followed by a polarizer as the psa , with up to three waveplates ( wp ) coupled to the flcs to reduce the condition number ( see figure [ fig : polarimeter - sketch ] ) , or components with more than two states , such as liquid crystal variable retarders ( lcvr ) . in this case will not be a square matrix , and the _ moore - penrose pseudoinverse _ is then used to invert . to measure a mueller matrix of a sample ,it is necessary to illuminate the sample with at least four different polarization states .the stokes vectors of these states can be organized as columns in a polarization state generator ( psg ) system matrix .the product gives the resulting four stokes vectors after interaction with the sample , they are then measured by the psa , yielding the intensity matrix .the mueller matrix can then be found by multiplying the expression by and from each side , . for overdetermined psa and psg and not square , the more - penrose pseudo - inverse can then be used to find the best inverse .the psg may be constructed from the same optical components and has the same optimum configuration as the psa .we have already established that should be as small as possible in order to reduce noise in the polarimetric measurements .it is a fairly trivial exercise to optimize for a single wavelength .however , there are two sources of wavelength dependence of the optical properties of the components. one of these is the explicit wavelength dependence of the retardance , which can be calculated as where is the physical thickness of the component ( _ e.g. _ waveplate or flc ) , is the vacuum wavelength of the light , and is the birefringence of the material .birefringence is the difference in refractive index between the fast axis ( index of refraction ) and the slow axis ( ) , _ i.e. _ .there is an explicit wavelength dependence in eq . , which complicates the design of the psa .a weaker but still important effect is the wavelength dependence of the birefringence , _i.e. 
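as an illustration of this algebra, the sketch below builds the system matrix for the ideal case of four analysis states forming a regular tetrahedron on the poincaré sphere, verifies that the 2-norm condition number equals the optimal value sqrt(3), and recovers an arbitrary (partially polarized) stokes vector from the four intensities with the pseudoinverse.

import numpy as np

# four analysis states on the vertices of a regular tetrahedron
V = np.array([[1, 1, 1],
              [1, -1, -1],
              [-1, 1, -1],
              [-1, -1, 1]]) / np.sqrt(3)
W = 0.5 * np.hstack([np.ones((4, 1)), V])   # rows = analysis stokes vectors

print(np.linalg.cond(W, 2), np.sqrt(3))     # both ~1.732, i.e. 1/kappa = 1/sqrt(3)

s_true = np.array([1.0, 0.3, -0.2, 0.5])    # unknown, partially polarized state
b = W @ s_true                              # the four measured intensities
s_est = np.linalg.pinv(W) @ b               # also works for rectangular W
assert np.allclose(s_est, s_true)

relative noise in the intensities b is amplified by at most cond(W) in the reconstructed stokes vector, which is why the designs below aim at keeping the inverse condition number as close to 1/sqrt(3) as possible over the whole wavelength range.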
_ .both of these effects are taken into account by using experimental data for the retardance . to evaluate the performance of a polarimeter design, we compare the inverse condition number ( ) to the theoretically optimal value ( ) . the argument for using rather than is that while ; the latter range is more numerically convenient . in detail , we define an `` error function '' ( ) as in the above equation , we typically use , with and nm . it is , of course , possible to choose other discretization schemes for : for some applications , one can _ e.g. _ be interested in optimal performance near a few spectral lines ( wavelengths ) .we take to the power of four to punish unwanted peaks in more severely . as gasconventionally seek to maximize the fitness function , we define our fitness function as as will never be zero in practice , there is no need to add a constant term in the denominator .the fitness function does not carry any physical significance on its own ; it is simply an overall measure of how well the polarimeter can measure along orthogonal polarization states for the chosen wavelengths .the ga was based on the open python library pyevolve , and was written to handle any kind of optical components .we have however concentrated our efforts on systems based on liquid crystals ( and in particular flcs ) as polarization modulators , with fixed waveplates `` sandwiched '' between them , this coupling of waveplates and flcs enables the achromatic design .the designs we set out to find was one based on three flc retarders and three fixed waveplates , and one based on two flc retarders and two fixed waveplates .each flc has two variables , namely the normalized thickness ( defined in eq . , influencing the retardance ) and its orientation angle .the same is true for the fixed waveplates .this yields 12- and 8-dimensional search spaces : six and four components with two variables each .[ sub : representation ] in the ga , polarimeter designs are represented using a traditional binary genome .each component is assigned a number of bits for and a number of bits for . is the simplest case , as its possible values are limited : the best achievable alignment accuracy is estimated to , and .this means that bits per component for the variable is sufficient . for , one should choose a minimum and maximum value according to which components can be realistically purchased . here , too , is the experimental resolution somewhat coarse , so that one does not need a large number of bits for its representation ( - bits is sufficient ) .after determining and for each of the six or four components , we proceed by determining the full transfer matrix of the psa , for each discrete wavelength and each projection state . as described in section [ sec : theory ] , one can determine the condition number for from the transfer matrices .the first generation was initialized by generating genomes with the bits chosen randomly with a uniform distribution . initially , we let component ordering be a variable in our genome . in that case, the first few bits of the genome would determine the ordering of the components .this was done by interpreting these bits as the index in a list of components . however , the best results from initial simulation runs almost always had the same component ordering as older `` non - genetic '' designs .hence , we removed this feature to speed up convergence .[ sub : genetic operators ] the genetic operators that were used are the well known ones for binary genomes . 
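before those operators are described in more detail, a compact illustration may be useful: the sketch below (all bit widths, parameter ranges and rates are placeholders, not the settings used for the published designs) decodes a binary genome of the kind just described into orientation angles and normalized thicknesses, evaluates the fitness F = 1/E from sampled values of the inverse condition number, and implements the standard binary-genome operators referred to here and detailed in the next paragraph (bit-flip mutation, two-point crossover, and tournament selection with an "underdog" probability).

import random

BITS_THETA, BITS_D = 7, 5                  # bits per angle / per thickness
THETA_STEP = 1.4                           # degrees, of the order of the alignment accuracy
D_MIN, D_MAX = 0.5, 5.0                    # allowed normalized thickness range

def decode(genome, n_components):
    # genome: list of 0/1 of length n_components*(BITS_THETA + BITS_D)
    params, w = [], BITS_THETA + BITS_D
    for i in range(n_components):
        chunk = genome[i * w:(i + 1) * w]
        t = int("".join(map(str, chunk[:BITS_THETA])), 2)
        d = int("".join(map(str, chunk[BITS_THETA:])), 2)
        theta = t * THETA_STEP
        d_norm = D_MIN + d * (D_MAX - D_MIN) / (2 ** BITS_D - 1)
        params.append((theta, d_norm))
    return params

def fitness(inv_kappa_samples, inv_kappa_opt=3 ** -0.5):
    # E = sum over wavelengths of (1/kappa_opt - 1/kappa)^4, and F = 1/E
    E = sum((inv_kappa_opt - x) ** 4 for x in inv_kappa_samples)
    return 1.0 / E

def mutate(genome, rate=0.02):
    # bit-flip mutation applied independently to each bit
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    # two-point crossover of two genomes of equal length
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def tournament(population, scores, pool=2, p_underdog=0.1):
    # tournament selection; occasionally the weaker candidate wins
    idx = random.sample(range(len(population)), pool)
    idx.sort(key=lambda i: scores[i], reverse=True)    # best first
    pick = idx[-1] if random.random() < p_underdog else idx[0]
    return population[pick]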
for mutation, the simple bit - flip operator was used ; _ i.e. _ flipping or vice versa .the mutation rate per individual was typically set to per generation .crossover was performed by multi - point crossover .experience indicates that two crossover points combined with a crossover rate of gives the best convergence performance .the selection protocol we used was tournament selection with individuals in the tournament pool and probability of an `` underdog '' selection .the elitism rate was set to individual per generation .it should be noted that depending on the number of components and , hence , the genome length , the exact rates may have to be adjusted somewhat for optimal performance . in the final simulations a population of individuals evolved over generation .several equivalent simulation runs were performed with different initializations of the random number generator . as the theoretically optimal performance for realistic materials is not known ,no other convergence criteria than the maximum number of generations was used .decent results can , however , be achieved more quickly with smaller population sizes and a lower number of generations .while the ga can handle components with arbitrary dispersion relations , we limit our discussion to components whose wavelength - dependent retardance can be fitted to the following modified sellmaier equation : \end{aligned}\ ] ] here , we call the normalized thickness of the component , as is proportional to the component s physical thickness . the parameters , , , and can be found by fitting experimental data to this model .initially , for the results presented in this paper , numerical values from characterization experiments performed on quartz waveplates and flcs were used .after the initial optimization flcs with the optimal thickness were ordered from _citizen_. after receiving the components , new thickness characterizations were made , and the system was re - optimized on waveplate thickness and orientation , and flc orientation . in the end , the final orientation of all components were found after characterization of the thicknesses . as a result ,the psa and psg do not have the same design , due to differences caused by the optical component manufacturing precision .we seek an improved design for the commercially available mme using two flcs by having a lower condition number over a larger spectral range .as 1000 nm is typical the upper wavelength limit for a silicon detector based spectrograph the range in the optimization was set from 430 nm to 1000 nm , an improvement from 850 nm compared to the commercial system .we also aimed at lower noise amplification in the whole spectral range . the resulting condition number from the optimized polarimeter using two flcsare shown in figure [ fig : condpsgpsa ] .it is noted that the system design was somewhat limited by the fact that thin flcs could not be currently manufactured .only a quasi - optimal system with given flc manufacturer limitations could be designed .the results from the psg and the psa are plotted separately . 
in both axesthe dashed black line shows the theoretical best inverse condition number , the solid red shows the measured inverse condition number on the commercial mm16 instrument .the solid black curve shows the inverse condition number for the optimal configuration using the measured dispersion of the individual components , the dashed blue curve shows the simulated inverse condition number using the actual experimental orientations , and finally , the solid blue curve shows the actual measured , and calibrated inverse condition number for the final designs .all designs are better conditioned than the previously designed instrument , and allows for measurements of the mueller matrix across a broader spectral range .the system can operate down towards the cut - off of the silicon spectrograph detector .some interesting issues appeared in the implementation of the psa / psg , which is related to the mounting accuracy of the optical components .it is noted that certain designs may be more sensitive to small azimuthal mounting errors .one could thus envisage to include in the fitness function certain mounting inaccuracies in order to also search for the most robust design for larger scale production .inverse condition number ( ) for a ga - generated and a previously patented design .the ga - generated design is based on three flcs and three waveplates , while the previous patented design is based on three flcs and one waveplate .the ga design is significantly better than the previous design for all wavelengths . if , noise becomes very problematic , and the instrument is considered unreliable . is theoretically limited to the range . ]we also briefly recall that we have recently reported systems designed using three flcs in the psg and psa for an extended wavelength range from 430 nm to 2000 nm . here, the power of the ga design algorithm becomes even more evident , which is clearly seen by the polarimeter design shown in figure [ fig : condition - numbers ] .the red solid line shows which is our measure of performance , where a higher value of is better .the design parameters of the polarimeters , _i.e. _ the and values , for both the three and two flc design are shown in table [ tab : optimal - flc - polarimeter ] .for comparison with previous designs , we show a recently patented design in comparison with the ga generated one .the ga generated design is based on three flcs and three waveplates , while the previous patented design is based on three flcs and one waveplate .the new design is useful over a broader spectral range ( here defined as the parts of the spectrum where ) and has lower noise amplification due to a lower condition number ( higher inverse condition number ) .it should be noted that the flc technology is limited downwards in wavelength to nm , because of material degradation from ultra violet light .component & & & & & & & & + flc1 & 56.5 & 2.44 & 1991 nm & 46.0 & 1150 nm & 100.6 & 1.06 & 894 + wp1 & 172.9 & 1.10 & 493 nm & & & 10.2 & 3.37 & 1404 + flc2 & 143.3 & 1.20 & 1009 nm & -5.0 & 1050 nm & 89.9 & 1.05 & 901 + wp2 & 127.1 & 1.66 & 722 nm & 92.0 & ( achromatic ) & 18.5 & 3.75 & 1552 + flc3 & 169.4 & 1.42 & 1181 nm & 72.0 & 600 nm & & & + wp3 & 110.1 & 4.40 & 1798 nm & & & & & + [ tab : optimal - flc - polarimeter ] . is the orientation angle of flc3 and is the orientation angle of wp3 , as shown in figure [ fig : polarimeter - sketch ] .the other and parameters were set to the optimal values . 
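to connect the tabulated parameters with the condition-number curves shown in the figures, the sketch below evaluates the inverse condition number as a function of wavelength for a generic two-flc, two-waveplate psa. it is only an illustration: a dispersionless retardance 2*pi*d*(delta n)/lambda, an assumed 45 degree switching angle and made-up component values are used here instead of the measured dispersion data and the table entries.

import numpy as np

def rot(theta):
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def retarder(delta, theta):
    c, s = np.cos(delta), np.sin(delta)
    M = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, s], [0, 0, -s, c]])
    return rot(theta) @ M @ rot(-theta)

POL_ROW = 0.5 * np.array([1.0, 1.0, 0.0, 0.0])    # first row of an ideal polarizer

def inverse_condition_number(lam, components, switch=np.deg2rad(45.0)):
    # components: list of (kind, theta, d_dn), kind is 'flc' or 'wp' and
    # d_dn is the optical path difference d*(delta n), taken constant here
    flcs = [i for i, c in enumerate(components) if c[0] == 'flc']
    rows = []
    for state in range(2 ** len(flcs)):
        M = np.eye(4)
        for i, (kind, theta, d_dn) in enumerate(components):
            if kind == 'flc' and (state >> flcs.index(i)) & 1:
                theta = theta + switch
            M = retarder(2.0 * np.pi * d_dn / lam, theta) @ M
        rows.append(POL_ROW @ M)
    s = np.linalg.svd(np.array(rows), compute_uv=False)
    return s.min() / s.max()

# made-up stack (light meets the first entry first)
stack = [('flc', np.deg2rad(20.0), 255e-9),
         ('wp',  np.deg2rad(65.0), 160e-9),
         ('flc', np.deg2rad(72.0), 510e-9),
         ('wp',  np.deg2rad(40.0), 480e-9)]
for lam in (430e-9, 700e-9, 1000e-9):
    print(lam, inverse_condition_number(lam, stack))

feeding such a function, evaluated on a wavelength grid, into the fitness sketch given earlier reproduces in spirit the optimization loop described in the text.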
one can get an impression of how complex the fitness landscape is from figure [ fig : fitness - landscape ] . here , a plot of is shown , where is the orientation angle of flc3 and is the orientation angle of wp3 , the two first components in figure [ fig : polarimeter - sketch ] . all other parameters , _i.e. _ and values for the other components , were set to the optimal value as given in table [ tab : optimal - flc - polarimeter ] . note that is periodic in both variables with a period of . due to the enormous number of local minima , even in only of the search dimensions , a clever optimization algorithm is required . the mueller matrix ellipsometer based on two flcs in psg and psa ( figure [ fig : condpsgpsa ] ) was inserted into the mm16 instrument from horiba and calibrated in the normal way using the eigenvalue calibration method implemented in the software deltapsi 2 from horiba . to verify the precision of the instrument , ten measurements of air were made ; these are plotted in figure [ fig : mmair ] together with a measurement using the old design ( red curve ) . the mean of the ten measurements is plotted with a dark blue curve and the standard deviation is plotted as the light blue area around the curve . there is no evident difference in accuracy or precision between the measurements using the two different designs . the maximum error is approximately one per cent . the obvious improvement is the operation across an enlarged spectrum . in addition to spectroscopic ellipsometry , an important application of flc based polarimeters is mueller matrix imaging , where the increased bandwidth is important in particular for biological imaging . genetic algorithms ( ga ) are able to generate optimized designs of stokes / mueller polarimeters covering a broader spectral range with reduced noise amplification ( lower system matrix condition numbers ) . compared to previous optimization techniques used for this purpose , which are often based on direct or gradient searches in small parts of the search space , the ga performs better in multidimensional search spaces with many local minima . an instrument based on ferroelectric liquid crystal retarders optimized using the ga was assembled and characterized , showing system properties as expected from the simulations , with an extended spectral range . lmsa acknowledges financial support from the norwegian research center for solar cell technology ( project num . ) . the authors are grateful to denis catelan , horiba , for assisting and advising on the mechanical parts , and the extension of the spectrograph range of the mm16 .
the design of broad - band polarimeters with high performance is challenging due to the wavelength dependence of optical components . an efficient genetic algorithm ( ga ) computer code was recently developed in order to design and re - optimize complete broadband stokes polarimeters and mueller matrix ellipsometers ( mme ) . our results are improvements of previous patented designs based on two and three ferroelectric liquid crystals ( flc ) , and are suited for broad - band hyperspectral imaging , or multichannel spectroscopy applications . we have realized and implemented one design using two flcs and compare the spectral range and precision with previous designs . mueller matrix ellipsometer , optical design , ellipsometry , polarimetry
the need for comparing phylogenetic trees arises when alternative phylogenies are obtained using different phylogenetic methods or different gene sequences for a given set of species .the comparison of phylogenetic trees is also essential to performing phylogenetic queries on databases of phylogenetic trees .further , the need for comparing phylogenetic trees also arises in the comparative analysis of clustering results obtained using different clustering methods or even different distance matrices , and there is a growing interest in the assessment of clustering results in bioinformatics .a number of metrics for phylogenetic tree comparison are known , including the partition ( or symmetric difference ) metric , the nearest - neighbor interchange metric , the subtree transfer distance , the metric from the crossover method , the quartet metric , the metric from the nodal distance algorithm .one of the simplest and easiest to compute metrics proposed so far , the transposition distance , is only defined for fully resolved trees . but phylogenetic analyses often produce phylogenies with polytomies , that is , phylogenetic trees that are not fully resolved . as a matter of fact , at the time of this writing , more than a 66.5% of the phylogenies contained in treebase have polytomies . in this paper , we generalize to arbitrary phylogenetic trees this transposition distance , through a new definition of it .this new distance is directly inspired on the one hand by the matching representation of phylogenetic trees and on the other hand by the _ involution metric _ for rna contact structures .the matching representation of a phylogenetic tree with leaves labeled describes injectively as a partition of .if is fully resolved , which is the particular case considered in , then all members of this partition are 2-elements sets , and then , since , it defines an undirected 1-regular graph .reidys and stadler defined the _ involution metric _ on 1-regular graphs , by associating to each such a graph the permutation given by the product of the transpositions corresponding to its edges , and then using the _ canonical metric _ in the symmetric group ( the least number of transpositions necessary to transform one permutation into another ) to compare these permutations .the translation of this metric to matching representations yields twice the matching distance defined in .unfortunately , no meaningful generalization to arbitrary graphs of reidys and stadler s metric is known , the main drawback being the difficulty of associating injectively a well - defined permutation to an arbitrary graph .now , if is not fully resolved , the members of are no longer pairs of numbers , and therefore they do not define a graph , at least not directly . 
actually , the approach that we take in this paper can be understood as if we represented each member of , with , as a cyclic directed graph with arcs , and as the sum of these cyclic graphs .now , generalizing reidys - stadler s approach , we associate to every such a cyclic directed graph the cyclic permutation ( if , it is a transposition ) , and we describe by means of the product of the cyclic permutations associated to its members : since these members are disjoint to each other , this product is well - defined .this defines an embedding of the set of phylogenetic trees with leaves labeled into the symmetric group .the transposition distance is obtained by translating the canonical metric on into a distance for phylogenetic trees through this embedding .this transposition distance measures the least number of certain simple operations ( splitting sets of children , joining sets of children , interchanging children ) that are necessary to transform one tree into another , and it can be easily computed in linear time .therefore it satisfies the requirements of `` computational simplicity '' and `` good theoretical basis '' that are required to any distance notion on phylogenetic trees .throughout this paper , by a _phylogenetic tree _ we mean a _ rooted tree with injectively labeled leaves and without outdegree 1 nodes_. thus , a phylogenetic tree is a directed finite graph containing a distinguished node , called the _ root _, such that for every other node there exists one , and only one , path from the root to .the _ children _ of a node in a tree are those nodes such that .the _ outdegree _ of a node is the number of its children .the nodes without children are the _ leaves _ of the tree , and the remaining nodes are called _ internal _ : since we assume that no node has outdegree 1 , every internal node has at least 2 children .the set of leaves of is denoted by .height _ of a node in a tree is the length of a longest directed path from to a leaf .thus , the nodes with height 0 are the leaves , the nodes with height 1 are the nodes all whose children are leaves , and so on . the leaves of a phylogenetic tree are injectively labeled in a fixed , but arbitrary , ordered set : these labels are called _taxa_. in practice , if the tree has leaves , we shall identify their labels with , ordered in the usual increasing way .the label associated to a leaf will be denoted by .we shall denote by the set of all phylogenetic trees with leaves labeled ( up to label - preserving isomorphisms of rooted trees ) .the _ bottom - up ordering _ ( cf . ) of a phylogenetic tree is the injective mapping defined by the following properties : 1 . if , then is its label .2 . if , then .3 . if and then .it is straightforward to notice that this bottom - up ordering is unique , and it can be computed in time linear in the size of the tree by bottom - up tree traversal techniques .first , the leaves of are labeled by their label in .then , the height 1 nodes are labeled from on in the order given by the smallest label of their children : i.e. , the height 1 node with the smallest child label is assigned label , the height 1 node with the next - smallest child label is assigned label , etc . 
andthis procedure is continued for consecutively increasing heights .the detailed pseudocode is given in algorithm [ alg : bottomupordering ] .[ ex : primer ] fig .[ fig : example1 ] shows the tree t166c11x6x95c08c56c38 in treebase and its bottom - up ordering after sorting its taxa alphabetically .the next definition generalizes the perfect matching representation of binary , or fully resolved , trees .let be a phylogenetic tree with leaves labeled , and let be its bottom - up ordering .the _ matching representation _ of is the partition of defined as follows : the matching representation of the tree in fig .[ fig : example1 ] is the partition of given by it is clear that , once the bottom - up ordering of has been obtained , the set can be produced in linear time in the size of the tree .furthermore , the following two results are straightforward . for every , . for every , if , then every , let denote the symmetric group of permutations of . by a _ cycle _ in understand a cyclic permutation , with , that sends to , to , , to , and to , leaving fixed the remaining elements of .recall that the inverse of a cycle is : the permutation that sends to , to , , to , and to .the _ length _ of a cycle is the number of elements it moves .the _ cycle associated to a subset _ , with and , of , is . if , i.e. , if is a singleton , then is the identity in , which we do not consider a cycle .the _ matching permutation _ associated to a phylogenetic tree is the permutation of defined by the product of the sorted cycles associated to the members of its matching representation : [ ex : pit1 ] the matching permutation associated to the tree in fig .[ fig : example1 ] is the product of cycles i.e. , the permutation if are two different internal nodes of , then .therefore , all cycles appearing in the product defining are disjoint to each other , and hence they commute with each other , which implies that this product is well defined .notice that no element in remains fixed by , because every , with internal , has at least two elements and every element in is the bottom - up ordering label of a child of some internal node .now , if is a phylogenetic tree with leaves , then , the equality holding if and only if is binary . to be able to compare matching permutations of phylogenetic trees with the same number of leaves but different numbers of internal nodes , we shall understand henceforth that the matching permutation belongs to , leaving fixed the elements .the following result is a direct consequence of the facts that the matching representation of a phylogenetic tree uniquely determines it and every permutation has a unique decomposition as a product of disjoint cycles of length .[ prop : piinj ] for every , if , then .if we allow the existence of outdegree 1 nodes in our phylogenetic trees , then the last proposition is no longer true .indeed , consider the trees in fig .[ fig : example0 ] . the left - hand side one has matching representation , while the right - hand side one has matching representation .therefore the matching permutation associated to both trees is ( considered as an element of ) .arguing as in ( * ? ? ?* cor . 1 ), we have the following result .the mapping that associates to every pair of phylogenetic trees with leaves labeled in , the least number of transpositions necessary to represent the permutation , is a metric on . 
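a small sketch may help fix the constructions defined so far . the code below computes the bottom - up ordering , the matching representation and the matching permutation of a rooted tree given as a dictionary from each internal node to its list of children , with the leaves labeled 1 , ... , n ; it follows the definitions above , and the toy tree in the example is invented rather than taken from the figures . the embedding into the symmetric group on 1 , ... , 2n - 2 is assumed , as discussed above .

def node_heights(children, root):
    """Height of every node: length of a longest downward path to a leaf."""
    heights = {}
    def visit(v):
        kids = children.get(v, [])
        heights[v] = 1 + max(visit(c) for c in kids) if kids else 0
        return heights[v]
    visit(root)
    return heights

def bottom_up_ordering(children, root, n_leaves):
    """Leaves are assumed to be the integers 1..n_leaves and keep their labels;
    internal nodes are labeled level by level, ordered by smallest child label."""
    heights = node_heights(children, root)
    tau = {leaf: leaf for leaf in range(1, n_leaves + 1)}
    next_label = n_leaves + 1
    for h in range(1, max(heights.values()) + 1):
        level = [v for v, hv in heights.items() if hv == h]
        level.sort(key=lambda v: min(tau[c] for c in children[v]))
        for v in level:
            tau[v] = next_label
            next_label += 1
    return tau

def matching_representation(children, tau):
    """One block per internal node: the bottom-up labels of its children."""
    return [sorted(tau[c] for c in kids) for kids in children.values() if kids]

def matching_permutation(blocks, size):
    """Product of the cycles (a1 a2 ... ak) associated to the sorted blocks,
    viewed as a permutation of 1..size (size = 2n - 2, an assumption matching
    the embedding described in the text)."""
    perm = {i: i for i in range(1, size + 1)}
    for block in blocks:
        for a, b in zip(block, block[1:] + block[:1]):   # a1->a2->...->ak->a1
            perm[a] = b
    return perm

# toy example (not one of the trees from the figures):
# leaves 1..4; internal node 'x' has children 1 and 2; the root has children x, 3, 4
children = {"root": ["x", 3, 4], "x": [1, 2]}
tau = bottom_up_ordering(children, "root", n_leaves=4)
blocks = matching_representation(children, tau)
pi = matching_permutation(blocks, size=2 * 4 - 2)
print(tau)     # {1: 1, 2: 2, 3: 3, 4: 4, 'x': 5, 'root': 6}
print(blocks)  # [[3, 4, 5], [1, 2]]
print(pi)      # {1: 2, 2: 1, 3: 4, 4: 5, 5: 3, 6: 6}

that this construction yields a metric is proved next .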
by proposition[ prop : piinj ] , the mapping that sends every to its matching permutation is an embedding .then , since the mapping defined by is a metric on ( see , for instance , ( * ? ? ?2 ) ) , the mapping is a metric on .[ record ] recall that the least number of transpositions required to represent a cycle of length is , for instance through and that the least number of transpositions required to represent a product of disjoint cycles is the sum of the least numbers of transpositions each cycle decomposes into , and hence the sum of the cycles lengths minus the number of cycles .the metric satisfies the following property .[ prop : tdparell ] for every , is an even integer smaller than .if have and internal nodes , respectively , then each ( ) decomposes into disjoint cycles : say , with of length . then , by remark [ record ] , has a decomposition into transpositions .but then admits a decomposition into transpositions .this entails that _ every _ decomposition of this permutation into a product of transpositions must involve an even number of them , and therefore that is an even integer .as far as the stated upper bound for goes , notice that moves at most elements and that if it is not the identity , then its decomposition into disjoint cycles has at least 1 cycle .therefore , again by remark [ record ] , a minimal decomposition of this permutation into transpositions will involve at most transpositions , and since this number is even , this implies that . in other words , is `` artificially '' multiplied by 2 .thus , we define a new metric on by dividing by 2 .the _ transposition distance _ on is in this way , takes values in .[ example2 ] let be the phylogenetic trees displayed in fig .[ fig : example2 ] ( which we already give bottom - up ordered ) .their matching permutations are ( understood as permutations in ) , and then which yields the distances between these trees given in table [ taula0 ] ..transposition distances between pairs of trees .[ cols="^,^,^,^,^",options="header " , ] [ taula0 ] the transposition distance between two phylogenetic trees can be easily computed in linear time . to prove it , we move to the more general setting of permutations and the graphs associated to them . for every permutation ,the _ directed graph _ associated to is the graph with the directed graph associated to the inverse of a permutation is obtained by reversing all arrows in : thus , and . given two permutations , by we understand the 2-colored - arcs multigraph with set of nodes , set of red arcs and set of blue arcs .we shall say that a node of is _ unbalanced _ when it is isolated in one , and only one , of the graphs ( which means that it is fixed by one , and only one , of the permutations ) .[ prop : unbalanced ] for every unbalanced node of : 1. if is isolated in and with , then replacing the red arcs and by a single red arc increases by 1 . 2 .if is isolated in and , removing the red arcs and increases by 1 .similar properties hold if is isolated in but not in and we modify the set of blue arcs .\(1 ) if , with , then and and hence , , and for every . therefore , replacing the arcs by an arc is equivalent to replacing by .so , it is enough to prove that , with the notations and assumptions of point ( 1 ) , to prove this equality , notice that , since is fixed by , sends to and to : let us denote this last index by . 
if , then is a cycle of and it appears in any decomposition of this permutation as a product of transpositions .but then both and are fixed by , and since and act exactly in the same way on the other elements , we deduce that and then in this case .if , then the cycle of moving has at least three elements : and thus it contributes transpositions to a minimal decomposition of as a product of transpositions .now , the cycle of that moves is and it only contributes transpositions to any decomposition of as a product of transpositions .therefore , also in this case .\(2 ) if , then , and hence and for every . therefore , to remove the arcs in this case means again to replace by .so , again in this case , it is enough to prove that , with the notations and assumptions of point ( 2 ) , since is fixed by , we have that sends to and to : let us denote this last index by . if , i.e. , if is also fixed by , then is a cycle of and it appears in any decomposition of this permutation as a product of transpositions .but then both and are fixed by and and then .if , then the cycle of moving has at least three elements : and thus it contributes transpositions to any decomposition of as a product of transpositions .now , is fixed by and the cycle of this permutation moving is and it only contributes transpositions to any decomposition of as a product of transpositions .thus , again in this case , .[ prop : nounbalanced ] if has no unbalanced node , then where is the number of non - isolated nodes of and is the number of _ alternating cycles _ in , i.e. , of cycles in this directed 2-colored - arcs multigraph such that two consecutive arcs have different colors .if has no unbalanced node , then every node either is isolated or has exactly one incoming and one outcoming arc of each color .this entails that decomposes into the union of arc - disjoint alternating cycles .now , every length alternating cycle with for every and for every and , corresponds to a length cycle of and hence it adds transpositions to any decomposition into transpositions of this permutation . therefore , if we denote by the set of alternating cycles in , we have that finally , it is straightforward to notice that if has no unbalanced node , then and it is equal to the number of non - isolated nodes in this multigraph .these propositions allow us to compute , for , in time linear on using the procedure given in pseudocode in algorithm [ alg : multigraph ] .if and are two phylogenetic trees with different sets of labels , then we can compute their transposition distance by first restricting them to the sets of leaves with common labels , and then relabeling consecutively these common labels , starting with 1 .since we do not allow outdegree 1 nodes , when we restrict a phylogenetic tree to a subset of its set of taxa we contract edges to remove outdegree 1 nodes .[ example3 ] let be the phylogenetic tree in example [ ex : primer ] and let be the lower phylogenetic tree displayed in fig .[ fig : nova ] , which represents the bottom - up ordering ( with its taxa sorted alphabetically ) of the tree t270c2x3x96c12c57c27 in treebase after removing the outer taxon _ dalbergia _ ( and the elementary root created in this way ) , which does not appear in .its matching permutation is since ( see example [ ex : pit1 ] ) , the multigraph has nodes , red arcs , , , , , , , , , , , , , and , and blue arcs , , , , , , , , , , , , , , , , and . to compute , we start with and . 
1 .at the beginning , , , and are unbalanced .then , we remove the pairs of blue arcs , , , and and we set and .2 . in this way, the nodes become unbalanced .then , we remove the pairs of red arcs , and we replace the pair of red arcs by a new red arc and we set and .now , has become unbalanced .then , we remove the pair of blue arcs , and we set and ., has become unbalanced .then , we replace the pair of red arcs by a new red arc and we set and .5 . at this moment, there does not remain any unbalanced node : the resulting multigraph has 5 alternating cycles ( a cycle , a cycle , a cycle , and two cycles ) .then , we have in the introduction we mentioned that the transposition distance defined in this paper generalizes the transposition distance for fully resolved trees .this will be a direct consequence of the following result .[ thm : matching ] for every pair of binary phylogenetic trees , let be the undirected multigraph with and , and let be the set of connected components of .then , .let and denote the directed graphs associated to and .since and are binary , in for every blue or red arc there is the inverse arc of the same color , and the graph in the statement is the undirected graph obtained by replacing each pair of arcs of the same color by the undirected edge , which we shall understand colored with the same color as the original pair .since are binary , and therefore they have nodes , no one of the nodes of is unbalanced or isolated . then , by proposition [ prop : nounbalanced ] , moreover , is 2-regular , and therefore , every connected component in is an alternating cycle , which contains exactly two alternating cycles of .therefore . combining this equality with the expression for given by proposition [ prop : nounbalanced ], we obtain the expression in the statement . in , the _ transposition distance _ between two _binary phylogenetic trees _ and was defined as the least number of transpositions necessary to transform into : in this context , a _transposition _ means a replacement of a pair of 2-elements sets by a new pair .theorem 1 in _ loc .cit . _ and the last proposition entail that , for binary phylogenetic trees , our transposition distance and the transposition distance defined in are the same .we have implemented in perl the algorithms for the transposition distance between phylogenetic trees , using the bioperl collection of perl modules for computational biology .the software is available in source code form for research use to educational institutions , non - profit research institutes , government research laboratories , and individuals , for non - exclusive use , without the right of the licensee to further redistribute the source code .the software is also provided for free public use on a web server , at the address ` http://www.lsi.upc.edu/~valiente/ ` using this implementation , we have performed a systematic study of the treebase phylogenetic database , the main repository of published phylogenetic analyses , which currently contains 2,592 phylogenies with 36,593 taxa among them .previous studies have revealed that treebase constitutes a scale - free network .similarity of phylogenetic trees in treebase based on the transposition distance .each bullet represents the distance between a phylogenetic tree and the most similar phylogenetic tree in treebase ( other than itself ) with at least three common taxa . 
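the computation carried out step by step in the example above can also be written directly in terms of cycle structure : the canonical distance between the two matching permutations is the number of elements moved by one composed with the inverse of the other , minus the number of nontrivial cycles , and the transposition distance is half of that . for fully resolved trees , theorem [ thm : matching ] reduces this to counting connected components of the union of the two matchings ; the closed form n - 1 minus the number of components used below is our reading of the proof , so it should be treated as an assumption rather than a quotation of the statement . a minimal sketch of both routes :

def invert(perm):
    return {v: k for k, v in perm.items()}

def compose(f, g):
    """(f o g)(x) = f(g(x)) on a common domain."""
    return {x: f[g[x]] for x in g}

def min_transpositions(perm):
    """Least number of transpositions representing perm:
    (elements moved) - (number of nontrivial cycles)."""
    seen, moved, cycles = set(), 0, 0
    for start in perm:
        if start in seen or perm[start] == start:
            continue
        cycles += 1
        x = start
        while x not in seen:
            seen.add(x)
            moved += 1
            x = perm[x]
    return moved - cycles

def transposition_distance(pi1, pi2):
    """Half the canonical distance between the two matching permutations."""
    return min_transpositions(compose(pi1, invert(pi2))) // 2

def binary_transposition_distance(matching1, matching2, n_leaves):
    """For fully resolved trees: n - 1 minus the number of connected components
    of the union of the two perfect matchings (assumption read off the proof
    of theorem [thm:matching])."""
    adjacency = {}
    for u, v in list(map(tuple, matching1)) + list(map(tuple, matching2)):
        adjacency.setdefault(u, []).append(v)
        adjacency.setdefault(v, []).append(u)
    seen, components = set(), 0
    for start in adjacency:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adjacency[x])
    return (n_leaves - 1) - components

# two fully resolved trees on 4 leaves: a caterpillar and a balanced tree
t1 = [(1, 2), (3, 5), (4, 6)]     # parent(1,2)=5, parent(5,3)=6, root joins 6 and 4
t2 = [(1, 2), (3, 4), (5, 6)]     # cherries {1,2}=5 and {3,4}=6
pi1 = {1: 2, 2: 1, 3: 5, 4: 6, 5: 3, 6: 4}
pi2 = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}
print(transposition_distance(pi1, pi2))              # 1
print(binary_transposition_distance(t1, t2, 4))      # 1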
in order to assess the usefulness of the new distance measure in practice , we have computed the transposition distance for each of the pairs of phylogenetic trees in treebase . then , for each phylogenetic tree , we have recovered the most similar phylogenetic tree in treebase ( other than itself ) with at least three taxa in common . the results , summarized in fig . [ fig : treebase ] , show that the transposition distance allows for a good recall of similar phylogenetic trees . this work has been partially supported by the spanish dges project bfm2003 - 00771 albiom , the spanish cicyt project tin 2004 - 07925-c03 - 01 grammars , and the ue project intas it 04 - 77 - 7178 .
j. e. stajich , d. block , k. boulez , s. e. brenner , s. a. chervitz , c. dagdigian , g. fuellen , j. g. gilbert , i. korf , h. lapp , h. lehvaslaiho , c. matsalla , c. j. mungall , b. i. osborne , m. r. pocock , p. schattner , m. senger , l. d. stein , e. stupka , m. d. wilkinson , e. birney . `` the bioperl toolkit : perl modules for the life sciences . '' _ genome research _ , 12 ( 2002 ) , 1611 - 1618 . http://www.bioperl.org
g. valiente . `` a fast algorithmic technique for comparing large phylogenetic trees . '' in _ proc . string processing and information retrieval _ , lecture notes in computer science 3772 ( 2005 ) , 370 - 375 .
the search for similarity and dissimilarity measures on phylogenetic trees has been motivated by the computation of consensus trees , the search by similarity in phylogenetic databases , and the assessment of clustering results in bioinformatics . the transposition distance for fully resolved phylogenetic trees is a recent addition to the extensive collection of available metrics for comparing phylogenetic trees . in this paper , we generalize the transposition distance from fully resolved to arbitrary phylogenetic trees , through a construction that involves an embedding of the set of phylogenetic trees with a fixed number of labeled leaves into a symmetric group and a generalization of reidys - stadler s involution metric for rna contact structures . we also present simple linear - time algorithms for computing it .
rocky planet formation begins as gas and dust around a young star settle into a thin disk .the emergence of planets within this disk is the result of three phases of evolution ( safronov 1969 ; weidenschilling 1980 ; hayashi 1981 ; wetherill & stewart 1993 ; ida & makino 1993 ; kokubo & ida 1996 ) .initially , coagulation of dust particles causes a stochastic but steady growth in particle mass .the few largest bodies , or planetesimals , accumulate mass most quickly , and experience a runaway growth phase . as the largest objects clear out the smaller ones , the planetesimal growth is oligarchic , where the largest objects `` oligarchs '' become isolated from their neighbors and grow roughly at the same rate . in a final phase of chaotic growth , mergers of oligarchs lead to the formation of a few terrestrial planets around sun - like stars well within 100 myr ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?this scenario is bolstered by recent observations .most , if not all , young late - type stars have a disk of gas and dust ( backman & paresce 1993 ; beckwith 1999 ; lada 1999 ) .at least some of these disks will form large planets , similar to those detected in radial velocity studies ( e.g. , marcy & butler 2000 ) .the persistence of dusty disks around older stars suggests that smaller planets form as well .dust should be ejected by radiation pressure , but on - going formation of rocky planets can drive a collisional cascade which continually produces dust from a reservoir of small planetesimals ( kenyon & bromley 2001 , 2002a , 2002b ) .stars with such debris disks include pic ( kalas et al .2000 ; wahhaj et al .2003 ) , eri ( greaves et al .1998 ) , hr 4796a ( jayawardhana et al . 1998 ; koerner et al . 1998 ) and vega ( e.g. , koerner , sargent & ostroff 2001 ) .interpretations of observed dusty debris disks , and our understanding of planet formation in general , rely heavily on numerical calculations .two types of tools , statistical solvers and -body codes , provide complementary information about planetary disk evolution .the effectiveness of each tool depends on , the number of particles they can track .statistical methods like safronov s ( 1969 ) particle - in - cell formalism work well when the particles are numerous and have small mass .current -body codes are unable to follow planetary growth in this regime .when the mass of individual objects gets large , binary interactions become important .orbit evaluations then require direct -body calculations , not the ensemble averages of particle - in - cell approaches . our experience with coagulation codes ( kenyon & luu 1998 , 1999 ; kenyon & bromley 2001 , 2002b , 2003 ) suggests that we can accurately evolve objects with masses below g using the statistical approach . however , an -body code must track particles of heavier mass ( see also kokubo & ida 1996 , 2002 ). 
massive planetesimals requiring direct -body evolution are relatively rare in typical planet formation models .this situation is fortunate because -body codes can not accurately track large numbers of particles for long periods of time .simulations performed by chambers ( 2001 ) and some runs reported here involve particles integrated over 100 myr .ida , kokubo & kominami ( 2003 ) ran larger simulations with , but only for a .5 myr time period and only with specialized hardware for gravitational force calculations .the complementary limitations of -body algorithms and coagulation codes call for an integration of both methods into a single `` hybrid code '' ( jewell & alexander 1996 ; weidenschilling et al. 1997 ) here we describe an algorithm that includes both a statistical component and a direct -body part .our goal is to run self - consistent simulations of planet formation , tracking objects from micron - sized dust grains to jupiter mass planets .the coagulation code , described briefly below in 2 , accounts for gas , dust , and a swarm of lower mass planetesimals . our -body algorithm ( 3 ) , is invoked only when needed to evolve the largest planetesimals . in 4 we discuss the hybrid code itself , and focus on how the -body and coagulation parts interact .we provide tests of this code in 5 and give preliminary results related to the problem of terrestrial planet formation .the coagulation part of our hybrid code tracks dust and planetesimals in multiple annuli around the central star .kenyon & luu ( 1998 , 1999 ) and kenyon & bromley ( 2001 , 2002b , 2004a ) describe the algorithms and include a complete set of references . herewe briefly review this statistical algorithm .the physical processes which we simulate include coagulation , fragmentation , gas drag , poynting - robertson drag , and radiation pressure .we discretize the continuous distribution of particle masses into mass batches , assigning an integral number of particles to each batch . to allow better resolution of the mass spectrum for particles evolving rapidly , adjacent batches differ in mass by a mass - dependent factor .we dynamically adjust the number of mass batches , as needed .our spatial domain is cylindrical and is divided into a set of concentric annuli about the central star .the annuli may be set so that either they have equal width , or their boundaries are equally spaced in log - radius .masses and particle numbers evolve according to the coagulation equations which include the effects of collisions , poynting - robertson drag , and interaction with the gas disk .our collisions rates come from geometric cross sections of particles , augmented by a gravitational focusing factor for larger planetesimals .collision outcomes depend on the planetesimal tensile strength , gravitational binding energy , and relative velocities . depending on these parameters, collisions result in mergers , fragmentation , and dust production .we follow particle velocities statistically for each mass batch in each annulus , tracking vertical and horizontal velocity dispersions relative to keplerian orbits in the central plane of the disk .these dispersions evolve under the influence of gas drag , poynting - robertson drag , viscous stirring and dynamical friction , according to a set of fokker - planck equations ( hornung , pellat & barge 1985 ; wetherill & stewart 1993 ; stewart & ida 2000 ; ohtsuki , stewart , & ida 2002 ) . 
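as a concrete illustration of the bookkeeping just described , the sketch below builds a grid of mass batches whose adjacent members differ by a fixed spacing factor , together with annuli that are either equally wide or equally spaced in log - radius . the spacing factor , the mass range and the radial range are placeholders , not the values used in the production calculations .

import numpy as np

def mass_batches(m_min, m_max, delta):
    """Mass grid in which adjacent batches differ by the factor delta (> 1)."""
    n = int(np.ceil(np.log(m_max / m_min) / np.log(delta))) + 1
    return m_min * delta ** np.arange(n)

def annuli(r_in, r_out, n_annuli, log_spaced=False):
    """Inner/outer boundaries and centers of concentric annuli about the star."""
    if log_spaced:
        edges = np.geomspace(r_in, r_out, n_annuli + 1)
    else:
        edges = np.linspace(r_in, r_out, n_annuli + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return edges[:-1], edges[1:], centers

# illustrative setup: batches from 1e9 g to 1e26 g spaced by a factor of 2,
# and 32 equal-width annuli between 0.84 and 1.16 au
masses = mass_batches(1.0e9, 1.0e26, delta=2.0)
inner, outer, centers = annuli(0.84, 1.16, 32)
print(len(masses), "mass batches,", len(centers), "annuli")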
to solve the coagulation and fokker - planck equations , we use a fourth - order runge - kutta method .the algorithm strictly conserves mass .when objects shift from one annulus to another , velocity dispersions are updated to conserve kinetic energy explicitly . at the inner and outer boundaries of the spatial domain ,the mass batches reflect a steady , radial flow in the debris disk .the code does not enforce angular momentum conservation , nonetheless , in test runs of timesteps , angular momentum is conserved to better than 1% .the coagulation code evolves the swarm of planetesimals accurately until the largest objects reach roughly the mass of pluto .because these massive objects are rare , the statistical model provides a poor estimate of their behavior . although the coagulation code can still follow the evolution of the swarm , the orbits of the largest objects require explicit calculations using an -body code , which we describe in the next sectionplanetary -body codes must be fast and accurate . suitable solvers for the equations of motion are numerous , with published descriptions of symplectic integrators ( wisdom & holman 1991 ; kinoshita , yoshida & nakai 1991 ; saha & tremaine 1992 ) , and more general time - symmetric integrators ( e.g. , quinlan & tremaine 1990 , hut , makino & mcmillan 1995 ) . some integrators ( vasilev 1982 ; wisdom & holman 1991 ; ida & makino 1992 ; fukushima 1996 ; shefer 2002 ) speed up the calculations by tracking only deviations from a purely keplerian orbit about the central star , following the proposal of encke ( 1852 ) .other improvements include adaptive timestepping , allowing integrators to take large timesteps when forces are small , and to expend more computational resources only when particles experience strong forces ( aarseth 1985 ; mcmillan & aarseth 1993 ; skeel & biesiadecki 1994 ; kokubo , yoshinaga & makino 1997 ; duncan , levison & lee 1998 ) .our algorithm is based on the encke method .we work with non - inertial , keplerian frames about the central star and integrate the equations of motion in rectilinear coordinates defined relative to the keplerian frames .these coordinates have a fixed orientation , aligned with the inertial frame of the central star .we use an adaptive block timestepping scheme ( e.g. , mcmillan & aarseth 1993 ) and a sixth- or eighth - order accurate integrator based on richardson extrapolation .we give details of the integrator below .for now we briefly outline the timestepping scheme . at the beginning of a timestep , each particle s accelerated reference frame is initially set to its instantaneous keplerian orbit .if particles are in tight groups , their reference frames can be set to the keplerian orbit of their mutual center of mass .a friends - of - friends algorithm ( huchra & geller 1982 , geller & huchra 1983 ) finds any such groups with a linking parameter based on the hill radii of the particles .we then integrate the equations of motion in terms of spatial variables in the accelerated reference frames over a single timestep of length .next we calculate higher resolution orbits by dividing the timestep into equal substeps . 
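the friends - of - friends grouping mentioned in the timestepping outline can be written down in a few lines . the sketch below links any two bodies whose separation is smaller than a multiple of their mutual hill radius and merges the links into groups with a union - find structure ; the linking multiplier and the input format are assumptions made for the illustration .

import numpy as np

def mutual_hill_radius(m1, m2, a, m_star):
    """r_h = a * ((m1 + m2) / (3 m_star))**(1/3), with a the mean orbital distance."""
    return a * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

def friends_of_friends(positions, masses, a_orbit, m_star, link=3.0):
    """Group particles whose pairwise separation is < link * mutual Hill radius."""
    n = len(masses)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            r_h = mutual_hill_radius(masses[i], masses[j],
                                     0.5 * (a_orbit[i] + a_orbit[j]), m_star)
            if d < link * r_h:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

each group s center of mass can then serve as the origin of a shared keplerian reference frame . we return now to the choice of the number of substeps in the refinement step .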
in practice , seems to work best .a comparison between the lower and higher resolution orbits produces a check of integrator convergence .we have implemented several criteria for convergence , but have found that an individual particle s total energy , taken as a fraction , of its keplerian orbital energy , gives an effective comparison under most circumstances .if a particle s orbit has not converged , we simply repeat the process at even higher time resolution .the strategy of a block timestepping scheme is to track individual orbits and interpolate the converged orbits if needed to evaluate forces on particles whose orbits have not converged .this procedure reduces the computational load of force calculations if the number of particles is large ( ) . for a smaller number of particles we prefer to integrate all orbits until every orbit has converged .energy conservation is better in this case and there is no significant penalty in computational load .the time integrator is an ordinary differential equation ( ode ) solver based on richardson extrapolation .we start with a low - resolution estimate of the particle orbits over a time interval using a leapfrog intergrator . specifically , the position and velocity of a particle at the end of a time - forward step in the leapfrog case is where is the acceleration , which in our case depends only on position .we label the position at the end of this single leapfrog timestep as , and note that it contains errors of the order of .next we divide the time interval into two equal parts and take two successive leapfrog steps to derive a better resolved orbital position , .we can further subdivide the time interval to obtain a position , derived from successive leapfrog timesteps .our final position at the end of the time interval is a linear combination of these results , fourth- , sixth- , and eighth - order methods have , 2 and 3 , respectively and coefficients we calculate the gravitational forces between particles directly . at the lowest resolution timesteps ,we perform the o force evaluations .however , if we use the block timestepping scheme then only a few particles depend on o interparticle forces at high temporal resolution . to simulate a larger number of bodies , we have an o( barnes & hut ( 1986 ) treecode , as described in barton , bromley & geller ( 1999 ) .preliminary tests suggest that this code becomes competitive with a direct method only when is greater than o. an alternative would be to use specialized hardware ( e.g. , hut & makino 1999 ) . here ,our particle numbers are small and we work exclusively with the direct force solver . for realistic planet formation models , an -body algorithm must identify merger events .we check for mergers in a fast way , by assuming that any merger event which occurs during a single timestep can involve only two particles . for each particlewe save an index number corresponding to some other particle which came the closest during the force calculations .this indexing reduces the number of calculations from to . for each pair of bodies , with indices and , we check the relative radial velocity using the quantity , where the vectors are relative position and velocity respectively .if the relative radial velocity is positive at the beginning of a timestep , then we assume no merger takes place . 
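since the extrapolation coefficients themselves are not reproduced above , the sketch below uses the standard richardson weights for a second - order , time - symmetric method ( an error series in even powers of the step ) : solutions obtained with 1 , 2 and 4 leapfrog substeps are combined to fourth or sixth order . it is a schematic stand - in for the integrator described in the text , with the acceleration supplied by the caller .

import numpy as np

def leapfrog(x, v, accel, dt, n_steps):
    """n_steps kick-drift-kick steps of size dt / n_steps."""
    h = dt / n_steps
    for _ in range(n_steps):
        v = v + 0.5 * h * accel(x)
        x = x + h * v
        v = v + 0.5 * h * accel(x)
    return x, v

def richardson_step(x, v, accel, dt, order=6):
    """Combine 1-, 2- and 4-substep leapfrog solutions.  For a second-order
    symmetric method the error expansion contains only even powers of the
    step, so the classical weights (4*y2 - y1)/3 and (16*z2 - z1)/15 apply
    (an assumption standing in for the coefficients used in the actual code)."""
    y1 = leapfrog(x, v, accel, dt, 1)
    y2 = leapfrog(x, v, accel, dt, 2)
    r1 = [(4.0 * b - a) / 3.0 for a, b in zip(y1, y2)]            # 4th order
    if order == 4:
        return tuple(r1)
    y4 = leapfrog(x, v, accel, dt, 4)
    r2 = [(4.0 * b - a) / 3.0 for a, b in zip(y2, y4)]            # 4th order, finer
    return tuple((16.0 * b - a) / 15.0 for a, b in zip(r1, r2))   # 6th order

# quick check on a circular Kepler orbit with GM = 1
accel = lambda x: -x / np.linalg.norm(x) ** 3
x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(1000):
    x, v = richardson_step(x, v, accel, 2 * np.pi / 1000)
energy = 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x)
print("energy error after one orbit:", abs(energy + 0.5))

as noted above , a pair whose relative radial velocity is positive at the start of the timestep is simply assumed not to merge .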
otherwise , we find the minimum separation of the pair during the timestep , interpolating if necessary .once we have the minimum pair separation and relative speed , a merger event is identified if \ ] ] where is the escape velocity at distance from the particle , and is the particle s physical radius .thus the merger cross - section is at least the physical cross - section , and is larger if the relative velocity is small compared to the escape velocity of either particle .with our coagulation code and -body algorithm , we track both the numerous low - mass planetesimals and the relatively rare high - mass bodies in a planetary disk .the interactions between these two populations generally cause the larger bodies to circularize in their orbits , while the smaller bodies tend to get gravitationally stirred , as reflected in an overall increase in eccentricity and inclination . herewe describe how the coagulation and -body components work together to simulate these effects .the multiannulus coagulation code evolves all low - mass bodies .the model grid contains concentric annuli with widths centered at heliocentric distances .each annulus contains ) objects of mass with orbital eccentricity and inclination in mass batches .when an object in the coagulation code reaches a preset mass , it is ` promoted ' into the -body code .to set the initial orbit for this object , we use the three coagulation coordinates , , , and , and select random values for the longitude of periastron and the argument of perihelion .because the annuli have finite width , we set the semimajor axis of the promoted object to = , where is a random number between 0 and 1 .when two or more objects within an annulus are promoted to the -body code during the same timestep , we restrict the choices of the orbital elements to minimize orbital interactions between the newly promoted -bodies .the coagulation code also determines the simulation timestep .this interval , , is generally larger than the lowest resolution timestep of the -body code , so we take multiple low - resolution substeps , , within that interval .the length of a substep is set by the orbital speed of the fastest moving body ; for nearly circular orbits this limit is roughly 1/60 of the orbital period . to calculate the effects of the swarm of low - mass bodies on the -bodies , we use particle - in - a - box estimates .for an n - body with index , mass , eccentricity , inclination , horizontal velocity , and vertical velocity , we derive the time - evolution of the orbital eccentricity and inclination from the fokker - planck formulae for the derivatives of the horizontal and vertical velocities : for viscous stirring in the high velocity limit and for dynamical friction in the high velocity limit . in these expressions , , , and the functions , , , and are definite integrals .the subscript is the annular index while subscript is the index of a mass batch .the overlap fraction is the fraction of bodies in annulus that approach within 2.4 of the -body .we set , where is the total mass of bodies with in annulus and is the volume of annulus . following stewart & ida ( 2000 ) , we also set , where with , , , , and ; the subscript denotes a keplerian velocity ( see also * ? ? ?* ; * ? ? ?when the relative velocities of particles approach the hill velocity , , we set and use for viscous stirring in the low velocity limit and for dynamical friction in the low velocity limit . 
in these expressions , = 0.3125 , is the mutual scale height , = , and = .the combined velocity stirring is then for viscous stirring and for dynamical friction .to calculate the accretion rate of planetesimals onto the -bodies , we use the standard coagulation equation : where is the normalized cross - section ( see kenyon & luu 1999 ) . to calculate the stirring of planetesimals by -bodies, we calculate the appropriate fokker - planck terms for viscous stirring and dynamical friction and add these results to the long distance stirring of where is the number of -bodies and . at the end of each coagulation timestep, we pass the stirring and accretion rates for each -body , , and to the -body code . at the end of every low resolutiontimestep in the -body code , we modify each particle s orbit and mass to reflect these changes .when possible , we simply redirect the velocity vector so that the eccentricity varies independently of inclination and semimajor axis .the -body functions return an updated particle list to the coagulation code .new orbital positions and masses of the large objects are then reinserted into the coagulation grid .the coagulation calculations proceed in the standard fashion , except that the orbital velocities of the large - mass objects in the grid are not evolved . with a complete circuit from the coagulation code to the -body code to the coagulation code, -bodies influence the evolution of the swarm and the swarm influences the evolution of the -bodies .we have published elsewhere results on the performance of the coagulation code , as described in 2 .however , the -body and hybrid codes are new , and our purpose here is to validate them . in this sectionwe first test the -body code for stability , dynamic range , accuracy , and merger resolution , mostly following duncan , levison & lee ( 1998 ) in the validation of their symba algorithm .we then test the hybrid code against several simulations of terrestrial planet formation at 1 au .the following subsections are organized according to the type of calculation .we test stability during long term orbit integrations of the major planets and a `` scaled outer solar system '' where the masses of the major planets are increased by a factor of 50 ( duncan , levison & lee 1998 ) .then we test dynamic range with two binary planet configurations .the code s accuracy is also established with a test to resolve a critical orbital separation between two planetesimals which determines whether their orbits will cross . in considering mergers, we reproduce the greenzweig & lissauer ( 1990 ) results for planetary accretion rates .we also derive the integration accuracy required to track the collision between a massive object and a small projectile on opposing circular orbits at 1 au . finally , we simulate terrestrial planet formation , following , who consider the evolution of km - sized planetesimals , and chambers ( 2001 ) , who models the collisional evolution of lunar mass bodies . to evaluate the stability of our adaptive integrator , we evolve the four major planets in orbit about a stationary sun . figure [ fig : symplectic ] shows the behavior of the outer planets eccentricities over a 10 myr period .we use two different integrators , a sixth - order accurate symplectic integrator ( yoshida 1990 ) and our adaptive code , also with a sixth - order accurate ode solver for comparison .we run both codes at low and high resolution .the run times of the two codes are comparable at each resolution . 
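stepping back from the individual tests for a moment , the complete circuit per coagulation timestep can be summarized schematically . every function called below is a stand - in for machinery described in the previous sections ( the stubs simply do nothing ) , and the promotion threshold and substep count are placeholders ; the sketch only fixes the order of operations in the hybrid loop .

def coagulation_step(grid, dt):                 # statistical evolution of the swarm
    pass

def promotions(grid, mass_threshold):           # batches that crossed the threshold
    return []

def stirring_and_accretion_rates(grid, body):   # particle-in-a-box estimates
    return {"dedt": 0.0, "didt": 0.0, "dmdt": 0.0}

def advance_orbits(nbodies, dt):                # adaptive Encke-type integrator
    pass

def handle_mergers(nbodies):
    pass

def reinsert_large_bodies(grid, nbodies):       # feed updated orbits/masses back
    pass

def hybrid_timestep(grid, nbodies, dt_coag, substeps=60, mass_threshold=2.0e26):
    """One circuit: coagulation -> swarm-to-N-body rates -> N-body substeps
    with the rates applied -> large bodies reinserted into the grid."""
    coagulation_step(grid, dt_coag)
    nbodies.extend(promotions(grid, mass_threshold))
    rates = [stirring_and_accretion_rates(grid, b) for b in nbodies]
    dt_sub = dt_coag / substeps
    for _ in range(substeps):
        advance_orbits(nbodies, dt_sub)
        handle_mergers(nbodies)
        for body, r in zip(nbodies, rates):
            body["e"] = max(body["e"] + r["dedt"] * dt_sub, 0.0)
            body["i"] = max(body["i"] + r["didt"] * dt_sub, 0.0)
            body["mass"] += r["dmdt"] * dt_sub
    reinsert_large_bodies(grid, nbodies)
    return nbodies

# trivial demonstration call with a single placeholder body
hybrid_timestep(grid=None, nbodies=[{"e": 1e-4, "i": 5e-5, "mass": 1.0e26}], dt_coag=10.0)

we now return to the validation tests and to figure [ fig : symplectic ] .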
as illustrated in the figure , the low resolution adaptive code obtains a better measure of eccentricity than its symplectic counterpart .one reason is that the adaptive code is not subject to accumulation of phase errors as in a symplectic algorithm . on the other hand ,the adaptive code produces a secular drift in energy over the course of the simulation , whereas the symplectic code does not . the fractional error in total energy at 10myr is and for the low and high resolution adaptive code , respectively .if the masses of each of the four major planets increase by a factor of 50 , the outer solar system is unstable ( see , for example , duncan & lissauer 1998 ) . to verify that our code yields the correct timescale for this instability , we made tests similar to one performed by duncan , levison & lee ( 1998 ) . as expected , the semimajor axis of jupiter s orbit shrinks by a modest fraction of the original semimajor axis , while saturn is ejected on a timescale of 1,000 years .as saturn leaves the solar system , its orbit crosses and excites the orbits of uranus and neptune .an increase in the orbital eccentricity of uranus or neptune leads to a second ejection , as seen in figure [ fig : scalsolsys ] .duncan , levison & lee ( 1998 ) note that bound binary planets whose center of mass revolves around a star provide a strong test of the ability of an -body integrator to handle a wide range of dynamical timescales .they set up a pair of jupiter - mass planets with a center of mass on a circular orbit at 1 au .the planets orbit each other with an initial binary semimajor axis of 0.0125 au and binary eccentricity of 0.6 .their simulation time was 100 yr , covering about 3,200 binary orbits .we reproduce the duncan , levison & lee ( 1998 ) test .we can limit total energy errors to within a part in 10 million , while keeping the wall time well below a minute on a 1.7 ghz amd athlon processor .figure [ fig : binary ] illustrates changes in the binary semimajor axis .the figure also demonstrates a test of our block timestepper .if a small particle is in a tightly bound orbit around a much more massive object , then the position of the massive particle is interpolated in the force calculations for the small body .our test simulates the orbit of a massless `` spy satellite '' on a polar orbit about the earth .figure [ fig : binary ] shows that the distance of the satellite from the center of the earth varies by about 100 meters during an orbit .there is no significant drift in its mean altitude over the course of 100 years , corresponding to roughly 600,000 satellite orbits of the earth .we next test the ability of the -body code to handle the evolution of planetesimals .first , we consider an isolated pair of planetesimals , with masses of g. these bodies start on corotating , circular orbits near 1 au .we specify their orbital separation in terms of their mutual hill radius , ^{1/3}$ ] .there is a critical separation , , which determines the onset of chaotic behavior : if the planetesimals initial orbital separation is smaller than , their orbits will cross , otherwise they will simply scatter to larger orbital separations .figure [ fig : npack ] shows the evolution of semimajor axes in cases where orbital separations are initially smaller than ( top panel ) and greater than ( middle panel ) . 
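the setup for this test is compact enough to sketch . the fragment below places two equal - mass planetesimals on circular , corotating orbits whose semimajor axes straddle 1 au and differ by a chosen number of mutual hill radii , taking the mutual hill radius as a [ ( m1 + m2 ) / 3 m_sun ]^{1/3} ; the masses , units and starting azimuths are illustrative choices , not the values used in the calculations above .

import numpy as np

GM_SUN = 4.0 * np.pi ** 2        # au^3 / yr^2, with masses in solar units

def mutual_hill_radius(m1, m2, a):
    """Masses in solar masses, a in au."""
    return a * ((m1 + m2) / 3.0) ** (1.0 / 3.0)

def pair_on_circular_orbits(separation_in_hill_radii, a0=1.0, mass=5.0e-10):
    """Two equal-mass planetesimals (default ~1e24 g) on circular, corotating
    orbits straddling a0, separated radially by the requested number of mutual
    Hill radii.  Returns (mass, position, velocity) tuples in au and au/yr."""
    r_h = mutual_hill_radius(mass, mass, a0)
    a1 = a0 - 0.5 * separation_in_hill_radii * r_h
    a2 = a0 + 0.5 * separation_in_hill_radii * r_h
    bodies = []
    for a, phase in ((a1, 0.0), (a2, np.pi / 2)):   # start a quarter orbit apart
        v_circ = np.sqrt(GM_SUN / a)
        pos = np.array([a * np.cos(phase), a * np.sin(phase), 0.0])
        vel = np.array([-v_circ * np.sin(phase), v_circ * np.cos(phase), 0.0])
        bodies.append((mass, pos, vel))
    return bodies, r_h

bodies, r_h = pair_on_circular_orbits(separation_in_hill_radii=2.0)
print("mutual hill radius: %.2e au" % r_h)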
as expected , the more closely - spaced pair experiences several orbit crossings while the more distant pair does not .multiple calculations demonstrate that our code can resolve the critical separation to within a few percent .figure [ fig : npack ] also shows results for eight planetesimals , each with the same mass as in the preceding case , and initially on circular , keplerian orbits .the separation between nearest neighbors is .the orbit crossings proceed from larger radius inward , as illustrated anecdotally in the figure ( see also * ? ? ?* ; * ? ? ?next we use the -body code to model gravitational stirring by two g objects in a uniform disk of 805 lower mass , g objects ( kokubo & ida 1995 ; weidenschilling et al . 1997 ; kenyon & bromley 2001 ) .the disk is centered at 1 au , and is 35 in annular extent , where is the mutual hill radius between large and small objects .to speed up the code s convergence with this relatively large number of particles , we soften the interaction potential on a scale comparable to the physical radius of the large bodies , assuming their density is 1 g .figure [ fig:2plus800 ] illustrates the evolution of the radial profile of for the case where the two massive bodies are separated by , and are each at an orbital distance of away from 1 au .the initial eccentricity and inclinations are zero for the two massive objects , and small ( , ) for the planetesimals .the -body results here compare well with those from the pure coagulation code ( kenyon & bromley 2001 ) , except that the -body calculations produce more structure in the profile , particularly an enchanced scattering of the swarm away from the more massive bodies .this difference , a broadening of the profile as compared to the coagulation results , is expected from previous -body calculations ( kokubo & ida 1995 ) .one test of our merger algorithm is provided by greenzweig & lissauer ( 1990 ) , who studied gravitational focusing by a single planet in a field of test particles , all orbiting a 1 star .we consider one of their configurations , a planet on a circular orbit at 1 au and a set of test particles on orbits with eccentricity , inclination degrees , and semimajor axes distributed in two rings between 0.977 au and 1.023 au .we evolve the system through a single close encounter between the planet and each test particle , to determine the fraction of test particles accreted by the planet .when the planet has a physical radius of km , greenzweig & lissauer ( 1990 ) report an accretion fraction = 0.140 . in their validation of the symba code ,duncan , levison & lee ( 1998 ) estimate . for a planetary radius of 5,200 km , the derived accretion fractions are ( greenzweig & lissauer 1990 ) and ( duncan , levison & lee 1998 ) .using the merger criterion specified in [ sect : nbody ] ( eq . 
[[ eq : merger ] ] ) , our results are ( poisson errors with 50,000 test particles ) for the km radius planet and for the 5,200 km radius planet .a more extreme test of the merger algorithm is to simulate the high - speed collision between two counterrotating objects .we consider a small ( cm - size ) projectile and a larger target of some specified radius , , both with a density of 1 g/ .the two bodies initially are at 1 au on opposite sides of the sun and are on colliding circular orbits .the code calculates trajectories with a fixed number of timesteps over a time interval which is randomly distributed at values just greater than 0.25 yr .we run repeated trials and find the minimum number of low - resolution timesteps , , required to achieve a 95% success rate in detecting collisions .figure [ fig : xsection ] illustrates our results . for targets with km , is determined by the errors in the interpolation of position between the endpoints of each timestep , and scales approximately as . in our planet formation simulationsreported here , we typically take timesteps per orbit and therefore can expect to resolve high - speed collisions between 1,000 km objects .if gravitational interactions between target and projectile become important , as in cases with large bodies and slow impact speeds , then the adaptive integrator takes high - resolution timesteps , increasing the accuracy of the merger algorithm .this effect is illustrated in figure [ fig : xsection ] , which shows a dramatic decrease in for targets of size 100 km or larger , depending on the adaptive code s error tolerance . in practice ,our -bodies typically have radii greater than 1,000 km and have impact speeds which are more than an order of magnitude smaller than in a counterrotating collision .thus , our code should accurately resolve mergers in planet formation simulations .next , we test this assertion explicitly . to illustrate the behavior of the hybrid code for less idealized conditions , we consider calculations of terrestrial planet formation at 1 auwe compare our results with two sets of published calculations . and use a multi - annulus code which evolves the masses and orbital properties of planetesimals with a coagulation code and follows the evolution of discrete planetary embryos with a monte carlo algorithm . as in our calculations, their code allows interactions between the planetesimals and the embryos .they evolve planetesimals for 1 myr in two configurations , 0.861.14 au and 0.51.5 au . here, we use a grid at 0.841.16 au and compare the results with our calculations from the pure coagulation code and from the hybrid code . uses a three - dimensional -body code to consider the final phase of terrestrial planet formation at 0.42 au .these calculations begin with 150160 lunar - mass to mars - mass embryos and follow the collisional evolution for 200500 myr . here, we adopt a grid at 0.42 au and compare the outcomes of two pure -body calculations and many hybrid calculations with the results .we begin with results for = 0.841.16 au . in these calculations ,the grid has 32 radial zones each containing an initial distribution of planetesimals with radii of 14 km .the planetesimals have initial surface density g , eccentricity = , and inclination = .the calculations include gas drag but do not allow fragmentation .the initial stages of pure coagulation and hybrid calculations are identical . in a few thousand years , objects grow to radii of 1020 km .after yr , objects reach sizes of 100300 km . 
because the timescales for dynamical friction and viscous stirring are shorter than the growth timescales , the largest objects have nearly circular orbits while the smallest objects have eccentric orbits .runaway growth then yields a handful of 10003000 km objects . as runaway growth proceeds ,viscous stirring continues to heat up the orbits of the smallest objects faster than the smallest objects damp the velocities of the largest objects .thus , gravitational focusing factors decrease , accretion slows , and runaway growth ends .the evolution then enters a period of ` oligarchic growth , ' where all of the largest objects grow at roughly the same rate . during oligarchic growth, the growth of the largest objects in the hybrid calculations differs from the path followed in the pure coagulation models . in the hybrid models ,oligarchs grow slowly until their orbits begin to cross .once orbits cross , chaotic growth leads to a rapid merger rate and the formation of several ` super - oligarchs ' that accrete most of the leftover planetesimals .the super - oligarchs accrete some of the remaining lower mass oligarchs and scatter the rest out of the grid . in pure coagulation models ,the orbits of oligarchs are not allowed to cross .these oligarchs slowly accrete all of the planetesimals within their gravitational reach .oligarchs rarely accrete other oligarchs .after 10100 myr , pure coagulation models have more lower mass oligarchs than the hybrid models . however , the largest oligarchs are less massive than their counterparts in the hybrid calculations . figures [ fig:3dcoag ] and [ fig:3dwhyb ] compare some results from two calculations at specific times .figure 8 of shows results for a similar calculation . in every case ,the mass of the largest object is g at , g at yr , and g at yr . at early times , and decline monotonically with increasing mass .at later times , and have bimodal distributions : objects with g have large and , while more massive objects have much smaller and .the orbital anisotropy , , follows a similar evolution , with 0.4 at early times and 0.10.15 at late times .near the end of the calculations , the largest objects lie in a flattened disk , with 0.02 - 0.03 ( see also * ? ? ?figure [ fig : evsm ] illustrates the evolution of the eccentricity in more detail . at yr , dynamical friction maintains low for the largest bodies . because planets form fastest at the inner edge of the grid , small objects at the inner edge have larger than small objects at the outer outer edge . at yr, dynamical friction still maintains low for the largest bodies .because most of the small objects have grown to 100 km radius , these objects have slightly smaller than other objects with g. at yr , objects with g have low , while lower mass objects have higher .these results compare favorably with figure 9 of .we begin our version of the calculations in a grid of 40 annuli at = 0.42 au . for the hybrid calculations , the annuli contain an initial distribution of planetesimals with radii of 415 km .the pure -body model starts with 160 ` moons ' with a mass of g. in both cases , the objects have = and = . 
to provide some contrast with previous calculations , we adopt an initial surface density of planetesimals or moons , with = 8 g and = 0.252 , instead of the usual .although the final configuration of planets depends on the initial surface density gradient , the general evolution is fairly independent of .the large radial extent of these calculations allows us to illustrate the sensitivity of planet formation to the heliocentric distance . because the timescale for growth by coagulation is , planets grow fastest at the inner edge of the grid . in hybrid models with = 1 , objects with radii of 200 km form in yr at 0.4 au , in yr at 0.95 au , and in yr at 2 au .oligarchs with masses of g form on a timescale once large objects start to form , the transition from runaway to oligarchic to chaotic growth proceeds in several waves propagating from the inner disk to the outer disk . as oligarchs start to form at the outer edge of the grid , dynamical interactions between oligarchs begin at the inner edge .several orbit crossings lead to dynamical interactions between all oligarchs and a rapid increase in the merger rate .a few large oligarchs gradually accrete many of the smaller oligarchs , leading to a configuration with several earth - mass planets and a few mars - mass ` leftovers . ' in the moons calculations , dynamical interactions between the 160 original oligarchs dominate the entire evolutionary sequence . in the first yr , five mergers start to produce large oligarchs in the inner disk . the number of oligarchs declines to 146 in yr , to 115 in 1 myr , and to 55 in 10 myr . by 10 myr, the largest objects in the inner disk reach masses of 0.20.3 and slowly accrete the remaining moons . in the outer disk ,the smaller surface density and viscous stirring lead to a smaller merger rate .after 100 myr , only 20 oligarchs remain in the outer disk .to compare our results with , we consider measures of the orbital elements of final ` solar systems ' from several simulations . in our direct -body and hybrid calculations ,the time variation of the mass - weighted eccentricity ( fig .[ fig : ecc - evol ] ) follows a standard pattern ( see fig . 5 of ) .during runaway and oligarchic growth , protoplanets stir their surroundings and increases . at 10 - 100 myr ,the orbits of oligarchs cross , which further excites orbital eccentricity and produces sharp peaks in .mergers between oligarchs reduce .once a few remaining planets have fairly stable orbits , settles to 0.050.15 .hybrid models yield larger variations in than -body calculations of large objects .because objects grow to the promotion mass throughout a hybrid calculation , the merger phase lasts longer and produces more frequent resonant interactions at later times compared to pure -body models .thus , we often observe several peaks in the evolution of for hybrid models compared to a single peak in pure -body calculations .figure [ fig : eccent ] shows the relation between the eccentricity and mass of the planets produced in our calculations .our models yield a range in mass from 0.01 to 23 . for 30 planets with masses , 0.1 , the orbital eccentricity is not correlated with the mass . derived a similar result for 50 planets .however , our calculations yield a reasonably large ensemble of lower mass planets with 0.01 - 0.1 . in the full ensemble of planets with 0.013 , there is a small , but significant trend of decreasing eccentricity with increasing planet mass . 
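a trend of this kind can be quantified with nonparametric rank-correlation tests of the type used in the comparison that follows; a minimal sketch (the mass and eccentricity values below are made up for illustration, not taken from the calculations) is:

```python
import numpy as np
from scipy import stats

# illustrative final planets: mass in earth masses, orbital eccentricity
mass = np.array([0.02, 0.05, 0.08, 0.2, 0.4, 0.7, 1.1, 1.6])
ecc  = np.array([0.16, 0.12, 0.14, 0.10, 0.09, 0.07, 0.06, 0.05])

rho, p_spearman = stats.spearmanr(mass, ecc)
tau, p_kendall  = stats.kendalltau(mass, ecc)
print(f"spearman rho = {rho:+.2f} (p = {p_spearman:.3f}), "
      f"kendall tau = {tau:+.2f} (p = {p_kendall:.3f})")
```

small p-values indicate that the anticorrelation between mass and eccentricity is unlikely to arise from a random pairing of the two quantities.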
from the spearman rank and kendall's tests, the probability that the derived distribution of is random is . to compare this result to our solar system, we restrict the mass range to 0.05–2 . for our ensemble of 40 planets, the spearman rank and kendall's probabilities are 0.1 . these tests yield probabilities of 0.15–0.2 for the 4 terrestrial planets in our solar system. we conclude that the weak correlation of with is consistent with the architecture of our solar system. as a final comparison with , we calculate several statistical parameters to characterize the final configurations of our models. for a system with oligarchs, measures the fraction of the mass in the largest object, where is the mass of the largest oligarch and is the total mass in the grid. the orbital spacing statistic is where is the average mass of an oligarch, is the mass of the central star, is the minimum semimajor axis of an oligarch, and is the maximum semimajor axis of an oligarch. to measure the degree of orbital excitation of a planet, describes the angular momentum deficit, the difference between the z-component of the angular momentum of an orbit and a circular orbit with the same semimajor axis. generalizes this definition as the sum over all planets, \[ \frac{ \sum_{j=1}^{n} m_j \sqrt{a_j} \left[ 1 - \sqrt{1 - e_j^2}\, \cos i_j \right] }{ \sum_{j=1}^{n} m_j \sqrt{a_j} } \, . \] finally, the mass concentration statistic \[ \max_{a} \left( \frac{ \sum_{j} m_j }{ \sum_{j} m_j \left[ \log_{10} ( a / a_j ) \right]^{2} } \right) \] measures whether mass is concentrated in a few massive objects (large ; as in the earth and venus) or many low mass objects (small ; as in the kuiper belt and scattered disk of the outer solar system). table 1 compares results for our 14 hybrid calculations and our two ` moons ' calculations with statistics for the inner solar system. for calculations with , our moons calculations yield roughly the same outcome as . the mass of the largest object and the statistics agree well with those in table 1 of . because we began with a shallower surface density gradient, our planetary systems have more massive objects and more oligarchs at larger heliocentric distances than . our hybrid models for = 1 yield results similar to the moons calculations. our most massive object has a mass of 1.1–1.8 and contains 40% to 80% of the initial mass in the grid. the calculations typically produce 1–2 other massive objects comparable in mass to venus and several less massive objects with masses more similar to mars. for models with longer evolution times, the number of mars-mass objects declines considerably from 100 myr to 500 myr. the statistical parameters agree well with the results. the planets in all of our model solar systems have more eccentric orbits than the planets in the solar system. however, the other statistics agree rather well with those in the solar system, including one model that is more concentrated than the solar system. the outcomes of our calculations are sensitive to the initial mass in the grid. models with = 0.25 produce very low mass planets, 0.1 , compared to models with = 1–2 , 1–2 . the lower mass models also yield more planets on more circular orbits and have a smaller fraction of the total mass in the largest object. in this paper we describe a hybrid -body coagulation code for planet formation. we put the -body part through a battery of tests to assess its performance in planetary dynamics simulations. validation tests of the coagulation algorithm appear elsewhere (e.g., kenyon & bromley 2001).
we provide tests of the hybrid method by comparing it to coagulation simulations and -body output separately. we demonstrate that all three methods quantitatively reproduce published results for the formation of rocky planets by mergers of km-sized or moon-sized planetesimals. table 2 gives a summary of these tests, with references to other published work. although our intent here is simply to describe our method, the hybrid code now has the potential to match observations of the solar system and debris disks. in addition to the results on terrestrial planets in 5, summarizes results on the transition from oligarchic to chaotic growth in the terrestrial zone. shows how a stellar encounter can produce sedna-like orbits in the outer solar system and outlines how observations might distinguish between locally-produced and captured sednas. future papers will describe more complete results on planet formation in the terrestrial zone and the trans-neptunian region.

we acknowledge a generous allotment, 20 cpu years, of computer time on the silicon graphics origin-2000 ` alhena ' through funding from the nasa offices of mission to planet earth, aeronautics, and space science. advice and comments from m. geller are greatly appreciated! the nasa _astrophysics theory program_ supported part of this project through grant nag5-13278.

kenyon, s. j., & bromley, b. c. 2001, aj, 121, 538
kenyon, s. j., & bromley, b. c. 2002, , 577, l35
kenyon, s. j., & bromley, b. c. 2002, aj, 123, 1757
kenyon, s. j., & bromley, b. c. 2004, aj, 127, 513

table 1 . final planetary systems from the hybrid and ` moons ' calculations compared with the inner solar system (mvem). columns: initial surface density scaling; evolution time (myr); number of planets; mass of the largest object (earth masses); fraction of the total mass in the largest object; orbital spacing statistic; angular momentum deficit; mass concentration statistic.

hybrid calculations:
0.25   100    9   0.1   0.198   19.9   0.0031    26.4
0.25   300   10   0.1   0.213   22.7   0.0029    24.7
0.50   100   10   0.4   0.292   20.1   0.0058    26.1
0.50   100    8   0.4   0.314   25.1   0.0094    21.9
1.00   100    6   1.3   0.474   27.7   0.0090    33.4
1.00   100    8   1.0   0.526   23.4   0.0118    45.8
1.00   100    5   1.1   0.428   35.6   0.0131    33.4
1.00   100    6   1.4   0.498   28.3   0.0093    27.0
1.00   200    5   1.1   0.371   28.7   0.0121    22.4
1.00   200    5   1.8   0.644   32.0   0.0499    52.6
1.00   500    2   2.1   0.778   36.1   0.0228   119.4
2.00   100    5   2.4   0.521   19.3   0.0328    38.6
2.00   100    6   2.0   0.348   24.6   0.0292    25.7

` moons ' calculations:
1.00   100   21   0.8   0.256   14.2   0.0136    17.1
1.00   100   15   0.9   0.417   28.8   0.0119    25.4

inner solar system:
mvem    --    4   1.0   0.509   37.7   0.0018    89.9

table 2 . summary of the validation tests, where they appear in this paper, and the published results used for comparison.

giant planet orbit integration : figure 1
scaled outer solar system : figure 2 (compare figure 5 of duncan, levison & lee 1998)
binary jupiters : figure 3 (compare figure 4 of duncan, levison & lee 1998)
earth satellite : figure 3
resolving chaos : figure 4
two planetesimals in a swarm : figure 5 (compare figure 1 of kokubo & ida 1995 and figure b3 of weidenschilling et al. 1997)
planetary accretion ( km ; the comparison quantity is the accretion fraction) : compared with greenzweig & lissauer (1990) and duncan, levison & lee (1998)
planetary accretion ( km ) : compared with greenzweig & lissauer (1990) and duncan, levison & lee (1998)
high-speed collisions : figure 6
planet formation (coagulation) : figures 7–9 (compare figures 8 & 9 of weidenschilling et al. 1997)
terrestrial planet formation : table 1 (compare table 1 of chambers 2001)
we describe a hybrid algorithm to calculate the formation of planets from an initial ensemble of planetesimals . the algorithm uses a coagulation code to treat the growth of planetesimals into oligarchs and explicit -body calculations to follow the evolution of oligarchs into planets . to validate the -body portion of the algorithm , we use a battery of tests in planetary dynamics . several complete calculations of terrestrial planet formation with the hybrid code yield good agreement with previously published calculations . these results demonstrate that the hybrid code provides an accurate treatment of the evolution of planetesimals into planets .
many techniques have been developed to stabilize both the long term and short term drift of laser frequencies .they mainly use reference frequencies obtained by cavity resonances or atomic or molecular transitions .many commercial lasers have built - in stabilization systems and analog or digital inputs that can be used with custom , external referencing systems .we developed a stabilization system to be used in experiments where no atomic line references are available .in particular , it has been designed to stabilize single - mode ring dye lasers and ti : sapphire lasers .the system provides a voltage which can also be used as a feedback for other kinds of externally controllable lasers , and in particular for diode lasers .this problem has already been faced and successfully solved in the framework of laser - cooling experiments involving short - lived radioactive species , for which no atomic vapor can be used .similar applications may arise when rare isotopes have to be excited .the peculiarities of our stabilization technique also allow for easy scanning of the stabilized frequency over broad ranges .the technique makes use of an optical cavity whose length is scanned over more than a free spectral range ( fsr ) by means of a piezo actuator .the master laser and slave laser(s ) beams are collimated and simultaneously analyzed , so that a multiple peak spectrum is observed .finally , a reference signal is produced by reading the relative positions of the observed peaks .our implementation simplifies the one described in by using a thermal control of the optical cavity length , instead of compensating the thermal drift with piezo actuators , thus making the use of high - voltage offset on the piezo unnecessary .this choice is similar to the one reported in and it makes the piezo response more constant in time . actually , the response of piezo actuators is non - linear , and the large values of dc offset needed to compensate thermal drift of the cavity length may dramatically change the slope of the response and hence the effect of the ac scanning signal . for this reason , differing from , we do not need a continuous re - calibration of the scan following the variation of the piezo response .the same result could be achieved by using two separate actuators for thermal compensation and for scanning , nevertheless our solution is easier and also allows for simpler construction of the cavity , because no fused quartz , invar or other materials with low thermal coefficient are needed .in fact , the compensation of the slow thermal drift of the cavity length does not need the fast response of a piezo actuator to be accomplished , moreover the thermal control does not suffer of the small range compensation which is intrinsic in the piezo . 
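a rough numerical comparison makes the division of labour between the thermal control and the piezo clear. the numbers below are illustrative assumptions (an aluminium spacer, a nominal 5 cm confocal cavity and the 633 nm he-ne reference), not values taken from the apparatus:

```python
ALPHA_AL = 23e-6      # 1/K, linear thermal expansion coefficient of aluminium
L_CAVITY = 0.05       # m, assumed cavity length
LAMBDA_HENE = 633e-9  # m, he-ne reference wavelength

dL_per_K = ALPHA_AL * L_CAVITY                   # cavity length change per kelvin
orders_per_K = dL_per_K / (LAMBDA_HENE / 2.0)    # interference orders swept per kelvin
print(f"{dL_per_K * 1e6:.2f} um/K, i.e. about {orders_per_K:.1f} orders per kelvin")
```

even a fraction of a kelvin therefore moves the transmission peaks by a sizeable fraction of a free spectral range, far more than the low-voltage piezo needs to sweep (about half a wavelength); this is why it is natural to leave the slow, large-range drift to the heater and to use the piezo only for the fast scan.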
in our casethe optical cavity was home - made in aluminium , and this choice also makes the cavity alignment fast and cheap , with the use of a simple device produced by a common tool machine .the system is fully controlled by a computer program which operates a commercial adc - dac card .the program was developed in order to achieve relatively fast operation , continuous control of the cavity response , flexible adjustment of the feedback parameters , and on - time visual monitoring of the laser spectra .analysis of the error signal has also been implemented in order to characterize the performance of the system .all the digital controls were developed in labview , making use of either a 16 bit or a 12 bit national instruments card .no external electronics are needed apart from a very simple voltage - to - current converter used to supply the cavity heater .we used the program to stabilize only one laser with respect to the he - ne , but other laser lines can be added and referenced with straightforward extensions of the program . in particular with this systemwe plan to stabilize a ti : sa ring laser working at nm and a diode laser working at nm , which will be used as cooling and repumping lasers in an experiment of magneto optical trapping ( mot ) of francium .the experimental apparatus is sketched in fig . [fig : appsp ] ; it consists of a fabry - perot ( fp ) confocal optical cavity , on which all the lasers are analyzed .an electric heater h ( maximum power ) fed back by the computer keeps the cavity at a temperature about 15 c above the room temperature , stabilizing the cavity length to the he - ne line .the cavity length is then dithered over a range just wider than one fsr by means of a piezo actuator ( p ) which is directly driven by a waveform generator .the transmitted light is detected by a single amplified photodiode whose signal is directly acquired by pc .the computer feeds back both the heater through a voltage controlled current generator which thermally stabilizes the cavity length , and the slave laser(s ) with a suitable error signal .schematic of the apparatus .the beams of the master and slave lasers are made collinear , at the beam splitter bs .similarly the beams of other slave lasers can be added .v - i is a voltage - to - current converter .the length of the cavity c is slowly adjusted by controlling the mount temperature through the heater h , and is dithered with a piezo p over a range of the order of half a wavelength.,width=377 ] we built up several confocal fabry perot cavities having a free spectral range .the mirrors have a high reflectivity for the nm he - ne line and for other wavelengths which are nm ( diode laser for rb lines ) , nm ( dye laser for na lines ) , nm ( ti : sapphire for fr lines ) .each set of mirrors has high reflectivity for two or more wavelengths . in this paperwe report results obtained with the nm device , by which we stabilized a ( coherent ) ring dye laser used for experiments on sodium mot s . the mirrors ( 1 inch , 5 cm curvature radius ) were provided by cvi ( tlm1 series ) .the measured finesse is 70 for both the nm and the nm , consistent with the declared reflectivity ( ) .the mirrors are mounted on an al tube adjustable in length , as represented in fig .[ fig : cavity ] .schematic of the mechanical mount of the aluminium cavity.,width=453 ] such a mount is home - made by a standard tool machine , allowing for easy and effective centering of the optics at a very low cost . 
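as a quick consistency check of the geometry described above (a sketch under the assumption that the confocal spacing equals the 5 cm mirror curvature radius; the measured free spectral range itself is not reproduced here):

```python
C = 2.998e8    # m/s
L = 0.05       # m, mirror spacing assumed equal to the 5 cm curvature radius
FINESSE = 70   # measured value quoted in the text

fsr_hz = C / (4.0 * L)            # free spectral range of a confocal cavity
resolution_hz = fsr_hz / FINESSE  # width of a single transmission peak
print(f"fsr = {fsr_hz / 1e9:.2f} ghz, resolvable linewidth ~ {resolution_hz / 1e6:.0f} mhz")
```

a peak width of a few tens of mhz is ample for the present purpose, since the centre of a well-sampled peak can be located to a small fraction of its width.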
a thread is included in order to adjust the position of one mirror and to get an easy , precise match of the cavity length to operate in confocal regime .the non - adjustable mirror is mounted on a rubber o - ring and it is asymmetrically actuated by a low - voltage driven piezo ( ae0203d08 thorlab ) .the high stability of the confocal resonators makes this peculiar approach possible , without relevant detrimental effects on the quality of the observed spectra .a part of the aluminium tube where the heater is mounted has a thinner wall .this reduced thickness ( about 1 mm ) is essential to achieve a suitably fast response to the thermal control .the heater consists of eight , resistors , tightly fold onto the tube , powered with up to .the thermal contact between resistors and tube is improved by means of thermal silicon grease for electronic purposes .a photodiode detects the transmitted light from the fp interferometer and the photocurrent is converted into a voltage signal _ via _ an operational amplifier , whose output is directly acquired by computer . in our present application , only one laser has to be stabilized and a scan over a fsr is sufficient for the application , so that a single photodiode is used and the peaks of both the master and the slave lasers are acquired in a single trace . if more slave lasers have to be stabilized , or if tunability on a wider range than a fsr is requested , the different lines can be sent on different filtered photodiodes , and the program can be modified in order to acquire more traces separately .the signal from the photodiode is acquired as an array of 1000 values ( this value can be adjusted ) , and a second adc channel acquires the signal driving the piezo actuator in a similar array .the signal of the waveform generator is a triangular wave at a frequency of about 10 hz , ( this is much lower than the measured mechanical resonance of the piezo - mirror system , which is around 800 hz ) .the triangular wave amplitude is 24 v peak - to - peak , and it is enough to scan over more than a fsr .no high - voltage signals are needed , and a commercial signal generator is used .it is worth stressing that possible small deviations in the signal slope or amplitude do not affect precision as the transmittance is detected together with the signal itself , so that the spectrum is plotted as a function of the actual voltage applied to the piezo .this choice makes the stability of the waveform generator not crucial , nevertheless in our case the generator is stable enough , and this detail turned out to be not essential for the final performance .the stabilization system provides a current which through the heater keeps the cavity length constant , and a voltage which drives the slave laser frequency .these two signals are produced according to different philosophies .namely , the cavity length is kept constant by keeping a transmission peak of the master laser _ close _ to the position selected by the operator , while the slave laser peak is _ precisely _ kept at a given distance from the master laser peak .in fact , the stabilization of the cavity length does not demand absolute precision , having only the aim of keeping the peaks within the scanned range , while the distance of the peaks gives an error signal which is neither affected by possible low - frequency noise in the triangular wave , nor by slight oscillations of the cavity temperature ( and hence average cavity length ) . 
as the cavity length is scanned over more than one master - laser half - wavelength , it is possible to monitor two master - laser peaks .the measured distance between two adjacent master - laser peaks ( fsr ) allows for a precise calibration of the frequency axis on the monitored spectra .the program was developed using labview and runs on a pentium iii processor working at 1 ghz clock frequency .the computer works with a 16-bit adc card , model ni 6052e ; we also checked the program with a cheaper adc card ( namely the 12-bit ni labpc+ ) obtaining comparable performances .the logic of the program is summarized in fig .[ fig : logic ] .the scheme summarizes the logic followed by the program in stabilizing the cavity length and the slave laser(s ) frequency . for each scan of the piezo driving voltage, the spectra are displayed and analyzed .the peaks are detected and their position and relative distance is calculated .the feedback signal to the cavity thermal control is updated depending on the position of the master laser peak , while the feedback for the slave laser(s ) is calculated using the distance of the master peak and slave peak(s ) . the cycle is directly restarted with no updating of the output when the detection of the peaks is not reliable , e.g. due to noise glitches , accidental shutting of a beam , etc.,width=302 ] the program acquires both the slope of the waveform generator and the fp pattern , then calculates in terms of the voltage on the slope the absolute position of the he - ne peak and the distance between the he - ne peak and the slave laser peak . by comparing these two data with the nominal values ,the program calculates the two feedback signals for the fp thermal stabilization and the slave laser stabilization .the nominal values are set by means of two cursors .the position of the first cursor sets the position of the master laser peak , and hence the cavity length , while the second cursor sets the position of the slave laser(s ) peak , and hence the final frequency .each acquisition consists in n scans ( we used ) and the program takes 10 acquisitions per second , while refreshing the feedback signals at the same rate .the sampling rate is 20000 samples per second .all the data and the corrections to the feedback signals are reported on the screen , which is also refreshed after each acquisition .if the program finds problems in calculating the peaks position ( e.g. due to glitches originated by the laboratory electric noise affecting the photodiode signal ) , it leaves the feedback signals unchanged and goes to the next acquisition ; if the problem persists for several cycles the program alerts the user , with a permanent alarm which is manually resettable , and a counter reports the number of cycles which gave problems .the feedback signals consist of two voltages varying between -5 and 5 v. one of these signals is sent directly to the external control of the slave laser , the other is converted by a voltage - to - current amplifier to supply the cavity heater . 
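the per-scan analysis described above can be summarized in a short sketch. this is not the labview code itself: the peak finding is a bare local-maximum search, the master and slave transmissions are assumed to be available as separate traces (with a single photodiode the peaks would instead be told apart by their expected positions and heights), and the threshold and set-point values are placeholders.

```python
import numpy as np

def peaks_on_ramp(ramp_v, trace, threshold):
    """Ramp voltages at which the transmission trace has a local maximum above threshold."""
    core = trace[1:-1]
    idx = np.where((core > threshold) & (core >= trace[:-2]) & (core >= trace[2:]))[0] + 1
    return ramp_v[idx]

def scan_analysis(ramp_v, master, slave, v_master_set, dv_set, fsr_mhz, threshold=0.5):
    """Return (thermal_error, slave_error_mhz) for one scan, or None if peaks are missing."""
    m_peaks = peaks_on_ramp(ramp_v, master, threshold)
    s_peaks = peaks_on_ramp(ramp_v, slave, threshold)
    if m_peaks.size < 2 or s_peaks.size == 0:
        return None                                  # skip this cycle, outputs unchanged
    volts_per_fsr = np.min(np.diff(m_peaks))         # spacing of adjacent master peaks
    v_master = m_peaks[0]
    v_slave = s_peaks[np.argmin(np.abs(s_peaks - (v_master + dv_set)))]
    thermal_error = v_master - v_master_set          # absolute peak position drives the heater
    slave_error_mhz = ((v_slave - v_master) - dv_set) / volts_per_fsr * fsr_mhz
    return thermal_error, slave_error_mhz
```

the conversion of the peak distance into frequency uses the measured master-peak spacing as one free spectral range, exactly as described above, so slow errors of the triangular ramp cancel out of the slave error signal.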
the algorithms which calculate the correction of the two signalsare different : the thermal stabilization needs only a correction proportional to the deviation of the he - ne peak from the selected position , and the thermal capacitance of the fp cavity provides an integration of the discontinuities of the output voltage by itself .the stabilization signal sent to the slave laser needs a more advanced algorithm : the program must integrate the output signal to keep the laser frequency stable over the short - time range .this is achieved by limiting the maximum slope of the correction signal , which can be optimized by the user , according to the typical drift velocity encountered in the internal stabilization system of the laser .when necessary , the program makes it possible to scan the laser voltage from -5 v to 5 v , searching for a reference signal ( for example the fluorescence of the mot ) to set the unknown value of the right laser voltage automatically ; obviously the user has to stop the laser stabilization before running the program with this aim .this is the window shown to the user , which allows him to control the program . on the left sidethere are controls for the scan rate , the number of scans ( abscissa points ) , the trigger , and trigger time - out .the spectra are shown in the central panel together with the piezo ramp .two cursors above and below this central panel allow for placing and displacing of the peak positions . on the right side ,on the upper part there are two panels with switches for starting the stabilization of the cavity and slave laser respectively , and two indicators showing the actual error signals . in the lower partthe nominal and actual peak positions are reported in one panel , while on the other panel an alert goes on when locking fails , and a counter reports the number of failed scans .finally a knob makes it possible to adjust the gain on the photodiode signal , and the trigger threshold can be set numerically from an input close to the `` stop '' button .another button starts the scan operation.,width=453 ] the scan operation will have an important role in the application in the francium experiment .in fact , the locking procedure will start by using a wave - meter in order to set the laser frequency in resonance with an uncertainty of several hundred mhz .then the control of the laser will be passed to the program , which will scan the frequency over a range just wider than the wave - meter uncertainty , looking for the exact resonance . during the scan, the frequency will be referenced with respect to the he - ne peaks .once a known resonance is found , the program will provide an absolute frequency scale , which will keep being available as long as the he - ne laser stabilization system of the cavity is kept on .finally the frequency will be locked to the exact value , and the program will provide the error signal necessary to maintain long - term stability . 
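the slope-limited integration used for the slave-laser output can be sketched as follows; the gain, the maximum step per cycle and the ±5 v limits correspond to the user-adjustable parameters mentioned above, and the numerical values used here are placeholders.

```python
def update_slave_output(v_out, error, gain=0.05, max_step_per_cycle=0.01,
                        v_min=-5.0, v_max=5.0):
    """One feedback cycle: integrate the error into the output voltage, but never
    change it by more than max_step_per_cycle, so that the slow external
    correction cannot outrun the laser's internal stabilization."""
    step = gain * error
    step = max(-max_step_per_cycle, min(max_step_per_cycle, step))
    return max(v_min, min(v_max, v_out + step))
```

calling this once per acquisition (about ten times per second in the setup described above) bounds the externally imposed frequency slew rate at a value chosen to match the drift speed of the laser's own control loop.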
a user - friendly graphic interface ( see fig .[ fig : schermo ] ) makes use of a few numerical controls and cursors , which allow for * locking of the cavity with respect to the he - ne fp peaks ; * scan of the slave frequency in a given interval ; * fast setting and precise adjustment of the stabilization point .the achieved frequency stability of the laser is estimated by measuring the fluctuation of the saturation spectroscopy signal on a na cell .the figure shows a spectrum and the location at which the laser was stabilized ( red wing of the , transition ) .also two traces are reported of obtained in conditions of stabilized and not - stabilized operation respectively .the boxes show that the frequency varies within a range of 4 mhz when the stabilization system is on .this is probably an overestimation due to uncontrolled fluctuations in the reference cell , in fact the fluorescence of the mot was extremely stable , even using detuning close to the edge of the trapping range.,width=377 ] fig .[ fig : results ] reports the saturation spectroscopy signal obtained by scanning the laser frequency over the , lines of sodium .two traces of the same signal are also reported as obtained by keeping the frequency at a nominal fixed value , either with the stabilization system on or off .the recording time for these two traces is .short term deviation of the laser frequency as deduced from these plots is definitely less than , as graphically shown by the dashed boxes .statistical analysis on the trace recorded in stabilized conditions shows that the ratio between the standard deviation and the range of values is less than 0.15 , corresponding to a standard deviation of 600 khz in the frequency scale .these estimations of the frequency deviation may be larger than the actual values , because other effects produce some noise on the saturated spectroscopy signal .in fact , a relevant noise level in is also visible on the wing of the saturation spectroscopy signal , which at detunings larger than 100 mhz , should be nominally zero .the short term ( few seconds ) deviations of the frequency appear with similar features both in the stabilized and in the not - stabilized operation , so that they are probably intrinsic in the internal stabilization system of the laser ; the comparison of the two curves in fig .[ fig : results ] show that they are only partially reduced by our external device , which on the contrary definitely fixes the long - term drift .we performed a cross - check of stability using the fluorescence signal of the atoms trapped in a mot using the transition of the line of the sodium . by displacing the slave laser frequency in steps of 2 mhz , it was possible to evaluate in 14 mhz the total spectral width of the trap . as also reported in , at the high - frequency sidethe trap abruptly disappears . 
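the conversion of the recorded signal fluctuations into a frequency deviation, of the kind quoted above, only requires the local slope of the reference line at the lock point; a minimal sketch with made-up numbers:

```python
import numpy as np

def frequency_deviation(signal, slope_per_mhz):
    """signal: time trace recorded at fixed nominal frequency;
       slope_per_mhz: local derivative of the reference line (signal units per mhz).
       Returns the standard deviation and the peak-to-peak range in mhz."""
    freq = (signal - signal.mean()) / slope_per_mhz
    return freq.std(), freq.max() - freq.min()

rng = np.random.default_rng(2)
trace = 0.4 + 0.002 * rng.standard_normal(2000)     # made-up stabilized trace
std_mhz, range_mhz = frequency_deviation(trace, slope_per_mhz=0.003)
print(f"std = {std_mhz * 1e3:.0f} khz over a range of {range_mhz:.2f} mhz")
```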
in our observationwe were able to keep the trap unstable on that condition , with a fluorescence signal significantly lower than the maximum , and essentially stable in time .a cheap and easy - to - use set - up was developed which , coupled with an efficient computer program , a commercial adc - dac card , and a standard confocal fp interferometer allows for active long term stabilization of one or more laser frequencies .a long term stability better than was demonstrated , and a fwhm of was achieved in the statistical distribution of the stabilized frequency .the short term stability keeps being given by the internal , fast stabilization system of the ring dye laser used as a slave .the peculiar approach which stabilizes the relative peak position of the master and slave laser(s ) reduces the effect of low frequency noise in the driving voltage applied to scan the piezo , and makes ultra high accuracy on the cavity length stabilization unnecessary , so that only a slow and not high precision thermal feedback must be used , in order to always keep given interference orders within the scanned range .the final stability which is achieved with this technique is in excess with respect to the demands of the experiment to which this technique was applied .further improvements could be made by using a longer cavity and separate photodiodes for the acquisition of the peaks corresponding to different lasers , and setting up fp with a smaller fsr .in fact with the single photodiode operation the adjustable range is limited by one fsr , as no superposition of peaks is allowed .we address to for a detailed analysis of the limitations on long term stability which are set by the effects of atmospheric pressure and humidity , whose variation may introduce errors due to different changes of air refraction index for the different wavelength of the laser used .variations in atmospheric temperature are also potentially critical , but this problem is definitely overcome in setups where a close cavity is thermally stabilized . in our case the difference in the two wavelengths is smaller than the one reported in , thus reducing the effect due to dispersive refraction index of humidity .the authors thank all the colleagues of the siena , ferrara and legnaro laboratories for the encouragements and the useful discussions .evro corsi and alessandro pifferi are thanked as well , for their effective technical support .s. n. atutov , v. biancalana , a. burchianti , r. calabrese , l. corradi , a. dainelli , v. guidi , b. mai , c. marinelli , e. mariotti , l. moi , a. rossi , e. scansani , g. stancari , l. tomassetti , s. veronesi ; production of francium ions for traprad ; annual report infn - lnl 2001 , in press ; l. moi , s. n. atutov , v. biancalana , a. burchianti , r. calabrese , l. corradi , a. dainelli , v. guidi , b. mai , c. marinelli , e. mariotti , a. rossi , e. scansani , g. stancari , l. tomassetti , s. veronesi ; laser cooling and trapping of radioactive atoms ; spie , proceed .of xvii int.conf . on coherent and nonlinear optics ( minsk 26/06 - 01/07 2001 ) .
we describe an apparatus for the stabilization of laser frequencies that prevents long term frequency drifts. a fabry-perot interferometer is thermostated by referencing it to a stabilized he-ne laser (master), and its length is scanned over more than one free spectral range, allowing the analysis of one or more lines generated by other (slave) lasers. a digital acquisition system detects the positions of all the laser peaks, producing both the feedback signal for the thermostat and the error signal used to stabilize the slave lasers. this technique also allows for easy, referenced scanning of the slave laser frequencies over a range of several hundred mhz, with a precision of the order of a few mhz. this kind of stabilization system is particularly useful when no atomic or molecular reference lines are available, as in the case of rare or short-lived radioactive species. rsi volume 73, number 7, july 2002, pages 2544-2548
given independent copies of an absolutely continuous real random variable with unknown density and distribution functions and , respectively , the classical kernel estimator of introduced by authors such as , or , is defined , for , by where , for , with a kernel on , that is , a bounded and symmetric probability density function with support ] , the previous kernel estimator suffers from boundary problems if or .this question is addressed in by extending to the distribution function estimation framework the approach followed in nonparametric regression and density function estimation by authors such as , , and .specially , the author considers the boundary modified kernel distribution function estimator given by where and with where and are , respectively , left and right boundary kernels for ,1[ ] and ] and ( here and bellow integrals without integrations limits are meant over the whole real line ) . for ease of presentation , from now onwe assume that the right boundary kernel is given by , the reason why only the left boundary kernel is mentioned in the following discussion . by assuming that is a second order kernel , that is, ,1[,\ ] ] where we denote shows that the previous estimator is free of boundary problems and that the theoretical advantage of using boundary kernels is compatible with the natural property of getting a proper distribution function estimate .in fact , it is easy to see that the kernel distribution function estimator based on each one of the second order left boundary kernels where we assume that is such that for all , and is , with probability one , a continuous probability distribution function ( see * ? ? ?* examples 2.2 and 2.3 ) .additionally , the author shows that the chung - smirnov law of iterated logarithm is valid for the new estimator and has presented an asymptotic expansion for its mean integrated squared error , from which the choice of is discussed ( see * ? ? ?* theorems 3.2 , 4.1 and 4.2 ) .a careful analysis of the asymptotic expansions presented in for the local bias and the integrated squared bias of estimator ( [ classical ] ) , suggests that the previous properties may still be valid for all the boundary kernels satisfying the less restricted condition ,1[,\ ] ] which is in particular fulfilled by the left boundary kernel where we denote , for ( see figure [ kernels ] ) . if is a continuous density function , it is not hard to prove that the kernel distribution function estimator based on this left boundary kernel is , with probability one , a continuous probability distribution function .c + ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] & ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] + + ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] & ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] + + ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] & ( left column ) and ( right column ) for , where is the epanechnikov kernel .__,title="fig : " ] the main purpose of this note is to show that the results presented in for the class of second order boundary kernels are still valid for the enlarged class of boundary kernels that satisfy assumption ( [ c2 ] ) . 
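for concreteness, the classical estimator recalled above can be written in a few lines; the sketch below uses the epanechnikov kernel, for which the integrated kernel has the closed form 1/2 + (3t)/4 - t^3/4 on [-1,1]. the boundary-corrected versions built on the boundary kernels discussed in this note are not reproduced here, and the toy sample is only for illustration.

```python
import numpy as np

def integrated_epanechnikov(t):
    """Integrated Epanechnikov kernel, i.e. the integral of (3/4)(1-u^2) from -1 to t,
    clipped so that it equals 0 below -1 and 1 above 1."""
    t = np.clip(t, -1.0, 1.0)
    return 0.5 + 0.75 * t - 0.25 * t ** 3

def kernel_cdf(x, sample, h):
    """Classical kernel distribution function estimator evaluated at the points x."""
    x = np.atleast_1d(x)
    u = (x[:, None] - sample[None, :]) / h
    return integrated_epanechnikov(u).mean(axis=1)

# toy check against a Beta(2, 5) sample on [0, 1]
rng = np.random.default_rng(1)
data = rng.beta(2.0, 5.0, size=500)
grid = np.linspace(0.0, 1.0, 11)
print(np.round(kernel_cdf(grid, data, h=0.05), 3))
```

near the endpoints of the support the same code exhibits the boundary problems discussed above, since part of the kernel mass falls outside the interval.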
this objective is achieved in sections [ secboundary ] and [ global ] where we study the boundary and global behaviour of the boundary modified kernel distribution function estimator . in section [ compar ]we present exact finite sample comparisons between the distribution function kernel estimators based on the left boundary kernels , for , given by ( [ k1 ] ) , ( [ k2 ] ) and ( [ k3 ] ) , respectively .we conclude that the boundary kernel is especially performing when the classical kernel estimator suffers from severe boundary problems .all the proofs can be found in section [ proofs ] .the plots and simulations in this paper were carried out using the software .in this section we study the boundary behaviour of the kernel distribution function estimator by presenting asymptotic expansions for its bias and variance with in the boundary region .we will restrict our attention to the left boundary region , a+h[ ] .[ theoremboundl ] if satisfies condition ( [ c2 ] ) with ,1 [ } |\mu_{0,l}|(\alpha ) < \infty,\ ] ] and the restriction of to the interval ] .theorem [ theoremboundl ] enables us to undertake a first asymptotic comparison between the boundary kernels given by ( [ k1 ] ) , ( [ k2 ] ) and ( [ k3 ] ) , respectively . in figure [ functionsmunu ]we plot the functions and which respectively correspond to the coefficients of the most significant terms in the expansions of the local variance and square bias of estimator for in the left boundary region .we take for the bartlett or epanechnikov kernel , but similar conclusions are valid for other polynomial kernels such as the uniform ( in this case ) , the biweight or the triweight kernels ( for the definition of these kernels see * ? ? ?[ cols="^,^ " , ] we finish this section with a cautionary note that aims to call the attention of the reader to the fact that , due to the continuity of on , the boundary effects for kernel distribution function estimation may not have the same impact in the global performance of the estimator as in probability density or regression function estimation frameworks ( see * ? ? ?however , one may have cases where the local behaviour dominates the global behaviour of the estimator which stresses the relevance in using boundary corrections for the classical kernel distribution function estimator .we illustrate this fact by taking the above considered beta mixture distribution with and ( see figure [ testdist ] ) . in figure [ iseboundary ]we present the empirical distribution of the integrated square error of the classical estimator with kernel and of the boundary corrected estimators with boundary kernels , , over the boundary regions ] ( right boundary ise ) , and over the all interval ] , the expectation of is given by ( see * ? ? ?* ) . by the continuity of the second derivative of on \blacksquare ] , where moreover , using ( [ taylor ] ) and the fact that ,1[,\ ] ] we deduce that uniformly in , a+h[ ] . finally , from ( [ expanc ] ) and taylor s expansions ( [ taylor1 ] ) and ( [ taylor2 ] ) we get , a+h [ } \left| c(x , h ) + h \ , f^\prime(x ) \nu_l\big((x - a)/h\big ) \right| = o(h^2),\ ] ] which concludes the proof . * acknowledgments . * research partially supported by centro de matemtica da universidade de coimbra ( funded by the european regional development fund through the program compete and by the portuguese government through the fct fundao para a cincia e tecnologia under the project pest - c / mat / ui0324/2011 ) .gasser , t. , mller , h .-. kernel estimation of regression functions . 
in _smoothing techniques for curve estimation_ (t. gasser and m. rosenblatt, eds.), lecture notes in mathematics 757, 23–68. giné, e., nickl, r. (2009). an exponential inequality for the distribution function of the kernel density estimator, with applications to adaptive estimation. _probab. theory related fields_ 143, 569–596.
the use of second order boundary kernels for distribution function estimation was recently addressed in the literature (c. tenreiro, 2013, boundary kernels for distribution function estimation, _revstat statistical journal_, 11, 169–190). in this note we return to the subject by considering an enlarged class of boundary kernels that proves especially effective when the classical kernel distribution function estimator suffers from severe boundary problems. keywords: distribution function estimation; kernel estimator; boundary kernels. ams 2010 subject classifications: 62g05, 62g20
cellular automata ( ca ) are discrete time dynamical systems on a spatially extended discrete space .they are well known for at the same time being easy to define and implement and for exhibiting a rich and complex nonlinear behavior as emphasized for instance in for ca on one dimensional lattice .see to precise the connections with the nonlinear physics . for the general theory of deterministicca we refer to the recent paper and references therein .probabilistic cellular automata ( pca ) are ca straightforward generalization where the updating rule is stochastic .they inherit the computational power of ca and are used as models in a wide range of applications ( see , for instance , the contributions in ) . from a theoretic perspective , the main challenges concern the non ergodicity of these dynamics for an infinite collection of interacting cells .ergodicity means the non dependence of the long time behavior on the initial probability distribution and the convergence in law towards a unique stationary probability distribution ( see for details and references ) .non ergodicity is related to _critical phenomena _ and it is sometimes referred to as _dynamical phase transition_. strong relations exist between pca and the general equilibrium statistical mechanics framework .important issues are related to the interplay between disordered global states and ordered phases ( _ emergence of organized global states , phase transition _ ) .altough , pca initial interest arose in the framework of statistical physics , in the recent literature many different applications of pca have been proposed . in particularit is notable to remark that a natural context in which the pca main ideas are of interest is that of evolutionary games .pca dynamics are naturally defined on an infinite lattice .given a local stochastic updating rule , one has to face the usual problems about the connections between the pca dynamics on a finite subpart of the lattice and the dynamics on the infinite lattice .in particular , it was stated in for translation invariant infinite volume pca with _ _ positive rates _ _ , that the law of the trajectories , starting from any stationary translation invariant distribution , is the boltzmann gibbs distribution for some space time associated potential .thus phase transition for the space time potential is intimately related to the pca dynamical phase transition .moreover , see ( * ? ? ?* proposition 2.2 ) , given a translation invariant pca dynamics , if there exists one translation invariant stationary distribution which is a gibbs measure with respect to some potential on the lattice , then all the associated translation invariant stationary distributions are gibbs with respect to the the same potential . in this paper we shall consider a particular class of pca , called _ reversible _ pca , which are reversible with respect to a gibbs like measure defined via a translation invariant multi body potential . 
in this frameworkwe shall study the zero and low temperature phase diagram of such an equilibrium statistical mechanics like system , whose phases are related to the stationary measures of the original pca .we shall now first briefly recall formally the definitions of cellular automata and probabilistic cellular automata and then describe the main results of the paper .cellular automata are defined via a local deterministic evolution rule .let be a finite cube with periodic boundary conditions .associate with each site ( also called _ cell _ ) the , where is a finite single - site space and denote by the _ state space_. any is called a _ state _ or _ configuration _ of the system . in order to define the evolution rule we consider , a subset of the torus , and a function depending on the state variables in .we also introduce the shift on the torus , for any , defined as the map the configuration at site shifted by is equal to the configuration at site . for example ( see figure [ f : def ] ) set , then the value of the spin at the origin will be mapped to site .the _ cellular automaton _ on with rule is the sequence , for a positive integer , of states in satisfying the following ( deterministic ) rule : for all and .note the local and parallel character of the evolution : the value , for all , of all the state variables at time depend on the value of the state variables at time ( parallel evolution ) associated only with the sites in ( locality ) .( 10,55)(-150,-5 ) ( -5,0)(0,10)5(0,0)(1,0)50 ( 0,-5)(10,0)5(0,0)(0,1)50 ( -12,20) ( 10,20 ) ( 11,21) ( 5,15)(0,1)20 ( 5,15)(1,0)10 ( 5,35)(1,0)20 ( 15,15)(0,1)10 ( 15,25)(1,0)10 ( 25,25)(0,1)10 ( 1,13) ( 30,0 ) ( 31,1) ( 25,-5)(0,1)20 ( 25,-5)(1,0)10 ( 25,15)(1,0)20 ( 35,-5)(0,1)10 ( 35,5)(1,0)10 ( 45,5)(0,1)10 ( 47,14) the stochastic version of cellular automata is called _ probabilistic cellular automata _ ( pca ) .we consider a probability distribution ] and , hence , & \;\;\textrm { if } k_0 + 2k_1 + 2k_2+h>0 \\ -|\lambda|[-k_0 - 2(k_1+k_2 ) ] & \;\;\textrm { if } k_0 + 2k_1 + 2k_2+h<0 \end{array } \right.\ ] ] moreover , recalling , = -(|\lambda|/2)[|k_0 - 2k_1 - 2k_2+h| + ( -k_0 + 2k_1 + 2k_2+h ) ] ], we always find a critical transition and the value of the critical inverse temperature decreases when is increased .our results are depicted in figure [ f : mfres01 ] .( 100,160)(-100,0 ) in the plane with . in the region the three lines are the first order transition line between the positive ferromagnetic and the checkerboard phases .continuous , dotted , and dashed lines refer , respectively , to the cases . in the region horizontal axis is the first order transition line between the two ferromagnetic phases .the phase digram in the region can be constructed by symmetry.,title="fig : " ] as explained in the introduction , our main concern is that of understanding how the zero temperature phase diagram , see figure [ f : ground ] , is modified at small positive temperatures . in view of the ground state structurewe expect the presence of four phases : positive ferromagnetic , negative ferromagnetic , odd , and even checkerboard . the first two phases are respectively characterized by positive and negative magnetization , while the last two by a positive and negative staggered magnetization . in the sequel we will not distinguish between the two checkerboard phases , since they are equivalent .moreover , we shall discuss the phase diagram only in the region .the case can then be recovered by using the spin flip symmetry . 
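the parallel, local character of the updating rule is easy to render in code. the sketch below is a generic illustration on a two-dimensional torus with periodic boundary conditions: a cross-shaped neighbourhood (the cell and its four nearest neighbours) is used as an example of a finite neighbourhood, and the majority rule is chosen only for illustration, not because it is one of the rules considered in the paper.

```python
import numpy as np

def ca_step(sigma, rule):
    """One synchronous step: every cell is updated in parallel from the values of
    its cross-shaped neighbourhood {itself, north, south, east, west} at the
    previous time; np.roll implements the periodic boundary of the torus."""
    neigh = np.stack([
        sigma,
        np.roll(sigma,  1, axis=0), np.roll(sigma, -1, axis=0),
        np.roll(sigma,  1, axis=1), np.roll(sigma, -1, axis=1),
    ])
    return rule(neigh)

# illustrative local rule: majority of the five spins in {-1, +1}
majority = lambda neigh: np.sign(neigh.sum(axis=0)).astype(int)

rng = np.random.default_rng(0)
state = rng.choice([-1, 1], size=(16, 16))
for _ in range(5):
    state = ca_step(state, majority)
```

a probabilistic cellular automaton replaces the deterministic rule by an independent random draw for each cell from a local conditional distribution, with the same parallel structure.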
for , in order to draw the mean field phase diagram we identify the two different phases ( checkerboard ) and positive ferromagnetic by solving iteratively the mean field equations starting from a suitable initial point , namely , for the ferromagnetic phase and and for the checkerboard one .we shall decide about the phase of the system by choosing the one with smallest mean field free energy .the free energy will be computed by using , , , and .mean field predictions are summarized in figure [ f : mfres02 ] , where the phase diagram of the cross pca is plotted on the plane at different values of .more precisely , we considered the values .the most important remark is that the triple point is not affected by the temperature , indeed , its position is constantly the origin of the plane for each value of .in other words the mean field approximation confirms the conjecture based on the entropy argument quoted in the introduction . for the sake of completeness we briefly recall this argument .at finite temperature , ground states are perturbed because small droplets of different phases show up .the idea is to calculate the energetic cost of a perturbation of one of the four coexisting states via the formation of a square droplet of a different phase .if it results that one of the four ground states is more easily perturbed , then we will conclude that this is the equilibrium phase at finite temperature .the energy cost of a square droplet of side length of one of the two homogeneous ferromagnetic ground states plunged in one of the two checkerboards ( or vice versa ) is equal to .on the other hand if an homogeneous phase is perturbed as above by the other homogeneous phases , or one of the two checkerboards is perturbed by the other one , then the energy cost is .hence , from the energetic point of view the most convenient excitations are those in which a homogeneous phase is perturbed by a checkerboard or vice versa . moreover , for each state there exist two possible energetically convenient excitations : there is no entropic reason to prefer one of the four ground states to the others when a finite low temperature is considered .this is why it is possible to conjecture that at small finite temperature the four ground states still coexist .finally we note that the mean field prediction for the ferro checkerboard phase transition is that such a transition is discontinuous . andthis result does not depend on the value of the temperature .in this paper we have discussed some general properties of the hamiltonian associated with a class of reversible probabilistic cellular automata .we have focused our attention to the so called cross pca model , which is a two dimensional reversible pca in which the updating rule of a cell depends on the status of the five cells forming a cross centered at the cell itself .this model had been extensively studied from the metastability point of view and many interesting properties have been shown .in particular a suggestive analogy with the blume capel model had been pointed out in the metastability literature . in this paperwe focused our attention on the structure of the potentials describing the microscopic interaction and on the zero and positive small temperature phase diagram .we computed the zero temperature phase diagram exactly with respect to the self interaction intensity and the magnetic field . 
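the iterative procedure described above (solve the self-consistency equations from different seeds and keep the solution of lowest free energy) can be illustrated on the simplest two-sublattice mean-field model. the sketch below is generic: it does not use the cross-pca mean-field equations themselves, whose couplings involve the parameters of the model, and all numerical values are placeholders.

```python
import numpy as np

def solve_two_sublattice(beta, J, h, z=4, m0=(0.9, 0.9), mix=0.5, n_iter=5000):
    """Damped fixed-point iteration of the two-sublattice equations
         m_A = tanh(beta*(z*J*m_B + h)),   m_B = tanh(beta*(z*J*m_A + h)),
       followed by the mean-field free energy per site, which at a
       self-consistent solution equals
         f = (z*J/2)*m_A*m_B - (1/(2*beta)) * [ln 2cosh(beta*(z*J*m_B + h))
                                              + ln 2cosh(beta*(z*J*m_A + h))]."""
    mA, mB = m0
    for _ in range(n_iter):
        mA_new = np.tanh(beta * (z * J * mB + h))
        mB_new = np.tanh(beta * (z * J * mA + h))
        mA = (1.0 - mix) * mA + mix * mA_new   # damping keeps the iteration stable
        mB = (1.0 - mix) * mB + mix * mB_new
    f = (0.5 * z * J * mA * mB
         - (0.5 / beta) * (np.log(2.0 * np.cosh(beta * (z * J * mB + h)))
                           + np.log(2.0 * np.cosh(beta * (z * J * mA + h)))))
    return mA, mB, f

# illustrative parameters: an antiferromagnetic coupling and a small field,
# solved once from a uniform seed and once from a staggered (checkerboard) seed
beta, J, h = 1.0, -0.4, 0.05
uniform   = solve_two_sublattice(beta, J, h, m0=(0.9, 0.9))
staggered = solve_two_sublattice(beta, J, h, m0=(0.9, -0.9))
phase = min(uniform, staggered, key=lambda sol: sol[2])
print("selected solution (mA, mB, f):", phase)
```

the same logic, with the appropriate self-consistency equations and free energy, produces the phase diagrams discussed above: each seed is attracted to one candidate phase and the equilibrium phase is the one of lowest free energy.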
at finite temperature the phase diagram has been derived in the framework of a suitable mean field approximation. we have discussed the variation of the critical temperature for the transition between the ordered and the disordered phase at zero magnetic field as a function of the self interaction intensity. we have shown that, in the mean field approximation, such a temperature is an increasing function of the self interaction intensity. moreover, we have discussed the low temperature phase diagram in the plane and have shown that the topology of the zero temperature phase diagram is preserved when the temperature is positive and small. finally, we have shown that the mean field approximation is consistent with an entropic heuristic argument suggesting that the position of the zero temperature triple point is not changed at low temperature.

in this appendix we report the expressions of the coupling constants defined in section [ s : ham - croce ] (see also figure [ f : accoppiamenti ]) as functions of , , , , and . the scheme we adopted for the computation is described in section [ s : ham - croce ] as well. for the couplings in which we had to distinguish between the horizontal and the vertical case we report only the horizontal one and note that the corresponding vertical one can be obtained by exchanging in the formula the role of and . each coupling constant has the form (1/(2^5 β)) log{ ... }, where the argument of the logarithm is a ratio of products of factors cosh[β(h ± k_0 ± 2k_1 ± 2k_2)] (some factors involving only a subset of k_0, k_1, k_2); the combination of signs and the multiplicity of each factor are determined by the spin pattern associated with the coupling, and the first constant contains in addition the additive term h.
) ] } \bigg\ } \end{split}\ ] ] \cosh^4[\beta(h+k_0 ) ] \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h - k_0 - 2(k_1+k_2 ) ) ] \cosh[\beta(h+k_0 - 2(k_1+k_2 ) ) ] } { \cosh^2[\beta(h - k_0 - 2k_1 ) ] \cosh^2[\beta(h+k_0 - 2k_1 ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h - k_0 + 2(k_1+k_2 ) ) ] \cosh[\beta(h+k_0 + 2(k_1+k_2 ) ) ] } { \cosh^2[\beta(h - k_0 + 2k_1 ) ] \cosh^2[\beta(h+k_0 + 2k_1 ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h - k_0 + 2(k_1-k_2 ) ) ] \cosh[\beta(h+k_0 + 2(k_1-k_2 ) ) ] } { \cosh^2[\beta(h - k_0 - 2k_2 ) ] \cosh^2[\beta(h+k_0 - 2k_2 ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h - k_0 - 2(k_1-k_2 ) ) ] \cosh[\beta(h+k_0 - 2(k_1-k_2 ) ) ] } { \cosh^2[\beta(h - k_0 + 2k_2 ) ] \cosh^2[\beta(h+k_0 + 2k_2 ) ] } \bigg\ } \end{split}\ ] ] \cosh^2[\beta(h - k_0 - 2k_1 ) ] \cosh^2[\beta(h - k_0 + 2k_1 ) ] } { \cosh^4[\beta(h - k_0 ) ] \cosh^2[\beta(h+k_0 - 2k_1 ) ] \cosh^2[\beta(h+k_0 + 2k_1 ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh^2[\beta(h - k_0 - 2k_2 ) ] \cosh^2[\beta(h - k_0 + 2k_2 ) ] } { \cosh^2[\beta(h+k_0 - 2k_2 ) ] \cosh^2[\beta(h+k_0 + 2k_2 ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h+k_0 + 2(k_1-k_2 ) ) ] \cosh[\beta(h+k_0 - 2(k_1-k_2 ) ) ] } { \cosh[\beta(h - k_0 + 2(k_1-k_2 ) ) ] \cosh[\beta(h - k_0 - 2(k_1-k_2 ) ) ] } \\ & \phantom{\frac{1}{2 ^ 5\beta}\log\bigg\{\times } \times \frac { \cosh[\beta(h+k_0 - 2(k_1+k_2 ) ) ] \cosh[\beta(h+k_0 + 2(k_1+k_2 ) ) ] } { \cosh[\beta(h - k_0 - 2(k_1+k_2 ) ) ] \cosh[\beta(h - k_0 + 2(k_1+k_2 ) ) ] } \bigg\ } \end{split}\ ] ] * acknowledgments . *the authors thank j. bricmont , a. pelizzola , and a. van enter for some very useful discussions , comments , and references .louis thanks eurandom / tu eindhoven , utrecht university , and dipartimento sbai ( sapienza universit di roma ) where part of this work was done and the cnrs for supporting these research stays .cirillo thanks technical university delft , utrecht mathematics department and eurandom / tu eindhoven for their kind hospitality and financial support . e.n.m .cirillo , f.r .nardi , c. spitoni , competitive nucleation in metastable systems . " applied and industrial mathematics in italy iii .series on advances in mathematics for applied sciences , volume * 82 * , 208219 , ( 2010 ) .j. kari , `` theory of cellular automata : a survey '' , _ theoretical computer science _ ,volume * 334 * , issues 1 - 3 , 333 , ( 2005 ) .v. kozlov , n.b .vasiljev , reversible markov chain with local interactions , " in multicomponent random system , " 451469 , adv . in prob . and rel. topics , ( 1980 ) .louis . automates cellulaires probabilistes : mesures stationnaires , mesures de gibbs associes et ergodicit " , _ phd thesis _, universit lille i and politecnico di milano , sept .louis and j .- b . time - to - coalescence for interacting particle systems : parallel versus sequential updating . " , _ preprint 2009/03 _ , universitt potsdam , issn no .1613 - 3307 ( 2009 ) + http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-49454 .j. palandi , r.m.c .de almeida , j.r .iglesias , m. kiwi , cellular automaton for the order - disorder transition " , _ chaos , solitons & fractals _ , vol .* 6 * , 439445 , ( 1995 ) .m. perc , j. gmez gardees , a. szolnoki , l.m .flora , and y. 
moreno , evolutionary dynamics of group interactions on structured populations : a review " , _ j. r. soc. interface _ * 10 * , 20120997 , ( 2013 ) .sirakoulis and s. bandini ( editors ) , `` cellular automata : 10th international conference on cellular automata for research and industry '' , acri 2012 , _ proceedings , 2012 , lecture notes in computer science _ , springer ( 2012 ) .j. slawny , low temperature properties of classical lattice systems : phase transitions and phase diagrams " , in _ phase transitions and critical phenomena _ , vol . *11 * , c. domb and j.l .lebowitz , eds . academic press , london , ( 1987 ) .toom , n.b .vasilyev , o.n .stavskaya , l.g .mityushin , g.l .kurdyumov , s.a .pirogov , _ discrete local markov systems _ , in _stochastic cellular systems : ergodicity , memory , morphogenesis _ , edited by r.l .dobrushin , v.i .kryukov , a.l .toom , manchester university press , 1182 , ( 1990 ) .s. wolfram , universality and complexity in cellular automata ,_ physica d : nonlinear phenomena _ , vol . *10 * , issues 1 - 2 , 135 , ( 1984 ) .s. wolfram , cellular automata as models of complexity " , _ nature _ * 311 * , 419424 , ( 1984 ) .
cellular automata are discrete - time dynamical systems on a spatially extended discrete space which provide paradigmatic examples of nonlinear phenomena . their stochastic generalizations , i.e. , probabilistic cellular automata ( pca ) , are discrete - time markov chains on a lattice with a finite single - cell state space , whose distinguishing feature is the _ parallel _ character of the updating rule . we study the ground states of the hamiltonian and the low temperature phase diagram of the gibbs measure naturally associated with a class of reversible pca , called the _ cross pca _ . in such a model the updating rule of a cell depends only on the status of the five cells forming a cross centered at the cell itself ; in particular , it depends on the value of the center spin ( _ self interaction _ ) . the goal of the paper is to investigate the role played by the self - interaction parameter in connection with the ground states of the hamiltonian and the low temperature phase diagram of the gibbs measure associated with this pca .
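as an illustration of the parallel , cross - shaped updating rule described above , the following minimal python sketch performs synchronous heat - bath updates of a \(\pm 1\) spin field on a periodic square lattice . this is not the authors ' code : the single - site rule , the assignment of the couplings ( k0 for the self interaction , k1 and k2 for the vertical and horizontal neighbour pairs , h for the external field ) and the periodic boundary conditions are assumptions made purely for illustration , suggested by the cosh factors appearing in the formulas above .

import numpy as np

rng = np.random.default_rng(0)

def pca_step(sigma, beta, h, k0, k1, k2, rng=rng):
    # one synchronous update of a +/-1 spin field under a cross-shaped
    # heat-bath rule: p(sigma'_x = s) is proportional to exp(beta * s * S_x),
    # where S_x collects the field, the self interaction and the four neighbours
    up, down = np.roll(sigma, -1, axis=0), np.roll(sigma, 1, axis=0)
    left, right = np.roll(sigma, -1, axis=1), np.roll(sigma, 1, axis=1)
    S = h + k0 * sigma + k1 * (up + down) + k2 * (left + right)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * S))   # = exp(beta*S) / (2*cosh(beta*S))
    return np.where(rng.random(sigma.shape) < p_plus, 1, -1)

sigma = rng.choice([-1, 1], size=(32, 32))           # random initial configuration
for _ in range(100):
    sigma = pca_step(sigma, beta=0.8, h=0.0, k0=0.5, k1=1.0, k2=1.0)
print("magnetization:", sigma.mean())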
the key question in investment management is to understand the sources of investment returns . without such understanding , it is virtually impossible to succeed in managing money . in this article , we explore , using an empirical model of hedge fund strategy returns , the importance of non - gaussian features , such as time - varying volatility , asymmetry and fat tails , in explaining the level of expected returns .we demonstrate that the volatility compensation is often a significant component of the expected returns of the investment strategies , suggesting that many of these strategies should be thought of as being ` short vol ' .the notable exceptions are the cta strategies and certain fixed income and fx strategies .we suggest a fundamental explanation for this phenomenon and argue that it leads to important adjustments in capital allocation .the experience of the past few years has largely confirmed our hypothesis , laid out originally in conference presentations and earlier strategy papers . in particular , we believe that the quant crunch experience in august 2007 , and the widespread large losses in september 2008 till march 2009 period , followed by large gains after march 2009 till early 2010 period , are broadly in line with the volatility dependence of strategy returns presented in this paper and as such are neither exceptional not surprising . moreover , the increased correlation across most strategies during this period can be largely attributed to the common driving factor which is the level of volatility . knowing that most strategies returns are highly directionally dependent on the volatility , and having a lot of prior empirical evidence that the volatility spikes across markets are highly correlated , leads one to a straightforward conclusion that the correlation across strategies will also rise during a market - wide spike in risk .there are many theories and plausible hypotheses about the driving factors explaining the returns . the capital asset pricing model , which postulates a specific relationship between the expected excess returns of assets and their co - variation with market returns , is among the pillars of modern financial theory . a more elaborate and pragmatically more satisfying arbitrage pricing theory expands on capm insights by adding more explanatory variables and allowing more flexibility in defining the common driving factors .the popular fama - french framework can be seen as a particularly successful example of apt in application to stock returns .majority of these theories are focused on explaining the returns of tradable assets , such as stocks , bonds or futures . while this is clearly the most granular level which is of interest to the researchers , we believe there is much to be gained by focusing instead on returns of typical investment strategies . indeed , over the past decades, many such strategies , from quantitative long / short equity investing to cta and to convertible arbitrage , have become well - established and boasting must - have allocations in most alternative investment portfolios . we can take this argument even further , stating that not only it is possible to model the strategy returns without knowing its composition , but it is actually important to do that .suppose we had the benefit of knowledge of some particular hedge fund s portfolio composition at a given point in time .would that give us a better understanding of the nature of its returns over time ?the answer , of course , is it depends. 
if the strategy is slow and with low turnover , then yes the current composition does provide important clues for the future returns .however , if the strategy has either high turnover or potentially large trading activity driven by market events , then the current composition can actually be quite misleading for predicting the future returns .let s consider two simple cases .in the first , the strategy at hand is what used to be called a _ portfolio insurance _ , which in essence was trying to replicate the downside s&p 500 put option position by dynamically trading in the s&p 500 futures . clearly , this strategy has a pretty high turnover , driven by the daily fluctuations of the s&p 500 index .moreover , its return pattern , by design , matches not that of a linear futures position , but that of a non - linear option position .in particular , the strategy is supposed to make money if the market fluctuates strongly but returns roughly to the same level over time , while the linear futures position of course would have close to zero return in this case . in the second example , consider a simplified macro strategy , where the portfolio manager switches between the long and short positions in some macro index , such as the very same s&p 500 futures , depending on a bullish / bearish signal for the economy , e.g. based on some macroeconomic model .these signals , by construction , would not change very frequently , sometimes staying in one state for many months or even years .but when they do , the impact on the strategy will be dramatic it will reverse the sign of the returns .so , if one is interested in the distribution of the returns of such strategy over long periods of time encompassing the business cycle , they will be quite different from the distribution of the underlying s&p 500 returns , even if for a typical month the returns might actually be identical .so , it appears that in order to estimate the returns characteristics of the strategy , it is less important to know _ what _ does the strategy trade , and more important to know _ how _ it trades .moreover , majority of professional investors who actually produce these returns exhibit a good level of discipline in what they do . depending on their style and competitive advantage, they typically stick to certain patterns of behavior and are driven by relatively stable methodologies which they use for estimating risks and returns within their universe of tradable securities , and for allocating capital across particular investments .it is these behavior patterns and stable methodologies that leave their imprint on the corresponding strategy returns . andwhile this statement is obviously true with respect to so called _ quantitative _ or _systematic _ investment managers , we think it largely applies even to investors following the _ discretionary _ style .one could take a step further and say that each of the investment funds can be considered an independent company , whose stock ( nav ) performance reflects its business model and the management style . unlike bricks and mortar businesses , such ` virtual ' companies do not have assets in which their business model is ingrained . and looking at their balance sheet produces little more than the knowledge of their leverage and other basic facts . unlike gilette or starbucks , you ca nt really judge these companies by what products or services they produce , or what factories and stores they have . 
and while in real companies the management style also sometimes matters a lot ( think apple and steve jobs , or ge and jack welch ) , in the virtual financial holding companies that the investment funds really are , the management style and methodology is really the only thing that matters . to discern meaningful patterns in such nebulous things as management style and methodology one must start by modeling the strategy returns. there are three basic ways to model them , in order of decreasing granularity : micro - replication : : : in this approach , one actually tries to build a systematic process similar to each particular strategy in complete detail including specific trading signals in variety of chosen instruments , and then fine tunes a few parameters to match the observed returns of the investment strategy benchmark , such as a particular hedge fund index .macro - replication : : : in this approach , one tries to figure out which macro variables influence the strategy s returns , and tries to build a time - series forecasting model with these variables .parametric : : : in this approach , the strategy return time series is modeled endogenously , in a manner similar to modeling ` elemetary ' asset returns , e.g. by well - known econometric methods .the parametric approach , following the pioneering work of fung and hsieh , attempts to simply understand the properties of hedge fund returns , while the replication approaches ( see for a recent review ) attempt to model the hedge fund strategies themselves in their full dynamics , either in a bottom - up or top - down manner .each of the approaches has its pros and cons .micro - replication has been successful in mimicking some mainstream hedge fund strategies , such as cta , equity stat arb , or merger arb , so much so that there exist etfs and etns making such strategies available for broad investor audience .but for many other types of strategies , such an approach is hopelessly difficult and ambiguous .the macro - replication is somewhat more universally applicable , especially in strategies which are known as _ alternative beta _, i.e. where the performance of the strategy is driven by its exposure to well - identified market risk factors . heretoo , certain alternative beta strategies have been sufficiently popular to launch a widely marketed etf or etn , for example ipath optimized currency carry etn ( ici ) and ipath s&p 500 vix short - term futures etn ( vxx ) .such an approach , by design , does not explicitly model the strategy s _ pure alpha _ , treating it as a residual constant return .it also assumes that the strategy maintains a constant exposure to the chosen set of macro factors , which is also not necessarily a universally valid assumption . 
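as a toy illustration of the macro - replication idea sketched above , one can regress a strategy 's monthly returns on a handful of observable market factors and read off a replicating exposure plus a residual ` alpha ' . the python sketch below uses simulated data ; the number of factors , the sample length and all coefficients are hypothetical and serve only to show the mechanics .

import numpy as np

rng = np.random.default_rng(2)

T = 120                                                 # ten years of monthly data
factors = rng.normal(0.0, 0.04, size=(T, 3))            # stand-ins for, e.g., equity / rates / credit
true_beta = np.array([0.6, -0.2, 0.3])
strategy = 0.003 + factors @ true_beta + rng.normal(0.0, 0.01, T)

X = np.column_stack([np.ones(T), factors])              # intercept plus factor exposures
coef, *_ = np.linalg.lstsq(X, strategy, rcond=None)
alpha_hat, beta_hat = coef[0], coef[1:]

replication = X @ coef
tracking_error = np.std(strategy - replication) * np.sqrt(12)
print("estimated alpha (ann.): %.2f%%" % (12 * alpha_hat * 100))
print("estimated betas       :", np.round(beta_hat, 2))
print("tracking error (ann.) : %.2f%%" % (tracking_error * 100))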
on the other hand , the parametric approach , which we advocate in this paper , does not concern itself with the detailed composition of returns or indeed with their attribution to observable market factors .instead , it treats the investment strategies in a holistic manner , as opaque financial assets with little more than their net asset values ( nav ) and returns visible to outside observers .it is universally applicable to any strategy that one may consider modeling , without complicating the analysis with additional assumptions .the main question here is the stability and interpretability of the results .the way to attain a positive answer to this question is not to modify the econometric model or make it more complex , but to choose a well - designed set of benchmark indexes or fund peer groups for estimation .this is akin to an old approach of modeling all stocks on the basis of their price / earnings ( pe ) ratios , but recognizing that companies from different industry sectors or with widely different market caps may have substantial differences in the manner in which their pe impacts their future returns .there , too , the simple solution is to divide the universe of all stocks into relatively uniform peer groups , and to only compare the pe within the same group .thus , to get sensible results from our chosen parametric approach , we must apply it to a set of investment strategy indexes which we believe have been constructed in a sufficiently uniform manner and with the appropriate amount of granularity .this requirement has led us to select the lehman brothers hedge fund index , which was a part of the overall suite of global indexes built and maintained by lehman s index group , and benefited from the thorough and disciplined rules - based methodology in the same manner as their better known us aggregate bond index . in our opinion , this set of indexes was much more complete and much better designed than the more widely disseminated cs tremont , hennessee or hfri indexes . while the latter ones have been around for longer and have possibly more hedge funds in their coverage , they use classification schemes which are outmoded and do not correspond to actual segmentation of the hedge fund universe by investment style or product focus .the lehman brothers hedge fund index had , in contrast , a full set of available sub - indexes classified by style , product or region , all constructed in their typical consistent fashion .unfortunately for the analysts , this index product did not survive the demise of the parent company and was discontinued by barclays capital in early 2009 . still , it offered a consistent set of data from early 2000 until the end of 2008 , a period that saw two recessions , two market crashes and a multi - year boom , and so it appeared to be still the best choice for our research purposes , despite not being available post 2009 .in this section , we specify the econometric model of investment strategy returns . 
it is a straightforward generalization of the celebrated garch family of models designed to capture the well - known stylized facts regarding the dynamics of financial time series , such as the clustering and mean reversion of time - varying volatility , the fat - tailed distribution of periodic returns , and asymmetric volatility responses which serve as a dynamic mechanism generating non - gaussian features of long - run aggregate returns . given that some investment strategies exhibit not only long - term asymmetries but also quite visible short - term asymmetries , we expand the definition to also include asymmetric fat tails of periodic returns . finally , as it is the primary objective of our study , we allow non - zero , conditionally time - varying means of the periodic returns . we call this setup a generalized asymmetric autoregressive conditional heteroscedasticity ( gaarch ) model . the gaarch(1,1 ) model includes a single lag for both past returns and past conditional volatilities ; in particular , the recursion for the conditional excess variance and the associated truncated moments read \[ \begin{aligned} \chi_{t+1 } & = \cdots \, \chi_{t } + \eta^{- } \left ( 1 + \chi_{t } \right ) \left ( \epsilon_{t}^{2 } \cdot 1_{\epsilon_{t } < 0 } - m_{2}^{- } \right ) + \eta^{+ } \left ( 1 + \chi_{t } \right ) \left ( \epsilon_{t}^{2 } \cdot 1_{\epsilon_{t } \geq 0 } - m_{2}^{+ } \right ) , \\ m_{2}^{- } & = e \left \{ \epsilon^{2 } \cdot 1_{\epsilon < 0 } \right \} , \qquad m_{2}^{+ } = e \left \{ \epsilon^{2 } \cdot 1_{\epsilon \geq 0 } \right \} . \end{aligned} \] here , we used the following notations : * \(r_{t}\) is the single - period ( in our case , monthly ) return of the asset . * \(\epsilon_{t}\) are the residual returns for the period ending at \(t\) , which are i.i.d . variables with zero mean and unit variance . * \(\mu_{t}\) is the conditional mean of asset returns for the period ending at \(t\) . * \(\sigma\) is the unconditional volatility of asset returns . * \(\sigma_{t}\) is the conditional volatility of asset returns for the period ending at \(t\) . * \(\chi_{t}\) is the conditional excess variance , equal to the percentage difference between the conditional and unconditional variance . * \(m_{2}^{+}\) and \(m_{2}^{-}\) are the truncated upside and downside second moments of single - period residual returns . the specification of the gaarch model incorporates a scale - invariant reparameterization of the standard stationary garch model , separating the level of the unconditional volatility and the conditional excess variance process . the dynamic asymmetry of volatility is specified in a manner similar to tarch ( see also gjr - garch ) , but we redefine the arch terms in a more symmetric fashion , without specifying which of the signs ( positive or negative ) is more influential . this is because in some asset classes , notably credit and volatility , the upside shocks are more influential , while in others , like equities or commodities , the downside shocks are more influential . the symmetric specification is obviously equivalent to tarch but allows one to impose a more natural positivity restriction on the arch coefficients , if so desired . the conditional mean specification in ( [ gaarch_cmean ] ) is more or less in line with the conventional apt assumptions , if we assume that the strategy is uncorrelated with the overall market and that the variability of returns is priced as an alternative beta . the unconditional mean can be considered the ` true alpha ' of the strategy , i.e.
its excess return above the compensation for systematic risks that the strategy takes .the conditional mean process can also be rewritten in a form that allows a more subtle attribution of expected returns . introducing the parameter , which we will call ` convexity compensation ' because it resembles the additional return that any convex investment acquires under fair pricing rules , and using the definition of the excess variance process , we get : this equation can be interpreted as a three - way attribution of conditional expected returns to the true alpha , the convexity compensation , and time - varying excess variance compensation ( the latter has an unconditional expected value of zero ) .one can also add the first two constants to get the _ risk adjusted alpha _ of the strategy : finally , let us specify the distribution of the return residuals .our main criteria are that it must be parsimonious ( i.e. have no more than two free parameters to describe the asymmetric fat tails of a standardized distribution with zero mean and unit variance ) , and that the estimation of these parameters should be robust with respect to the deviation of sample returns from the zero mean assumption . the latter criterion is necessary because we would like as much separation as possible between the estimation of the gaarch conditional mean and the estimation of the distribution of residuals .the requirement of robustness with respect to sample mean suggests that we should consider distributions whose asymmetry is defined by their tail dependence , rather than by introducing a third order skewness term which could be influenced by the estimate of the mean . from a variety ofwell - known skewed fat - tailed distributions , jones and faddy skewed t - distribution fits our criteria best .the probability density and cumulative density functions of this distribution are : where , is the beta function , and is the incomplete beta function . the parameters and have the meaning of the left and right tail parameters , as can be seen from the asymptotics of this distribution . the distribution ( [ jf_skewt_pdf ] ) becomes equivalent to the conventional student s t - distribution when , which in turn nests the gaussian one when , and therefore our entire model specification allows for the goodness of fit comparisons between different models and for the likelihood ratio tests of statistical significance of the estimated parameters .in this section , we will try to classify the dependence of the quantitative characteristics of hedge fund strategy returns , seen through the empirical fit of the gaarch model , on the type of peer group and other qualitative characteristics .we specifically highlight the non - gaussian features of the returns , including the presence of fat tails , the asymmetry between the upside and downside tails , the volatility level and its dynamics , and in particular the asymmetry of volatility response to return shocks ( asymmetric leverage effect ) . from we know that the relative order of importance of these characteristics from the perspective of description of intermediate / long term distribution of returns in financial time series is as follows : * persistence and mean reversion of vol ( garch terms ) - governs the behavior of conditional volatility and the vol of vol at intermediate timescales .* asymmetric volatility response ( tarch terms ) - is the leading contributor to the non - gaussian features ( tails and asymmetry ) of long - run aggregated returns . 
*fat - tailedness and asymmetry of periodic returns ( non - gaussian shocks ) - is important for short term ( up to several periods ) we will confine the discussion in this section to the dynamic properties of the returns , and will postpone the discussion of the conditional means ( expected returns ) till the next section .we apply the gaarch model specified in section [ berd : sec : model ] to the lehman brothers hedge fund index ( lbhf ) and its subindexes . the universe of funds in the index has grown from 325 in the beginning of 2000 to the peak number of 2288 by august 2008 , before dropping back to 1583 by early 2009 . given the limited historical monthly data for these indexes , available only from january 2000 until january 2009 , and given the extreme realizations of the returns and volatilities in the second part of 2008 , which would have dominated the small dataset and skewed the estimation of parameters significantly , we have chosen a somewhat shorter period of january 2000 until april 2008 as our sample period for estimation .to ensure a better diversification of the subindex returns , we used the equal - weighted version of the lbhf index .while this may limit the applicability of our results from the perspective of investability of the corresponding representative indexes , using the aum - weighted version would have subjected our data sample to dominance of a small number of very large funds , making the corresponding index returns much more idiosyncratic .the equal - weighted version returns , on the other hand , appear quite systematic , and therefore we are able to obtain reasonable model fits despite the limited length of time series .the results of the model fit for different size buckets of the lbhf index are shown in table [ berd : tab : size ] .we also report the results of the model fit for style and asset class subindexes of the lbhf index in tables [ berd : tab : style ] and [ berd : tab : asset ] , respectively . in each table, we indicate the hierarchy of the subindexes by tabbing , e.g. the cta trend - following subindex within the cta index is indicated by an extra tab level . for the ease of comparison with well - known industry metrics , we report the values of risk adjusted alpha , true alpha , convexity compensation and unconditional volatility in annualized percentage points terms .the coefficients which drive the volatility dynamics of the model are reported in gaarch convention , including the downside and upside arch coefficients , and the garch coefficient . 
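before turning to the estimation results , the following minimal python sketch simulates a gaarch(1,1)-type return series with jones - faddy distributed shocks . it is an illustration only , not the authors ' estimation code : the excess - variance recursion follows the truncated - moment form quoted in the model section , while the persistence coefficient , the conditional - mean form \(\mu_t = \alpha + \kappa \sigma_t^2\) and all parameter values are assumptions chosen here to mimic the ` true alpha plus convexity compensation plus excess - variance compensation ' attribution discussed above .

import numpy as np

rng = np.random.default_rng(0)

def jones_faddy_sample(a, b, size, rng):
    # draw from the jones-faddy skew-t via its beta representation:
    # if z ~ beta(a, b), then (2z - 1) * sqrt(a + b) / (2 * sqrt(z * (1 - z)))
    # has the jones-faddy distribution with left/right tail parameters a and b
    z = rng.beta(a, b, size)
    return (2.0 * z - 1.0) * np.sqrt(a + b) / (2.0 * np.sqrt(z * (1.0 - z)))

# standardized (zero-mean, unit-variance) skew-t residuals; tail parameters are hypothetical
eps = jones_faddy_sample(4.0, 8.0, 10_000, rng)
eps = (eps - eps.mean()) / eps.std()

# truncated second moments of the residuals, as they appear in the excess-variance recursion
m2_minus = np.mean(eps ** 2 * (eps < 0))
m2_plus = np.mean(eps ** 2 * (eps >= 0))

# hypothetical monthly parameters: positive true alpha, negative variance loading ('short vol');
# under the assumed mean form, the convexity compensation kappa * sigma^2 is about -3.75%
# annualized here, so the risk-adjusted alpha is close to the ~8% discussed in the text
alpha = 0.12 / 12                      # 'true alpha'
sigma = 0.05 / np.sqrt(12.0)           # unconditional monthly volatility
kappa = -15.0                          # loading of the conditional mean on the variance
theta = 0.80                           # persistence of the excess variance (assumption)
eta_minus, eta_plus = 0.15, 0.05       # downside / upside arch coefficients

chi, returns = 0.0, []
for t in range(120):
    var_t = sigma ** 2 * max(1.0 + chi, 1e-8)      # conditional variance forecast
    mu_t = alpha + kappa * var_t                   # assumed conditional-mean form
    e = float(eps[t])
    returns.append(mu_t + np.sqrt(var_t) * e)
    chi = (theta * chi
           + eta_minus * (1.0 + chi) * (e ** 2 * (e < 0) - m2_minus)
           + eta_plus * (1.0 + chi) * (e ** 2 * (e >= 0) - m2_plus))

returns = np.array(returns)
print("annualized mean %.2f%%, annualized vol %.2f%%"
      % (12 * returns.mean() * 100, np.sqrt(12) * returns.std() * 100))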
finally , we also report the left and right tail degrees of freedom of the jones - faddy skewed student 's t - distribution of the residual returns . ( table [ berd : tab : size ] : estimation of the gaarch model for the lbhf size subindexes . ) as we can see from table [ berd : tab : size ] , the model fit is fairly uniform across all the size subindexes , with the unconditional volatility gradually decreasing from 5.30% across all funds to 4.31% across funds with aum greater than $ 500 mm . the right tail of the distribution actually becomes normal ( the fitted value is a limiting value in our numerical estimation procedure , indicating convergence to the normal case , which formally corresponds to an infinite tail parameter ) . this could be explained by the hypothesis that larger funds tend to be more internally diversified , following a multi - strategy investment process , and are therefore , on average , less subject to tail risk and asymmetric leverage . it could also be a sign of more stringent risk management , and especially more careful operational risk management , in larger funds , which minimizes the likelihood of margin - call induced forced selling and subsequent drawdowns ( notwithstanding the negative examples of amaranth , sowood and a few other funds ) . another visible dependence is the growing importance of the time - variability of volatility for bigger funds , evidenced by the relatively larger values of the arch coefficients and smaller values of the garch coefficient . this could be simply due to the fact that we have many fewer funds in the large - fund category , and therefore the subindex itself is much less diversified , leading to greater importance of idiosyncratic return shocks month to month . an important caveat is that the number of funds drops significantly as the cutoff size grows ; therefore , our results for the larger size funds are subject to greater statistical errors than for the smaller ones . the results of the model fit for different style subindexes of the lbhf index are shown in table [ berd : tab : style ] . here there is a far bigger diversity of results , stemming from significant differences in the investment processes followed by the managers in each of these peer groups . the subindexes are grouped in three major sectors : macro / directional , cta , and relative value ( rv ) , with finer subdivisions within each sector . we see a dramatic contrast between the groups with respect to asymmetry and fat - tailedness of returns . for example , some strategies , such as event driven , distressed and broad relative value , and all of the cta strategies , have tails approaching normality , as evidenced by large values of the tail degrees - of - freedom parameters . on the other hand , certain strategies , such as macro , long - short with variable bias , and many of the relative value strategies , exhibit strong fat tails in their periodic returns distribution . the static asymmetry of periodic returns is captured by the difference between the left and right tail parameters . we see some cases where this asymmetry is very large , e.g. the ls long only strategies , which have a fat left tail and a normal right tail ; the cta composite , which has a normal left tail and a mildly fat right tail ; and the relative value composite , which has a mildly fat left tail and a normal right tail . the dynamic asymmetry of returns , captured by the difference between the downside and upside arch coefficients , is also markedly different across the strategy styles .
for some , such as macro composite , cta composite and cta trend - following , ls long only , ls short bias , rv merger arbitrage , and rv market - neutral equity , the difference is minimal and the dynamic asymmetry is insignificant . for others , such as fundamental cta , and vast majority of the relative value strategies ,the downside arch coefficient is much greater than the upside one , signifying that there is a strong dynamic asymmetry , which is known to generate strong downside asymmetry of aggregate returns over long horizons ( see and references therein ) .there are no strategies where the upside arch coefficients would be much greater than the downside , i.e. there are no strategies which would exhibit strong upside asymmetry of the aggregated returns over long horizons . in each of these cases, one can see the vestiges of the corresponding strategy process in the distribution of its returns .let s list some of the more obvious ones . the cta strategies trade futures , which have linear risk profile .moreover , in vast majority of cases , they tend to have a reasonably low turnover , such that the biggest positions are held for a month or more .this makes the strategy returns over a month horizon ( which is the periodicity of the returns in our sample ) to be close to normal , because the daily fluctuations of the asset returns are allowed to aggregate over a month without strongly correlated position changes .this explanation is especially true for fundamental cta strategies , and less so for trend - following ones which occasionally dabble in higher frequency trading that can scramble returns away from normal .moreover , the fundamental cta , being more prone to holding large futures positions over long periods of time , exhibits a stronger dynamic asymmetry of returns which is more in line with general market behavior , whereas the trend - following cta has less dynamic asymmetry because its typical holdings are less biased towards being long the market . on the other hand ,most relative value strategies actually behave like a carry trade , selling something that is rich ( lower yielding in expected return ) and buying something that is cheap ( higher yielding in expected return ) . for any such strategy ,the typical return profile is highly asymmetric , because it can get caught in a forced unwind of this ` carry trade ' , which usually happen violently .the proverbial ` collecting nickels in front of a steamroller ' strategy , which the relative value is in large part , is indeed subject to an occasional run - down by the steamroller !hence , we see it manifested in both static and dynamic asymmetry of returns on the downside .the few exceptions from this rule are also telling . both the event driven and especially distressed relative value strategies are quite different , in that they typically have much more balanced upside vs. 
downside trade profiles .for example , a distressed bond , which trades at 40 cents on a dollar , has an upside of 60 cents vs a downside of 40 cents , which is a dramatically different profile than a typical yield - based relative value bond trade , where one hopes to collect a few tens of basis points but takes a full principal risk ( albeit with low probability ) .finally , we can make sense of the patterns observed in various flavors of long - short strategies , by noting that the long bias and variable bias strategies are similar to more imprecise and losely built relative value strategies , and correspondingly they exhibit similar features , namely fat tails and dynamic asymmetry of volatility . on the other hand ,the long only and short bias strategies are quite different in their process .they are often run by fundamental managers who put on large conviction trades and do not rely on formal hedging for risk management .most importantly , because they do not hedge , they also can not employ a lot of leverage , which is usually the main cause of the asymmetric volatility response .this is indeed reflected in the much more symmetric pattern across the downside and upside fitted arch coefficients for these strategies .the results of the model fit for different asset - class subindexes of the lbhf index are shown in table [ berd : tab : asset ] .the segmentation by traded asset class appears to have less discriminatory power over the return characteristics , in part because within each asset class different hedge funds pursue different investment styles .in fact , the subsectors classified by the investment style exhibit a stronger dispersion of model characteristics , then the top level of this classification .the most notable fact is that for those asset classes which admit a large variety of investment styles , encompassing both the macro or cta styles and the relative value styles , the corresponding top - level composite index exhibits almost normal distribution of periodic returns .this is most likely due to the mutually diversifying effects of those sub - styles in each asset class composite .the exceptions , convertibles , and equity composites , is most likely too dominated by thematic and sectoral funds and the corresponding composite is simply not diverse enough in style to achieve the normality of returns . in this classification , we see for the first time the difference between the funds focused on the emerging markets vs. the ones trading in the developed markets only .it is telling that the emerging market funds exhibit larger variability of volatility , but with less asymmetry a feature generally present in strategies which are exposed to a lot of idiosyncratic return sources , as opposed to the ones whose returns are more systematic in nature . 
the lower typical leverage levels could also be an explanatory factor for this difference .let us now turn our attention away from the volatility dynamics and distribution of return shocks and towards the expected returns and their dependence on the conditional volatility .in all three results tables [ berd : tab : size],[berd : tab : style],[berd : tab : asset ] we show the estimates of the true alpha , convexity compensation and the risk adjusted alpha .the first thing that strikes the eye is the remarkable uniformity of the risk adjusted alpha estimates across all lbhf subindexes , despite a substantially varied levels of true alpha and convexity compensation .the histograms of distributions of these model parameters are shown in figure [ berd : fig : alphahist ] .the few notable outliers are : ls long only ( in style subindexes ) and eq long only ( in asset class subindexes ) , which benefited from the bubble - esque runup in asset prices from 2000 until first half of 2008 ( our sample period ) , and ls short bias and eq short bias , which suffered from the same historical fact .we also note the outsized risk - adjusted returns in the broad emerging markets subindex , which is explained by a secular rise in the emerging markets integration in the global capital markets and also to some extent by the super - fast economic growth and the bubble - like runup in the key commodity prices , affecting em investments .save for these few exceptions , the rest of the sectors exhibit risk - adjusted alpha between 5% and 10% , with most clustered around the magic ` 8% ' number that must warm the hearts of pension fund managers everywhere , because it is precisely what they often assume for the long - term expected returns in their portfolios .we will reserve the judgment on whether this number is reliable going forward , which is the subject of much discussion these days between the ` old normal ' and new normal paradigm proponents .what is even more remarkable than the uniformity of the risk - adjusted alphas , is that the vast majority of hedge fund sectors exhibit positive true alpha , but negative convexity compensation . remembering that the same value of also enters the time - varying component of the conditional mean ( see eq .( [ gaarch_cmean_parts ] ) ) , we are led to conclusion that when the volatility of the strategy returns increases ( ) , these strategies tend to lose money , and vice versa , their returns get a positive boost whenever their return volatility is lower than the unconditional forecast ( ) .it is important to note , that in our gaarch model s conditional mean specification ( [ gaarch_cmean ] ) the relationship is between the _ forecast _ of the conditional variance for the next time period , and the conditional expected ( mean ) return for that future period . 
in other wordsthis is not the usual leverage effect in financial time series , which runs in the opposite direction large realized returns precede increasing volatility .the effect we are describing might be consistent with a notion that somehow the relationship between the mean return and the scale of the return distribution in a given period is fixed , and then the predictive power of the volatility forecast over the mean return is simply by translation , due to volatility clustering .that would be an easy answer , except it would require that the mean return be always _ positively _ proportional to the forecasted volatility , which is actually the opposite of what we observe in the vast majority of cases .in fact , we have discovered empirically that most hedge fund strategies have expected returns which are _ negatively _ proportional to the forecasted variance , and so are essentially _ short vol_. of course , this is not the same meaning of this phrase as usually used by traders , i.e. vast majority of these strategies do nt actually have a negative exposure to some option - like instruments .rather , what they have is a systematic pattern of investment which produces returns similar to those that would be produced by a short option position .figure [ berd : fig : tstats ] demonstrates that in most cases the volatility exposure coefficients are statistically significant .in fact , for most of the negative values observed we have t - stats greater than 1 in absolute value , and for some even greater than 2 .the statement about being short vol applies to all of the broad - based lbhf subindexes , all relative value subindexes , and to most long - short strategies , but it _ does not apply _ to ls short bias , cta , cta trend - following style subindexes , and to fi macro / directional , fi multi - sector , fi government , eq short bias , fx and commodities asset class subindexes .we believe that both the positive and the negative examples of this statement are very relevant for understanding of the nature of investment strategy alpha . in the remainder of this section we will present a laundry list of both fundamental and technical reasons explaining why are most investment strategies short volatility , and what drives the exceptions .* volatility is a synonym for ` risk ' .it is tough to both have a positive expected return _ and _ benefit from risk . if it was easily done , everyone would do it , and would thus change the properties of those strategies .so , the strategies appear to be short vol partly for darwinian reasons , because they are successful . *any well - defined investment strategy is intrinsically short an opportunity to reallocate to other strategies , which is usually triggered by either large loss or a large gain over some medium - term horizon .this can be interpreted as if the strategy contained a short straddle option on its own medium - term trailing returns , which naturally leads to a negative relationship between past volatility and future expected returns . 
* as a corollary to the previous point , we should mention that the cash asset can be considered a long opportunity option .the dry powder in your wallet gives you the opportunity to deploy it whenever you wish .again , if the cash is long vol , then having spent the cash and invested in a strategy , the investors get shorter vol .* the ultimate demand to be long risk comes from the real economy , where the main risk takers , the entrepreneurs would like to share these risk exposures with outside investors .therefore , investors in aggregate must be short the risk level ( i.e. lose money when the risk goes up ) , otherwise they would serve no useful purpose for the real economy . * liquidity reasons it is difficult to maintain a positive vol / convexity strategy with large capacity , unless one takes large directional risks _ and _ is willing to sacrifice potentially sizable carry costs .providing liquidity is the only way to handle large capacity , and it leads to being short vol . *leverage and financing most relative value strategies are designed to capture small pricing differences , and therefore must be leveraged to achieve reasonable nominal returns .this leverage and financing requirement , which is naturally short - term and subject to refinancing risk , exposes all relative value strategies to volatility risk ( i.e. risk of uncertain volatility ) , when an increase in perceived vol will lead to higher margin requirement and trigger more trading and increased losses .this mechanism , in particular , was in abundant display during the recent crisis . *any convergence , contrarian or relative value strategy that is betting on some sort of mean reversion or a convergence of some risk / return metrics is naturally short vol , because it explicitly bets against the wings of the returns distribution .we shall illustrate this point below in greater detail . * many relative value strategies are really ` carry trades ' , buying the higher yielding ( cheap ) asset and selling short the lower yielding ( rich ) one . and, like all carry trades , they are also naturally short vol because they are exposed to decompression risk , which is in turn directional with vol .* as a corollary to our statement about the convergence strategies , we can deduce that the momentum , trend - following strategies ( which are essentially the opposites of the convergence ) are naturally long vol .the cta , cta trend - following , and commodities strategies fall under this rule . *the niche for opportunistic / directional players always exists , but will always be limited in capacity .notable exceptions from this are the directional fixed income ( particularly in the government sector ) and fx strategies , where the ` other side ' of the trade is the global economy and central banks , which are acting as risk absorbers rather than risk demanders .the fi macro / directional , fi multi - sector , fi government , and fx strategies fall under this rule . *the short bias strategy s long vol exposure is somewhat coincidental the higher volatility is generally associated with downside markets , which in turn benefit the short bias strategy , so it is the market asymmetry that plays the role here .any convergence strategy , when implemented in a disciplined way , will ultimately buy some assets when their price is ` too low ' and will sell them when their price is ` too high ' , whatever the metrics they use to determine these thresholds . 
as a result , such strategies are betting that the prices will stay closer to the middle of the distribution and therefore betting against the wings of distribution .this is essentially a short strangle position on the assets , and is therefore naturally short volatility , but it benefits from a positive carry ( collecting option premiums ) .the propensity of convergence strategies to be short vol is even clearer at the times of sudden change in volatility regime . even if the strategy was correct in both regimes , meaning that it has set the appropriate thresholds and will on average make money by buying low and selling high in each of these regimes , it will inevitably loose money when the low vol regime is transitioning to a high vol regime , and correspondingly the thresholds are being widened out .this logic is illustrated in figures [ berd : fig : convergence1 ] and [ berd : fig : convergence2 ] .on the other hand , any momentum strategy , when implemented in a disciplined way , will buy some assets when their price is ` going up ' and sell them when their price is ` going down ' , using some sort of metrics to establish a proper time scale and thresholds . as a result ,contrary to convergence strategies , these ones are betting that the prices will move further away from the middle of the distribution .this is essentially a long strangle position on the assets , and is therefore naturally long vol , but suffers from a short carry .this logic is illustrated in figure [ berd : fig : momentum ] .the observation that the trend - following strategies have option - like behavior has been made in a much more detailed manner in a notable paper by fung and hsieh , building on an earlier work by glosten and jagannathan .what we argue here , is that this insight applies almost uniformly across all hedge fund styles , once we recognize the differences in their volatility dynamics .note , that when we talk about the disciplined investment process , we do not necessarily mean a black - box , quantitative approach to investing .certainly , it fits the bill . 
but so does also a process followed by most traditional , discretionary portfolio managers who have their own step - by - step approach to determining good investment opportunities .warren buffett is certainly not a quant , but his value style of investment is eminently disciplined and perhaps even repeatable , and is therefore very much subject to the same patterns that we have identified .in fact , one could probably explain a good deal of buffett s investment returns by noting that he runs a distressed contrarian strategy ( therefore somewhat short vol , but not as much as other relative value strategies , and less subject to asymmetric drawdowns ) , and he also keeps around a great deal of cash which , as we mentioned above , has some long vol characteristics , especially when coming out of recessions when the opportunity value of cash is the greatest .the most important result which we obtained here is that the vast majority of mainstream investment strategies have negative volatility exposure , whether or not they actually know it .we call this an _ implicit volatility exposure _ , to distinguish it from _ implied volatility exposure _ which one obtains when trading options .it is very difficult to manage such an exposure if the portfolio manager does not recognize it explicitly .the false sense of being hedged or being market - neutral lulls many portfolio managers into the trap of high leverage , and leaves them exposed to the risk of sudden margin calls when the vol goes up , which is the most painful way to discover one s dependence on a risk factor .we believe that this was in some way a contributing factor behind the quant crunch of 2007 , as well as behind the more benign but equally befuddling behavior of quant strategies in 2009 after the beginning of the fed s quantitative easing program .just a month before the quant crunch , two bear stearns hedge funds focused on structured credit blew up , followed by sowood , another credit hedge fund focused on leveraged loan investing . that marked the turning point for the market volatility which went up from the multi - year lows .as was argued in , the actual quant crunch occurred due to a liquidity squeeze in the crowded quantitative long / short equity space .but what precipitated this squeeze was the notch up in the volatility .as to the 2009 ` great recovery ' which left behind many traditional quant managers , it also seems to fit our description . 
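the short - strangle / long - strangle character of convergence and momentum strategies described above can be illustrated with a toy monte carlo experiment : on a driftless random walk , a contrarian ( convergence ) rule has a p&l that is negatively related to the size of the terminal move , i.e. it is hurt by the wings of the distribution , while a trend - following ( momentum ) rule shows the opposite dependence . the python sketch below is purely illustrative ; the trading rules and parameters are assumptions , not a description of any actual strategy .

import numpy as np

rng = np.random.default_rng(1)

def toy_pnl(strategy, n_days=250, vol=0.02):
    # p&l of a toy strategy on one driftless random-walk path, together with
    # the absolute size of the terminal move (the 'wing' of the distribution)
    dx = rng.normal(0.0, vol, n_days)
    x = np.cumsum(dx)
    if strategy == "convergence":
        pos = -np.sign(x)            # fade deviations from the starting level
    else:                            # "momentum"
        pos = np.sign(x)             # follow the cumulative move
    pnl = float(np.sum(pos[:-1] * dx[1:]))
    return pnl, abs(x[-1])

# on a pure random walk both rules have roughly zero average p&l; the point is
# the opposite sign of their dependence on the size of the terminal move
for strategy in ("convergence", "momentum"):
    results = np.array([toy_pnl(strategy) for _ in range(2000)])
    corr = np.corrcoef(results[:, 0], results[:, 1])[0, 1]
    print("%-11s: mean pnl %+.4f, corr(pnl, |terminal move|) %+.2f"
          % (strategy, results[:, 0].mean(), corr))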
what the fed has done starting from march 2009 ,is acting as a risk absorber , and actively damping the level of all risks in the market .this leads to a fast drop in vol , and correspondingly large outperformance of convergence strategies .however , the outperformance was even greater for the extreme mean - reversion strategy pushing up the prices of every stock that went particularly far down in the preceding crisis period ( what the cnbc pundits were calling a ` melt - up ' of the markets ) .this left the more mainstream slow - turnover quant strategies , which often identify risky stocks to short in order to maintain market - neutrality , dangerously squeezed by this run - up in the ` crappy ' stocks .so , what should the investors do with all these insights ?there are several ways one can manage the implicit volatility risks , once they have been identified .first and foremost , one can employ style and strategy diversification .combining the strategies which are naturally short vol with those that are long will lead to a better overall portfolio .this is why all funds of funds must have a healthy allocation to ctas , and other trend - following strategies , and to macro fixed income and fx strategies , since those are the ones that have positive vol exposure yet still can offer a reasonable capacity .an even better approach is to add explicit ( or implied ) volatility exposure to the portfolio , by investing in volatility strategies .volatility as an asset class has been on the rise in the recent years , although it still suffers from the lack of respect resulting in the absence of appropriate silo in which to put an allocation to this strategy ( that is to say , it has not yet achieved a mainstream status ) and many institutional investors do not think it can handle the size issues very well . also , a lot of investors associate volatility strategies with just protective put option buying ( ` black swan ' insurance ) , the cost of which is often prohibitive .we actually prefer dynamic volatility strategies mixing both directional , long / short , relative value and tail risk protection styles , which can provide the right amount of protection while also earning some alpha along the way , rather than costing an arm and a leg as pure insurance strategies do .we believe that the ideal volatility strategy mix must achieve return characteristics combining small but positive true alpha with positive ( and large ) convexity compensation , with fatter upside tail of the return shocks and low dynamic asymmetry .this can not be achieved by trading volatility in any single asset class , but it might be possible if one builds a comprehensive volatility strategy encompassing different asset classes and investment styles .i would like to conclude by expressing my thanks to many colleagues with whom i discussed these topics over the years , including robert engle , artem voronov , bob litterman , peter zangari , emanuel derman , mark howard , arthur tetyevsky , eric rosenfeld , marc potters and jean - philippe bouchaud .
we suggest an empirical model of investment strategy returns which elucidates the importance of non - gaussian features , such as time - varying volatility , asymmetry and fat tails , in explaining the level of expected returns . estimating the model on the ( former ) lehman brothers hedge fund index data , we demonstrate that the volatility compensation is a significant component of the expected returns for most strategy styles , suggesting that many of these strategies should be thought of as being ` short vol ' . we present some fundamental and technical reasons why this should indeed be the case , and suggest explanations for the exceptional cases exhibiting ` long vol ' characteristics . we conclude by drawing some lessons for hedge fund portfolio construction .
time - discrete dynamical systems on a finite state space play an important role in several different contexts. examples of such systems include boolean networks , cellular automata , agent - based models , and finite state machines , to name a few .this modeling paradigm has been used with great success to model natural and engineered systems such as biological networks , social networks , and engineered control systems .it has the advantage of being intuitive and models can be easily simulated on a computer in most cases .a disadvantage of discrete models of this type is that few analytical tools beyond simulation are available .the motivation for this paper is to develop such tools for the analysis of models in biology , but the results are of independent mathematical interest .some theoretical results have been proven for boolean networks and are reviewed in . in that paperthe authors study conjunctive boolean networks , that is , boolean networks for which the future state of each node is computed using the boolean and operator .it is shown that the dynamics of such networks is controlled strongly by the topology of the network .the current paper shows that the results in are valid much more broadly .we briefly describe the main results in .a boolean network on nodes can be viewed as a time discrete dynamical system over the boolean field : where the _ coordinate functions _ are the boolean functions assigned to the nodes of the network .we can associate two directed graphs to .the _ dependency graph _ ( or wiring diagram ) of has vertices corresponding to the variables of .there is a directed edge if the function depends on ( i.e. there is an instance where changing the value of changes the value of ) .that is , encodes the variable dependencies in .the dynamics of is encoded by its _ phase space _ , denoted by .it is the directed graph with vertex set and a directed edge from to if .thus , the graph encodes part of the structure of and the graph encodes its dynamics .the results in relate these two graphs , deriving information about from , in the case where each is a conjunction of the variables for which there is an edge in .these networks were called _ conjunctive _ networks in .it is easy to see that the graph has the following structure .its connected components consist of a unique directed cycle , a _ limit cycle _ , with directed trees " feeding into the nodes of the cycle , representing the transient states of the network , each of which eventually maps to a periodic point . 
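as a concrete illustration of these notions , the following short python sketch builds the phase space of a small conjunctive boolean network ( every coordinate function is the and of its inputs ) and extracts its limit cycles . the wiring diagram used here is a hypothetical three - node example , not the one studied in the paper ; its two simple directed cycles both have length 2 , so its loop number ( defined below ) is 2 , and the sketch indeed finds two fixed points and one limit cycle of length 2 .

from itertools import product

# hypothetical 3-node conjunctive boolean network: f_i is the AND of the
# variables listed in inputs[i]; an edge j -> i in the wiring diagram means
# that f_i depends on x_j
inputs = {0: [1, 2], 1: [0], 2: [0]}

def step(state):
    return tuple(int(all(state[j] for j in inputs[i])) for i in range(len(state)))

# phase space: a directed graph on all 2^n states, each with out-degree one
n = len(inputs)
phase_space = {s: step(s) for s in product((0, 1), repeat=n)}

# follow each state until a state repeats; the repeated tail is its limit cycle
cycles = set()
for s in phase_space:
    seen = []
    while s not in seen:
        seen.append(s)
        s = phase_space[s]
    cyc = seen[seen.index(s):]
    k = cyc.index(min(cyc))                  # canonical rotation of the cycle
    cycles.add(tuple(cyc[k:] + cyc[:k]))

for c in sorted(cycles, key=len):
    print(len(c), c)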
if the graph is strongly connected , that is , there is a directed path from any vertex to any other vertex , then it is shown in that there is a precise closed formula for the number of limit cycles of any given length .this formula depends on a numerical invariant of , its loop number .if is not strongly connected , then there is a sharp lower bound and an upper bound on the number of limit cycles , in terms of the loop numbers of the strongly connected components of and the antichains in the poset of strongly connected components .these two bounds agree for the number of fixed points of , so that one corollary is a closed formula for the number of fixed points of a general conjunctive network .the current paper shows that these results hold in much broader generality , namely for time discrete dynamical systems over any finite set : such that the are constructed from an operator , which has the property that it endows with the structure of a semilattice , that is , is commutative , associative , and idempotent .we will call such functions _ semilattice functions_. that is , any semilattice gives rise to a dynamical system to which the formulas and bounds for limit cycles hold .it is shown furthermore that they hold precisely for the class of semilattice networks .it is important to mention that since semilattice networks are not linear ( see example [ eg : linear ] ) , our results complement those for linear systems in .let be a set with elements and consider a dynamical system with variables over : [ def : junc ] a function is called a _ semilattice operator _ if it satisfies the following ( with notation ) : note that semilattice operators induce the structure of a semilattice on .conversely , the meet or join operations on a semilattice are semilattice operators in the above sense .some examples of semilattice operators are : ^ 2 \\ { \wedge } & = & or \textrm { over } [ 0,1]^2 \\ { \wedge } & = & min \textrm { over } [ 0,m]^2 \\ { \wedge } & = & max \textrm { over } [ 0,m]^2 \ ] ] that is , semilattice operators are a generalization of the conjunctive and disjunctive operators used in .[ example : junctive ] the domain of a semilattice operator can be extended to with by we will call such an extended operator a _ semilattice function_. a dynamical system is called a _ semilattice network _ if there exists a semilattice operator , , such that is a semilattice function for all ( where depends only on ) .let be a semilattice network , , and the adjacency matrix of .we will assume here and in the remainder of the paper that none of the coordinate functions of are constant , that is , all vertices of have positive in - degree .define the following relation on the vertices of : if and only if there is a directed path from to and a directed path from to .it is easy to check that is an equivalence relation .suppose there are equivalence classes .for each equivalence class , the subgraph with the vertex set is called a _ strongly connected component _ of . 
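as a small illustrative check of the operator definition above, the three requirements can be verified by brute force; the sketch below takes min over a finite set {0, ..., m} as the candidate operator (one of the examples listed above).

```python
from itertools import product

m = 3
X = range(m + 1)          # the finite state set {0, ..., m}
op = min                  # candidate semilattice operator

commutative = all(op(a, b) == op(b, a) for a, b in product(X, repeat=2))
associative = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(X, repeat=3))
idempotent = all(op(a, a) == a for a in X)
print(commutative, associative, idempotent)   # True True True for min
```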
the graph is called _ strongly connected _ if consists of a unique strongly connected component .a trivial strongly connected component is a graph on one vertex and no self loop .since such components do not influence the cycle structure of the network , we assume that all strongly connected components are non - trivial .[ eg : junctivenetwork ] our running example for this paper will be the semilattice network with dependency graph given by figure [ fig : example ] and semilattice function .let be a strongly connected component , and let be the semilattice network with dependency graph .let be the semilattice network defined by .that is , the dependency graph of is the disjoint union of the strongly connected graphs , and is obtained from by deleting all edges between strongly connected components .now define the following order relation on the strongly connected components of the dependency graph of the network . in this way we obtain a partially ordered set . in this paper, we relate the dynamics of to the dynamics of its strongly connected components and the poset .example [ fig : example ] ( cont . ) .the dependency graph of has four strongly connected components , , , , and ( bottom , left , right and top , resp . ) .the poset is given by .the _ loop number _ of a strongly connected graph is the greatest common divisor of the lengths of its simple ( no repeated vertices ) directed cycles .the loop number of any directed graph is the least common multiple of the loop numbers of its non - trivial strongly connected components .example [ fig : example ] ( cont . ) . in our running example ,the loop numbers are , , and .in this section we give an exact formula for the cycle structure of semilattice networks with strongly connected dependency graphs . in the next section we will also consider networks with general dependency graphs and give upper and lower bounds for the cycle structure . the formula for conjunctive networks was given and proven in .it is not difficult to show that the proofs are also valid for general semilattice networks .let be a semilattice network with semilattice operator .assume that the dependency graph of is strongly connected with loop number .[ lem : partition ] the set of vertices of can be partitioned into non - empty sets such that each edge of is an edge from a vertex in to a vertex in for some with and . for a proof of this fact see (* lemma 3.4.1(iii ) ) or ( * ? ? ?* lemma 4.7 ) .let be the number of elements of .without loss of generality we assume for the rest of the section that , , , , let be the loop number of ; then there exists such that for all we have where [ prop : iterations ] the proof is analogous to ( * ? ? ?* theorem 4.10 ) . the following corollary states that the period of can be obtained from the topology of its dependency graph .[ cor : period ] the period of is equal to the loop number of . the following corollary states that the long - term dynamics of a semilattice network can be reduced to the dynamics of a rotation over , with as in lemma [ lem : partition ] .[ cor : rotation ] the cycle structure of is equal to the cycle structure of where .let and be defined by and .the proof now follows from the equalities and . for any positive integers that divide , let be the set of periodic points of period and let .the cardinality of the set is .it follows from the fact that if , then a rotation in colors and variables has colorings of periods that divide . 
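as a brief computational aside, the loop number defined above (the gcd of the lengths of the simple directed cycles) can be computed directly for small graphs; the sketch below uses the third-party networkx library on a hypothetical strongly connected digraph, not the running example.

```python
from math import gcd
from functools import reduce
import networkx as nx

# hypothetical strongly connected digraph with simple cycles of lengths 3 and 2
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)])

cycle_lengths = [len(c) for c in nx.simple_cycles(G)]
loop_number = reduce(gcd, cycle_lengths)
print(sorted(cycle_lengths), loop_number)   # [2, 3] and loop number 1
```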
if is a prime number and divides for some , then it is clear that ( there are constant colorings ) .now if is prime and , then the proof follows from the fact that , where is the disjoint union . next we prove theorem [ am - formula ] which gives the exact number of periodic points of any possible length .[ am - formula ] let be a semilattice network whose dependency graph is strongly connected and has loop number . if , then . if and is a divisor of , then the number of periodic states of period is the statement for is part of the previous corollary .now suppose that .for , let .then , where is a disjoint union , in particular , the formula [ am ] follows from the inclusion - exclusion principle and the disjoint union above .if divides , then the number of cycles of length in the phase space of is .hence the cycle structure of is example [ fig : example ] ( cont . ) . in our running examplewe obtain : notice that the cycle structure of depends on the loop number and only .in particular , a semilattice network with loop number 1 on a strongly connected dependency graph only has as limit cycles the fixed points where , regardless of how many vertices its dependency graph has and how large is .let be a semilattice network with dependency graph .let be the strongly connected components of . furthermore ,suppose that none of the is trivial .for , let be the semilattice network that has as its dependency graph and suppose that the loop number of is .in particular , the loop number of is .first we study the effect of deleting an edge in the dependency graph between two strongly connected components .let and be two strongly connected components in and suppose .let be a directed edge in between a vertex in and a vertex in .let be the graph after deleting this edge , and let be the semilattice network such that .[ theorem : f - h ] any cycle in the phase space of is a cycle in the phase space of . in particular componentwise .first notice that appears in the expression if and only if there is a path from to of length .let be a cycle of length in . to show that is a cycle in , it is enough to show that , .thus , the value of is determined already by the value of and the edge does not contribute anything new and hence is a cycle in .this is equivalent to show that for all such that there is an edge from to .suppose the loop number of ( resp . ) is ( resp . ) . now ,any path from ( resp . ) to itself is of length ( resp . ) where and is large enough , see ( * ? ? ?* corollary 4.4 ) .thus there is a path from to of length for any .also , there is a directed path from to of length for any .this implies the existence of a path from to of length for all , then appears in .now , for all .choose large enough such that .then , and . therefore .let be the semilattice network with the disjoint union of as its dependency graph .that is , . 
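as a quick numerical check of the counting in the theorem above: assuming, as in the rotation picture described above, that the number of states whose period divides a divisor e of the loop number is q**e, moebius inversion recovers the number of states of exact period d and hence the number of limit cycles of length d. the values of q and the loop number below are placeholders.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # Moebius function by trial division (adequate for small n)
    if n == 1:
        return 1
    count, m, p = 0, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            count += 1
        else:
            p += 1
    if m > 1:
        count += 1
    return -1 if count % 2 else 1

q, loop_number = 2, 6             # placeholder values
for d in divisors(loop_number):
    exact = sum(mobius(d // e) * q ** e for e in divisors(d))
    print(f"period {d}: {exact} periodic states, {exact // d} limit cycles")
```

summing the periodic-state counts over all divisors of the loop number returns q raised to the loop number, as it should for the rotation picture.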
for , there are no edges between any two strongly connected components of the dependency graph of the network .its cycle structure can be completely determined from the cycles structures of the alone .[ disjoint ] let be the cycle structure of .then the cycle structure of is ( where ) and the number of cycles of length ( where ) in the phase space of is this follows from the fact that if is a periodic point of of period and is a periodic point of of period , then is a periodic point of of period .[ upperbound ] let and be as above .the number of cycles of any length in the phase space of is less than or equal to the number of cycles of that length in the phase space of .that is componentwise . in particular , the period of is a divisor of the loop number of its dependency graph . in , it was shown that the poset structure of gives an algebraic way to combine the cycle structure of to obtain lower and upper bounds for the cycle structure of , where was a conjunctive boolean network .it is not difficult to see that the proofs of these results still also hold for general semilattice networks if the corresponding semilattice operator has a `` neutral '' and an `` absorbent '' element ( analogous to the identities and ) . in order to state the theorem about lower and upper bounds of semilattice networks ,we need the following definitions .let be a semilattice operator .an element is called a _ neutral element _ if for all .an element is called an _ absorbent element _ if for all .all the functions in example [ example : junctive ] have a neutral and absorbent element ; they are : since is finite , every semilattice operator has an absorbent element ( ) .any semilattice operator on can be extended to a set with a neutral element by defining as if and otherwise .let , , be as above and let be the loop number of .furthermore , let be the poset of the strongly connected components .let be the set of all maximal antichains in . for = \{1,\ldots , t\} ] .in the remainder of the paper we assume that is a semilattice network such that its semilattice operator , , has idempotent neutral and absorbent elements , and , resp . for any subset $ ] , let a limit cycle in the phase space of is ( resp . ) if the component of is ( resp . ) for all .denote and .the - and -polynomial associated to are defined as follows : \\j\subseteq m\subseteq [ t ] } } { ( -1)^{|n|+|m|+|i|+|j| } { \langle i_n , j^m\rangle \prod_{k\in \overline{i_n\cup j^m}}{z_k } } } \end{aligned}\ ] ] example [ fig : example ] ( cont . ) . for the poset in figure [ fig : examplecdg ]we obtain : [ thm : main ] with the notation above we have the following coefficient - wise inequalities here , the polynomials and are evaluated using the multiplication " described in theorem [ disjoint ] and coefficient - wise addition .the proof is analogous to the proofs of theorems 6.2 and 7.4 in .note that the left and right sides of the inequalities ( [ ineq ] ) are polynomial functions in the variables , with integer coefficients .that is , the lower and upper bounds are polynomial functions depending exclusively on measures of the network topology .example [ fig : example ] ( cont . ) . 
in our running examplewe obtain : and therefore .it is important to mention that the phase space of has nodes so it is not feasible to obtain information about the cycle structure from exhaustive enumeration .also , although the bounds agree on fixed points for boolean semilattice networks , our example shows that they do not agree for general lattice networks .in this section we characterize semilattice networks ; in order to do this , it is enough to characterize semilattice operators . since semilattice operators are semilattice operations , the number of semilattice operators over a set with elements ( up to permutation ) is the number of semilattices with elements . according to the next proposition ,the number of semilattice operators over a set with elements is the number of lattices of size . although there is no closed formula for the number of lattices of a given size , algorithms for counting themhave been developed .there is a one - to - one correspondence between semilattices with elements and lattices with elements .if is a semilattice with elements , consider , by defining if and otherwise ; also , .it follows that is a lattice with elements . on the other hand ,if is a lattice with elements , let ; then it follows that is a semilattice with elements .we present some results on infinite semilattice networks , both networks on an infinite set and networks on an infinite cartesian product of a set . if is a set with infinitely many elements ( that is , ) , some of the theorems remain valid .notice that if is finite , there exists a finite set that is closed under ; that is , is a semilattice function .suppose that is strongly connected with loop number , and let be a periodic point of .let be a finite subset of that it is closed under ; then we can consider as a periodic point of .therefore , the results on finite semilattice networks apply , and the period of must divide .that is , corollary [ cor : period ] is valid for .now , consider a divisor of and consider finite with at least elements such that is a semilattice operator .then , the number of periodic states of period of is at least ( see theorem [ am - formula ] and notice that is increasing with respect to ) .since , it follows that has infinitely many periodic points and limit cycles of length .then theorem [ am - formula ] ( and the corresponding corollary ) holds for .if is not necessarily strongly connected , suppose has a neutral and an absorbent element .let correspond to , by using the argument in the paragraph above , it follows that if at least a limit cycle of length appears in , then has infinitely many limit cycles of length .if the dimension of is infinite , that is , , then needs to satisfy an additional condition to be properly defined : every ( possibly infinite ) subset has a largest lower bound .this means that there exists such that for all ( in lattice terminology : is a lower bound ) ; and if there is another such , then ( in lattice terminology : is the largest lower bound ) .this additional condition allows for a function to have infinitely many inputs .suppose that is strongly connected with loop number , and let be a periodic point of of period .consider ( see lemma [ lem : partition ] ) ; then there exists such that for all there is a path of length from to and from to .then , , for all ; in particular , and for some . 
then , , similarly ; therefore .it follows that corollary [ cor : period ] and theorem [ am - formula ] remain valid for ( and ) .if is not necessarily strongly connected , suppose has a neutral and an absorbent element .it is not difficult to show that theorem [ thm : main ] remains valid for ( and ) .finally , we show with counterexamples that we can not omit any of the properties of in definition [ def : junc ] .that is , the formulas and bounds derived in this paper are valid exactly for semilattice networks .[ eg : linear ] consider , then is commutative and associative , but not idempotent . consider , defined by .it is not difficult to see that has a unique limit cycle ( a fixed point ) , ; that is , . on the other hand ,the loop number of its dependency graph is , so from theorem [ am - formula ] we would obtain .consider defined over .it is easy to show that associative and idempotent , but not commutative .consider , defined by .it is not difficult to show that has the limit cycle of length 2 , ( it also has 3 fixed points ) . on the other hand ,the loop number of its dependency graph is , so from theorem [ am - formula ] we would obtain .consider defined over .it is easy to show that is commutative and idempotent , but not associative .consider , defined by .it is not difficult to show that has the limit cycle of length 3 , ( it also has 3 fixed points and another limit cycle of length 3 ) . on the other hand ,the loop number of its dependency graph is , so from theorem [ am - formula ] we would obtain .in this paper we have identified a broad class of discrete dynamical systems with a finite phase space for which one can derive strong results about their long - term dynamics in terms of properties of their dependency graphs .we classify completely the limit cycles of semilattice networks with strongly connected dependency graph and provide polynomial upper and lower bounds in the general case .it is our hope that the formulas in this paper are related to general properties of semilattices , which is a subject for future investigation . as mentioned in the introduction ,the motivation for this investigation was the need for theoretical tools to analyze discrete models in biology .an example of such an application is given in , where it is shown that the results about conjunctive boolean networks can be applied to determining the limiting behavior of certain types of epidemiological models .the results in this paper apply to certain types of boolean networks and cellular automata , which in many cases have the property that the update functions are of the same type for all nodes .another model type to which the results in this paper apply in some cases is that of so - called logical models , developed by ren thomas for the purpose of modeling gene regulatory networks .it is shown in that logical models can be translated into the framework of polynomial dynamical systems .if the dynamical system arises from a semilattice function , then the results of this paper give information about the steady states and limit cycles of the model under synchronous update .
time - discrete dynamical systems on a finite state space have been used with great success to model natural and engineered systems such as biological networks , social networks , and engineered control systems . they have the advantage of being intuitive and models can be easily simulated on a computer in most cases ; however , few analytical tools beyond simulation are available . the motivation for this paper is to develop such tools for the analysis of models in biology . in this paper we have identified a broad class of discrete dynamical systems with a finite phase space for which one can derive strong results about their long - term dynamics in terms of properties of their dependency graphs . we classify completely the limit cycles of semilattice networks with strongly connected dependency graph and provide polynomial upper and lower bounds in the general case .
although the first example of a wavelet basis dates back to 1910 , it was not until the early 1980 s , with the work of goupillaud , morlet and grossmann in seismic geophysics , that the wavelet transform ( wt ) became a popular tool for the analysis of signals with non - periodic characteristics , termed _ non - stationary _ signals .the wt allows a signal to be examined in both the time- and frequency - domains simultaneously .the wt as a time - frequency method has replaced the conventional fourier transform ( ft ) in many practical applications . the wt has been successfully applied in many areas of physics including astrophysics , seismic geophysics , turbulence and quantum mechanics , as well as many other fields including image processing , biological signal analysis , genomic dna analysis , speech recognition , computer graphics and multifractal analysis .the term wt is conventionally used to refer to a broad selection of transformation methods and algorithms . in all cases ,the essence of a wt is to expand the input function in terms of oscillations which are localized in both time and frequency .different applications of the wt have different requirements .image compression , for example , often uses the discrete wt ( dwt ) to transform data to a new , orthogonal basis set where the data are hopefully presented in a more redundant form .other applications , particularly signal analysis , use the cwt , sacrificing orthogonality for extra precision in the identification of features in a signal .the principal aim of this paper is to present the wt in a form well suited to the analysis of one - dimensional signals whose frequency components have rapidly varying frequency and amplitude modulations . in order to achieve this aim, we introduce a new ` tunable ' , complex wavelet .this wavelet is based upon the well - known morlet wavelet but is better suited to high - resolution analysis .the features of the wt using the proposed wavelet are understood through an asymptotic stationary - phase approximation to the integral expression of the wt specialized to the new wavelet .we demonstrate the properties of the wt using two example functions , a mathematical function and the other a realistic physical model function .we are interested in exploiting the complex wt in the analysis of structural correlation functions which describe the atomic structure of disordered materials .as these correlation functions have different spatial regimes , they may be classed as non - stationary signals . despite the overwhelming success of the wt in other fields , we are aware of only one other paper on this application of the wt . in that application , ding et al . studied an experimentally observed correlation function of vitreous silica using the mexican hat wavelet .we improve upon this single , prior application in three significant ways , namely the use of the complex wt , a ` tunable ' wavelet and the method of interpreting the resulting transforms .we then analyze the reduced radial distribution function ( rrdf ) of a structural model of a one - component glass with pronounced icosahedral local order .the resulting wt clearly shows the existence of different frequency components in the rrdf and their exponential decay .these features were not clearly detectable by earlier methods ( c.f . ref . ) . in sec .[ sec : formulation ] we review the mathematical framework of the wavelet transform and discuss some mother wavelet functions before modifying an existing wavelet for our purposes . 
in sec .[ sec : instantaneous frequency and amplitude ] we consider the wt of a general oscillatory signal using the new wavelet .the results are then used to interpret the wavelet transforms of a variable - frequency example function in sec .[ sec : example function ] and of the rrdf in sec .[ sec : reduced radial distribution function ] .concluding remarks can be found in sec . [sec : conclusions ] .the underlying wt used in this paper can be completely described as a one - dimensional complex , continuous wt using wavelets of constant shape .we begin by examining the formulation of this wt in terms of an integral transform , before examining the choice of mother wavelet function .for simplicity we use time - frequency terminology , considering the signal to be an input function of time , .the cwt is an integral transformation which expands an input function in terms of a complete set of basis functions .these basis functions are all the same shape as they are defined in terms of dilation by and translation by of a mother wavelet function : with and .the cwt is defined as the inner product: the original formulation of the cwt used a prefactor to give a normalization to unity , ) .we choose an alternative prefactor ( giving ) after delprat et al . .as we shall see , this formulation of the cwt allows for simple frequency identification by examining the maxima in the modulus of the cwt with respect to the scale . in order to understand the cwt, it is useful to relate it to the ft .the ft has a non - localized , plane - wave basis set and , therefore , has a single transform parameter - the frequency .in contrast , the basis set of the cwt contains localized oscillations characterised by two transform parameters - the scale ( or dilation ) and the translation ( or position ) .it is this critical difference which makes the cwt preferable for the analysis of non - stationary signals .we are free to choose a functional form for , subject to some constraints .some of these constraints are forced upon us whereas others arise from the practical usefulness of the resulting transform . in order to recover a function from its wavelettransform via the _ resolution of the identity _ , must satisfy an _admissibility condition_. although we do not make direct use of the resolution of the identity in this paper , we require that our choice of satisfies this condition to ensure that all information about the signal is retained by the transform .the admissibility condition is essentially that the ft , , satisfies the relation , equivalent to requiring that the mother wavelet and , hence , the basis wavelets , have a mean of zero . beyond simply satisfying the admissibility condition , it is practically useful to create mother wavelet functions which mimic features of interest in the signal . 
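a direct numerical discretisation of the integral transform described above can help fix ideas. the sketch below uses a morlet-type mother wavelet and a 1/a prefactor; both the prefactor convention and the omission of the small admissibility correction are simplifying assumptions of this sketch, not statements about the paper's exact choices.

```python
import numpy as np

def morlet(t, sigma=6.0):
    # complex modulated Gaussian; the small correction term needed for exact
    # admissibility is omitted here (it is negligible for sigma of this size)
    return np.pi ** -0.25 * np.exp(1j * sigma * t - 0.5 * t ** 2)

def cwt(signal, t, scales):
    # brute-force inner products of the signal with dilated/translated wavelets
    dt = t[1] - t[0]
    out = np.empty((len(scales), len(t)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            w = np.conj(morlet((t - b) / a)) / a      # 1/a normalisation (assumption)
            out[i, j] = np.sum(signal * w) * dt
    return out

t = np.linspace(0.0, 10.0, 400)
sig = np.cos(2.0 * np.pi * (1.0 + 0.15 * t) * t)      # chirp-like test signal
W = cwt(sig, t, np.geomspace(0.05, 1.0, 40))
print(W.shape)                                        # (number of scales, number of positions)
```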
in the case of time - frequency analysis ,mother wavelet functions are chosen which represent localized sinusoidal oscillations .the resulting wavelet transforms can then be used to extract instantaneous measures of frequency and amplitude .the uncertainty principle dictates that the product of the time and frequency uncertainties of such wavelets has a lower bound .it is no surprise , therefore , that this class of mother wavelet functions are typically based upon gaussians .however , it is still possible to trade temporal precision for frequency precision by altering the number of oscillations in the envelope of the mother wavelet .the simplest such wavelet is the `` mexican hat '' wavelet which mimics a single oscillation and is commonly used in signal analysis .the functional form of this wavelet is the second derivative of a gaussian .this wavelet offers good localisation in the time domain whilst retaining admissibility .however , this wavelet has two major drawbacks for general signal analysis : ( i ) useful information can only be extracted from the wt at discrete intervals where the wavelets are in phase with the signal , and ( ii ) the time - frequency resolution is fixed .the former drawback has been overcome by the invention of complex wavelets which mimic localized plane waves .the wt can be computed separately for the real and imaginary parts , yielding a complex scalar field , , where the modulus and argument of represent the amplitude and phase of the signal , respectively .the latter drawback has been overcome by the invention of tunable wavelets which include an additional parameter to the mother wavelet function controlling the number of oscillations in the envelope .goupillaud , morlet and grossman overcame these problems simultaneously with the invention of a modulated gaussian wavelet , now known as the `` morlet '' wavelet .this wavelet has a parameter , , which controls the number of oscillations in the envelope , allowing time and frequency uncertainties to be traded .thus the morlet wavelet can be expressed as: where and the parameter allows the admissibility condition to be satisfied .the ft of this wavelet is: from eq .( [ eqn : spectrum of the morlet wavelet ] ) it is clear that the admissibility condition implies that many previous applications of the morlet wavelet have been concerned with signals containing slowly varying frequency and amplitude components for which large values of ( ) are applicable and ( ) is negligible .however , we are interested in applying this type of analysis to signals which contain rapidly varying frequencies and amplitudes . in this case, the ability to use small values of becomes important as we wish to maximize the temporal resolution by minimizing whilst still being able to separate the various frequency components in the signal and , consequently , is no longer negligible .although the morlet wavelet is admissible at small , the temporal localization is unsatisfactory ( see fig . [ fig : envelope of the morlet wavelet ] ) ; namely , undergoes a transition from mono - modality to bimodality ( a single ridge at large splits into two symmetric ridges for small ) .the wavelet transform of a signal performed using a wavelet which has a bimodal envelope results in the signal being localized about two different positions ( see fig . 
[fig : morlet wavelet sigma=3d1 ] ) .this produces unwanted artefacts in the resulting instantaneous frequency and amplitude measurements ( shown later in figs .[ fig : example function wt period ] and [ fig : example function wt amplitude ] ) .therefore we remedy this drawback by modifying the morlet wavelet to produce a new wavelet , , such that has a single , global maximum for all . for the new wavelet we choose to replace the single , normalization constant in the morlet wavelet with two new parameters and determined by two conditions : ( i ) total normalization of the wavelet to unity , and ( ii ) equal contributions to the normalization from the real and imaginary parts .the new wavelet has the following functional form: , \label{eqn : harrop wavelet } \end{aligned}\ ] ] where and are given by : the fourier transform of this wavelet is: .\label{eqn : harrop wavelet spectrum } \end{aligned}\ ] ] it is worthwhile noting that the real part of this new wavelet recovers the functional form of the mexican hat wavelet in the limit : = \sqrt{\frac{2}{3}}\pi ^{-\frac{1}{4}}e^{-\frac{1}{2}t^{2}}(t^{2}-1).\ ] ] thus the new wavelet allows a complete transition from very high temporal localization , ( the mexican hat wavelet ) , to maximum frequency localization , ( plane wave ) . even in the limit of minimal , remains mono - modal ( see figs .[ fig : envelope of the harrop wavelet ] and [ fig : harrop wavelet sigma=3d1 ] ) .thus we have improved upon the temporal localisation of the morlet wavelet .we have also checked that , using the new wavelet , the original signal can be recovered by the resolution of the identity operator .in this section we demonstrate how the new wavelet may be used to extract instantaneous frequencies and amplitudes from a signal via the cwt .the following analysis is based upon the stationary - phase approach of delprat et al . but is specialized to the new wavelet .the wavelet transform at a given scale and translation is given by the integral ( eq .[ eqn : integral expression for the wavelet transform ] ) of a rapidly oscillating integrand .this integral may be rewritten in the form: where : with $ ] and . in order to take the integral in the stationary - phase approximation , we first approximate by a gaussian . from eq .[ eqn : dilation and translation of the mother wavelet ] we have , where is taken to be a normalized gaussian whose variance is equal to the variance of , giving: , \ ] ] where the variance can be found analytically: .\nonumber \end{aligned}\ ] ] the approximate envelope , , tends to the true envelope , ( see fig .[ fig : harrop wavelet gaussian approximations ] ) .\a ) b ) we assume ( without loss of generality ) that there is a single point of stationary phase for the integrand in eq .( [ eqn : wavelet transform complex integrand ] ) at . under the conventional asymptotic approximation: expand around the stationary point assuming and substitute the approximate expression for from eq .( [ eqn : approximate psi envelope ] ) into the integral , which can then be taken .this gives an approximate expression for the squared modulus of the cwt using the new wavelet : .\label{eqn : accurate wt modulus } \end{aligned}\ ] ] further , assuming the frequency of the mother wavelet to be constant ( ) and the frequency variation of the signal to be slow in the region of interest ( i.e. ) then: for a monochromatic signal ( i.e. 
a signal which contains only a single frequency at any given position ) , there is a scale at any given which corresponds to a basis wavelet centred at whose frequency is equal to the local frequency of .the scale of this wavelet identifies the instantaneous frequency of the signal and may be found as the solution of the equation . from the definition of the points of stationary phase ( ), this corresponds to , an alternative equation which can be used to find . with the choice of normalization used in eq .( [ eqn : dilation and translation of the mother wavelet ] ) , it is clear that these points maximize the expression for the squared modulus of the cwt with respect to as obtained by the stationary - phase approximation , eq .( [ eqn : slow frequency variation form of the wt ] ) . as the cwt is a linear operation , superimposed frequency componentsare manifested as different scales which locally maximize ( assuming sufficiently large to resolve the peaks ) .the curves formed by the points are known as the `` ridges '' of the transform .the trajectory of each ridge can be used to extract the amplitude and frequency modulations of the corresponding signal components . an approximate expression for the instantaneous amplitude , , of a signal componentcan be obtained by rewriting the stationary - phase approximation to the squared modulus of the wt ( eq . [ eqn : slow frequency variation form of the wt ] ) on the ridge , , in terms of : there are two different well - known approximations to the instantaneous frequency . as each has relative merits , we consider both . the simplest approximation to the instantaneous frequency is the rate of change of the phase of the cwt with respect to , evaluated at :\right ] _ { b = t}\right| .\ ] ] the derivation for this expression using the new wavelet is identical to that of the morlet wavelet given by delprat et al . .the other approximation to the instantaneous frequency uses the equality of the frequency of the signal and of the wavelet on a ridge to create an expression for the frequency of the signal as a function of the scale on the ridge and the frequency of the mother wavelet , : conventionally , is taken to be the underlying modulating frequency , , of the mother wavelet function .however , this is a poor approximation at small .therefore , the obvious definition of is the modal average ( position of the highest peak ) in the fourier power spectrum .unfortunately , this expression for can not be found analytically .however , even at small , the spectrum is nearly symmetric about the main peak ( see fig . [fig : harrop wavelet spectrum at sigma=3d0 ] ) .therefore , the mean average is always a good approximation to the modal average ( see fig .[ fig : harrop wavelet mean and mode frequency ] ) and , unlike the mode , the mean can be found analytically : using this expression for in conjunction with the relationship between scale and frequency in eq .( [ eqn : instantaneous frequency from mod ] ) , a cwt may be plotted as a function of time and frequency .delprat et al . 
proposed that the phase - based instantaneous frequency , eq .( [ eqn : instantaneous frequency from arg ] ) , is more accurate than the modulus - based measurement , eq .( [ eqn : instantaneous frequency from mod ] ) , and suggested an iterative algorithm for extracting signal components .carmona et al .have since shown that the modulus - based measurement is extremely resiliant to noise and have suggested numerous methods for extracting signal components using this approach .thus the instantaneous frequencies and amplitudes of components in a signal may be found from the cwt at the points where is locally maximized with respect to .these maxima can be identified numerically from a set of samples of generated by discrete approximation to the integral expression for the cwt , eq .( [ eqn : integral expression for the wavelet transform ] ) .once found , the maxima may be interpreted using the approximate analytic results given above .the method described in the previous section is most easily clarified by the following examples .first , we choose to apply the method to the simple , variable - frequency function ( see fig .[ fig : example function ] ) : the ft conveys little useful information about the original function. however , the modulus of the wt does convey useful information , particularly when plotted as a function of frequency instead of scale ( see fig .[ fig : example function wt ] ) as this highlights the linearly changing local frequency of ( given by ) as a function of .the cwt of contains a single , ` v ' shaped ridge at .this ridge reflects both the frequency modulation of ( see fig .[ fig : example function wt period ] ) and the amplitude modulation ( see fig . [fig : example function wt amplitude ] ) . in all cases ,the results show fluctuations linked with the phase of the signal .however , compared to the morlet wavelet , the new wavelet produces much smaller fluctuations in all results .\a ) b ) we now apply the method described in sec .3.1 to a function of practical interest .we choose to study the rrdf of a model glass structure .the rrdf analyzed in this paper is taken from a structural model of the icosahedral ( ic ) glass created in a classical molecular - dynamics simulation .we calculate the transform as detailed in sec . 2 and perform the analysis as discussed in sec . 3 in order to study the components of function is considered to be zero outside the range , where is half the side of the cubic simulation super - cell which contains atoms .the rrdf , , is defined in terms of the atomic density as: where is the average atomic density .this is shown in fig .[ fig : rrdf]a for the ic glass . reduced lennard - jones units ( r.u . )are used for length with the mean nearest - neighbour separation being .u .. the damped extended - range density fluctuations are clearly visible , extending beyond .u .( see the inset in fig .[ fig : rrdf]a ) .\a ) b ) from the fourier power spectrum of ( shown in fig .[ fig : rrdf]b ) , it is clear that contains many components with different frequencies .the highest peak in occurs at the frequency .this peak has non - zero width implying that the real - space fluctuation in corresponding to this peak has a spatially varying amplitude but we can not deduce a functional form from this alone . plotting the modulus of the cwt using different envelope widths , shown as a function of and ( ) in fig . [fig : rrdf wt ] , allows to be examined in the time - frequency plane . 
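a sketch of the modulus-based ridge extraction described above: given a precomputed cwt matrix (for instance from the earlier sketch), the ridge is the scale maximising the modulus at each position, and the instantaneous frequency and amplitude are read off along it. the conversion factor f_psi (the mean frequency of the mother wavelet) is treated here as a supplied constant, and a single signal component is assumed.

```python
import numpy as np

def extract_ridge(W, scales, f_psi):
    """Single-component, modulus-based ridge extraction from a CWT matrix W
    of shape (len(scales), n_positions)."""
    mod = np.abs(W)
    idx = mod.argmax(axis=0)                 # maximising scale index at each position b
    cols = np.arange(W.shape[1])
    inst_freq = f_psi / scales[idx]          # frequency inferred from the ridge scale
    inst_amp = mod[idx, cols]                # amplitude along the ridge
    return inst_freq, inst_amp
```

for a signal with several superimposed components, one would instead track each local maximum of the modulus over scale separately, as discussed above.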
using small results in high spatial resolution but poor frequency resolution and the ridgesare smeared together ( see fig . [fig : rrdf wt]a ) .larger values of separate the ridges at the cost of decreasing the spatial resolution ( see fig .[ fig : rrdf wt]b ) .unlike the example function from the previous section , contains several components with different frequencies which , particularly when using large , manifest themselves as separate ridges in the wt . in this paperwe consider only the prominent ridge at but the same analysis can be applied to the ridges seen at other frequencies .the ridge along shows that the prominent frequency component identified from the fourier power spectrum of is particularly strong around but decays away with greater . this trajectory of the ridgecan then be used to extract the instantaneous frequencies and amplitudes of this component in .\a ) b ) the instantaneous frequency found using ( see fig . [fig : rrdf instantaneous frequency from mod ] ) remains constant over a large range of .as expected , the scale at which this ridge occurs in the cwt of corresponds to the position of the prominent peak in the fourier power spectrum of .the amplitudes of components in an rrdf are expected to tend to zero in the limit for a disordered material due to the absence of long - range order . the instantaneous amplitude of the dominant ridge extracted from the cwt using the new wavelet , eq .( [ eqn : harrop wavelet ] ) , is shown plotted on a logarithmic scale in fig .[ fig : rrdf wt amplitude ] . the amplitude is clearly seen to decay exponentially in the region .the reason for the exponential form of this decay ( also observed by ding et al . for silica glass ) is not yet known .the method used by ding et al . could not reproduce the frequency modulation of the damped density fluctuations in and their observed amplitude modulation contained only six points which were noted to decay approximately exponentially . in comparison ,our method reproduces true , instantaneous frequencies ( analogous to frequencies obtained by fourier analysis ) , showing the frequency modulation of individual density fluctuations in , and the amplitude modulations of these variable - frequency components , as a continuum of points .this gives much more compelling evidence for the exponential decay first observed by ding et al .in addition , we can detect a significant deviation from the exponential decay of at large ( see fig .[ fig : rrdf wt amplitude ] ) .this may either be due to statistical noise from the finite nature of the simulation or due to the use of periodic boundary conditions in the model producing effective long - range order .the precise reason needs further investigation .we have identified the complex , continuous wavelet transform using wavelets of constant shape as a method well suited to the time - frequency analysis of one - dimensional functions . 
for our target application , namely the analysis of functions with components which have rapidly varying frequency and amplitude modulations ,we have illustrated an important shortcoming of the existing morlet wavelet , explained the origin of this shortcoming and proposed a new wavelet which overcomes the problem .in addition , we have specialized an existing method for extracting instantaneous frequency and amplitude measurements from signals to the new wavelet .two example functions have been analysed using the new wavelet and new method of analysis .the first , a simple variable - frequency function , illustrates the significant improvement of the new wavelet over the morlet wavelet and gives numerical evidence that our method of analysis is accurate .the second is a real - world example of a direct - space atomic correlation function of a glass which highlights the advantages of the method over the conventional fourier transform and greatly improves upon the single , previous wavelet analysis of such a function by ding et al . .we have successfully used the wt to analyze the reduced radial distribution function ( rrdf ) of a model glass and can immediately identify previously undetected features . the dominant component in the rrdf ( the damped extended - range density fluctuations ) has a period which rapidly settles to a constant value .other components with different frequencies are present in the rrdf .these oscillations all have approximately exponentially decaying real - space amplitudes .
this paper describes a method for extracting rapidly varying , superimposed amplitude- and frequency - modulated signal components . the method is based upon the continuous wavelet transform ( cwt ) and uses a new wavelet which is a modification to the well - known morlet wavelet to allow analysis at high resolution . in order to interpret the cwt of a signal correctly , an approximate analytic expression for the cwt of an oscillatory signal is examined via a stationary - phase approximation . this analysis is specialized for the new wavelet and the results are used to construct expressions for the amplitude and frequency modulations of the components in a signal from the transform of the signal . the method is tested on a representative , variable - frequency signal as an example before being applied to a function of interest in our subject area - a structural correlation function of a disordered material - which immediately reveals previously undetected features .
imputation is often used to handle missing data . for inference , if imputed values are treated as if they were observed , variance estimates will generally be underestimates ( ) . to account for the uncertainty due to imputation , proposed multiple imputation which creates multiply completed datasets to allow assessment of imputation variability .multiple imputation is motivated in a bayesian framework ; however , its frequentist validity is controversial . claimed that multiple imputation can provide valid frequentist inference in various applications ( for example , ) . on the other hand , as discussed by , , , , , , , and , the multiple imputation variance estimator is not always consistent . for multiple imputation inference to be valid ,imputations must be proper . a sufficient condition is given by , the so - called congeniality condition , imposed on both the imputation model and the form of subsequent complete - sample analyses , which is quite restrictive for general purpose estimation .rubin s variance estimator is otherwise inconsistent . pointed out that multiple imputation that is congenial for mean estimation is not necessarily congenial for proportion estimation .therefore , some common statistical procedures , such as the method of moments estimators , can be incompatible with the multiple imputation framework . in this paper, we characterize the asymptotic bias of rubin s variance estimator when the method of moments estimator is used in the complete - sample analysis .we also discuss an alternative variance estimator that can provide asymptotically valid inference for method of moments estimation .the new variance estimator is compared with rubin s variance estimator through two limited simulation studies in .suppose that the sample consists of observations , which is an independent realization of a random vector .for simplicity of presentation , assume that is a scalar outcome variable and is a -dimensional covariate .suppose that is fully observed and is not fully observed for all units in the sample .without loss of generality , assume the first units of are observed and the remaining units of are missing .let be the response indicator of , that is , if is observed and otherwise . denote and .we further assume that the missing mechanism is missing at random in the sense of .the parameter of interest is , where is a known function .for example , if , then is the population mean of , and if , then is the population proportion of less than .assume that the conditional density belongs to a parametric class of models indexed by such that for some and the marginal distribution of is completely unspecified . to generate imputed values for missing outcomes from , we need to estimate the unknown parameter , either by likelihood - based methods or by bayesian methods .the multiple imputation procedure employs a bayesian approach to deal with the unknown parameter , which unfolds in three steps : _ step 1 ._ ( imputation ) create complete datasets by filling in missing values with imputed values generated from the posterior predictive distribution . specifically , to create the imputed dataset ,first generate from the posterior distribution , and then generate from the imputation model for each missing ._ step 2 . _( analysis ) apply the user s complete - sample estimation procedure to each imputed dataset .let be the complete - sample estimator of applied to the imputed dataset and be the complete - sample variance estimator of ._ step 3 . 
_( summarize ) use rubin s combining rule to summarize the results from the multiply imputed datasets .the multiple imputation estimator of is , and rubin s variance estimator is where and .if the method of moments estimator of is used in step 2 , the multiple imputation estimator of becomes where . to derive the frequentist property of , we rely on the bernstein - von mises theorem ( ; chapter 10 ) , which claims that under regularity conditions and conditional on the observed data , the posterior distribution converges to a normal distribution with mean and variance , where is the maximum likelihood estimator of from the observed data and is the inverse of the observed fisher information matrix with . as a result ,assume that is sufficiently smooth in , conditional on the observed data , we have \cong e\{g(y)\mid x_{i};\hat{\theta}\ } , ] , and the first term , , is the variance of the sample mean of . to estimate this term ,consider as in ( [ eq : rubin s var ] ) , and we have and $ ] .therefore , the first term can be estimated by . by the strong law of large numbers , as .the second term , , reflects the variability associated n the imputed values . to estimate this term, we use over - imputation in the sense that the imputation is carried out not only for the units with missing outcomes , but also for the units with observed outcomes .over - imputation has been used in model diagnostics for multiple imputation .let for and .define and the key insight is based on the following observations : and ; therefore , the second term of ( [ eq : tvar ] ) can be estimated by . combining the estimators of the two terms in ( [ eq : tvar ] ), we have the new multiple imputation variance estimator , given in the following theorem . under the assumptions of theorem 1 ,the new multiple imputation variance estimator is where , with defined in ( [ cm ] ) and being the usual between - imputation variance in ( [ eq : rubin s var ] ) . is asymptotically unbiased for estimating the variance of the multiple imputation estimator in ( [ 1b ] ) as . to account for the uncertainty in the variance estimator with a small to moderate imputation size ,a interval estimate for is , where is an approximate number of degrees of freedom based on satterthwaite s method ( 1946 ) given in supplementary material . from simulation studies ,we find that using gives similar satisfactory results as using the formula we provided . 
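for reference, the standard combining rule used in step 3 above (pooled point estimate, average within-imputation variance, and between-imputation variance inflated by 1 + 1/m) can be written as a short helper; a minimal sketch:

```python
import numpy as np

def rubin_combine(theta_hats, U_hats):
    """Pool M completed-data point estimates and their variance estimates."""
    theta_hats = np.asarray(theta_hats, dtype=float)
    U_hats = np.asarray(U_hats, dtype=float)
    M = theta_hats.size
    theta_bar = theta_hats.mean()
    U_bar = U_hats.mean()                     # average within-imputation variance
    B = theta_hats.var(ddof=1)                # between-imputation variance
    total_var = U_bar + (1.0 + 1.0 / M) * B   # Rubin's variance estimator
    return theta_bar, total_var
```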
as a practical matter , is preferred .the proposed variance estimator in ( [ eq : new2 ] ) is also asymptotically unbiased when is the maximum likelihood estimator of ( see supplementary material for proof ) .therefore , the proposed variance estimator is applicable regardless of whether the maximum likelihood estimator or the method of moments estimator is used for the complete - sample estimator .the price we pay for the better performance of our variance estimator is an increase in computational complexity and data storage space , which requires datasets , with of them including the over - imputations and the last one containing the original observed data .however , when one s concern is with valid inference of multiple imputation , as in this paper , our proposed variance estimator based on over - imputation is preferred over that of rubins .in addition , given over - imputations , the subsequent inference does not require the knowledge of the imputation models .this is important because data analysts typically do not have access to all the information that the imputers used for imputation .our study would promote the use of over - imputation at the time of imputation , which not only allows the imputers to assess the adequacy of the imputation models , but also enables the analysts to carry out valid inference without knowledge of the imputation models .to test our theory , we conduct two limited simulation studies . in the first simulation , monte carlo samples of size are independently generated from where , and with . in the sample , we assume that is fully observed , but is not .let be the response indicator of and , where .we consider two scenarios : ( i ) and ( ii ) , with the average response rate about .the parameters of interest are and . for multiple imputation , imputed values are independently generated from the linear regression model using the bayesian regression imputation procedure discussed in , where and are treated as independent with prior density proportional to . in each imputed dataset , we adopt the following complete - sample point estimators and variance estimators : , , , and . the relative bias of the variance estimator is calculated as .the confidence intervals are calculated as , where is the quantile of the distribution with degrees of freedom . for rubins method , with , , , and .the coverage is calculated as the percentage of monte carlo samples where the estimate falls within the confidence interval . from table 1 , for , under scenario ( i ) , the relative bias of rubin s variance estimator is , which is consistent with our result in ( [ eq:1 ] ) with , where , , and . under scenario ( ii ) , the relative bias of rubin s variance estimator is , which is consistent with our result in ( [ eq:1 ] ) with , where , , and . the empirical coverage for rubin s method can be over or below the nominal coverage due to variance overestimation or underestimation . on the other hand , the new variance estimator is essentially unbiased for these scenarios .in the second simulation , monte carlo samples of size are independently generated from where , and with .the parameters of interest are and .we consider two different factors for simulation .one is the response mechanism : missing completely at random and missing at random . for missing completely at random , . for missing at random , , where and with the average response rate about other factor is the size of multiple imputation , with two levels and . 
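a compact sketch of the kind of imputation used in the simulations above: bayesian normal linear regression imputation with prior proportional to 1/sigma^2 (an assumption of this sketch), producing both ordinary imputations for missing outcomes and over-imputations for every unit. the exact models and estimators of the simulation study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_posterior(x_obs, y_obs):
    # posterior draw of (beta, sigma^2) for y = beta0 + beta1*x + error,
    # under the noninformative prior proportional to 1/sigma^2
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    beta_hat, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    resid = y_obs - X @ beta_hat
    n, p = X.shape
    sigma2 = resid @ resid / rng.chisquare(n - p)
    beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(X.T @ X))
    return beta, sigma2

def over_impute_once(x, y, r):
    """Draw an imputed outcome for every unit (observed or not); return the
    completed data (observed values kept) and the full vector of imputations."""
    beta, sigma2 = draw_posterior(x[r == 1], y[r == 1])
    y_star = beta[0] + beta[1] * x + rng.normal(0.0, np.sqrt(sigma2), x.size)
    y_completed = np.where(r == 1, y, y_star)
    return y_completed, y_star
```

applying the imputation m times and feeding the resulting point estimates into the combining rule gives the ordinary multiple imputation estimator; the imputed values drawn for the observed units are what the over-imputation approach additionally retains.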
from table 2 ,regarding the relative bias , rubin s variance estimator is unbiased for , with absolute relative bias of less than , and our new variance estimator is comparable with rubin s variance estimator with absolute relative bias of less than .rubin s variance estimator is biased upward for , with absolute relative bias as high as ; whereas our new variance estimator reduces absolute relative bias to less than .regarding confidence interval estimates , for , the confidence interval calculated from our new method is slightly wider than that from rubin s method , because our new method uses a smaller number of degrees of freedom in the distribution .however , for , the confidence interval calculated from our new method is narrower than that from rubin s method even with a smaller number of degrees of freedom in the distribution , due to the overestimation in rubin s method .rubin s method provides good empirical coverage for in the sense that the empirical coverage is close to the nominal coverage ; however , the empirical coverage for reaches to for confidence intervals , and for confidence intervals , due to variance overestimation .in contrast , our new method provides more accurate coverage of confidence interval for both and at and levels .c.i . , confidence interval ; ; ; rubin / new , rubins / new variance estimator .c.i . , confidence interval ; ; ; rubin / new , rubins / new variance estimator .our method can be extended to a more general class of parameters obtained from estimating equations .let be defined as a solution to the estimating equation .examples of include mean of , proportion of less than , quantile , regression coefficients , and domain means .a similar approach can be used to characterize the bias of rubin s variance estimator and to develop a bias - corrected variance estimator .we are grateful to xianchao xie and xiaoli meng for many helpful conversations and to the _ biometrika _ editors and four referees for their valuable comments that helped to improve this paper .the research of the second author was partially supported by a grant from us national science foundation and also by a cooperative agreement between the u.s .department of agriculture natural resources conservation service and iowa state university .the supplementary material available at _ biometrika _ online includes the proof of theorem 1 , the proof of theorem 2 , verification of the new variance estimator being unbiased when is the maximum likelihood estimator of , and an approximate number of degrees of freedom .
multiple imputation is a popular imputation method for general purpose estimation . provided an easily applicable formula for the variance estimation of multiple imputation . however , the validity of the multiple imputation inference requires the congeniality condition of , which is not necessarily satisfied for method of moments estimation . this paper presents the asymptotic bias of rubin s variance estimator when the method of moments estimator is used as a complete - sample estimator in the multiple imputation procedure . a new variance estimator based on over - imputation is proposed to provide asymptotically valid inference for method of moments estimation . bayesian method ; congeniality ; missing at random ; proper imputation ; survey sampling .
modeling and simulations enable one to understand and explain the observable phenomena and predict the new ones .this is true , as well , for the mathematical study and modeling of the traffic flow with the aim to get a better understanding of phenomena and avoid some problems of traffic congestion .traffic phenomena are complex and nonlinear , they show cluster formation , huge fluctuations and long - range dependencies .almost forty years ago from the empirical data it was detected that fluctuations of a traffic current on a expressway obey the law for low spectral frequencies .similarly noise is observable in the flows of granular materials . noise , or fluctuations are usually related with the power - law distributions of other statistics of the fluctuating signals , first of all with the power - law decay of autocorrelations and the long - memory processes ( see , e.g. , comprehensive bibliography of noise in the website , review articles and references in the recent paper ) .the appearance of the clustering and large fluctuations in the traffic and granular flows may be a result of synchronization of the nonlinear systems with stopping and driven by common noise , resulting in the nonchaotic behavior of the brownian - type motions , intermittency and noise .the traffic and granular flows usually may be considered as those consisting of the discrete identical objects such as vehicles , pedestrians , granules , packets and so on , they may be represented as consisting of pulses or elementary events and further simplified to the point process model . moreover , from the modeling of the traffic it was found that noise may be the result of clustering and jumping similar to the point process model of noise . on the other hand , noise may be conditioned by the flow consisting of uncorrelated pulses of variable size with the power - law distribution of the pulse durations . in the internet trafficthe flow of the signals primarily is composed of the power - law distributed file sizes .the files are divided by the network protocol into the equal packets .therefore , the total incoming web traffic is a sequence of the packets arising from large number of requests .such a flow exhibits fluctuations , as well .the long - range correlations and the power - law fluctuations in the wide range of the time scale from minutes to months of the expressway traffic flow have recently been observed and investigated using the method of the detrended fluctuation analysis .there are no explanations why the traffic flow exhibit noise behavior in such a large interval of the time .it is the purpose of this paper to present analytical and numerical results for the modeling of flows represented as sequences of different pulses and as a correlated non - poissonian point process resulting in noise and to apply these results to the modeling of the internet traffic .we will investigate a signal of flow consisting of a sequence of pulses , here the function represents the shape of the pulse having influence on the signal in the region of time .the power spectral density of the signal can be written as where , is the observation time and the brackets denote the averaging over realizations of the process .we assume that pulse shape functions decrease sufficiently fast when . 
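A minimal numerical counterpart of the spectral definition above is sketched below: a train of identical pulses with Poissonian onset times is sampled on a time grid, and the realization-averaged periodogram is used as an estimate of the power spectral density. The rectangular pulse shape and all parameter values are illustrative assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

def pulse_train(T=500.0, dt=0.01, rate=0.5, width=0.2, height=1.0):
    """Sample a signal made of identical rectangular pulses whose onset times
    form a Poisson process of the given rate on [0, T)."""
    t = np.arange(0.0, T, dt)
    x = np.zeros_like(t)
    for t_k in rng.uniform(0.0, T, size=rng.poisson(rate * T)):
        x[(t >= t_k) & (t < t_k + width)] += height
    return t, x

def averaged_periodogram(n_real=20, **kwargs):
    """Average periodograms over independent realizations, mirroring the
    realization-averaged spectral definition quoted in the text."""
    spectra = []
    for _ in range(n_real):
        t, x = pulse_train(**kwargs)
        dt = t[1] - t[0]
        X = np.fft.rfft(x - x.mean()) * dt           # finite-time Fourier transform
        spectra.append(2.0 * np.abs(X) ** 2 / t[-1])
    f = np.fft.rfftfreq(len(t), d=dt)
    return f, np.mean(spectra, axis=0)

f, S = averaged_periodogram()
print(f[1:6], S[1:6])
```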
since , the bounds of the integration in eq .can be changed to .when the time moments are not correlated with the shape of the pulse , the power spectrum is after introduction of the functions and the spectrum can be written as equation can be further simplified for the stationary process . then all averages can depend only on , i.e. , and equation then reads introducing a new variable and changing the order of summation , yield here and are minimal and maximal values of the index in the interval of observation . eq .may be simplified to the structure where is the mean number of pulses per unit time and is the number of pulses in the time interval .if the sum when , then the second term in the sum vanishes and the spectrum is when the shape of the pulses is fixed ( -independent ) then the function does not depend on and and , therefore , . then equation yields the power spectrum eq. represents the spectrum of the process as a composition of the spectrum of one pulse , and the power density spectrum of the point process with the area of the pulse .the shapes of the pulses mainly influence the high frequency power spectral density , i.e. , at , with being the characteristic pulse length .therefore the power spectral density at low frequencies for the not very long pulses is mainly conditioned by the correlations between the transit times , i.e. , the signal may be approximated by the point process .the point process model of noise has been proposed , generalized , analysed and used for the financial systems .it has been shown that when the average interpulse , interevent , interarrival , recurrence or waiting times of the signal diffuse in some interval , the power spectrum of such process may exhibit the power - law dependence , , with .the distribution density of the signal intensity defined as may be of the power - law , , with , as well .the exponents and are depending on the manner of diffusion - like motion of the interevent time and , e.g. , for the multiplicative process are interrelated . for the pure multiplicative process where is the exponent of the power - law distribution , , of the interevent time .in general , for relatively slow fluctuations of , the distribution density of the flow , is mostly conditioned by the multiplier .as far as the point process model has recently been analysed rather properly here we will not repeat the analysis and only present some new illustrations . 
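The sketch below gives one simple numerical illustration in the spirit of the point-process model just described: the interevent time performs a slow multiplicative (random-walk) diffusion between two limits, the resulting event stream is binned into a counting signal, and the low-frequency slope of its periodogram is fitted. The update rule, the limits, and all parameters are illustrative simplifications rather than the exact stochastic equation of the cited point-process model.

```python
import numpy as np

rng = np.random.default_rng(3)

def point_process_events(n_events=100_000, tau_min=1e-3, tau_max=1.0, step=0.02):
    """Event times of a point process whose interevent time undergoes a slow
    multiplicative random walk kept inside [tau_min, tau_max]."""
    log_tau = np.log(10.0 * tau_min)
    taus = np.empty(n_events)
    for k in range(n_events):
        log_tau += step * rng.normal()               # slow diffusion of log(tau)
        log_tau = np.clip(log_tau, np.log(tau_min), np.log(tau_max))
        taus[k] = np.exp(log_tau)
    return np.cumsum(taus)

def spectrum_of_events(events, n_bins=2 ** 17):
    """Bin the events into a counting signal and return its periodogram."""
    T = events[-1]
    counts, edges = np.histogram(events, bins=n_bins, range=(0.0, T))
    dt = edges[1] - edges[0]
    X = np.fft.rfft(counts - counts.mean()) * dt
    f = np.fft.rfftfreq(n_bins, d=dt)
    return f[1:], 2.0 * np.abs(X[1:]) ** 2 / T

f, S = spectrum_of_events(point_process_events())
# Rough estimate of the low-frequency spectral exponent from a log-log fit.
mask = (f > f[0]) & (f < 100.0 * f[0])
beta = -np.polyfit(np.log(f[mask]), np.log(S[mask]), 1)[0]
print("estimated low-frequency spectral exponent:", round(beta, 2))
```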
, ( a ) , of the flow , ( b ) , and of the power spectra , ( c ) , for different point processes with the slow diffusion - like motion of the average interevent time .different symbols correspond to different types of the generation of the interevent sequences.,title="fig : " ] , ( a ) , of the flow , ( b ) , and of the power spectra , ( c ) , for different point processes with the slow diffusion - like motion of the average interevent time .different symbols correspond to different types of the generation of the interevent sequences.,title="fig : " ] , ( a ) , of the flow , ( b ) , and of the power spectra , ( c ) , for different point processes with the slow diffusion - like motion of the average interevent time .different symbols correspond to different types of the generation of the interevent sequences.,title="fig : " ] figure [ spp : dist ] demonstrates that for essentially different distributions of , the power spectra and distribution densities of the point processes are similar .further we proceed to the flow consisting of the pulses of different durations and application of this approach for modeling of the internet traffic .when the occurrence times of the pulses are uncorrelated and distributed according to the poisson process , the power spectrum of the random pulse train is given by the carlson s theorem where is the fourier transform of the pulse .suppose that the random parameters of the pulses are the duration and the area ( integral ) of the pulse .we can take the form of the pulses as where is the characteristic duration of the pulse .the value of the exponent corresponds to the fixed height but different durations , the telegraph - like pulses , whereas corresponds to constant area pulses but of different heights and durations , and so on . for the power - law distribution of the pulse durations , from eqs .and we have the spectrum for when the expression may be approximated as therefore , the random pulses with the appropriate distribution of the pulse duration ( and area ) may generate signals with the power - law distribution of the spectrum with different slopes .so , the pure noise generates , e.g. , the fixed area ( ) with the uniform distribution of the durations ( ) sequences of pulses , the fixed height ( ) with the uniform distribution of the inverse durations and all other sequences of random pulses satisfying the condition .in such a case from eq .we have this section we will apply the results of the section [ sec : flow ] for modeling the internet traffic .the incoming traffic consists of sequence of packets , which are the result of the division of the requested files by the network ( tcp ) protocol .maximum size of the packet is bytes .therefore , the information signal is as the point process with the pulse area bytes .further , we will analyse the flow of the packets and will measure the intensity of the flow in packets per second .in such a system of units in eq .we should put .we exploit the empirical observation that the distribution of the file sizes may be described by the positive cauchy distribution with the empirical parameter bytes .this distribution asymptotically exhibits the pareto distribution and follows the zipf s law .the files are divided by the network protocol into packets of the maximum size of bytes or less . in the internet traffic the packets spread into the poissonian sequence with the average inter - packet time ( see fig . 
[tit : packets ] ) .the total incoming flow of the packets to the server consists of the packets arising from the poissonian request of the files with the average interarrival time of files .the files are requested from different servers located at different distance .this results in the distribution of the average inter - packet time in some interval . for reproduction of the empirical distribution of the interpacket time we assume the uniform distribution of in some interval $ ] , similarly to the mcwhorter model of noise . as a result , the presented model reproduces sufficiently well the observable non - poissonian distribution of the arrival interpacket times and the power spectral density , as well ( see fig . [tit : dist ] ) .in the paper it was shown that the processes exhibiting noise and the power - law distribution of the intensity may be generated starting from the signals as sequences of constant area pulses with the correlated appearance times as well as of different size poissonian pulses .combination of both approaches enables the modeling of the signals in the internet traffic . , ( a ) , and the power spectra , ( b ) , for the simulated point process ( open circles ) and the empirical data ( open squares ) .the used parameters are as in the empirical data , and ,title="fig : " ] , ( a ) , and the power spectra , ( b ) , for the simulated point process ( open circles ) and the empirical data ( open squares ) .the used parameters are as in the empirical data , and ,title="fig : " ] .[ tit : dist ]the support by the lithuanian state science and studies foundation is acknowledge .
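Referring back to the traffic model described above, the following is a rough simulation sketch of the aggregate packet arrival process: Poissonian file requests, positive-Cauchy file sizes split into fixed-size packets, and Poissonian intra-file packet spacing whose mean is drawn, per file, from a uniform interval, mimicking a McWhorter-like spread of the average inter-packet time. The request rate, size scale, and interval bounds are illustrative placeholders, not the empirical parameters of the measured traffic.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_packet_arrivals(t_total=2000.0, req_rate=0.5, size_scale=2e4,
                              packet_bytes=1500, tau_range=(1e-3, 1e-1)):
    """Aggregate packet-arrival times: Poissonian file requests, positive-Cauchy
    file sizes divided into fixed-size packets, and exponential intra-file gaps
    whose mean is drawn uniformly (per file) from tau_range."""
    arrivals = []
    t_req = 0.0
    while True:
        t_req += rng.exponential(1.0 / req_rate)     # Poissonian file requests
        if t_req > t_total:
            break
        size = abs(size_scale * rng.standard_cauchy())       # positive Cauchy file size
        n_packets = max(1, int(np.ceil(size / packet_bytes)))
        tau = rng.uniform(*tau_range)                # mean inter-packet time for this file
        arrivals.append(t_req + np.cumsum(rng.exponential(tau, size=n_packets)))
    return np.sort(np.concatenate(arrivals))

packets = simulated_packet_arrivals()
inter = np.diff(packets)
hist, edges = np.histogram(inter, bins=np.logspace(-5, 1, 60), density=True)
print("number of packets:", len(packets))
```

The empirical inter-packet time distribution and the periodogram of the binned packet counts can then be compared with measured traffic, as is done for the figures discussed above.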
we present analytical and numerical results for the modeling of flows represented as a correlated non - poissonian point process and as a poissonian sequence of pulses of different size . both models may generate signals with power - law distributions of the flow intensity and a power - law spectral density . furthermore , different distributions of the interevent time of the point process and different statistics of the pulse sizes may result in noise with . a combination of the models is applied to the modeling of the internet traffic .
device - to - device ( d2d ) communications have recently attracted a lot of attention as a means to enhance cellular network performance by direct communication between physically proximal cellular devices . however , their incorporation introduces challenges to system design as there are issues related to interference management and sharing of system resources .although d2d transmissions can be viewed as a degenerated case of an ad - hoc network , therefore allowing for the incorporation of the many techniques already proposed in that field of research , this view neglects the availability of cellular infrastructure that can be utilized to more efficiently perform tasks such as establishment of d2d links and resource allocation .there has been much research related to ( network - assisted ) d2d communications , usually by formulation of appropriate optimization problems ( see for a recent literature review ) .however , this approach does not lead to analytical results on the performance of d2d communications , which are of interest as they provide quantitative insights on the benefits of d2d communications as well as guidelines on related aspects of system design . to this end, tools from stochastic geometry that have been successfully applied previously for ad - hoc and cellular networks analysis have been recently incorporated in d2d - related studies . analytical evaluation of d2d communications under this framework is investigated in for various system model assumptions , e.g. , overlay / underlay d2d communications , power control , e.t.c .however , the issue of resource management ( and its potential benefit ) has not been addressed , i.e. , d2d transmissions are treated as an uncoordinated ad - hoc network where interferers that are in ( very ) close proximity may exist . in ,this deficiency is partially overcome by assuming a random time - frequency hopping channel access scheme for d2d transmissions .this approach improves performance as the probability of having close - by interferers on the same subchannel is reduced .however , it is expected that a more intelligent scheduling scheme , assisted by the cellular infrastructure , will provide better performance . in this paper , analytical evaluation of overlay - inband d2d communications withnetwork - assisted ( coordinated ) scheduling is investigated . to this end, a simple scheduling scheme is assumed that takes into account only local ( per cell ) topological information of the d2d links .stochastic geometry tools are utilized in order to obtain analytical expressions for the interferers density as well as the d2d link signal - to - interference - ratio ( sir ) distribution . the resulting integral - form expression for the sir distributioncan be easily evaluated numerically and provides very good accuracy as evident by comparison with simulation results .in addition , it allows to efficiently perform design optimization tasks and offers an example case study , showing that , under optimized system parameters , availability of d2d communications with coordinated scheduling enhances system performance by increasing the average user rate as well as the maximum allowable d2d link distance .the paper is organized as follows .section ii describes the system model as well as scheduling schemes for d2d transmissions . 
in section iii , an analytical expression for the sir distribution under coordinated schedulingis derived .section iv provides numerical results validating the accuracy of the analysis and a case study example of the benefits of ( coordinated ) d2d communications as overlay in the downlink of a cellular network .section v concludes the paper. _ notation : _ , denote the sets of real and integer numbers , respectively . denotes the probability measure , the expectation operator , written as when conditioned on the event , is the norm of and is the area of . denotes the ball centered at with radius , denotes the complement of with respect to ( w.r.t ) , and is the null set .the indicator function is denoted by , and is the integer floor operator .a hybrid network is considered that consists of both cellular and d2d links .the locations of the access points ( aps ) are modeled as a homogeneous poisson point process ( ppp ) of density . in order to avoid intersystem interference ,an overlay - inband d2d communications scheme is assumed , where d2d transmissions are performed on an exclusively assigned part of the available spectrum .the d2d transmitters ( txs ) are distributed according to a homogeneous ppp of density , independent of .each d2d tx is associated with a unique receiver ( rx ) that is located at a fixed ( worst - case ) distance away with isotropic direction .random link distances for the d2d links can be easily incorporated in the considered framework by an extra averaging operation .all nodes in the system are equipped with a single antenna , with txs having full buffers and rxs treating interference as noise , i.e. , no sophisticated decoding algorithms are considered .all d2d txs transmit with the same fixed power , normalized to unity , and ( thermal ) noise power at rxs is assumed negligible compared to interference . in order to control the interference level ,the dedicated bandwidth for d2d transmissions is partitioned into subchannels ( scs ) of equal size and each d2d tx transmits in only one of them ( the possibility of one sc used by many d2d links is allowed ) .clearly , the sc allocation scheme , which , in turn , will affect the choice of , is of critical importance .an uncoordinated ( probabilistic ) scheduling scheme is the simplest option , defined as follows . _uncoordinated scheduling _ : each d2d tx randomly and independently selects one of the available scs for transmission . note that under uncoordinated scheduling there are no guarantees on the interference level as it is possible to have many close - by d2d txs using the same sc .intuitively , the probability of this event can be made ( arbitrarily ) small by increasing , however , with the cost of reduced bandwidth utilization .an alternative option is to exploit the cellular infrastructure in order to coordinate d2d transmissions that are in close proximity .defining the ( voronoi ) cell of an ap as the set , the coordinated scheduling scheme considered in this paper is the following .[ defn]definition _ coordinated scheduling _ : each ap independently allocates scs to the d2d txs located within by the following procedure . 
1 .the d2d txs are randomly partitioned into groups of size and one group of size ( if ) .members of each group are randomly ordered .sc allocation for each group is performed serially , with the -th d2d tx of the group assigned a unique , randomly selected sc out of the set of scs not previously allocated to one the first d2d txs of the group .coordinated scheduling provides soft guarantees on the interference level as it results in mutual orthogonal transmissions for d2d txs _ of the same group _ , although it is possible that the same sc may be assigned to more than one d2d txs belonging to different groups .specifically , given , , each sc will be allocated to either or d2d txs .however , a ( small ) possibility of having excessively many nearby d2d txs using the same sc still exists , as it may happen that close - by txs located near the common edge of adjacent cells are assigned the same sc .in this section , the sir distribution under the aforementioned scheduling schemes is investigated analytically . for the analysis , is conditioned on including a particular ( typical ) tx positioned , without loss of generality , at a distance from the origin , where its intended ( typical ) rx resides .note that , by the properties of the ppp , this conditioning is equivalent to adding the point to , i.e. , , conditioned on the existence of , is distributed as the original , non - conditioned process . under the assumptions of sec .ii , the signal - to - interference - ratio ( sir ) at the typical rx on the sc used by the typical d2d link is with , where are the mutually independent fading coefficients between txs and the typical receiver at the origin whose marginal distribution is exponential of unit mean ( rayleigh fading ) , and is the path loss exponent .the point process corresponds to the d2d txs using the same sc as the typical tx and depends on the scheduling scheme . note that by the symmetry of the system model , ( [ eq : sir ] ) holds for any sc that the typical d2d tx transmits on , which will be assumed in the following to be the first . under uncoordinated scheduling, is the point process resulting from independent thinning of with a retention probability , therefore , it is a homogeneous ppp with density . in this case , ( [ eq : sir ] ) corresponds to the sir of the well - studied bipolar ad - hoc network with poisson distributed interferers whose distribution is given in the following lemma .the complementary cumulative distribution function ( ccdf ) of the sir with uncoordinated scheduling is where . equation ( [ eq : adhoc_cov ] ) verifies the intuition that increasing improves sir due to a decrease of the number of interferers on the considered sc .it is clear that there is no difference between coordinated and uncoordinated scheduling for , therefore , it will be assumed in the following that . in order to obtain the sir distribution under coordinated scheduling it is necessary to obtain the statistics of the interference power or , equivalently , its laplace transform . 
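Before turning to the interference statistics, the following is a compact sketch of the per-cell allocation procedure defined above; the cell population and the number of subchannels are illustrative. It also makes visible the property noted above that, given the number of D2D TXs in a cell, each subchannel is used by either the floor or the ceiling of that number divided by the number of subchannels.

```python
import numpy as np

rng = np.random.default_rng(5)

def coordinated_sc_allocation(n_tx, n_sc):
    """Per-cell subchannel allocation following the coordinated scheme: randomly
    partition the cell's D2D TXs into groups of size n_sc (plus one smaller
    remainder group); within each group assign each TX a uniformly random
    subchannel not yet used by that group."""
    order = rng.permutation(n_tx)                    # random partition + random ordering
    assignment = np.empty(n_tx, dtype=int)
    for start in range(0, n_tx, n_sc):
        group = order[start:start + n_sc]
        free = list(range(n_sc))
        for tx in group:
            assignment[tx] = free.pop(rng.integers(len(free)))   # unique within the group
    return assignment

a = coordinated_sc_allocation(n_tx=10, n_sc=4)
counts = np.bincount(a, minlength=4)
# With 10 TXs and 4 subchannels, each subchannel is used by either 2 or 3 TXs.
print(a, counts)
```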
to this end, the following assumption is employed that significantly simplifies the analysis .each d2d rx is located within the same cell as its corresponding d2d tx .clearly , this assumption is valid with high probability when is sufficiently small .let denote the ap closest to the typical d2d link .then , assumption 1 allows to express the interference power _ conditioned on _ , , as where denotes the intracell interference power generated by the point process and denotes the intercell interference power generated by the process . by the properties of the ppp , and are independent ppps .since coordinated scheduling decisions within are independent of decisions in other cells , it follows that [ lem]lemma and are independent . however , in contrast to the uncoordinated case , and are not the result of independent thinning of and , respectively , since the coordinated scheduling process introduces correlation among sc allocations to the d2d txs within each cell .therefore , they can not be claimed to be ppp and their actual distribution must be derived , which is a difficult task . on the other hand ,the semi - random manner under which coordinated scheduling is performed suggests that this correlation is small and , therefore , the following approximation is expected to be accurate .[ as]assumption and are homogeneous ppps .the term homogeneous in the above assumption is understood as referring only to the subset of where the processes are not identically , i.e. , has a constant , non - zero density only in and zero density in , and similarly for .assumption 2 is critical as it allows to incorporate well known analytical tools for ppps , which , however , require knowledge of the densities of , , provided in the following . under assumption 2 , the density of equals and the density of equals where is the ( upper ) incomplete gamma function .recall that the density of a homogeneous point process equals the average number of points within any bounded subset of divided by its area .consider _ any _ cell , , with d2d txs located within it , and let denote the number of d2d txs assigned sc 1 .noting that , with probability 1 , , /|\mathcal{c}_i|\nonumber\\ { } & { = } & \mathbb{e}(k_i)/(n|\mathcal{c}_i|),\end{aligned}\ ] ] where the last equation is obtained by noting that that can be verified by straightforward computations . since , ( [ eq : l_out ] ) follows .now consider cell and let denote the number of d2d txs located within _ in addition to _ the typical tx , and the number of d2d txs assigned sc 1 .then , /|\mathcal{c}_0|\nonumber\\ { } & \overset{(a)}{= } & \mathbb{e}[(k_0/n)\mathbb{i}(k_0 \geq n)]/|\mathcal{c}_0|\nonumber\\ { } & { = } & \frac{1}{n|\mathcal{c}_0|}\sum_{k = n}^{\infty}k \mathbb{p}(k_0=k)\nonumber\\ { } & \overset{(b)}{= } & \frac{\lambda_d}{n } e^{-\lambda_d|\mathcal{c}_0| } \sum_{k = n}^{\infty}\frac{(\lambda_d|\mathcal{c}_0|)^{k-1}}{(k-1)!},\end{aligned}\ ] ] where ( a ) follows by noting that for , and ( b ) since is a poisson random variable of mean . employing the series representation of for ( * ? ? ?8.352.7 ) results in ( [ eq : l_in ] ) .proposition 1 provides some initial insights on the interference levels provided by coordinated scheduling in terms of density of interferers .in particular , the interferers density outside is the same as in the uncoordinated case , i.e. , coordination does not provide any benefit in this respect , which is not surprising as scheduling decisions are taken independently per cell . 
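To make the comparison concrete, the sketch below performs a Monte Carlo check of the average number of same-subchannel interferers located in the typical TX's cell under the two schemes, with the number of additional D2D TXs in the cell drawn as a Poisson variable of illustrative mean; the absolute numbers are only indicative of the qualitative ordering implied by Proposition 1.

```python
import numpy as np

rng = np.random.default_rng(6)

def same_sc_interferers_coordinated(k_other, n_sc):
    """Number of OTHER TXs sharing the typical TX's subchannel when a cell
    containing k_other + 1 TXs is scheduled with the coordinated scheme."""
    n_tx = k_other + 1
    order = rng.permutation(n_tx)
    assignment = np.empty(n_tx, dtype=int)
    for start in range(0, n_tx, n_sc):
        group = order[start:start + n_sc]
        free = list(range(n_sc))
        for tx in group:
            assignment[tx] = free.pop(rng.integers(len(free)))
    typical_sc = assignment[0]                       # TX index 0 plays the typical TX
    return np.count_nonzero(assignment[1:] == typical_sc)

# Illustrative cell: a mean of 10 other D2D TXs and N = 4 subchannels.
mean_k, n_sc, runs = 10.0, 4, 20_000
coord, uncoord = [], []
for _ in range(runs):
    k = rng.poisson(mean_k)
    coord.append(same_sc_interferers_coordinated(k, n_sc))
    uncoord.append(rng.binomial(k, 1.0 / n_sc))      # independent thinning (uncoordinated)
print("mean same-SC interferers, coordinated  :", np.mean(coord))
print("mean same-SC interferers, uncoordinated:", np.mean(uncoord))
```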
on the other hand, coordination manages to reduce the density of intra - cell interferers as it can be directly verified that .note also that is a monotonically increasing function of , with for , i.e. , _ for a fixed value of _ , the benefit of coordinated scheduling diminishes as the expected number of d2d txs within increases .having specified the above densities , the laplace transform of and is provided in the following lemma .[ lem]lemma the laplace transform of is the laplace transform of is similar to ( [ eq : l_i_in ] ) with and in place of and , respectively .unfortunately , is a convex polyhedron which makes even numerical evaluation of ( [ eq : l_i_in ] ) impractical .clearly , a simple approximation of is required and it is natural to consider a circular shape area whose radius depends on the distance between the typical d2d rx and ap .in particular , the following two approximations may be employed for .[ as]assumption is a circular region equal to ( see fig .1 ) 1 . , or 2 . note that , by the rotational invariance of the ppps , has been considered to lie on the x - axis for simplicity . is chosen as a conservative approximation of , since and the typical d2d is placed right on its edge .approximation attempts to remedy the latter issue by covering the typical d2d tx from all sides .it can be easily seen that neither approximation will be a good fit for all realizations of .however , they do allow for a computationally tractable expression of the sir distribution under coordinated scheduling , conditioned on , or , equivalently , on .[ prop]proposition under assumptions 1 - 3 , the ccdf of sir with coordinated scheduling and conditioned on the distance is given by ( 9 ) ( shown at the top of the page ) , where , , , under the approximation , and , , , under the approximation .starting from ( [ eq : sir ] ) , where the last equation follows from ( [ eq : i2 ] ) and lemma 2 . using the laplace transform formula of lemma 3 , replacing with one of the approximations in assumption 3 and writing the integrals in polar coordinates results in ( 9 ) .the integral term of ( 9 ) is in a form that allows for numerical computation and can also be written in closed form for the case of .in addition , it can be easily seen that it is non - negative for any , which , after comparing with ( [ eq : adhoc_cov ] ) , shows that the sir under coordinated scheduling stochastically dominates sir under uncoordinated scheduling , i.e. , coordinated scheduling provides at least as good performance as uncoordinated scheduling in terms of sir . in order to obtain an -independent sir distribution the statistics of must be determined . for small , is close to the distance between and whose distribution is known , suggesting the following approximation .[ as]assumption distance is rayleigh distributed with .the unconditioned sir distribution can now be obtained by simple averaging . under assumptions 1 - 4 ,the ccdf of sir with coordinated scheduling equals with as in ( 9 ) .in order to assess the accuracy of the analytical results for the coordinated case , simulations were performed assuming , , which corresponds to an average of 10 d2d txs ( links ) per cell . 
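The averaging step of the corollary can be carried out by one-dimensional numerical integration, as in the sketch below. The conditional ccdf used here is only a stand-in placeholder, since the integral-form expression of eq. (9) is not reproduced; the Rayleigh density for the AP distance is the standard nearest-neighbour distance law of a PPP, consistent with Assumption 4, and the AP density value is illustrative.

```python
import numpy as np
from scipy.integrate import quad

lam_ap = 1.0 / (np.pi * 500.0 ** 2)     # illustrative AP density: one AP per disc of radius 500 m

def nearest_ap_distance_pdf(r, lam=lam_ap):
    """pdf of the distance to the nearest point of a PPP of density lam
    (the standard choice behind Assumption 4)."""
    return 2.0 * np.pi * lam * r * np.exp(-np.pi * lam * r ** 2)

def ccdf_sir_given_r(theta, r):
    """Placeholder for the conditional ccdf of eq. (9); replace this with the
    paper's expression.  A simple decaying stand-in is used here only so that
    the averaging machinery can be run end to end."""
    return np.exp(-0.1 * np.sqrt(theta) * (1.0 + 50.0 / max(r, 1.0)))

def ccdf_sir(theta):
    """Unconditional ccdf: average the conditional ccdf over the Rayleigh-
    distributed distance to the closest AP, as in the corollary."""
    integrand = lambda r: ccdf_sir_given_r(theta, r) * nearest_ap_distance_pdf(r)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for theta_db in (-5.0, 0.0, 5.0, 10.0):
    theta = 10.0 ** (theta_db / 10.0)
    print(theta_db, "dB ->", round(ccdf_sir(theta), 3))
```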
in each simulation run , a realization of and was generated in an area of size large enough to include 30 aps on average and the sir of the typical link was measured .the results depicted were obtained by averaging over independent runs with the value of set to .figure 2 shows the cumulative distribution function ( cdf ) of sir under coordinated scheduling with ( recall that the average distance between typical d2d tx and closest ap is ) .note that there are on average d2d txs located closer to the typical d2d rx than the typical d2d tx .performance for various is depicted , as obtained by simulations as well as the analytical expression of corollary 1 .in addition , the performance of uncoordinated scheduling ( lemma 1 ) is also shown without corresponding simulation results as ( [ eq : adhoc_cov ] ) is exact .it can be seen that for , performance is extremely poor due to large interference , whereas provides significant performance improvement .the analytical expression for the sir is very close to the simulation results under the approximation , whereas approximation gives a conservative estimate ( upper bound ) of performance that is nevertheless tighter than the upper bound provided by the uncoordinated performance .coordinated scheduling outperforms uncoordinated scheduling as predicted by the analytical results providing a and db gain with , , respectively , for an outage probability of .since the concept of d2d communications relies on taking advantage of the proximity between communicating devices , it is of interest to examine performance as a function of d2d link distance .figure 3 shows the outage probability for sir values , , db , and , as a function of , under coordinated scheduling ( simulation and analytical results ) as well as under uncoordinated scheduling .it can be seen , that , in both cases , outage probability increases with and .coordinated scheduling always outperforms uncoordinated scheduling with most significant gains for values of up to about .note that the analytical results are very close to the simulation results for this range of .larger values of result in diminishing the gain offered by coordination , which is actually eliminated completely for very large .this is due to d2d txs and corresponding rxs residing in different cells , which is also the reason why analytical results become inaccurate for very large since assumptions 1 and 4 no longer hold .however , this is not a serious shortcoming as this region of operation ( very large ) is not of interest for d2d communications .although sir performance can be improved by increasing , this comes at a cost of reduced bandwidth utilization , which suggests that must be optimized according to some rate - related criterion . to this end , the analytical expressions of sec .iii can be employed for an efficient numerical search of the optimal . as an application ,the following simple case study is considered that provides some insights on the benefits of d2d communications as overlay to the downlink of a cellular network . in particular , consider the presence of cellular rxs , i.e. , rxs whose data are generated from sources that can not allow for d2d communication , with locations modeled as an independent homogeneous ppp of density .these rxs are served by their closest ap via a simple time - division - multiple - access ( tdma ) scheduling scheme , with all aps in the system transmitting with the same power .the sir threshold model for achieved rates is considered , i.e. 
, a spectral efficiency of bits / hz per channel use is achieved as long as the connection sir is above a threshold . assuming a common sir threshold for both cellular and d2d communications , the average user rate , taking into account the effects of tdma for cellular users and for d2d communications , is defined as where , , is the number of cellular rxs located within a random cell , is the portion of the downlink cellular bandwidth devoted for d2d communications , and , stand for the sir experienced by cellular and d2d rxs , respectively .the distribution of and have been exactly computed in and , respectively .therefore , is a function of , and that can be optimized with the analytical formulas for of sec .figure 4 shows as a function of for , , db , and ( fair bandwidth partition ) . for each ,the optimal value of was obtained by a numerical search . for reference ,performance when d2d communications are not supported is also shown , corresponding to the case and . for the coordinated case , performance obtained using ( time - consuming ) stochastic optimization based on simulation is also shown , matching very well the analytical results .it can be seen that introducing d2d communications to the system enhances the average user rate for values of up to about of the average distance from the closest ap , above which d2d communications perform worse than cellular and contribute negatively to the average user rate .this maximum distance suggests design guidelines , e.g. , on the device - discovery training sequence / power , so as not to allow establishment of d2d links of distance greater than this threshold .it can also be seen that coordinated scheduling outperforms uncoordinated scheduling , and allows for about increased maximum allowed d2d link distance .the advantage of coordinated scheduling would be greater if reliability constraints , e.g. , on sir , were considered .finally , performance with is also shown , which is much inferior both in terms of performance and maximum d2d link distance , indicating the importance of allowing for more than one scs .this paper investigated the performance of overlay - inband d2d communications with coordinated scheduling directed by the cellular infrastructure .a simple scheduling scheme was assumed for which analytical expressions of the density of interferers as well as the sir distribution were obtained , the latter validated by simulations .it was shown that coordination provides significant gains compared to an uncoordinated scheduling scheme , both in terms of sir as well as average user rate for a system supporting cellular and overlay d2d communications .the analytical formulas can be utilized for system design analysis / optimization , e.g. , for obtaining the maximum d2d link distance above which d2d communications are not beneficial compared to cellular .this work has been performed in the context of the art - comp pe7(396 )_ `` advanced radio access techniques for next generation cellular networks : the multi - site coordination paradigm '' _ , thales - intention and thales - endecon research projects , within the framework of operational program `` education and lifelong earning '' , co - financed by the european social fund ( esf ) and the greek state .g. fodor , e. dahlman , g. mildh , s. parkvall , n. reider , g. mikls , and z. turani , `` design aspects of network assisted device - to - device communications , '' _ ieee commun . mag .3 , pp . 170177 , mar .2012 .k. huang , v. k. n. lau , and y. 
chen , spectrum sharing between cellular and mobile ad hoc networks : transmission - capacity trade - off , in _ ieee j. sel . areas commun . _ , vol . 27 , no . 7 , pp . 1256 - 1267 , sep .
in this paper , an analytical assessment of overlay - inband device - to - device ( d2d ) communications under cellular - network - assisted ( coordinated ) scheduling is presented . to this end , a simple scheduling scheme is assumed that takes into account only local ( per cell ) topological information of the d2d links . stochastic geometry tools are utilized in order to obtain analytical expressions for the interferers density as well as the d2d link signal - to - interference - ratio distribution . the accuracy of the analytical results is validated by comparison with simulations . in addition , the analytical expressions are employed for efficiently optimizing the parameters of a cellular system with overlay d2d communications . it is shown that coordinated scheduling of d2d transmissions enhances system performance in terms of both the average user rate and the maximum allowable d2d link distance .
the manipulation of atomic or molecular quantum dynamics commonly uses coherent quantum control , which may be extremely useful for a large variety of problems .the dynamical evolution of a closed quantum system under the action of a collection of coherent controls ( e.g. , rabi frequencies of the applied laser field ) is described by the equation ,\qquad\rho|_{t=0}=\rho_0\label{eq1}\ ] ] here is the system density matrix at time ( for an -level quantum system , the set of all density matrices is , where is the set of complex matrices ) , is the free system hamiltonian describing evolution of the system in the absence of control fields and each is an operator describing the coupling of the system to the control field .coherent control of a closed system induces a unitary transformation of the system density matrix and may have some limitations .the first limitation is due to the fact that unitary transformations of an operator preserve its spectrum ; thus the spectrum of is the same at any and , for example , a mixed state will always remain mixed .a second limitation is that a control which is optimal for some initial state may be not optimal for another initial state even if and have the same spectrum .this limitation originates from the reversibility of unitary evolution and is due to the fact that if .to overcame these limitations at least to some degree , control by measurements or incoherent control may be used , and in this work incoherent control by the environment ( ice ) is discussed .some general mathematical notions for the controlled quantum markov dynamics are formulated in ref . .the necessity to consider incoherent control relies also on the fact that coherent control of quantum systems ( e.g. , of chemical reactions ) in the laboratory is often realized in a medium ( solvent ) which interacts with the controlled system and plays the role of the environment .furthermore , then environment may be also affected to some degree by the coherent laser field , thus effectively realizing incoherent control of the system .moreover , laser sources of coherent radiation at the present time have practical limitations , and some frequencies are very expensive to generate compared to the respective sources of incoherent control ( e.g. , incoherent radiation as considered in sec . [ sec2.1 ] of this work ) .thus the latter incoherent control can be used in some cases to reduce the total cost of quantum control .this paper summarizes recent results specifically for incoherent control by the environment ( ice ) .a general theoretical formulation for incoherent control is provided in sec .[ sec2 ] , followed by the examples of control by incoherent radiation ( sec .[ sec2.1 ] ) and control through collisions with particles of a medium ( e.g. 
, solvent , gas , etc .[ sec2.2 ] ) .relevant known results about controllability and the structure of control landscapes for open quantum systems in the kinematic picture are briefly outlined in sec .[ sec : kraus ] .the dynamical evolution of an open quantum system under the action of coherent controls in the markovian regime is described by a master equation +{\cal l}\rho_t\label{eq2}\ ] ] the interaction with the environment modifies the hamiltonian part of the dynamics by adding an effective hamiltonian term to the free hamiltonian .another important effect of the environment is the appearance of the term which describes non - unitary aspects of the evolution and is responsible for decoherence .this term in the markovian regime has the general gorini - kossakowski - sudarshan - lindblad ( gksl ) form where are some operators acting in the system hilbert space .the explicit form of the gksl term depends on the particular type of the environment , on the details of the microscopic interaction between the system and the environment , and on the state of the environment .the coherent portion of the control in ( [ eq2 ] ) addresses only the hamiltonian part of the evolution while the gksl part remains fixed ( for the analysis of controllability properties for markovian master equations under coherent controls see for example , ref .however , the generator can also be controlled to some degree . for a fixed system - environmental interaction ,the generator depends on the state of the environment , which can be either a thermal state at some temperature ( including the zero temperature vacuum state ) or an arbitrary non - equilibrium state .such a state is characterized by a ( possibly , time dependent ) distribution of particles of the environment over their degrees of freedom , which are typically the momentum and the internal energy levels parameterized by some discrete index ( e.g. , for photons denotes polarization , for a gas of -level particles denotes the internal energy levels ) .denoting the density at time of the environmental particles with momentum and occupying an internal level by , and the corresponding gksl generator as ] , called the _ objective functional _ , represents the physical system s property which we want to minimize . the term ] defined by ( [ o1 ] ) [ and similarly for the objective functionals defined by ( [ o2 ] ) and ( [ o3 ] ) ] or of the corresponding performance index . non - equilibrium radiation is characterized by its distribution in photon momenta and polarization . for control with distribution of incoherent radiation the magnitude of the photon momentum can be exploited along with the polarization and the propagation direction in cases where polarization dependence or spatial anisotropy is important ( e.g. 
, for controlling a system consisting of oriented molecules bound to a surface ) .a thermal equilibrium distribution for photons at temperature is characterized by planck s distribution where is the speed of light , and are the planck and the boltzmann constants which we set to one below .non - equilibrium incoherent radiation may have a distribution given as an arbitrary non - negative function .some practical means to produce non - equilibrium distributions in the laboratory may be based either on filtering thermal radiation or on the use of independent monochromatic sources .the master equation for an atom or a molecule interacting with a coherent electromagnetic field and with incoherent radiation with a distribution in the markovian regime has the form : +{\cal l}_{\rm rad}[n_{{\bf k}}(t)]\rho_t\ ] ] the coherent part of the dynamics is generated by the free system s hamiltonian with eigenvalues , forming the spectrum , and the corresponding projectors , the effective hamiltonian resulting from the interaction between the system and the incoherent radiation , dipole moment , and electromagnetic field .the gksl generator induced by the incoherent radiation with distribution function has the form ( e.g. , see ref . ) \rho= \sum\limits_{{\omega}\in\omega}[{\gamma}^+_{\omega}(t)+{\gamma}^-_{-{\omega}}(t ) ] ( 2\mu_{\omega}\rho\mu^\dagger_{\omega}-\mu^\dagger_{\omega}\mu_{\omega}\rho-\rho\mu^\dagger_{\omega}\mu_{\omega})\ ] ] here the sum is taken over the set of all system transition frequencies , , and the coefficients \ ] ] determine the transition rates between energy levels with transition frequency .the transition rates depend on the photon density .the form - factor determines the coupling of the system to the -th mode of the radiation .equation ( [ r : eq1 ] ) together with the explicit structure ( [ eq5 ] ) of the gksl generator provides the theoretical formulation for analysis of control by incoherent radiation . the numerical simulations illustrating the capabilities of learning control by incoherent radiation to prepare prespecified mixed states from a pure state is available along with a theoretical analysis of the set of stationary states for the generator for some models .incoherent control by radiation can extend the capabilities of coherent control by exciting transitions between the system s energy levels for which laser sources are either unavailable at the present time or very expensive compared with the corresponding sources of incoherent radiation . provides a simple experimental realization of the combined coherent ( by a laser ) and incoherent ( by incoherent radiation emitted by a gas - discharge lamp ) control of certain excitations in kr atoms .this section considers incoherent control of quantum systems through collisions with particles of a surrounding medium ( e.g. , a gas or solvent of electrons , atoms or molecules , etc . ) .this case also includes coherent control of chemical reactions in solvents if the coherent field addresses not only the controlled system but the solvent as well . 
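Before turning to the collisional mechanism in detail, the following is a minimal numerical sketch of the radiation-driven master equation above, reduced to a single two-level transition: the photon occupation n(t) of the resonant mode enters the upward and downward transition rates and acts as the incoherent control knob. The rate structure, parameter values, and the fixed-step integrator are illustrative simplifications rather than the full multi-level generator.

```python
import numpy as np

# Two-level system: index 0 = ground state, index 1 = excited state (hbar = 1).
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # lowering operator |g><e|
sp = sm.conj().T                                   # raising operator |e><g|

def dissipator(L, rho):
    """GKSL dissipator D[L]rho = L rho L^+ - (1/2){L^+ L, rho}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def rhs(rho, t, omega0=1.0, gamma=0.2, n_of_t=lambda t: 0.5):
    """Two-level reduction of the radiation-driven master equation: the downward
    rate scales with n(t) + 1 and the upward rate with n(t), where n(t) is the
    (controllable) photon occupation of the resonant mode."""
    H = omega0 * np.array([[0, 0], [0, 1]], dtype=complex)
    n = n_of_t(t)
    drho = -1j * (H @ rho - rho @ H)
    drho += gamma * (n + 1.0) * dissipator(sm, rho)    # emission
    drho += gamma * n * dissipator(sp, rho)            # absorption
    return drho

def evolve(rho0, t_final=60.0, dt=0.01, **kw):
    """Fixed-step fourth-order Runge-Kutta integration of the master equation."""
    rho, t = rho0.astype(complex), 0.0
    while t < t_final:
        k1 = rhs(rho, t, **kw)
        k2 = rhs(rho + 0.5 * dt * k1, t + 0.5 * dt, **kw)
        k3 = rhs(rho + 0.5 * dt * k2, t + 0.5 * dt, **kw)
        k4 = rhs(rho + dt * k3, t + dt, **kw)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return rho

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])          # start in the ground state
rho_T = evolve(rho0, n_of_t=lambda t: 2.0)         # constant photon occupation n = 2
print("excited-state population:", rho_T[1, 1].real)
# For this rate structure the stationary excited population is n / (2 n + 1) = 0.4.
```

Making n_of_t time dependent, or optimizing it by a learning loop, turns the photon occupation into the incoherent control considered above.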
the particles of the medium in this treatment serve as the control and the explicit characteristic of the medium exploited to minimize the performance index is in general a time dependent distribution of the medium particles over their momenta and internal energy levels .this distribution is formally described by a non - negative function , whose value ( where , and ) has the physical meaning of the density at time of particles of the surrounding medium with momentum and in internal energy level . in this scheme oneprepares a suitable , in general non - equilibrium , distribution of the particles in the medium such that the medium drives the system evolution through collisions in a desired way. it may be difficult to practically create a desired non - equilibrium distribution of medium particles over their momenta .in contrast , a non - equilibrium distribution in the internal energy levels can be relatively easily created , e.g. , by lasers capable of exciting the internal levels of the medium particles or through an electric discharge .then the medium particles can affect the controlled system through collisions and this influence will typically depend on their distribution .a well known example of such control is the preparation of population inversion in a he ne gas - discharge laser . in this systeman electric discharge passes through the he ne gas and brings the he atoms into a non - equilibrium state of their internal degrees of freedom .then he ne collisions transfer the energy of the non - equilibrium state of the he atoms into the high energy levels of the ne atoms .this process creates a population inversion in the ne atoms and subsequent lasing .a steady electric discharge can be used to keep the gas of helium atoms in a non - equilibrium state to produce a cw he ne laser .this process can serve as an example of incoherent control through collisions by considering the gas of he atoms as the control environment ( medium ) and the ne atoms as the system which we want to steer to a desired ( excited ) state .quantum systems controlled through collisions with gas or medium particles in certain regimes can be described by master equations with gksl generators whose explicit structure is different from the generator describing control by incoherent radiation .if the medium is sufficiently dilute , such that the probability of simultaneous interaction of the control system with two or more particles of the medium is negligible , then the reduced dynamics of the system will be markovian and will be determined by two body scattering events between the system and one particle of the medium .below we provide a formulation for control of quantum systems by a dilute medium , although the assumption of diluteness is not a restriction for ice , and dense mediums might be used for control as well . the master equation for a system interacting with coherent fields and with a dilute medium of particles with mass has the form ( [ eq3 ] ) with the generator={\cal l}_{\rm medium}[n_{{{\bf k}},\alpha}(t)] ] provides the general formulation for theoretical analysis of control by a coherent field and by a non - equilibrium medium with density . ,( b ) , and ( c ) .each case shows : the objective function vs ga generation , the optimal distribution vs momentum , and the evolution of the diagonal elements of the density matrix for the optimal distribution . 
in the plots for the objective functionthe upper curve is the average value for the objective function and the lower one is the best value in each generation . ] as a simple illustration of such incoherent control , fig .[ fig1ldl ] reproduces the numerical results from ref . for optimally controlled transfer of a pure initial state of a four - level system into three different mixed target states [ i.e. , the objective function ( [ o2 ] ) is chosen ] .the control is modelled by collisions with a medium prepared in a static non - equilibrium distribution whose form is optimized by learning control using a genetic algorithm ( ga ) based on the mutation and crossover operations .since the initial and target states have different spectra , they can not be connected by a unitary evolution induced by coherent control. however , fig .[ fig1ldl ] shows that ice through collisions can work perfectly for such situations .physically admissible evolutions of an -level quantum system can be represented by cp , trace preserving maps ( _ kraus maps _ ) .a map is positive if for any such that : .a linear map is cp if for any the map is positive ( here denotes the identity map in ) .a cp map is called trace preserving if for any .the conditions of trace preservation and positivity for physically admissible evolutions are necessary to guarantee that maps states into states .the condition of complete positivity has the following meaning .consider the elements of as operators of some -level ancilla system which does not evolve , i.e. , its evolution is represented by the identity mapping .suppose that the -level system does not interact with the ancilla .then the combined evolution of the total system will be represented by the map and the condition of complete positivity requires that for any this map should transform all states of the combined system into states , i.e. to be positive .any cp , trace preserving map can be expressed using the kraus operator - sum representation as where are the kraus operators subject to the constraint to guarantee trace preservation .this constraint determines a complex stiefel manifold whose points are matrices ( i.e. , each is a column matrix of ) satisfying the orthogonality condition .the explicit evolution in ( [ ice : eq4 ] ) is unlikely to be known for realistic systems .however , since this evolution is always a cp , trace preserving map , it can be represented in the kraus form assume that any kraus map can be generated in this way using the available coherent and incoherent controls and .then effectively the kraus operators can be considered as the controls [ instead of and ] which can be optimized to drive the evolution of the system in a desired direction .this picture is called _ the kinematic picture _ in contrast with _ the dynamical picture _ of sec .[ sec2 ] . 
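A self-contained numerical sketch of the operator-sum picture just described is given below: a random point on the Stiefel manifold is generated by orthonormalizing a stacked random matrix, sliced into Kraus operators obeying the trace-preservation constraint, and applied to a state. The dimensions and the random construction are illustrative and not tied to a particular physical channel.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_kraus_set(n, ell):
    """Random point on the Stiefel manifold of Kraus maps: stack a random
    (ell*n) x n complex matrix, orthonormalize its columns by QR, and slice it
    into ell Kraus operators K_i with sum_i K_i^+ K_i = I."""
    A = rng.normal(size=(ell * n, n)) + 1j * rng.normal(size=(ell * n, n))
    Q, _ = np.linalg.qr(A)                           # columns of Q are orthonormal
    return [Q[i * n:(i + 1) * n, :] for i in range(ell)]

def apply_kraus(kraus, rho):
    """Operator-sum action rho -> sum_i K_i rho K_i^+."""
    return sum(K @ rho @ K.conj().T for K in kraus)

n, ell = 3, 4
kraus = random_kraus_set(n, ell)

# Check the trace-preservation constraint sum_i K_i^+ K_i = I.
constraint = sum(K.conj().T @ K for K in kraus)
print("constraint satisfied:", np.allclose(constraint, np.eye(n)))

# Apply the map to a pure state and confirm that the image is a valid state.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
rho_out = apply_kraus(kraus, rho)
print("unit trace:", np.isclose(np.trace(rho_out).real, 1.0),
      "positive semidefinite:", np.all(np.linalg.eigvalsh(rho_out) > -1e-12))
```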
in the next two subsections we briefly outline the controllability and landscape properties in the kinematic picture .any classical or quantum system at a given time is completely characterized by its state .the related notion of state controllability refers to the ability to steer the system from any initial state to any final state , either at a given time or asymptotically as time goes to infinity , and the important problem in control analysis is to establish the degree of state controllability for a given control system .assuming for some finite - level system that the set of admissible dynamical controls generates arbitrary kraus type evolution , the following theorem implies then that the system is completely state controllable .[ t1 ] for any state of an -level quantum system there exists a kraus map such that for all states .* proof . * consider the spectral decomposition of the final state , where is the probability to find the system in the state ( and ) .choose an arbitrary orthonormal basis in the system hilbert space and define the operators the operators satisfy the normalization condition and thus determine the kraus map .the map acts on any state as and thus satisfies the condition of the theorem . potential importance of this result is that it shows that there may exist a single incoherent evolution which is capable for transferring all initial states into a given target state , and moreover , the target state can be an arbitrary pure or a mixed state .thus this theorem shows that non - unitary evolution can break the two general limitations for coherent unitary control described in the second paragraph in the introduction . in the kinematic description , under the assumption that any kraus map can be generated , the objective functional becomes a function on the stiefel manifold . in practice, various gradient methods may be used to minimize such an objective function .if the objective function has a local minimum then gradient based optimization methods can be trapped in this minimum and will not provide a true solution to the problem .for such an objective function , if the algorithm stops in some minimum one can not be sure that this minimum is global and therefore this solution may be not satisfactory .this difficulty does not exist if _ a priori _ information about absence of local minima for the objective function is available as provided by the following theorem for a general class of objective functions of the form ={{\rmtr}\,}[(\sum_{i=1}^{{\lambda}}k_i\rho k^\dagger_i)o] ] on the stiefel manifold does not have local minima or maxima ; it has global minimum manifold , global maximum manifold , and possibly saddles whose number and the explicit structure depend on the degeneracies of and .the case has been considered in detail in ref . , where the global minimum , maximum , and saddle manifolds are explicitly described for each type of initial state . in particular , it is found that the objective function for a non - degenerate target operator and for a pure ( i.e. 
, such that ) does not have saddle manifolds ; for the completely mixed initial state , has one saddle manifold with the value of the objective function ; and for any partially mixed initial state has two saddle manifolds corresponding to the values of the objective function , where ] ) .the case of arbitrary is considered in ref .this paper outlines recent results for incoherent control of quantum systems through their interaction with an environment .a general formulation for incoherent control through gksl dynamics is given , followed by examples of incoherent radiation and a gaseous medium serving as the incoherent control environments .the relevant known results on controllability of open quantum systems subject to arbitrary kraus type dynamics , as well as properties of the corresponding control landscapes , are also discussed .this work was supported by the nsf and aro .a. pechen acknowledges also partial support from the rffi 08 - 01 - 00727-a and thanks the organizers of the 28-th conference on quantum probability and related topics ( cimat - guanajuato , mexico , 2007 ) prof .r. quezada batalla and prof .l. accardi for the invitation to present a talk on the subject of this work .99 a. g. butkovskiy and y. i. samoilenko _ control of quantum - mechanical processes and systems _ ( nauka , moscow , 1984 ) ; + a. g. butkovskiy and y. i. samoilenko _ control of quantum - mechanical processes and systems _( kluwer , dordrecht , 1990 ) ( engl .transl . ) .d. tannor and s. a. rice , _ j. chem. phys . _ http://dx.doi.org/10.1063/1.449767[*83 * , 5013 ( 1985 ) ] .a. p. pierce , m. a. dahleh and h. rabitz , _ phys .http://dx.doi.org/10.1103/physreva.37.4950[*37 * , 4950 ( 1988 ) ] .r. s. judson and h. rabitz , _ phys .lett . _ http://dx.doi.org/10.1103/physrevlett.68.1500[*68 * , 1500 ( 1992 ) ] .w. s. warren , h. rabitz and m. dahleh , _ science _ * 259 * , 1581 ( 1993 ) .s. a. rice and m. zhao _ optical control of molecular dynamics _( wiley , new york , 2000 ) .h. rabitz , r. de vivie - riedle , m. motzkus and k. kompa , _ science _ * 288 * , 824 ( 2000 ) . m. shapiro and p. brumer _ principles of the quantum control of molecular processes _ ( wiley - interscience , hoboken , nj , 2003 ) .i. a. walmsley and h. rabitz , _ physics today _ * 56 * , 43 ( 2003 ) . m. dantus and v. v. lozovoy , _ chem .rev . _ * 104 * , 1813 ( 2004 ) .l. accardi , s. v. kozyrev and a. n. pechen , _ qp pq : quantum probability and white noise analysis _ vol .* xix * ed .l. accardi , m. ohya and n. watanabe ( world sci . pub .co. , singapore ) , 1 ( 2006 ) ; _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0403100 .d. dalessandro _ introduction to quantum control and dynamics _( chapman and hall , boca raton , 2007 ) . s. g. schirmer , a. i. solomon and j. v. leahy , _ j. phys . a : math .gen . _ http://dx.doi.org/10.1088/0305-4470/35/18/309[*35 * , 4125 ( 2002 ) ] .r. vilela mendes and v. i. manko , _ phys .http://dx.doi.org/10.1103/physreva.67.053404[*67 * , 053404 ( 2003 ) ] .a. mandilara and j. w. clark , _ phys .http://dx.doi.org/10.1103/physreva.71.013406[*71 * , 013406 ( 2005 ) ] .l. roa , a. delgado , m. l. ladron de guevara and a. b. klimov , _ phys . rev .http://dx.doi.org/10.1103/physreva.73.012322[*73 * , 012322 ( 2006 ) ] .a. pechen , n. ilin , f. shuang and h. rabitz , _ phys .http://dx.doi.org/10.1103/physreva.74.052102[*74 * , 052102 ( 2006 ) ] ; + _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0606187 . f. shuang , a. pechen , t .- s . ho and h. rabitz , _ j. chem . 
phys ._ http://dx.doi.org/10.1063/1.2711806[*126 * , 134303 ( 2007 ) ] ; + _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0609084 .a. pechen and h. rabitz , _ phys .http://dx.doi.org/10.1103/physreva.73.062102[*73 * , 062102 ( 2006 ) ] ; + _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0609097 .r. romano and d. dalessandro , _ phys .http://dx.doi.org/10.1103/physreva.73.022323[*73 * , 022323 ( 2006 ) ] .l. accardi and k. imafuku , _ qp pq : quantum probability and white noise analysis _ vol .* xix * ed .l. accardi , m. ohya and n. watanabe ( world sci . pub .co. , singapore ) , 28 ( 2006 ) .y. ding _ et al _ , _ rev .sci . instruments _ * 78 * , 023103 ( 2007 ) .v. p. belavkin , _ automatia and remote control _ * 44 * , 178 ( 1983 ) ; + _ e - print _ http://arxiv.org/abs/quant-ph/0408003 . v. gorini , a. kossakowski and e. c. g. sudarshan , _ j. math .* 17 * , 821 ( 1976 ) .g. lindblad , _ comm .phys . _ * 48 * , 119 ( 1976 ) . c. altafini , _ j. math .* 44 * , 2357 ( 2003 ) .m. grace , c. brif , h. rabitz , i. a. walmsley , r. l. kosut and d. a. lidar , _ j. phys .b : at . mol ._ * 40 * , s103 ( 2007 ) ; _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0702147 .m. grace , c. brif , h. rabitz , d. a. lidar , i. a. walmsley and r. l. kosut , _ j. modern optics _ * 54 * , 2339 ( 2007 ) .v. e. tarasov , _j. phys . a : math .gen . _ http://dx.doi.org/10.1088/0305-4470/35/25/305[*35 * , 5207 ( 2002 ) ] .r. wu , a. pechen , c. brif and h. rabitz , _ j. phys . a : math .http://dx.doi.org/10.1088/1751-8113/40/21/015[*40 * , 5681 ( 2007 ) ] ; + _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0611215. l. accardi , y. g. lu and i. v. volovich _ quantum theory and its stochastic limit _( springer , berlin , 2002 ) .r. dmcke , _ comm_ * 97 * , 331 ( 1985 ) .l. accardi , a. n. pechen and i. v. volovich , _ infin . dimens .probab . and relat. topics _ * 6 * , 431 ( 2003 ) ; _ e - print _ http://xxx.lanl.gov/abs/math-ph/0206032 .a. pechen , _ qp - pq : quantum probability and white noise analysis _ vol . * xviii * ed m. schrmann and u. franz ( world sci . pub .co. , singapore ) , 428 ( 2005 ) ; + _ e - print _ http://xxx.lanl.gov/abs/quant-ph/0607134 .d. e. goldberg _ genetic algorithms in search , optimization and machine learning _( addison - wesley , reading , ma , 1989 ) .k. kraus _ states , effects , and operations _( springer , berlin , new york , 1983 ) .a. pechen , d. prokhorenko , r. wu and h. rabitz _ j. phys . a : math_ http://dx.doi.org/10.1088/1751-8113/41/4/045205[*41 * , 045205 ( 2008 ) ] ; _ e - print _ http://xxx.lanl.gov/abs/0710.0604 .r. wu , a. pechen , h. rabitz , m. hsieh and b. tsou , j. math .2008 ( at press ) ; + _ e - print _ http://xxx.lanl.gov/abs/0708.2119 .
conventional approaches for controlling open quantum systems use coherent control, which affects the system's evolution through the hamiltonian part of the dynamics. such control, although extremely efficient for a large variety of problems, has limited capabilities: for example, it cannot connect initial and target states whose density matrices have different spectra, and it cannot be designed to optimally transfer different initial states to the same target state. recent research suggests extending coherent control by actively manipulating the non-unitary (i.e., incoherent) part of the evolution. this paper summarizes recent results on incoherent control by the environment (e.g., incoherent radiation or a gaseous medium), together with a kinematic description of controllability and an analysis of the corresponding control landscapes.
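the kraus-map construction used in the controllability theorem earlier in this section is easy to check numerically. the following sketch (python/numpy; the basis choice, dimension, and random-state generator are illustrative assumptions, not taken from the paper) builds the kraus operators k_ij = sqrt(p_i)|psi_i><chi_j| from the spectral decomposition of an arbitrary target state and verifies both the completeness relation and the fact that every input state is mapped to the same target:

```python
import numpy as np

def random_density_matrix(n, rng):
    # random full-rank mixed state: rho = A A^dag / tr(A A^dag)
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def kraus_to_target(rho_f):
    """Kraus operators K_ij = sqrt(p_i)|psi_i><chi_j| built from the
    spectral decomposition rho_f = sum_i p_i |psi_i><psi_i|."""
    p, psi = np.linalg.eigh(rho_f)
    p = np.clip(p, 0.0, None)          # guard against tiny negative eigenvalues
    n = rho_f.shape[0]
    chi = np.eye(n)                    # an arbitrary orthonormal basis |chi_j>
    return [np.sqrt(p[i]) * np.outer(psi[:, i], chi[:, j].conj())
            for i in range(n) for j in range(n)]

rng = np.random.default_rng(0)
n = 4
rho_f = random_density_matrix(n, rng)          # arbitrary mixed target state
ks = kraus_to_target(rho_f)

# completeness: sum_ij K_ij^dag K_ij = identity
assert np.allclose(sum(k.conj().T @ k for k in ks), np.eye(n))

# the same map sends *every* initial state to rho_f
for _ in range(5):
    rho_0 = random_density_matrix(n, rng)
    rho_out = sum(k @ rho_0 @ k.conj().T for k in ks)
    assert np.allclose(rho_out, rho_f)
print("all test states were steered to the target state")
```

of course, this check is purely kinematic: it says nothing about which physical environment and controls would realize this particular kraus map dynamically.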
light is a natural candidate to transmit information across large networks due to its high speed and low propagation losses .a major obstacle to building more advanced optical networks is the lack of an all - optically controlled device that can robustly delay or store optical wave - packets over a tunable amount of time . in the classical domain, such a device would enable all - optical buffering and switching , bypassing the need to convert an optical pulse to an electronic signal .in the quantum realm , such a device could serve as a memory to store the full quantum information contained in a light pulse until it can be passed to a processing node at some later time .a number of schemes to coherently delay and store optical information are being actively explored .these range from tunable coupled resonator optical waveguide ( crow ) structures , where the propagation of light is dynamically altered by modulating the refractive index of the system , to electromagnetically induced transparency ( eit ) in atomic media , where the optical pulse is reversibly mapped into internal atomic degrees of freedom .while these schemes have been demonstrated in a number of remarkable experiments , they remain difficult to implement in a practical setting . here, we present a novel approach to store or stop an optical pulse propagating through a waveguide , wherein coupling between the waveguide and a nearby nano - mechanical resonator array enables one to map the optical field into long - lived mechanical excitations .this process is completely quantum coherent and allows the delay and release of pulses to be rapidly and all - optically tuned .our scheme combines many of the best attributes of previously proposed approaches , in that it simultaneously allows for large bandwidths of operation , on - chip integration , relatively long delay / storage times , and ease of external control . beyond light storage, this work opens up the intriguing possibility of a platform for quantum or classical all - optical information processing using mechanical systems .an optomechanical crystal is a periodic structure that constitutes both a photonic and a phononic crystal .the ability to engineer optical and mechanical properties in the same structure should enable unprecedented control over light - matter interactions .planar two - dimensional ( 2d ) photonic crystals , formed from patterned thin dielectric films on the surface of a microchip , have been succesfully employed as nanoscale optical circuits capable of efficiently routing , diffracting , and trapping light .fabrication techniques for such 2d photonic crystals have matured significantly over the last decade , with experiments on a si chip demonstrating excellent optical transmission through long ( ) linear arrays of coupled photonic crystal cavities . in a similar si chip platform it has recently been shown that suitably designed photonic crystal cavities also contain localized acoustic resonances which are strongly coupled to the optical field via radiation pressure .these planar optomechanical crystals ( omcs ) are thus a natural candidate for implementation of our proposed slow - light scheme . and , whose resonance frequencies differ by the frequency of the mechanical mode . 
both optical modes leak energy into the waveguide at a rate and have an inherent decay rate .the mechanical resonator optomechanically couples the two optical resonances with a cross coupling rate of .( b)a simplified system diagram where the classically driven cavity mode is effectively eliminated to yield an optomechanical driving amplitude between the mechanical mode and cavity mode .( c ) frequency - dependent reflectance ( black curve ) and transmittance ( red ) of a single array element , in the case of no optomechanical driving amplitude ( dotted line ) and an amplitude of ( solid line ) .the inherent cavity decay is chosen to be .( inset ) the optomechanical coupling creates a transparency window of width for a single element and enables perfect transmission on resonance , .( d ) energy level structure of simplified system .the number of photons and phonons are denoted by and , respectively . the optomechanical driving amplitude couples states while the light in the waveguide couples states .the two couplings create a set of -type transitions analogous to that in eit.[fig : system],width=642 ] in the following we consider an optomechanical crystal containing a periodic array of such defect cavities ( see figures [ fig : system](a),(b ) ) .each element of the array contains two optical cavity modes ( denoted ) and a co - localized mechanical resonance .the hamiltonian describing the dynamics of a single element is of the form _= _ 1+_2+_m^+h(+^)(+).[eq : h ] here are the resonance frequencies of the two optical modes , is the mechanical resonance frequency , and are annihilation operators for these modes . the optomechanical interaction cross - couples the cavity modes and with a strength characterized by and that depends linearly on the mechanical displacement . while we formally treat as quantum mechanical operators , for the most part it also suffices to treat these terms as dimensionless classical quantities describing the positive - frequency components of the optical fields and mechanical position .in addition to the optomechanical interaction described by equation ( [ eq : h ] ) , the cavity modes are coupled to a common two - way waveguide ( described below ) .each element is decoupled from the others except through the waveguide .the design considerations necessary to achieve such a system are discussed in detail in the section `` optomechanical crystal design . '' for now , we take as typical parameters thz , ghz , mhz , and mechanical and ( unloaded ) optical quality factors of ( room temperature)- ( low temperature ) and , where is the mechanical decay rate and is the intrinsic optical cavity decay rate .similar parameters have been experimentally observed in other omc systems . in practice , one can also over - couple cavity mode to the waveguide , with a waveguide - induced optical decay rate that is much larger than . 
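as a rough sanity check on this parameter regime, one can translate the quoted quality factors into decay rates; the exact q values are not legible in this copy, so the numbers below are assumptions of what appears to be the intended order of magnitude:

```python
import numpy as np

two_pi = 2 * np.pi
omega_c = two_pi * 200e12        # optical resonance, ~200 THz (as quoted)
omega_m = two_pi * 10e9          # mechanical resonance, ~10 GHz (as quoted)
q_opt = 1e6                      # assumed unloaded optical quality factor
q_m_room, q_m_cryo = 1e3, 1e5    # assumed mechanical Q at room / cryogenic temperature

kappa_in = omega_c / q_opt       # intrinsic optical decay rate
gamma_room = omega_m / q_m_room  # mechanical decay rate, room temperature
gamma_cryo = omega_m / q_m_cryo  # mechanical decay rate, cryogenic

print(f"kappa_in/2pi = {kappa_in/two_pi/1e6:.0f} MHz")
print(f"gamma_m/2pi  = {gamma_room/two_pi/1e6:.0f} MHz (room), "
      f"{gamma_cryo/two_pi/1e3:.0f} kHz (cryo)")
print(f"sideband resolution omega_m/kappa_in = {omega_m/kappa_in:.0f}")
print(f"mechanical storage time 1/gamma_m    = {1e6/gamma_cryo:.1f} us (cryo)")
```

the large ratio omega_m/kappa is what later suppresses the stokes (heating) sideband, and 1/gamma_m sets the storage time discussed below.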
for the purpose of slowing light , the cavity modes will be resonantly driven by an external laser , so that to good approximation can be replaced by its mean - field value .we furthermore consider the case where the frequencies are tuned such that .keeping only the resonant terms in the optomechanical interaction , we arrive at a simplified hamiltonian for a single array element ( see figure [ fig : system](b ) ) , h_=_1+_m^+_m(t)(e^-i(_1-_m)t+h.c.).[eq : hlinear ] here we have defined an effective optomechanical driving amplitude and assume that is real .mode thus serves as a `` tuning '' cavity that mediates population transfer ( rabi oscillations ) between the `` active '' cavity mode and the mechanical resonator at a controllable rate , which is the key mechanism for our stopped - light protocol . in the following analysis, we will focus exclusively on the active cavity mode and drop the `` '' subscript .a hamiltonian of the form ( [ eq : hlinear ] ) also describes an optomechanical system with a single optical mode , when the cavity is driven off resonance at frequency and corresponds to the sidebands generated at frequencies around the classical driving field . for a single system, this hamiltonian leads to efficient optical cooling of the mechanical motion , a technique being used to cool nano - mechanical systems toward their quantum ground states .while the majority of such work focuses on how optical fields affect the mechanical dynamics , here we show that the optomechanical interaction strongly modifies optical field propagation to yield the slow / stopped light phenomenon .equation ( [ eq : hlinear ] ) is quite general and thus this phenomenon could in principle be observed in any array of optomechanical systems coupled to a waveguide . in practice , there are several considerations that make the 2d omc `` ideal . ''first , our system exhibits an extremely large optomechanical coupling and contains a second optical tuning cavity that can be driven resonantly , which enables large driving amplitudes using reasonable input power . using two different cavitiesalso potentially allows for greater versatility and addressability of our system .for instance , in our proposed design the photons in cavity are spatially filtered from those in cavity 2 .second , the 2d omc is an easily scalable and compact platform . finally , as described below ,the high mechanical frequency of our device compared to typical optomechanical systems allows for a good balance between long storage times and suppression of noise processes .we first analyze propagation in the waveguide when is static during the transit interval of the signal pulse . as shown in the appendix , the evolution equations in a rotating frame for a single element located at position along the waveguide are given by & = & -+i_m+i((z_j)+(z_j))+(z_j),[eq : dadt ] + & = & -+i_m+(t).[eq : dbdt ] equation ( [ eq : dadt ] ) is a standard input relation characterizing the coupling of right- ( ) and left - propagating ( ) optical input fields in the waveguide with the cavity mode . 
here is the total optical cavity decay rate , is quantum noise associated with the inherent optical cavity loss , and for simplicity we have assumed a linear dispersion relation in the waveguide .equation ( [ eq : dbdt ] ) describes the optically driven mechanical motion , which decays at a rate and is subject to thermal noise .the cavity mode couples to the right - propagating field through the equation ( + ) ( z , t)=i(z - z_j)+ik_0,[eq : waveeqn ] where .we solve the above equations to find the reflection and transmission coefficients of a single element for a right - propagating incoming field of frequency ( see appendix ) . in the limit where , and defining , r(_k)=- , while .example reflectance and transmittance curves are plotted in figure [ fig : system](c ) . for any non - zero , a single element is perfectly transmitting on resonance , whereas for resonant transmission past the cavity is blocked .when , excitation of the cavity mode is inhibited through destructive interference between the incoming field and the optomechanical coupling . in eit , a similar effect occurs via interference between two electronic transitions .this analogy is further elucidated by considering the level structure of our optomechanical system ( figure [ fig : system](d ) ) , where the interference pathways and the `` ''-type transition reminiscent of eit are clearly visible .the interference is accompanied by a steep phase variation in the transmitted field around resonance , which can result in a slow group velocity . these steep features and their similarity to eit in a single optomechanical systemhave been theoretically and experimentally studied , while interference effects between a single cavity mode and two mechanical modes have also been observed . .the optical cavity modes of each element leak energy into the waveguide at a rate and have an inherent decay rate .the mechanical resonator of each element has frequency and is optomechanically coupled to the cavity mode through a tuning cavity ( shown in figure [ fig : system ] ) with strength .( b ) the band structure of the system , for a range of driving strengths between and .the blue shaded regions indicate band gaps , while the color of the bands elucidates the fractional occupation ( red for energy in the optical waveguide , green for the optical cavity , and blue for mechanical excitations ) .the dynamic compression of the bandwidth is clearly visible as .( c ) band structure for the case is shown in greater detail .( d ) the fractional occupation for each band in ( c ) is plotted separately .it can be seen that the polaritonic slow - light band is mostly mechanical in nature , with a small mixing with the waveguide modes and negligible mixing with the optical cavity mode .zoom - ins of figures ( c ) and ( d ) are shown in ( e ) and ( f ) .[ fig : bandstructure],width=642 ] from for a single element , the propagation characteristics through an infinite array ( figure [ fig : bandstructure](a ) ) can be readily obtained via band structure calculations . to maximize the propagation bandwidth of the system, we choose the spacing between elements such that where is a non - negative integer . 
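before turning to the array, it is instructive to evaluate the single-element response numerically. the closed-form reflection coefficient did not survive extraction in this copy, so the sketch below uses the standard input-output result for a cavity side-coupled symmetrically to a two-way waveguide, r(delta) = -(kappa_ex/2)/(-i*delta + kappa/2 + Omega_m^2/(-i*delta + gamma_m/2)) with t = 1 + r; all parameter values are placeholders chosen only to sit in the regime described in the text:

```python
import numpy as np

two_pi = 2 * np.pi
kappa_ex = two_pi * 500e6   # waveguide-induced cavity decay (assumed)
kappa_in = two_pi * 10e6    # intrinsic cavity decay (assumed, kappa_in << kappa_ex)
kappa = kappa_ex + kappa_in
gamma_m = two_pi * 100e3    # mechanical decay rate (assumed)

def reflection(delta, om):
    """OMIT-type reflection of a single side-coupled element;
    om is the optomechanical driving amplitude Omega_m."""
    return -(kappa_ex / 2) / (-1j * delta + kappa / 2
                              + om**2 / (-1j * delta + gamma_m / 2))

delta = np.linspace(-0.25, 0.25, 20001) * kappa
for om in (0.0, two_pi * 50e6):
    T = np.abs(1 + reflection(delta, om))**2           # transmittance
    line = f"Omega_m/2pi = {om/two_pi/1e6:4.0f} MHz : T(0) = {T[len(T)//2]:.3f}"
    if om > 0:
        window = np.ptp(delta[T > 0.5])                # width of the transparency window
        line += (f", window = {window/two_pi/1e6:.1f} MHz "
                 f"(compare 4*Omega_m^2/kappa = {4*om**2/kappa/two_pi/1e6:.1f} MHz)")
    print(line)
```

with omega_m = 0 the element acts as a nearly perfect mirror on resonance, while any non-zero driving opens a transparency window whose width is set by roughly 4*omega_m^2/kappa, exactly the behaviour described above.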
with this choice of phasing the reflections from multiple elements destructivelyinterfere under optomechanical driving .typical band structures are illustrated in figures [ fig : bandstructure](b)-(f ) .the color coding of the dispersion curves ( red for waveguide , green for optical cavity , blue for mechanical resonance ) indicates the distribution of energy or fractional occupation in the various degrees of freedom of the system in steady - state . far away from the cavity resonance ,the dispersion relation is nearly linear and simply reflects the character of the input optical waveguide , while the propagation is strongly modified near resonance ( ) . in the absence of optomechanical coupling ( ) , a transmission band gap of width forms around the optical cavity resonance ( reflections from the bare optical cavity elements constructively interfere ) . in the presence of optomechanical driving , the band gap splits in two ( blue shaded regions ) and a new propagation band centered aroundthe cavity resonance appears in the middle of the band gap . for weakdriving ( ) the width of this band is , while for strong driving ( ) one recovers the `` normal mode splitting '' of width .this relatively flat polaritonic band yields the slow - light propagation of interest .indeed , for small the steady - state energy in this band is almost completely mechanical in character , indicating the strong mixing and conversion of energy in the waveguide to mechanical excitations along the array .it can be shown that the bloch wavevector near resonance is given by ( see appendix ) k_0+++.[eq : keff ] the group velocity on resonance , can be dramatically slowed by an amount that is tunable through the optomechanical coupling strength .the quadratic and cubic terms in characterize pulse absorption and group velocity dispersion , respectively . in the relevant regime where , these effects are negligible within a bandwidth .the second term is the bandwidth over which certain frequency components of the pulse acquire a -phase shift relative to others , leading to pulse distortion .this yields a bandwidth - delay product of ( , ( 6n^2)^1/3)[eq : bwdelayproduct ] for static and negligible mechanical losses .when intrinsic optical cavity losses are negligible , and if one is not concerned with pulse distortion , light can propagate over the full bandwidth of the slow - light polariton band and the bandwidth - delay product increases to ( see appendix ) .on the other hand , we note that if we had operated in a regime where , constructive interference in reflection would limit the bandwidth - delay product to , independent of system size . in the static regime ,the bandwidth - delay product obtained here is analogous to crow systems . in the case of eit , a static bandwidth - delay product of results , where od is the optical depth of the atomic medium .this product is limited by photon absorption and re - scattering into other directions , and is analogous to our result in the case of large intrinsic cavity linewidth . on the other hand , when is negligible , photons are never lost and reflections can be suppressed by interference .this yields an improved scaling or , depending on whether one is concerned with group velocity dispersion . 
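the bandwidth-delay discussion above can also be checked directly by chaining single-element transfer matrices (the matrices themselves are written out in the appendix reproduced later in this document). the sketch below uses the same hedged single-element response as before, sets the inter-element phase to pi/2 as prescribed, and extracts the group delay from the phase slope of the total transmission; the element spacing and all rates are placeholder values:

```python
import numpy as np

two_pi = 2 * np.pi
c = 3.0e8
kappa_ex = two_pi * 500e6    # waveguide-induced decay (assumed)
kappa_in = two_pi * 10e6     # intrinsic decay (assumed)
kappa = kappa_ex + kappa_in
gamma_m = two_pi * 100e3     # mechanical decay (assumed)
om = two_pi * 100e6          # optomechanical driving amplitude (assumed)
n_el = 100                   # number of elements (even multiple of 4, so the
                             # accumulated pi/2 phases add up to a multiple of 2*pi)
d = 10e-6                    # element spacing (placeholder)

def rt(delta):
    r = -(kappa_ex / 2) / (-1j * delta + kappa / 2
                           + om**2 / (-1j * delta + gamma_m / 2))
    return r, 1 + r

def t_total(delta):
    r, t = rt(delta)
    m_om = np.array([[(t**2 - r**2) / t, r / t],     # single-element transfer matrix
                     [-r / t,            1 / t]])
    m_f = np.diag([np.exp(1j * np.pi / 2),           # free propagation, k0*d = pi/2
                   np.exp(-1j * np.pi / 2)])
    m = np.linalg.matrix_power(m_f @ m_om, n_el)
    return 1.0 / m[1, 1]                             # transmission amplitude of the array

dd = two_pi * 10e3           # small probe detuning for the phase-slope estimate
tau_g = (np.angle(t_total(+dd)) - np.angle(t_total(-dd))) / (2 * dd)
print(f"on-resonance transmittance |t_N(0)|^2 = {abs(t_total(0.0))**2:.2f}")
print(f"group delay = {tau_g*1e9:.0f} ns")
print(f"slow-down relative to vacuum transit = {tau_g/(n_el*d/c):.1e}")
```

the residual on-resonance loss reflects absorption by the mechanical damping gamma_m; the delay of several hundred nanoseconds over a ~1 mm array corresponds to a group velocity roughly five orders of magnitude below c, in line with the slow-light discussion above.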
in atomic media, the weak atom - photon coupling makes achieving very challenging .in contrast , in our system as few as elements would be equivalently dense .we now show that the group velocity can in fact be adiabatically changed once a pulse is completely localized inside the system , leading to distortion - less propagation at a dynamically tunable speed .in particular , by tuning , the pulse can be completely stopped and stored .this phenomenon can be understood in terms of the static band structure of the system ( figure [ fig : bandstructure ] ) and a `` dynamic compression '' of the pulse bandwidth .the same physics applies for crow structures , and the argument is re - summarized here .first , under constant , an optical pulse within the bandwidth of the polariton band completely enters the medium .once the pulse is inside , we consider the effect of a gradual reduction in .decomposing the pulse into bloch wavevector components , it is clear that each bloch wavevector is conserved under arbitrary changes of , as it is fixed by the system periodicity .furthermore , transitions to other bands are negligible provided that the energy levels are varied adiabatically compared to the size of the gap , which translates into an adiabatic condition .then , conservation of the bloch wavevector implies that the bandwidth of the pulse is dynamically compressed , and the reduction in slope of the polariton band ( figure [ fig : bandstructure ] ) causes the pulse to propagate at an instantaneous group velocity without any distortion . in the limit that , the polaritonic band becomes flat and completely mechanical in character , indicating that the pulse has been reversibly and coherently mapped onto stationary mechanical excitations within the array .we note that since is itself set by the tuning cavities , its rate of change can not exceed the optical linewidth and thus the adiabaticity condition is always satisfied in the weak - driving regime .the maximum storage time is set by the mechanical decay rate , . for realistic system parameters ghz and , this yields a storage time of . in crow structures ,light is stored as circulating fields in optical nano - cavities , where state of the art quality factors of limit the storage time to ns .the key feature of our system is that we effectively `` down - convert '' the high - frequency optical fields to low - frequency mechanical excitations , which naturally decay over much longer time scales .while storage times of ms are possible using atomic media , their bandwidths so far have been limited to mhz . in our system , bandwidths of ghzare possible for realistic circulating powers in the tuning cavities. the major source of error in our device will be mechanical noise , which through the optomechanical coupling can be mapped into noise power in the optical waveguide output . in our system ,mechanical noise emerges via thermal fluctuations and stokes scattering ( corresponding to the counter - rotating terms in the optomechanical interaction that we omitted from equation ( [ eq : hlinear ] ) ) . to analyze these effects, it suffices to consider the case of static , and given the linearity of the system , no waveguide input ( such that the output will be purely noise ) . 
for a single array element , the optomechanical driving results in optical cooling of the mechanical motion , with the mechanical energy evolving as ( see appendix ) = -_m(e_m-_m|n_)-e_m+(e_m+_m).[eq : demdt ] the first term on the right describes equilibration with the thermal surroundings , where is the bose occupation number at the mechanical frequency and is the bath temperature .the second ( third ) term corresponds to cooling ( heating ) through anti - stokes ( stokes ) scattering , with a rate proportional to .the stokes process is suppressed relative to the anti - stokes in the limit of good sideband resolution . for an array of elements ,a simple upper bound for the output noise power at one end of the waveguide is given by , where is the steady - state solution of equation ( [ eq : demdt ] ) .the factor of accounts for the optical noise exiting equally from both output directions , is the optically - induced mechanical energy dissipation rate , and describes the waveguide coupling efficiency .the term represents the transduction of mechanical to optical energy and is essentially the price that one pays for down - converting optical excitations to mechanical to yield longer storage times in turn , any mechanical noise gets `` up - converted '' to optical energy ( whereas the probability of having a thermal optical photon is negligible ) . in the relevant regime where , ( _ m|n_+()^2 ) .this noise analysis is valid only in the weak - driving regime ( ) .the strong driving regime , where the mechanical motion acquires non - thermal character and can become entangled with the optical fields , will be treated in future work . at room temperature , is large and thermal noise will dominate , yielding a noise power of nw per element for previously given system parameters and .this is independent of provided that , which reflects the fact that all of the thermal heating is removed through the optical channel.for high temperatures , the thermal noise scales inversely with , and the use of high - frequency mechanical oscillators ensures that the noise remains easily tolerable even at room temperature . thermal noise in the high - frequency oscillatorcan essentially be eliminated in cryogenic environments , which then enables faithful storage of single photons .intuitively , a single - photon pulse can be stored for a period only as long as the mechanical decay time , and as long as a noise - induced mechanical excitation is unlikely to be generated over a region covering the pulse length and over the transit time .the latter condition is equivalent to the statement that the power in the single - photon pulse exceeds . while we have focused on the static regime thus far , when thermal heating is negligible , realizing in the static case in fact ensures that the inequality holds even when is time - varying .physically , the rate of stokes scattering scales linearly with while the group velocity scales inversely , and thus the probability of a noise excitation being added on top of the single - photon pulse is fixed over a given transit length . in a realistic setting ,the optomechanical driving amplitude itself will be coupled to the bath temperature , as absorption of the pump photons in the tuning cavities leads to material heating .to understand the limitations as a quantum memory , we have numerically optimized the static bandwidth - delay product for a train of single - photon pulses , subject to the constraints , , and . 
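as a quick check of the room-temperature noise estimate quoted a few sentences above (and before continuing with the bath-temperature model), note that each thermal phonon scattered out of a resonator leaves as a waveguide photon of energy hbar*omega_c, so the noise power per element is roughly (eta/2)*hbar*omega_c*gamma_m*n_th; all numbers below are assumed values of the same order as those used in the text:

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
two_pi = 2 * np.pi

omega_c = two_pi * 200e12      # optical carrier, ~200 THz (as quoted)
omega_m = two_pi * 10e9        # mechanical frequency, ~10 GHz (as quoted)
q_m = 1e3                      # assumed room-temperature mechanical quality factor
gamma_m = omega_m / q_m
eta = 0.9                      # assumed waveguide coupling efficiency kappa_ex/kappa
T = 300.0                      # bath temperature [K]

n_th = 1.0 / np.expm1(hbar * omega_m / (kB * T))   # thermal phonon occupation
# thermal phonons enter at a rate gamma_m*n_th per element and leave as photons
# of energy hbar*omega_c; half of that power exits each end of the waveguide
P_noise = 0.5 * eta * hbar * omega_c * gamma_m * n_th
print(f"n_th            = {n_th:.0f}")
print(f"P_noise/element = {P_noise * 1e9:.1f} nW")
```

this back-of-envelope figure lands at the nanowatt scale, consistent with the room-temperature estimate quoted above.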
as a realistic model for the bath temperature , we take , where is the base temperature and is a temperature coefficient that describes heating due to pump absorption ( see appendix ) . using mk and ,we find , which is achieved for parameter values , .1 ghz , and mhz .highlighted in the center .the unit cell contains two coupled l2 defect cavities ( shaded in grey ) with two side - coupled linear defect optical waveguides ( shaded in red ) .the different envelope functions pertain to the odd and even optical cavity supermodes ( green solid and dashed lines , respectively ) , the odd mechanical supermode ( blue solid line ) , and the odd and even optical waveguide modes ( red solid and dashed lines ) .the displacement field amplitude of the mechanical mode and in - plane electric field amplitude of the optical mode are shown in ( b ) and ( c ) , respectively , for a single l2 defect cavity .( d ) bandstructure of the linear - defect waveguide ( grey ) and the zone folded superlattice of the entire coupled - resonator system ( red ) . the cavity mode ( green )crosses the superlattice band at mid - zone , and the waveguide - cavity interaction is shown in more detail in the insets ( e ) and ( a ) for the even ( ) and odd ( ) supermodes , respectively .[ fig : omc_model],width=642 ] a schematic showing a few periods of our proposed 2d omc slow - light structure is given in figure [ fig : omc_model ] .the structure is built around a `` snowflake '' crystal pattern of etched holes into a silicon slab .this pattern , when implemented with a physical lattice constant of , snowflake radius , and snowflake width ( see figure [ fig : omc_model](a ) ) , provides a simultaneous phononic bandgap from to ghz and a photonic pseudo - bandgap from to thz ( see appendix ) . owing to its unique bandgap properties, the snowflake patterning can be used to form waveguides and resonant cavities for both acoustic and optical waves simply by removing regions of the pattern .for instance , a single point defect , formed by removing two adjacent holes ( a so - called `` l2 '' defect ) , yields the co - localized phononic and photonic resonances shown in figures [ fig : omc_model](b ) and ( c ) , respectively .the radiation pressure , or optomechanical coupling between the two resonances can be quantified by a coupling rate , , which corresponds to the frequency shift in the optical resonance line introduced by a single phonon of the mechanical resonance .numerical finite - element - method ( fem ) simulations of the l2 defect indicate the mechanical resonance occurs at , with a coupling rate of to the optical mode at frequency ( free - space optical wavelength of ) . in order to form the double - cavity system described in the slow - light scheme above, a pair of l2 cavities are placed in the near - field of each other as shown in the dashed box region of figure [ fig : omc_model](a ) .modes of the two degenerate l2 cavities mix , forming supermodes of the double - cavity system which are split in frequency .the frequency splitting between modes can be tuned via the number of snowflake periods between the cavities .as described in more detail in the appendix , it is the optomechanical cross - coupling of the odd ( ) and even ( ) optical supermodes mediated by the motion of the odd parity mechanical supermode ( ) of the double - cavity that drives the slow - light behaviour of the system . 
since is a displacement field that is antisymmetric about the two cavities , there is no optomechanical self - coupling between the optical supermodes and this mechanical mode . on the other hand ,the cross - coupling between the two different parity optical supermodes is large and given by . by letting , and be the annihilation operators for the modes , and , we obtain the system hamiltonian of equation ( [ eq : h ] ) .the different spatial symmetries of the optical cavity supermodes also allow them to be addressed independently . to achieve thiswe create a pair of linear defects in the snowflake lattice as shown in figure [ fig : omc_model](a ) , each acting as a single - mode optical waveguide at the desired frequency of roughly ( see figure [ fig : omc_model](d ) . sending light down both waveguides , with the individual waveguide modes either in or out of phase with each other ,will then excite the even or odd supermode of the double cavity , respectively .the waveguide width and proximity to the l2 cavities can be used to tune the cavity loading ( see appendix ) , which for the structure in figure [ fig : omc_model](a ) results in the desired .it should be noted that these line - defect waveguides do not guide phonons at the frequency of , and thus no additional phonon leakage is induced in the localized mechanical resonance .the full slow - light waveguide system consists of a periodic array of the double - cavity , double - waveguide structure .the numerically computed band diagram , for spacing periods of the snowflake lattice between cavity elements ( the superlattice period ) , is shown in figure [ fig : omc_model](d ) .this choice of superlattice period results in the folded superlattice band intersecting the ( ) cavity frequency at roughly mid - zone , corresponding to the desired inter - cavity phase shift of . a zoom - in of the bandstructure near the optical cavity resonances is shown in figures [ fig : omc_model](e ) and ( f ) . in figure[ fig : omc_model](e ) the even parity supermode bandstructure is plotted ( i.e. , assuming the even supermode of the double - waveguide is excited ) , whereas in figure [ fig : omc_model](f ) it is the odd parity supermode bandstructure . with , andthe attenuation per unit cell ( solid black line ) are shown as a function of detuning of the pump beam from the pump cavity frequency .the dotted line is the approximate expression derived for the attenuation , .the grey region indicates the band gap in which the pump cavities can not be excited from the waveguide .the trade - off between small pump input powers and low pump attenuation factors is readily apparent in this plot .[ fig : inputpower],width=642 ] a subtlety in the optical pumping of the periodically arrayed waveguide system is that for the ( ) optical cavity resonance at , there exists a transmission bandgap . in order to populate cavity , then , and to create the polaritonic band at ,the pump beam must be slightly off - resonant from , but still at .we achieve this by choosing a double - cavity separation ( periods ) resulting in a cavity mode splitting ( ) slightly smaller than the mechanical frequency ( ) , as shown in figures [ fig : omc_model](e ) and ( f ) . by changing the detuning between and , a trade - off can be made between the attenuation of the pump beam per unit cell , , and total required input power , shown in figure [ fig : inputpower ] . 
in [ app : input ] we show that the total attenuation per unit cell , is given by .interestingly , by using higher input powers such that , it is in principle possible to eliminate completely effects due to absorption in the array , which may lead to inhomogeneous pump photon occupations .the possibility of using optomechanical systems to facilitate major tasks in classical optical networks has been suggested in several recent proposals .this present work not only extends these prospects , but proposes a fundamentally new direction where optomechanical systems can be used to control and manipulate light at a quantum mechanical level .such efforts would closely mirror the many proposals to perform similar tasks using eit and atomic ensembles . at the same time , the optomechanical array has a number of novel features compared to atoms , in that each element can be deterministically positioned , addressed , and manipulated , and a single element is already optically dense .furthermore , the ability to freely convert between phonons and photons enables new possibilities for manipulating light through the manipulation of sound .taken together , this raises the possibility that mechanical systems can provide a novel , highly configurable on - chip platform for realizing quantum optics and `` atomic '' physics .this work was supported by the darpa / mto orchid program through a grant from afosr .dc acknowledges support from the nsf and the gordon and betty moore foundation through caltech s center for the physics of information .asn acknowledges support from nserc .m.h . acknowledges support from the u.s .army research office muri award w911nf0910406 .here we derive the equations of motion for an array of optomechanical systems coupled to a two - way waveguide . because each element in the array couples independently to the waveguide , it suffices here to only consider a single element , from which the result for an arbitrary number of elements is easily generalized .we model the interaction between the active cavity mode and the waveguide with the following hamiltonian , h_&=&_-^dk(ck-_1)-_-^dk(ck+_1)- + & & g_-^dz(z - z_j)(((z)+(z))+h.c.).[eqsi : hcavwg ] here are annihilation operators for left- and right - going waveguide modes of wavevector , and is the frequency of the cavity mode . for conveniencewe have defined all optical energies relative to , and assumed that the waveguide has a linear dispersion relation .the last term on the right describes a point - like coupling between the cavity ( at position ) and the left- and right - going waveguide modes , with a strength .the operator physically describes the annihilation of a right - going photon at position and is related to the wavevector annihilation operators by ( with a similar definition for ) .( [ eqsi : hcavwg ] ) resembles a standard hamiltonian used to formulate quantum cavity input - output relations , properly generalized to the case when the cavity accepts an input from either direction .note that we make the approximation that the left- and right - going waves can be treated as separate quantum fields , with modes in each direction running from .this allows both the left- and right - going fields to separately satisfy canonical field commutation relations , =\left[{\mbox{}}(z),{\mbox{}}(z')\right]=\delta(z - z')$ ] , while commuting with each other .thus each field contains some unphysical modes ( _ e.g. 
_ , wavevector components for the right - going field ) , but the approximation remains valid as long as there is no process in the system evolution that allows for the population of such modes . from the hamiltonian above , one finds the following heisenberg equation of motion for the right - going field , ( + ) ( z)=(z - z_j)+ik_0,[eqsi : waveeqn ] where .a similar equation holds for .the coupling of the cavity mode to a continuum of waveguide modes leads to irreversible decay of the cavity at a rate .below , we will show that is related to the parameters in the hamiltonian by . with this identification ,one recovers eq .( 5 ) in the main text .the heisenberg equation of motion for the cavity mode is given by = ig((z_j)+(z_j ) ) .to cast this equation into a more useful form , we first integrate the field equation ( [ eqsi : waveeqn ] ) across the discontinuity at , ( z_j^+ ) & = & ( z_j^-)+ , + ( z_j^- ) & = & ( z_j^+)+ .we can define and as the input fields to the cavity .it then follows that = ig(+)- , and thus we indeed see that the waveguide induces a cavity decay rate . in the case where the cavity has an additional intrinsic decay rate , a similar derivation holds to connect the intrinsic decay with some corresponding noise input field . from these considerations , andincluding the opto - mechanical coupling , one arrives at eq .( 3 ) in the main text , = -+i_m+i((z_j)+(z_j))+(z_j).[eqsi : dadt ] finally , we consider the equation of motion for the mechanical mode given by eq .( 4 ) in the main text , = -+i_m_1+(t).[eqsi : dbdt ] the zero - mean noise operator must accompany the decay term in the mechanical evolution in order to preserve canonical commutation relations of at all times . in the case where the decay causes the mechanical motion to return to thermal equilibrium with some reservoir at temperature ,the noise operator has a two - time correlation function given by , where is the bose occupation number at the mechanical frequency .first we derive the reflection and transmission coefficients for a single element in the case of constant opto - mechanical driving amplitude . given the linearity of the system , it suffices to treat eqs .( [ eqsi : waveeqn ] ) , ( [ eqsi : dadt ] ) , and ( [ eqsi : dbdt ] ) as classical equations for this purpose , and furthermore to set the noise terms , . for concreteness, we will consider the case of an incident right - going cw field in the waveguide . upon interaction with the opto - mechanical system at , the total right - going field can be written in the form a_r(z)=e^ikz - i_kt((-z+z_j)+t(_k)(z - z_j ) ) , while the left - going field is given by . here is the unit step function , is the detuning of the input field from the cavity resonance , and are the reflection and transmission coefficients for the system . at the same time, we look for solutions of the cavity field and mechanical mode of the form and .the coefficients can be obtained by substituting this ansatz into eqs .( [ eqsi : waveeqn ] ) , ( [ eqsi : dadt ] ) , and ( [ eqsi : dbdt ] ) .this yields the reflection coefficient given by eq .( 6 ) in the main text , while the transmission coefficient is related by . 
to calculate propagation through an array of elements ,it is convenient to introduce a transfer matrix formalism .specifically , the fields immediately to the right of the opto - mechanical element ( at ) can be related to those immediately to the left ( at ) in terms of a transfer matrix , ( c a_r(z_j^+ ) + a_l(z_j^+ ) ) = m_om ( c a_r(z_j^- ) + a_l(z_j^- ) ) , where m_om= ( cc t^2-r^2 & r + -r & 1 ) .[ eqsi : mom ] on the other hand , free propagation in the waveguide is characterized by the matrix , ( c a_r(z+d ) + a_l(z+d ) ) = m_f ( c a_r(z ) + a_l(z ) ) , where m_f= ( cc e^ikd & 0 + 0 & e^-ikd ) .[eqsi : mf ] the transfer matrix for an entire system can then be obtained by successively multiplying the transfer matrices for a single element and for free propagation together . in particular , the transfer matrix for a single `` block '' , defined as interaction with a single opto - mechanical element followed by free propagation over a distance to the next opto - mechanical element , is given by , and the propagation over blocks is simply characterized by . before studying the propagation through the entire array , we first focus on the propagation past two blocks , .because we want our device to be highly transmitting when the optomechanical coupling is turned on , we choose the spacing between consecutive blocks to be such that , where is an integer .physically , this spaces consecutive elements by an odd multiple of , where is the resonant wavelength , such that the reflections from consecutive elements tend to destructively interfere with each other . this can be confirmed by examining the resulting reflection coefficient for the two - block system , r_2=-+o(_k^3 ) , where denotes matrix elements of .note that the reflection coefficient is now suppressed as a quadratic in the detuning , whereas for a single element is linear . in the above equation ,we have made the simplifying approximation that , since in realistic systems the dispersion from free propagation will be negligible compared to that arising from interaction with an opto - mechanical element .now we can consider transmission past pairs of two blocks ( _ i.e. _ , elements in total ) . because the reflection is quadratic in the detuning , its effect on the total transmission is only of order ( because the lowest order contribution is an event where the field is reflected twice before passing through the system ) .thus , up to , the total transmission coefficient is just given by , where is the transmission coefficient for a two - block system .it is convenient to write in terms of an effective wavevector , which leads to eq .( 7 ) in the main text .performing a similar analysis for the case where , where reflections from consecutive elements interfere constructively , one finds that the bandwidth - delay product for the system does not improve with the system size .in this section we derive the optical cooling equation given by eq .( 9 ) in the main text .we begin by considering the system hamiltonian for a single element ( eq . ( 1 ) of main text ) , in the case where the tuning mode is driven on resonance and can be approximated by a classical field , , _ om=_1+_m^+h(+^)(_2e^-i_2t+_2^e^i_2t).[eqsi : h ] defining a detuning indicating the frequency difference between the two cavity modes , we can re - write eq .( [ eqsi : h ] ) in a rotating frame , _ om=-_l+_m^+_m(+^)(+),[eqsi : hrot ] where ( we have re - defined the phases such that is real ) . 
in the weak driving limit ( ) , the cavity dynamics can be formally eliminated to arrive at effective optically - induced cooling equations for the mechanical motion . in particular ,the opto - mechanical coupling terms and induce anti - stokes and stokes scattering , respectively . these processes yield respective optically - induced cooling ( ) and heating ( ) rates _ = . in the casewhere , the cooling process is resonantly enhanced by the cavity , yielding a cooling rate as given in the main text . also in this case , the optical heating rate is given by .this leads to the net cooling dynamics given by eq .( 9 ) in the main text , = -_m(e_m-_m|n_th)-e_m+(e_m+_m).[eqsi : demdt ] because the optical cooling process removes phonons from the mechanical system via optical photons that leak out of the cavity , one can identify as the amount of optical power that is being leaked by the cavity in the anti - stokes sideband during the cooling process .similarly , the cavity leaks an amount of power in the stokes sideband .we have ignored this contribution in eq .( 10 ) in the main text , because its large frequency separation ( ) from the signal allows it to be filtered out , but otherwise it approximately contributes an extra factor of to the last term in eq .finally , we remark that the expression for given by eq .( 10 ) represents an upper bound in that it does not account for the possibility that the output spectrum from a single element may exceed the transparency bandwidth , which could cause some light to be absorbed within the system after multiple reflections and not make it to the end of the waveguide .for simplicity , here we work only with the classical equations so that the intrinsic noise terms in the heisenberg - langevin equations can be ignored .we begin by transforming eqs .( [ eqsi : dadt ] ) and ( [ eqsi : dbdt ] ) to the fourier domain , 0 & = & ( i_k-)a + i_mb+i(a_r , in(z_j)+a_l , in(z_j)),[eqsi : dadt0 ] + 0 & = & ( i_k-)b+i_ma . to simplify the notation , we define operators at the boundaries of the unit cells ( immediately to the left of an optomechanical element ) given by , .it is also convenient to re - write the transfer matrix in the form m_om = ( cc 1- & - + & 1 + ) , [ eqsi : newmom ] with the parameter given by ( _ k ) = [ eqsi : beta ] .the transfer matrix describing propagation to the next unit cell can subsequently be diagonalized , , with the diagonal matrix given by d= ( cc e^ikd & 0 + 0 & e^-ikd ) .[eqsi : d ] physically , this diagonalization corresponds to finding the bloch wavevectors of the periodic system .the dispersion relation for the system can be readily obtained through the equation ( k(_k ) d ) = ( k d ) - i ( _ k ) ( k d)[eqsi : dispersion ]. writing in terms of , we arrive at .as described previously , the desirable operation regime of the system is such that the phase imparted in free propagation should be . 
for concreteness , we set here , satisfying this condition .for the frequencies of interest , which easily satisfy the condition and ignoring the intrinsic loss , the simple approximate dispersion formula ( k(_k ) d ) = - can be found .this dispersion relation yields two bandgaps , which extend from and , in the weakly coupled eit regime .we therefore have three branches in the band structure , with the narrow central branch having a width of .this branch has an optically tunable width and yields the slow - light propagation .the dispersive and lossy properties of the array can also be found by analyzing eq .( [ eqsi : dispersion ] ) perturbatively .expanding eq .( [ eqsi : dispersion ] ) as a power series in , we find k(_k ) = k_0+++ + o(_k^4),[eqsi : k ] which agrees with eq .( 7 ) in the main text . in our system ,the bloch functions are hybrid waves arising from the mixing of optical waveguide , optical cavity and mechanical cavity excitations .it is therefore of interest to calculate the hybrid or polaritonic properties of these waves , by studying the energy distribution of each bloch mode .the number of photons in the waveguide can be found by taking the sum of the left- and right - moving photons in a section of the device . over one unit cell, one obtains n _=( |c_j|^2 + hybrid bloch wave may be found by considering the symmetry transformation used to diagonalize the unit - cell transmission matrix . defining to be the amplitude of the bloch mode of interest, one finds and , while from the properties of the symmetry matrix , . from here we can deduce the number of excitations in the waveguide , the optical cavity and the mechanical cavity for a given bloch wave amplitude : n _ & = & |c_j|^2 , + n_o & = & |a|^2 + & = & |c_j + d_j|^2 + & = & | s_11 + s_21|^2 |c_j|^2,[eqsi : nofinal ] + n_m & = & |b|^2 + & = & |a|^2 .we then define the fractional occupation in the mechanical mode by ( with analogous definitions for the other components ) .these relations were used to plot the fractional occupation and colored band diagrams shown in the main text .the technicalities associated with pumping the optomechanical crystal array system are subtly distinct from those in its atomic system analogue .due to the periodic nature of the structure and its strongly coupled property ( ) , a bandgap in the waveguide will arise at the frequency of the `` tuning '' or pump cavities .this prevents these cavities from being resonantly pumped via light propagating in the waveguide .this problem may be circumvented by making the pump frequency off - resonant from ( but still at ) , and by for example changing the splitting to be less than the mechanical frequency .interestingly , the periodic nature of the system also allows one to suppress attenuation of the pump beam through the waveguide ( which would cause inhomogeneous driving of different elements along the array ) .here we calculate the waveguide input powers required to drive the pump cavities .we then find the effect of the cavity dissipation rate on the beam intensity , to provide estimates for the power drop - off as a function of distance propagated in the system .finally we note that there is a trade - off between required pump intensity and pump power drop - off . in other words , by tuning the pump beam to a frequency closer to the pump cavity resonance , the required input power is reduced , but the attenuation per unit cell is increased . 
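the dispersion relation above is easy to evaluate numerically before moving on to the pump-power estimates. writing the unit-cell matrix in the form given earlier, the off-diagonal parameter follows from the single-element coefficients as beta = -r/(1+r) (consistent with both forms of the matrix quoted above), and with k0*d = pi/2 the propagating bloch bands are the detunings for which |cos(k d)| <= 1. the sketch below (placeholder rates, intrinsic cavity loss neglected, weak driving) recovers the narrow central slow-light band of width ~ 4*omega_m^2/kappa_ex:

```python
import numpy as np

two_pi = 2 * np.pi
kappa_ex = two_pi * 500e6        # waveguide-induced decay (assumed), kappa_in neglected
gamma_m = two_pi * 100e3         # mechanical decay (assumed)
om = two_pi * 20e6               # weak optomechanical driving amplitude (assumed)

delta = np.linspace(-0.5, 0.5, 400001) * kappa_ex
r = -(kappa_ex / 2) / (-1j * delta + kappa_ex / 2
                       + om**2 / (-1j * delta + gamma_m / 2))
beta = -r / (1 + r)              # off-diagonal parameter of the unit-cell matrix
cos_kd = -1j * beta              # dispersion relation cos(k d) for k0*d = pi/2

prop = np.abs(cos_kd.real) <= 1.0          # propagating bands (loss is negligible here)
central = prop & (np.abs(delta) < om)      # isolate the band around the cavity resonance
width = np.ptp(delta[central])
print(f"central band width   = {width/two_pi/1e6:.2f} MHz")
print(f"4*Omega_m^2/kappa_ex = {4*om**2/kappa_ex/two_pi/1e6:.2f} MHz")
```

the numerically found width of the central polaritonic band agrees with the weak-driving estimate to within a few percent, and the two band gaps of half-width ~omega_m appear on either side of it.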
to calculate the waveguide input power , we start by considering the _ net _ photon flux in the right moving direction , _r = |c_j|^2 - |d_j|^2 . using the properties of the bloch transformation matrix and equation ( [ eqsi : nofinal ] ), this expression may written in terms of the number of photons in the optical cavity of interest ( ) , _ r = n_o , j , where _ 1(_k ) = [ eqsi : beta1 ] .the required input power for the system parameters studied in the main text is shown in figure [ fig : inputpower ] . to find the attenuation per unit cell , with , of this pump beam, we use a perturbative approach similar to that used to find the polaritonic band dispersion . by expanding the bloch - vector as and using equation ( [ eqsi : dispersion ] ) ,we find , and implying that . this approximate expression is shown along with the exact calculated values for the attenuation in figure [ fig : inputpower ] .for the theoretical demonstration of our slow - light scheme , we confine ourselves to a simplified model of an optomechanical crystal where only the two - dimensional maxwell equations for te waves and the equation of elasticity for in - plane deformations of a thick slab are taken into account .these equations approximate fairly accurately the qualitative characteristics of in - plane optical and mechanical waves in thin slabs , and become exact as the slab thickness is increased . in this way , many of the intricacies of the design of high- photonic crystal cavities ( which are treated elsewhere ) may be ignored , and the basic design principles can be demonstrated in a slightly simplified system .the two - dimensional optomechanical crystal ( 2domc ) system used here utilizes the `` snowflake '' design , which provides large simultaneous photonic and phononic bandgaps in frequency .here we choose to use optical wavelengths in the telecom band , _i.e. _ , corresponding to a free - space wavelength of . for this wavelength, we found that the crystal characterized by a lattice constant , snowflake radius , and width , shown in figure [ fig : bandstructure ] , should work well .we begin our design by focusing on the creation of a single optomechanical cavity on the 2domc , with one relevant optical and mechanical mode .this cavity is formed by creating a point defect , consisting of two adjacent removed holes ( a so - called `` l2 '' cavity ) .we calculate the optical and mechanical spectra of this cavity using comsol , a commercial fem package , and find a discrete set of confined modes . 
of these ,one optical and one mechanical mode were chosen , exhibiting the most promising value of the opto - mechanical coupling strength ( see below for calculation ) .these modes are shown in figure [ fig : bandstructure](b ) and ( c ) , and were found to have frequencies and , respectively .from here we move to designing the nearly - degenerate double optical cavity system with large cross - coupling rates .as two separate l2 cavities are brought close to one another , their interaction causes the formation of even and odd optical and mechanical super - modes with splittings in the optical and mechanical frequencies .this splitting may be tuned by changing the spacing between the cavities .we take the even and odd optical modes of this two - cavity system as our optical resonances at and .the optomechanical coupling arises from a shift in the optical frequency caused by a mechanical deformation .our hamiltonian for the single cavity system can then be written as = ( ) + _ m , where is the quantized displacement of the mechanical mode , and is the characteristic per - phonon displacement amplitude . the deformation - dependent frequency may be calculated to first order in using a variant of the feynman - hellman perturbation theory , the johnson perturbation theory , which has been used successfully in the past to model optomechanical crystal cavities .the hamiltonian is then given to first order by = _o + _ m + g ( + ) , where is the optical mode frequency in absence of deformation and g = .here , and are the optical mode electric field , optical mode displacement field and mechanical mode displacement field , respectively , is the thickness of the slab , and is the dielectric constant . these concepts can be extended to optically multi - mode systems , represented by the hamiltonian = _ i _ o , i + _ m + _ i , j g_i , j ( + ) , where now the cross - coupling rates can be calculated by the following expression : g_i , j = .we denote this expression for convenience as . for the modes of the l2 cavity shown in figures [ fig : bandstructure](b ) and ( c ) ,the optomechanical coupling was calculated to be for silicon .when two cavities are brought in the vicinity of each other , super - modes form as shown in figure [ fig : bandstructure](a ) .we denote the symmetric ( ) and antisymmetric ( ) combinations by and .these modes can be written in terms of the modes localized at cavity 1 and 2 , e_= q_= . by symmetry ,the only non - vanishing coupling term involving the ( antisymmetric mechanical ) mode is . assuming that the two cavities are sufficiently separated , we can approximate . for the super - modes of interest , . ) .( c ) the optical band structure exhibits a single band that passes through the resonance frequency of the optical mode .the waveguide acts as a single - mode optical waveguide with field pattern shown in ( d ) .the component of the guided optical mode of opposite symmetry is shown in ( e ) for completeness .[ fig : waveguide],width=642 ] a line defect on an optomechanical crystal acts as a waveguide for light . 
here , the line defects used consist of a removed row of holes , with the rows above and below shifted towards one another by a distance , such that the distance between the centers of the snowflakes across the line defect is ( see figure [ fig : waveguide](a ) ) .the waveguide was designed such that mechanically , it would have no bands resonant with the cavity frequency ( see figure [ fig : waveguide](b ) ) and would therefore have no effect on the mechanical factors .optically , it was designed have a single band crossing the cavity frequency ( see figure [ fig : waveguide](c ) ) and would therefore serve as the single - mode optical waveguide required by the proposal .the band structure of the mechanical waveguide was calculated using comsol , while for the optical simulations , mpb was used .( in arbitrary units ) , shows the leakage of photons out of the double - cavity system .this induces a loss rate on the optical modes , which was used to calculate the extrinsic coupling rate , plotted in ( b ) for various values of .( c ) plot of the mechanical quality factor due to thermoelastic damping , as a function of the ambient temperature .( d ) the time - harmonic component of the temperature field is plotted at various times during a mechanical oscillation period .[ fig : coupling_calcs],width=642 ] by bringing the optical waveguide near our cavity , the guided modes of the line - defect are evanescently coupled to the cavity mode , and a coupling between the two may be induced , as shown in figure [ fig : coupling_calcs](a ) .control over this coupling rate is achieved at a coarse level by changing the distance between the cavity and waveguide , _i.e. _ , the number of unit cells between them .we found a distance of 6 rows to be sufficient in placing our coupling rate in a desirable range . at this point, a fine tuning of the coupling rate may be accomplished by adjustment of the waveguide width parameter , described previously .the achievable values of are plotted against in figure [ fig : coupling_calcs](b ) .for the final design , is used . to simulate this coupling rate, we performed finite - element simulations using comsol where we placed the waveguide near our cavity , and placed absorbing boundaries at the ends of the waveguide away from the cavity .the resulting time - averaged poynting vector is plotted in figure [ fig : coupling_calcs](a ) , showing how the power flows out of the system .the achievable storage times of our system are determined by the lifetimes of the mechanical resonances . since we use a phononic crystal ,all clamping losses have been eliminated .however , other fundamental sources of mechanical dissipation remain , and here we provide estimates for one of these , the component due to thermoelastic damping ( ted ) .using the comsol finite - element solver , we solved the coupled thermal and mechanical equations for this system . in these simulations the change in the thermal conductivity and heat capacity of silicon with temperature were taken into account .the ted - limited quality factors , are plotted in figure [ fig : coupling_calcs](d ) . in these simulations, we see that for the mode simulated , surpasses at bath temperatures of .to illustrate some representative results of these simulations , we have plotted the change in temperature field from the ambient temperature versus the phase of the mechanical oscillation in figure [ fig : coupling_calcs](d ) . 
at , there are variations in temperature despite the displacement field being uniformly at this time . this shows that at these frequencies , the temperature does not adiabatically follow the displacement . as mentioned in the main text , in a realistic setting the optomechanical driving amplitude itself will be coupled to the bath temperature through absorption of optical pump photons in the tuning cavities . this optical pump heating of the structure is important in estimating the practical limits of the optomechanical system for quantum applications where thermal noise impacts system performance . as a realistic model for the bath temperature in our proposed silicon optomechanical crystal array , we take , where is the base temperature and is a temperature coefficient that describes the temperature rise in each cavity per stored cavity photon due to optical absorption . our estimate of for a thin - film silicon photonic crystal structure is as follows . the absorbed power for photons stored in a cavity is given simply by the optical ( intrinsic ) linewidth of the cavity . if we assume _ all _ of this power is being converted to heat , the change of temperature is , where is the effective thermal resistance of the silicon structure . there are a number of sources in the literature for in relevant photonic crystal geometries . we choose here to use the value for a two - dimensional crystal system in silicon , k / w , which yields a per photon temperature rise of assuming an intrinsic loss rate of rad / s ( ) .
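a minimal numerical sketch of this heating model follows; every value below is an assumed placeholder of roughly the right order of magnitude, not a figure quoted in the text.

```python
import numpy as np

# Per-photon heating model T_b = T0 + c_p * n_cav, with c_p = hbar * omega_o * kappa_i * R_th.
# All parameter values here are assumed placeholders, not the figures quoted in the text.

hbar = 1.054571817e-34            # J s
omega_o = 2 * np.pi * 195e12      # optical frequency (rad/s), assumed ~1550 nm band
kappa_i = 2 * np.pi * 1e9         # intrinsic optical loss rate (rad/s), assumed
R_th = 2.5e4                      # effective thermal resistance (K/W), assumed

def kappa_from_Q(omega, Q):
    """Convert a simulated quality factor into an energy decay rate kappa = omega / Q."""
    return omega / Q

c_p = hbar * omega_o * kappa_i * R_th    # temperature rise per stored photon (K)

def bath_temperature(T0, n_cav):
    """Bath temperature as a function of base temperature and stored photon number."""
    return T0 + c_p * n_cav

print(f"c_p = {c_p:.2e} K/photon, T_b(100 photons from 100 mK) = {bath_temperature(0.1, 100):.4f} K")
```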
one of the major advances needed to realize all - optical information processing of light is the ability to delay or coherently store and retrieve optical information in a rapidly tunable manner . in the classical domain , this optical buffering is expected to be a key ingredient to managing the flow of information over complex optical networks . such a system also has profound implications for quantum information processing , serving as a long - term memory that can store the full quantum information contained in an optical pulse . here we suggest a novel approach to light storage involving an optical waveguide coupled to an optomechanical crystal array , where light in the waveguide can be dynamically and coherently transferred into long - lived mechanical vibrations of the array . under realistic conditions , this system is capable of achieving large bandwidths and storage / delay times in a compact , on - chip platform .
with growing penetration of renewable energy sources in modern power grids , _ demand side management _ has been gaining attention as a means of achieving better balancing between supply and demand ( , , ) .indeed , it appears that the higher intermittency and lack of dispatchability associated with increased dependence on renewable energy sources can be taken care of more effectively by electrical loads than by conventional generators , which are typically not designed for fast up- and down regulation ( , ) . various technologies are currently being considered in the context of demand side management ; coordinated charging of batteries , e.g. , in electric vehicles ( ) , deliberate scheduling of loads with flexible deadlines ( ) as well as allowing local consumers with slow dynamics ( large time constants ) to store more or less energy at convenient times and thereby adjusting the momentary consumption ( e.g. , ) , among others . in particular , so - called _ thermostatically controlled loads _ ( tcls ) , such as deep freezers , refrigerators , local heat pumps etc . , are showing great potential in this context , since they account for a large volume of consumption in most countries with significant renewable penetration for instance , as of 2009 , about 87 percent of all us homes were equipped with air conditioning .thus , at least in theory , manipulating the operating conditions of large populations of units slightly while avoiding discomfort to end users appears to be attractive , given that it can be achieved simply by broadcasting setpoint changes .control strategies based on this principle were considered in , , and , among others . in these strategies ,subtle changes in the thermostat setpoint temperature ( less than a degree celcius ) are transmitted to all the participating consumers simultaneously .such small variations in the thermostat setpoints are expected to remain almost unnoticed by the customers , but if a sufficiently large number of units are shifted in the same direction , the overall power consumption can be shifted quickly by significant amounts , allowing to compensate for power production fluctuations in the grid .there is an intrinsic problem with this approach , however .when the setpoint is changed concurrently in a large number of tcls , their on / off cycles tend to become _ synchronized _ , which leads to large unwanted fluctuations ( damped oscillations ) in the overall power consumption , .if the entire population of tcls consists of identical units ( _ homogeneous _ ) , it is possible to counteract the fluctuations by means of a centralized control law . in the much more realistic case of non - identical ( _ heterogeneous _ ) units , on the other hand , the fluctuations are far harder to remove using centralized control strategies . to the authors knowledge ,the only solution strategy presented in the literature so far is , where a strategy in which the individual tcls deliberately slow down their transition in order to avoid synchronization was proposed . 
in this paper ,we propose a novel , decentralized algorithm for avoiding synchronization without having to actively communicate between units .the algorithm proposed here is fundamentally different from the one presented in ) in that it first adjusts the temperature bands in which the tcls operate to achieve early ( randomized ) desynchronization , and then makes use of a technique inspired by contention - based media access protocols to adaptively adjust the on / off cycles to achieve as little synchronization among units as possible .we present a formal proof of convergence of homogeneous populations to this desynchronized status , as well as simulations that indicate that the algorithm is able to effectively dampen power consumption oscillations for both homogeneous and heterogeneous populations of thermostatically controlled loads .the outline of the rest of the paper is as follows .section [ sec : problem ] first presents the simplified tcl model we shall employ , along with the synchronization issue .section [ sec : algorithm ] then presents the proposed algorithm along with the aforementioned convergence result . section [ sec : example ] illustrates the feasibility of the approach through two simulation examples .finally , section [ sec : disc ] offers some concluding remarks .we consider a population of individual temperature controlled loads .the tcls are modeled in a generic manner , which is deliberately kept as simple as possible . nonetheless ,as will be illustrated with a few simulation examples , the behavior of a large population of simple units can be fairly complex .the individual tcls are modeled as follows .let the internal and ambient temperatures of the volumes ( living spaces , refrigerators , cold storages etc . ) affected by the action of the heating / cooling hardware of the consumer be denoted and , respectively , and assume that the hardware is purely on / off - regulated . a simple model for the internal temperaturecan then be formulated as , , : for , where is the thermal capacitance of the consumer , is the corresponding thermal resistance and is the ( constant ) heating / cooling power supplied by the hardware when switched on . is a binary switching variable that determines whether or not the hardware is turned on ; basically , it switches status whenever the internal temperature encounters the limits of a pre - set temperature span \subset { { \mathbb{r}}} ] , and their on / off cycles are thus desynchronized , yielding a fairly smooth power consumption trajectory . at time h , all the tcls are subjected to a common step in the setpoint of 0.5 . as can be seen , the overall power consumption drops by an amount proportional to the size of the step , but significant oscillations appear in the power consumption due to synchronization of the individual units on / off cycles .figure [ fig:10000tcls ] illustrates the behavior of a population of tcls under the same simulation circumstances , but with chosen from a normal distribution with mean 5 and spread 0.5 kwh ( again , only the temperature profiles of 25 tcls are shown in the figure ) .a step of 0.5 in the setpoint is broadcast to all the tcls at time h. notice how the power consumption is quite smooth before the step because the on / off cycles of the tcls are desynchronized , whereas immediately after the step the tcls become synchronized and the power consumption again exhibits large fluctuations , which in this case die out slowly .the oscillations die out faster if the parameters vary more ( e.g. 
, if is chosen from a distribution with larger spread ) ; furthermore , the amplitude is relatively larger for larger populations of units .however , qualitatively the behavior remains the same .the oscillations are known as _ parasitic oscillations _ and are hard to remove via centralized control signals ( ) . in order to reduce the aforementioned oscillations in power consumption after the setpoint change broadcasts ,it is clearly necessary to deliberately desynchronize the tcls without interrupting their operation .however , a centralized algorithm for doing so is undesirable ; it is considered infeasible to keep close track of the internal states ( temperatures , set points etc . )of all the units in a centralized manner , since doing so would require regular measurement feedback from all the devices , which would give rise to a very heavy communication and computational load on the system . thus , we look for a low - complexity , _ decentralized _ algorithm that can be implemented locally in each tcl , and which satisfies the following requirements : 1 . the algorithm may only use local information ; this means that the tcls may not communicate with each other , and labels identifying each individual tcl may not be pre - assigned 2 .communication with the power supply utility must be limited to broadcast from the utility ; no communication originating from the tcls is allowed 3 . the general operation of each individual tcl may not be altered in a manner that is detrimental to the user comfort ; for example , the unit may not be deliberately kept turned off or on long enough for the temperature to leave the interval ] , causing the dynamics of unit momentarily to be governed by the dynamics this operation rapidly induces a large degree of desynchronization among the population of tcls without violating requirement r3 , as illustrated in figure [ fig : initialdesync ] .however , randomly choosing the temperature bounds may yield very tight bounds , which in turn may give rise to constant rapid on / off - switching .this is generally not desirable , so we drive the bounds back to their original settings by introducing the dynamics for some appropriately chosen , as also illustrated in the figure .( 0,0)(80,60 ) ( 10,5)(75,5 ) ( 8,0 ) ( 3 , 3)[r] ( 75,4 ) ( 3 , 5)[r] ( 20,4)(20,6 ) ( 19,0 ) ( 3 , 5)[r] ( 10,5)(10,55 ) ( 5,50 ) ( 3 , 5)[r] ( 10,25)(20,25)(20,35)(70,35 ) ( 2,22)(3 , 3)[r] ( 10,10)(20,10)(20,20)(70,20 ) ( 2,7)(3 , 3)[r] ( 2,37 ) ( 3 , 5)[r] ( 10,40)(20,40)(20,50)(70,50 ) ( 35,16)(3 , 3)[r] ( 35,50 ) ( 3 , 5)[r] ( 20,37)(42,45)(70,50 ) ( 20,33)(42,25)(70,20 ) ( 20,42)(42,48)(70,50 ) ( 20,28)(42,22)(70,20 ) ( 20,45)(42,49)(70,50 ) ( 20,25)(42,21)(70,20 ) ( 10,30)(14,35)(19,40 ) ( 19,40)(21,36.3)(23,32 ) ( 23,32)(27,37)(32,42 ) ( 32,42)(37,34)(45,24.5 ) ( 45,24.5)(60,42)(70,49.5 ) ( 72,48 ) ( 3 , 3)[r] ( 10,30)(14,35)(19,40 ) ( 19,40)(21,36.3)(26,26 ) ( 26,26)(36,39)(47,48.5 ) ( 47,48.5)(58,30)(66,20.5 ) ( 66,20.5)(68,23)(70,25 ) ( 72,24 ) ( 3 , 3)[r] ( 10,30)(14,35)(19,40 ) ( 19,40)(25,28)(28,23 ) ( 28,23)(40,38)(52,49.5 ) ( 52,49.5)(60,35)(70,22 ) ( 72,20 ) ( 3 , 3)[r] now , even though most of the synchronization among the population of tcls is likely to have disappeared with the random shrinking of the operating interval , it may still happen that some tcls remain synchronized , or close to synchronized .the second component of the proposed algorithm is designed to deal with any ` left - over ' synchronization among tcls that are close to being identical . 
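both ingredients described so far (the hysteretic tcl model of the previous section and the randomized, relaxing temperature bounds) can be sketched in a few lines of python, as below. the parameter values and the exponential law used to drive the bounds back to the nominal band are assumptions for illustration; the exact relaxation dynamics and figures are elided in the text above.

```python
import numpy as np

# Illustrative population of hysteretic cooling TCLs with the randomized-band
# desynchronization step described above.  Units: degC, kWh/degC, degC/kW, kW, hours.
# Parameter values and the exponential bound-relaxation law are assumptions.

rng = np.random.default_rng(0)
N, dt, hours = 500, 0.002, 6.0
T_a, C, R, P = 20.0, 1.0, 2.0, 14.0          # ambient temp, capacitance, resistance, power
T_min, T_max = 4.5, 5.5                      # nominal deadband
lam = 1.0                                    # relaxation rate of the temporary bounds (1/h)

T = rng.uniform(T_min, T_max, N)             # internal temperatures
m = rng.integers(0, 2, N)                    # on/off states

# randomized desynchronization step: draw temporary per-unit bounds inside the nominal band
a = rng.uniform(T_min, T_max, (N, 2))
lo, hi = a.min(axis=1), a.max(axis=1)

power = []
for k in range(int(hours / dt)):
    # first-order thermal dynamics: C dT/dt = (T_a - T)/R - m*P  (cooling device)
    T += dt * ((T_a - T) / R - m * P) / C
    m = np.where(T >= hi, 1, np.where(T <= lo, 0, m))   # hysteresis on the current bounds
    # temporary bounds relax back to the nominal deadband (assumed exponential law)
    lo += dt * lam * (T_min - lo)
    hi += dt * lam * (T_max - hi)
    power.append(P * m.mean())

print("aggregate duty cycle over the run:", np.mean(power) / P)
```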
here , we make use of the fact that the behavior of the tcls is periodic in steady state operation .let the period be denoted .essentially , this part of the desynchronization algorithm repeatedly forces the devices to switch status at some point within the interval ] denotes the time of the enforced switch within the following period , modulo .the enforced switch timings are initially distributed randomly ; the tcl which randomly obtains the smallest value will then not be able detect a switch prior to its own .when it recognizes this fact , it sets its own switch timing to and maintains it at that value ( modulo ) ; it is thus not necessary to assign a global label to the first and/or last tcl a priori .let denote the time of the forced transition , modulo , at iteration ; that is , .we gather these switch timings in a vector ^{n} ] to denote the operation of picking a random value from a uniform distribution over the interval $ ] .armed with algorithm 1 , we re - visit the simulation examples in section [ sec : problem ] . figure [ fig : ex1 ] shows the temperature curves of a subset of a population of 10,000 identical tcls , simulated under the same conditions as above , i.e. , / kw , kwh , kw , , , and .the top subplot shows the behavior without desynchronization , while the bottom plot shows the behavior with algorithm 1 applied .the temperature curves clearly look more ` jumbled ' after the step , which indicates a greater degree of desynchronization , and the operation intervals can be seen to return to their original size after in the space of a few hours .h. top : no desynchronization ; bottom : with synchronization ] figure [ fig : ex2 ] shows the corresponding power curves .the effect of the desynchronization is very obvious almost immediately after the step , as the amplitude of already the first peak is less than the case without desynchronization , and most of the oscillations are suppressed after about one period .we also notice that there are some oscillations remaining , which will gradually die out due to the long - term behavior of the adaptive algorithm .h. full line : with desynchronization ; dashed : without synchronization ] figure [ fig : ex3 ] shows a simulation with a heterogenous population of tcls . here , as opposed to figure [ fig : ex2 ] , the parasitic oscillations die out in the case without desynchronization as well .however , algorithm 1 clearly speeds up the process , however .note that the oscillations that can be seen before the setpoint change is due to the initialization of the tcl on / off states , which does not match exactly with the steady - state distribution ( the desynchronization is not active before h ) .note also that , by changing the setpoint of all the tcls in the simulation by half a degree , it was possible to reduce the mean total power consumption by approximately 2 mw after the step .h. 
full line : with desynchronization ; dashed : without synchronization ]this paper presented a novel algorithm for counter - acting unwanted oscillations caused by synchronization of populations of temperature controlled loads , which requires neither central management of the individual units nor communication between units .the algorithm comprises two main part , a ` fast ' randomization of the temperature bands within which the tcls operate , and a ` slow ' adaptive adjustment of enforced switch timings , which maximizes desynchronization over time .we presented a formal proof of convergence of homogeneous populations to the desynchronized status , as well as simulations that indicate that the algorithm is able to effectively dampen power consumption oscillations for both homogeneous and heterogeneous populations .however , the simulations also indicate that even with active desynchronization , it is hard to avoid large peaks right after a common broadcast setpoint step .thus , high - level model - based control along the lines of the work presented in , will probably be preferable to simple steps .callaway . tapping the energy storage potential in electric loads to deliver load following and regulation , with application to wind energy . _ energy conversion and management _ , 500 ( 9):0 13891400 , 2009 .malhame and c.y .chong . electric load model synthesis by diffusion approximation of a higher - order hybrid - state stochastic system . _ ieee transactions on automatic control _ , 300 ( 9):0 854860 , 1985 . k. mets , t. verschueren , w. haerick , c. develder , and f. de turck . optimizing smart energy control strategies for plug - in hybrid electric vehicle charging . in _network operations and management symposium workshops _, pages 293299 .ieee , 2010 .a. mohsenian - rad , v.w.s .wong , j. jatskevich , r. schober , and a. leon - garcia .autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid ._ ieee transactions on smart grid _ , 10 ( 3):0320331 , 2010 . c. perfumo , e. kofman , j.h .braslavsky , and j.k .load management : model - based control of aggregate power for populations of thermostatically controlled loads ._ energy conversion and management _ , 550 ( 1):0 3648 , 2012 .m. petersen , j. bendtsen , and j. stoustrup .optimal dispatch strategy for the agile virtual power plant imbalance compensation problem . in _ proc . of american control conference _ ,montreal , canada , june 2012 .
this paper considers demand side management in smart power grid systems containing significant numbers of thermostatically controlled loads such as air conditioning systems , heat pumps , etc . recent studies have shown that the overall power consumption of such systems can be regulated up and down centrally by broadcasting small setpoint change commands without significantly impacting consumer comfort . however , sudden simultaneous setpoint changes induce undesirable power consumption oscillations due to sudden synchronization of the on / off cycles of the individual units . in this paper , we present a novel algorithm for counter - acting these unwanted oscillations , which requires neither central management of the individual units nor communication between units . we present a formal proof of convergence of homogeneous populations to desynchronized status , as well as simulations that indicate that the algorithm is able to effectively dampen power consumption oscillations for both homogeneous and heterogeneous populations of thermostatically controlled loads .
one of the important pursuits in the field of quantum computation is the determination of computationally efficient ways to synthesize any desired unitary gate from fundamental quantum logic gates .of special interest are the bounds on the number of one and two qubit gates required to perform a desired unitary operation ( termed the gate complexity of the unitary ) .this may be considered a measure of how efficiently an operation may be implemented using fundamental gates .an approach linking efficient quantum circuit design to the problem of finding a least path - length trajectory on a manifold was taken up in ; the path length was related to minimizing a cost function to an associated control problem ( with _ specific _ riemannian metrics as the cost function ) .this approach was later generalized in to use a more general class of riemannian metrics as cost functions to obtain bounds on the complexity . in authors use pontryagins maximum principle from optimal control theory to obtain a minimum time implementation of quantum algorithms .alternative techniques using lie group decomposition methods to obtain the optimal sequence of gates to synthesize a unitary were developed in . in this articlewe use the method of dynamic programming from the theory of optimal control to determine the sequence of one and two qubit gates which implement a desired unitary .we solve the problem using one and two qubit hamiltonians as the control vector fields , in contrast to the approach in which used the concept of preferred hamiltonians .in addition we demonstrate numerical results on an example problem in and obtain controls which may be then be used to split any gate into a product of fundamental unitaries .the organization of this article is as follows . in section [ sec : preliminaries ] we provide a review of the definitions of gate complexity , approximate gate complexity and the control problem associated with the bounds on these quantities .we describe , in section [ sec : dynproggatecomplexity ] how this control problem can be solved using dynamic programming techniques from mathematical control theory .this is followed by a demonstration , in section [ sec : exampleproblem ] , of the theory developed to an example problem on ; wherein simulation results and sample optimal trajectories are obtained .in this section we recall the notion of gate complexity and its relation to the cost function for an associated control problem as in . as outlined in ( * ? ? ? * chapter 4 ) , in quantum computation each desired quantum algorithm may be defined by a sequence of unitary operators .each of these is an element of the lie group and represents the action of the algorithm on an -qubit input .the gate complexity of a unitary is the minimal number of one and two qubit gates required to synthesize exactly , without help from ancilla qubits .the complexity of the algorithm is a measure of the scaling of the amount of basic resources required to synthesize the algorithm , with respect to input size . in practicehowever , computations need not be exact . to perform a desired computation , it may suffice to synthesize a unitary with accuracy , i.e . 
here denotes the standard matrix norm in a particular representation of the group .this notion gives rise to the definition of the approximate gate complexity as the minimal number of one and two qubit gates required to synthesize up to accuracy .we outline below a control problem on the lie group such that the cost function associated with it provides upper and lower bounds on the gate complexity problem .the system evolution for the control problem occurs on with associated lie algebra .note that in this article the definition of is taken to be the collection of traceless hermitian matrices ( which differs from the mathematicians convention by a factor of ) .the system equation contains a set of right invariant vector fields which correspond to a set of one and two qubit hamiltonians . the lie algebra generated by the set is assumed to be .this assumption along with the fact that is compact imply that the system is controllable (and hence the minimum time to move between any two points on is finite ) .the system dynamics for the gate design problem is described as follows : with the initial condition and with a bound on the norm of the available control vector fields using any suitable norm on their matrix representation .the control is an element of the class of piecewise continuous functions with their range belonging to a compact subset of the real -dimensional euclidian space ( ) .we denote this class of functions by .hence .given a control signal and an initial unitary the solution to eq at time is denoted by . in addition , by a simple time reversal argument it can be seen that the problem of obtaining a desired unitary gate starting from the identity can be reframed as a problem of reaching the identity element starting at .note the difference between the system described herein and that outlined in the control problem in . in the present case ,the control hamiltonians used are the ones which generate the one and two qubit unitaries ; therefore the concept of preferred vs allowed hamiltonians is not utilized here .the controls used may in fact be any bracket - generating subset of the vector fields which generate one and two qubit unitaries .we now define some terms to be used in this article : for and time to reach the identity using control : : + the infimum in eq is infinite if the terminal constraint is not attained .cost function : : given the dynamics in eq with control where is continuous and has a finite positive maximum and minimum i.e .note that as long as has a minimum greater than zero , the bounds on it can be recast in the form .optimal cost function : : + this optimal cost function will be used to provide bounds on the gate complexity .hence the control problem is to find the values of in order to optimize the cost function .the boundedness of the time taken to achieve the desired objective ( due to controllability ) , together with the boundedness of , implies that the cost function is bounded .in addition , the control problem may also be generalized to systems evolving on any compact connected lie group with the cost being dependent on both and .we now recall results on the relation between the cost of the associated control problem and both the upper bound on the approximate gate complexity and the lower bound on the gate complexity .we define \},\end{aligned}\ ] ] where is the set of one and two qubit unitary gates . 
hence the total time to construct from or vice - versais at - most .therefore for any element of we have from we have that a given unitary in can be approximated to using one and two qubit unitary gates .hence the upper bound on the approximate gate complexity satisfies this motivates the solution to certain related optimal control problems in order to obtain bounds on the complexity of related quantum algorithms .in addition , the solutions to such optimal control problems help determine the sequence of one and two qubit gates used to generate the desired unitary as described in the following section .in this section we introduce the tools of dynamic programming which have had widespread application in control theory .we then apply this theory to solve the control problem associated with determination of the bounds on gate complexity and explain how to use the solution to the control problem to obtain the sequence of gates required to reach any given unitary . the dynamic programming principle states that :an optimal policy has the property that , whatever the initial state and optimal first decision may be , the remaining decisions constitute an optimal policy with regard to the state resulting from the first decision . the use of this principle involves recursively solving a problem by breaking it down into several sub - problems followed by determining the optimal strategy for each of those sub - problems .for example , consider the following problem : [ example : dpp ] let the state of a system be described by a vector in some vector space . moving from one point to anotheris done by exerting a control chosen from a compact control set ( we assume full controllability of the system using controls from this set ) . any control exerted starting at a point in this space leads to a point denoted as , while incurring a cost .this cost of exerting the control may depend on the point at which the control starts being applied as well as on the control signal itself .the objective is to reach from a point to a point in that vector space while incurring as low a cost as possible .let the cost of reaching from any point using control be denoted by .the cost at any point is denoted by .the dynamic programming principle implies : also from the definition of we have that for any , there exists a control s.t hence, in the limit we have that : equations and imply the dynamic programming equation solving a control problem using this principle involves setting up and solving a recursion equation as above .there are several references which provide a detailed and rigorous introduction to this theory viz .we now apply this theory to obtain bounds on the gate complexity . by the procedure described in example ( [ example : dpp ] ) in the previous sub - section , we have that the dynamic programming equation for the optimal cost function in eq is for all initial points . here and is the indicator function of the set taking on the value of inside the set and zero outside it .thus , is if .now we describe a formal derivation to obtain the differential version of the dynamic programming equation(dpe ) . 
for sufficiently small have from eq that : now transposing to the right hand side , dividing by and taking the limit as we obtain : \right\}\\ 0 = & \mathop { \sup } \limits_{v\,\in\,\mathbf{v } } \left\{-\ell(v ) - dc(u_0)\left[-i\,\{\sum_{k=1}^m v_k(t ) h_k\ } x\right]\right\}.\end{aligned}\ ] ] in the equation above denotes the derivative of the function at a point on the lie group .hence the function ( eq ) satisfies \right\}=0 , \label{eq : hjbgeneralresult}\end{aligned}\ ] ] termed the hamilton - jacobi - bellman ( hjb ) equation . in principlethis solution can be used to obtain bounds on the gate complexity as indicated in eqns and .assuming regularity conditions , the optimal control policy is generated by the synthesis equations given below ( * ? ? ?* section 1.5 ) . is optimal for an initial state if and only if for all ( almost everywhere ) , where -\ell(v)\right\}.\label{eq : generalcontrolsynthesis}\end{aligned}\ ] ] in these expressions we define to be the solution to the differential equation eq at time with control history .we numerically synthesize the optimal controls using techniques such as in ( * ? ? ?* chapter 3 ) where the solutions to the discretized version tend towards the solution of the original continuous description .the solution to the hjb gives the control sequence to be applied to reach the identity element .this is the crucial step in the control synthesis . from the baker - campbell - hausdorff formula we know that where and are any hermitian operators .this implies that in a one qubit system , a unitary generated by any element of the lie algebra of a one qubit system can be approximated as closely as desired by the product of unitaries generated by flowing along the available one and two qubit control hamiltonians ( given the bracket generating assumption mentioned previously ) .we now recall two statements regarding the universality of gates and gate synthesis using a product of two level unitaries below . 1 . a single qubit and a c - not gate are universal i.e produce any two level unitary ( * ? ? ?* section 4.5.2 ) .an arbitrary unitary matrix on a -dimensional hilbert space can be written exactly as a product of two level unitaries ( * ? ? ?* section 4.5.1 ) .these statements together with eq indicate that once we have a control vector from the solution of the hjb equation , it is possible to synthesize a one and two qubit gate sequence to approximate a desired unitary to as good an accuracy as required .we now use the theory introduced to consider an example on the special unitary group .we wish to construct any element of using the available hamiltonians and . the system dynamics on is given by : where and in this case .this cost function in effect measures the distance along the manifold to generate the desired unitary . due to the fact that cost function does not depend on the magnitude of the control signal applied, the problem essentially involves choosing the direction to flow along ( with maximum magnitude ) , at each point on the manifold in order to reach the destination in the smallest possible time . 
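returning for a moment to the baker-campbell-hausdorff remark above, the claim that a flow along a sum of control hamiltonians can be approximated by alternating short flows along the individual hamiltonians is easy to check numerically; the sketch below uses pauli matrices as stand-in generators rather than the article's concrete control set.

```python
import numpy as np
from scipy.linalg import expm

# Lie-Trotter / BCH illustration: exp(-i(A+B)t) is approximated by n alternating
# short flows exp(-iAt/n) exp(-iBt/n).  Pauli matrices stand in for the one- and
# two-qubit control Hamiltonians of the text.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def trotter(A, B, t, n):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    return np.linalg.matrix_power(step, n)

target = expm(-1j * (X + Z) * 1.0)
for n in (1, 10, 100):
    err = np.linalg.norm(target - trotter(X, Z, 1.0, n), ord=2)
    print(f"n = {n:4d}   ||error||_2 = {err:.2e}")   # error shrinks roughly like 1/n
```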
hence the direction ( and thus the path )is chosen in order to minimize the the distancealong the manifold .thus this minimum time control problem is related to the original gate complexity problem .the minimum time problem in quantum mechanics has also attracted interest in other articles .the hjb for the cost function is : \right\ } = 0\label{eq : hjbexamplesu2}\end{aligned}\ ] ] where .the optimal control is chosen according to the eq .now , in order to obtain the numerical solution to this problem we proceed as follows . instead of using the value function directly, it is advantageous to use the monotone transformation ( kruskov transform ) which leads to the following hjb equation ( using ( * ? ? ?* proposition 2.5 ) ) : with the hamiltonian term being the same as in eq .the function can be interpreted as a discounted minimum time function for the system in eq .therefore from the dynamic programming principle would satisfy this normalization ( discounting ) is useful for better numerical convergence and is also used in the uniqueness proofs of the solutions to the dynamic programming equations . to obtain a numerical solution to the dynamic programming problem , we parameterize points in using a mapping of the form from the euclidian space .note that the parametrization is not unique , since multiple points in the three dimensional euclidian space map to the same point in .note that since hjb equation in this case is linear in , the optimal lies on the boundary of the compact set at ( almost ) every time instant .this simplification reflects the results from where determining the geodesic ( which are paths of constant magnitude of the velocity ) involves choosing an optimal direction along which to flow .the method of discretization of eqns , which are used to obtain the simulation results , will be described later in this article .figure [ fig : timecomparisonsu2ckt ] indicates the slices along a quadrent of the co - ordinate axes of the actual minimum time function ( which corresponds via eq to the normalized minimum function , obtained by solving the hjb eq ) .the figure is presented as a gray - scale image in a three dimensional grid .the axes correspond to the three parameters used for the representation of as described above .a lighter shading indicates a larger value of the minimum time function at a point , while a darker shading implies a smaller time to reach the identity element when starting from that point .time optimal trajectories for this example are indicated in figure [ fig : mintimepathexamplesu2 ] .note that the non - uniqueness of the representation leads to having to carefully interpret the paths when they are shown in flat space .observe that since there is no direct vector field to control along , the path to the identity starting from a point along is not a straight line unlike in the case of the other two axes ( and ) .the discretization of this system for obtaining numerical solutions to the hjb equation ( [ eq : hjbdiscounted : su2example ] ) is carried out using the finite difference procedure in ( * ? ? ?* section 6.5 ) . 
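before spelling out the discretized equations, the following toy value iteration illustrates the kind of semi-lagrangian fixed-point scheme involved: a kruzkov-transformed minimum-time function is iterated to convergence on a grid, minimizing over a finite set of control directions. the planar state space, unit-speed dynamics and disc target below are deliberately simple stand-ins, not the parameterized group of the example.

```python
import numpy as np

# Toy semi-Lagrangian value iteration for a Kruzkov-transformed minimum-time problem.
# State space: the square [-1,1]^2; dynamics: unit-speed motion along one of 8 headings;
# target: a small disc at the origin.  These are stand-ins, not the SU(2) parameterization.

h = 0.05                                    # time step (also used as the interpolation step)
xs = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(xs, xs, indexing="ij")
target = X**2 + Y**2 < 0.05**2

dirs = [(np.cos(a), np.sin(a)) for a in np.linspace(0.0, 2 * np.pi, 8, endpoint=False)]
W = np.where(target, 0.0, 1.0)              # W = 1 - exp(-T_min); 1 <=> "not yet reached"

def interp(W, px, py):
    """Bilinear interpolation of W at the points (px, py), clipped to the grid."""
    step = xs[1] - xs[0]
    ix = np.clip((px - xs[0]) / step, 0.0, len(xs) - 1.001)
    iy = np.clip((py - xs[0]) / step, 0.0, len(xs) - 1.001)
    i0, j0 = ix.astype(int), iy.astype(int)
    fx, fy = ix - i0, iy - j0
    return ((1 - fx) * (1 - fy) * W[i0, j0] + fx * (1 - fy) * W[i0 + 1, j0]
            + (1 - fx) * fy * W[i0, j0 + 1] + fx * fy * W[i0 + 1, j0 + 1])

for it in range(500):                       # fixed-point (value) iteration
    best = np.full_like(W, np.inf)
    for dx, dy in dirs:
        cand = (1 - np.exp(-h)) + np.exp(-h) * interp(W, X + h * dx, Y + h * dy)
        best = np.minimum(best, cand)
    W_new = np.where(target, 0.0, best)
    if np.max(np.abs(W_new - W)) < 1e-6:
        break
    W = W_new

T_min_est = -np.log(1.0 - np.minimum(W, 1.0 - 1e-12))   # undo the Kruzkov transform
print(f"converged after {it} sweeps; time-to-target from (0, 1) ~ {T_min_est[20, 40]:.2f}")
```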
in three dimensional euclidian space with a grid spacing of the space and basis vectors ,the value iteration equation ( which is the iteration of the cost function , say ) is given by : where here are the th components of the vector valued function .note that the optimal control for this system is a specific case of eq , where the possible values of the spatial co - ordinates are the locations of the grid points which in turn depend on the mesh generated for the discretization . these discretized equations and the controls resulting therefrom are used to obtain the simulation results indicated in the figures in this article .once the controls are determined , we can generate the one and two qubit unitaries which efficiently approximate this control trajectory as explained in section [ sec : obtaininggateimplementations ] .in this article we have described the use of the dynamic programming method to solve the efficient gate synthesis problem and have demonstrated a proof of principle of this technique by obtaining a complete solution to an example problem of a single qubit .a comparison between the method introduced and algebraic decomposition based approaches ( such as applications of the methods in ) , is shown in ; wherein it is demonstrated that the results obtained by a decomposition based method agree well , to within the error bounds of the discretization , with those resulting from the dynamic programming based control method .the methods in the present article are sufficiently general to be able to be used with various cost functions such as the ones in as well those used in geometric approaches to the problem as in .the simulations in this work are based on theoretical results which are quite involved .a rigorous and complete development of the proofs of the foundations of this article will be deferred to a future publication .the numerical procedures outlined herein generalize to higher dimensional cases with the crucial limiting factor being the time taken and storage requirements for these computations ( which increases dramatically with the dimension of the system ) .the treatment of problems of direct interest to gate complexity will require an analysis of unitaries on three or more qubits .owing to the curse of dimensionality , further work is required to develop computational methods of greater efficiency in order to use the dynamic programming technique to investigate these problems of practical interest .t. schulte - herbrggen , a. sprl , n. khaneja , and sj glaser .optimal control - based efficient synthesis of building blocks of quantum algorithms : a perspective from network complexity towards time complexity ., 72(4):42331 , 2005 .
the relationship between efficient quantum gate synthesis and control theory has been a topic of interest in the quantum control literature . motivated by this work , we describe in the present article how the dynamic programming technique from optimal control may be used for the optimal synthesis of quantum circuits . we demonstrate simulation results on an example system on , to obtain plots related to the gate complexity and sample paths for different logic gates .
systems that generate texts from non - linguistic data are known as data - to - text ( d2 t ) systems .one of the main concerns in d2 t is to achieve proper definitions of the words and terms which are included in the automatically generated texts .this task is often performed in d2 t using analytical studies on sets of human - produced corpus texts and associated data , or through experiments that allow to define the underlying semantics of the generated expressions .for instance , provided a thorough analysis of the use of temporal expressions such as `` by the evening '' or `` by midday '' by different forecasters and described their inconsistencies .this analysis led to crisp interval definitions of these expressions , which were subsequently used in the sumtime - mousam system .d2 t systems such as the aforementioned sumtime - mousam or roadsafe generate texts that include vague terms and expressions for referring to time intervals and geographical zones .these expressions are vague in the sense that they do not indicate specific times or zones , e.g , as in `` by the evening '' , in opposition to `` between 6 pm and 1 am '' .however , their underlying semantics are defined by means of crisp approaches , such as the time intervals in sumtime - mousam or the grid - based partition of the geography in roadsafe .research and techniques on fuzzy linguistic descriptions of data have been proposed as a means to address the problem of extracting linguistic information from data using vague terms inherent to human language for d2 t . in this context, there is currently a high interest within d2 t for exploring the use of fuzzy sets to , when feasible , better grasp the semantics of vague or imprecise words and expressions .some examples of this trend include the application of possibility theory to model and convey temporal expressions using modal verbs , or the use of fuzzy properties in the problem of referring expression generation .the textual weather forecast generator galiweather also used fuzzy sets and fuzzy quantified statements to model and compute expressions describing the cloud coverage variable .particularly , the work here described will focus on the problem of generating geographical referring expressions , such as `` northern scotland '' , `` south of spain '' or `` coast of galicia '' . our setting anddeparting point is the methodology proposed in , that suggests merging the traditional empirical approaches utilized in d2 t with the imprecision management capabilities of fuzzy sets and their application in linguistic descriptions of data .the methodology described in ( depicted in fig .[ methodology ] ) , considers a series of tasks that should be performed to generate geographical referring expressions based on fuzzy properties : 1 .an exploratory analysis of the problem from a general perspective ( already described in ) .2 . a proper empirical definition of the primitive descriptors , based on data gathered from users .an study on how to lexicalize the possible occurrences of the descriptors ( e.g. by means of combination , ` north ' and ` east ' = ` northeast ' ) and how to generate the referring expressions .an algorithm that implements the referring expression generation strategies determined in the previous step .an evaluation of the algorithm that generates the geographical referring expressions .this paper describes our approach to the second task , i.e. 
, defining fuzzy geographical descriptors empirically ( highlighted in fig .[ methodology ] ) , which can then be used to characterize both individual geographical locations and regions . for this , the rest of the paper is structured in three sections . section [ modeling ] encompasses the description of the whole approach , which is tested on a realistic use case .particularly , in section [ survey ] we describe the survey that allowed us to gather data to apply the method described and evaluated in section [ method ] .section [ discussion ] provides a discussion on several aspects of our approach .finally , section [ conclusion ] provides some ending remarks about this work and points at potential future extensions .our approach for this task is composed of two main elements : a survey that gathers geographical data about the descriptors from users ( details are given in sec . [ survey ] ) and a method that builds their corresponding fuzzy definitions ( the methodology is described in sec .[ method ] ) . as in any d2 t system, the procedure we have followed corresponds to a knowledge acquisition task .this task is meant to capture the vagueness that , for given linguistic terms ( geographical descriptors in our case ) , arises from having different interpretations from different users .although the methodology and our specific approach for modeling fuzzy geographical descriptors are meant to be general and domain - independent , we are following a realistic use case to support this research .specifically , our use case is focused around some of the most common geographical descriptors which , for instance , have been used in the roadsafe system .such descriptors are modeled here in the context of the galician region ( spain ) , based on the interpretation of subjects who are assumed to have a minimum knowledge about the galician geography and that are potential end users ( hearers / readers ) of automatically generated texts which include geographical references , such as weather forecasts .the survey was designed having in mind the lessons learnt from the preliminary experiment described in . in consequence ,the survey was prepared to be very intuitive and short , without requiring excessive effort from the participants . in a within - subjects design, participants were given 2 different geographical descriptors , and were asked to draw polygons in a map that represented their own interpretation of each descriptor .a single map was prepared for the galician region in spain , including only the geopolitical data appearing in the source map , which was provided by mapbox and further customized to show only the political borders of the region , a few location names and the altitude of the terrain .we included 2 descriptors in the survey . 
specifically , we incorporated into this survey two geographical descriptors that were also considered in , namely both north and south cardinal directions .based on these descriptors , the basic expressions `` northern galicia '' and `` southern galicia '' were provided to the participants ( in spanish ) .these were recruited at the high school i.e.s .a xunqueira i , located in pontevedra , one of the four province capitals in galicia .specifically , students aged between 15 and 17 years old anonymously answered the survey .the participants accessed a web interface ( see fig .[ itfsurvey ] ) that provided them with the tools needed to complete the survey .specifically the participants were asked to draw polygons with limitless points for each geographical expression .the expressions were presented to each participant in random order . before starting the survey , the students were required first to draw a simple polygon to get familiarized with the drawing tools .then , they were allowed to draw only one polygon for each descriptor at a time , although it could be erased and redrawn without restrictions before proceeding to the subsequent descriptor . after providing responses for all the descriptors ,the participants were given the possibility of providing free - text comments about any aspect of the survey .we received 99 responses in total for each descriptor , which were inspected visually .for instance , fig .[ north ] shows the graphical representation of all the answers for the `` northern galicia '' expression , with polygons mainly overlapping towards the upper side of the map and a decreasing density towards the middle of the region .a pair of the answers provided by the participants were manually removed for different reasons , including a polygon that clearly represented the other descriptor and another with an extremely deformed shape .thus , we were left with 98 polygons for `` northern galicia '' and 98 for `` southern galicia '' .the method we have followed to build the fuzzy descriptors is based on the concept of voting model defined by baldwin in .since we are aiming in our case at the modeling of fuzzy geographical descriptors , we required a procedure that is able to consider the bidimensional nature of this domain . in general terms , our method is based on the construction of a grid of points of a certain granularity , which is then used as the basis of the resulting fuzzy geographical descriptor .the points where most of the polygons drawn by the participants in the survey intersect are given higher membership degrees , while those in the opposite situation get lower membership degrees .the method we propose builds fuzzy geographical descriptors characterized by a membership function that maps a point in the geographical plane ( specified in latitude and longitude coordinates ) to a specific membership degree .formally , we model the semantics of a fuzzy geographical descriptor ( e.g. 
, ` north ' ) , using a function that maps each latitude - longitude point to a membership degree in [ 0 , 1 ] . in our case , we have used the haversine function to calculate the distance between two points in kilometers , as it provides a very reasonable tradeoff between precision and computation complexity . likewise , we established that , so that when practically shares the same location with a point in the fuzzy grid , they also share the same membership degree ( ) . we evaluated the method from two different perspectives . first , we checked the balance between different grid granularities and their computational efficiency by comparing the construction of the fuzzy descriptors using different granularity values . then , we evaluated how well the fuzzy descriptors would correctly predict the answers provided by the participants in the survey . in our approach , using lower granularities means building grids that contain more points and thus grasp better the graduality of the descriptors which are built from the human interpretations . however , this also means that the computational time will increase accordingly . as engineering often involves finding a tradeoff equilibrium between precision and complexity , we evaluated whether higher granularities could be a reasonable alternative to the 1% we used as basis . specifically , we computed for ` north ' ( ) using a 1% granularity grid , which is used as our baseline . then , we recalculated at 2 , 5 , and 10 % granularities . table [ eval_comp ] shows the number of points contained in each fuzzy grid and their associated computation time ( we used the python 3.5 language and the library _ shapely _ for the geospatial operations on a 2.3 ghz intel core i7 ) . results show that there is an important decrease in points and computation time even from 1% to 2% . ( table [ eval_comp ] caption : evaluation of the tradeoff between efficiency and approximation level of different granularities . ) ( table [ precall ] ) these results show that the fuzzy descriptors generalize well the interpretations provided in the survey . this is due to several reasons , such as having a high number of answers , but also that both descriptors do not overlap excessively , as the points randomly chosen cover the whole test set polygons , and are not focused on the overlapping part of the descriptors . in any case , getting a lower percentage of positive hits due to the overlapping between descriptors should not be an issue , especially in the case of terms such as ` north ' and ` south ' , which are considered natural antonyms ( we will comment on this in the next section ) . it is expected that a set of points included in polygons that were meant to model one descriptor actually have a higher membership degree when evaluated by the other descriptor , as we are not in a crisp context .
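the construction just described and evaluated can be sketched compactly with shapely, the library mentioned above; the polygons, grid and query point below are tiny synthetic placeholders rather than the survey data.

```python
import numpy as np
from math import radians, sin, cos, asin, sqrt
from shapely.geometry import Point, Polygon

# Sketch of the fuzzy-grid construction: the membership degree of a grid point is the
# fraction of survey polygons containing it; an arbitrary location inherits the degree
# of its nearest grid point (haversine distance).  Data below are synthetic placeholders.

def fuzzy_grid(polygons, lons, lats):
    """mu[i, j] = (# polygons containing grid point (lons[i], lats[j])) / (# polygons)."""
    mu = np.zeros((len(lons), len(lats)))
    for i, lon in enumerate(lons):
        for j, lat in enumerate(lats):
            p = Point(lon, lat)
            mu[i, j] = sum(poly.intersects(p) for poly in polygons) / len(polygons)
    return mu

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two lon/lat points, in kilometres."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

def membership(lon, lat, mu, lons, lats):
    """Membership degree of an arbitrary point = degree of the nearest grid point."""
    d = np.array([[haversine_km(lon, lat, glon, glat) for glat in lats] for glon in lons])
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return mu[i, j]

# toy example: two survey 'interpretations' of "north" over a 1x1 degree region
polys = [Polygon([(0, 0.5), (1, 0.5), (1, 1), (0, 1)]),
         Polygon([(0, 0.4), (1, 0.4), (1, 1), (0, 1)])]
lons = np.linspace(0.0, 1.0, 11)
lats = np.linspace(0.0, 1.0, 11)
mu_north = fuzzy_grid(polys, lons, lats)
print(membership(0.5, 0.43, mu_north, lons, lats))   # 0.5: nearest grid point lies in one of the two polygons
```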
given the nature of our problem , we converted a yes / no question into asking the subjects in the survey to determine , for a given descriptor , the region that they clearly consider as part of that descriptor .the fuzziness in the geographical descriptors stems then from the variance between subjects .the purpose of this specific survey is not to validate , at this stage , any postulate within fuzzy set theory , but rather to seek a proper way to elicite the concept of membership degree for geographical descriptors .consequently , we are not making assumptions about any axioms or constraints that the fuzzy descriptors should fulfill .even so , it is not out of place to remark in advance that properties such as monotonicity can play an important role in ensuring the consistency of the semantics of the descriptors we are defining .in fact , in it was evidenced that even inconsistencies may arise with just a few different expert interpretations of the same expression . in order to avoid this problem, the designers of a d2 t system may need to post - process the raw empirical definitions and add different constraints in order to make sure that the semantics of the expressions actually make sense .for instance , in our particular case it seems reasonable that is monotonic , so that for two points , located in a higher latitude , and , in a lower latitude , it holds that .this can also be the case for ` south ' or for other kind of directional descriptors such as ` west ' or ` east ' , for which the source of the monotonicity would be different ( in a different direction or across the longitude dimension ) .however , in other cases that do not rely solely on directions , but also on political or cultural knowledge , such as named regions , it could be possible to find interpretations where monotonicity may not apply . another property which is worth mentioning is the antonymy .for instance , coming back once again to our use case , we consider two geographical descriptors that are in general considered natural antonyms , ` north ' and ` south '. using the standard negation operator in fuzzy set theory , does it apply then , that ? for illustration purposes , fig .[ antonym ] shows the comparison between and .given that we havent applied any constraints to the fuzzy models built from the survey data , the difference between both descriptors is noticeable visually .in fact , similarities and differences between antonymy and negation in fuzzy logic are an important research topic . and , interpolated on a 1% granularity grid using 2% granularity fuzzy grids as base.,scaledwidth=60.0% ] both monotonicity and antonymy could have been enforced as part of the design of the surveys or experiments that , just like the one described here , allow to gather data about the interpretation of terms and expressions by user or expert subjects . for instance , we could have asked the subjects to draw both ` north ' and ` south ' polygons at the same time , or to trace a line that separates both descriptors .however , in our use case we decided to provide more freedom to the subjects to avoid as many biases as possible , even if this meant obtaining less consistent models as a result .the resulting models tend to be softly gradual , since we were able to gather more than 50 polygons per descriptor . 
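returning to the antonymy question raised above, a direct numerical check is straightforward once both membership grids are available; the toy arrays below merely stand in for the computed grids.

```python
import numpy as np

# How far is mu_south from the standard fuzzy negation 1 - mu_north?  A gap of zero
# everywhere would mean the two descriptors are exact antonyms under the standard
# negation; the arrays below are small placeholders, not the survey-derived grids.

mu_north = np.array([[1.0, 0.6, 0.1],
                     [1.0, 0.5, 0.0]])
mu_south = np.array([[0.0, 0.3, 0.9],
                     [0.0, 0.4, 1.0]])

gap = np.abs(mu_south - (1.0 - mu_north))
print(f"max gap = {gap.max():.2f}, mean gap = {gap.mean():.2f}")
```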
however , the unavailability of subjects or data in general for empirically determining the semantics of words and expressions is a common problem in d2 t .this means that in many occasions one will have to do with a few experts or reduced corpora examples .translating such problem to the context of our approach , it is likely that in real situations we would achieve models of fuzzy geographical descriptors that are closer to being stratified rather than gradual . on a related note , to decide whether experts or users should provide the interpretation of the terms and expressions to be conveyed in d2 t systems is also an interesting problemthis is particularly true in scenarios where experts produce texts or reports whose final audience are not experts themselves , but other groups of users , as in weather forecasts or medical reports for patients . in such contexts , the meaning of some words and expressions ( e.g. `` north '' , `` cold '' , `` in the morning '' ) in texts produced by expertsdoes not necessarily match the interpretations of the target audience .this opens the problem of deciding if d2 t systems should generate texts that include terms and expressions semantically defined by the experts or by the end users , considering that the latter are meant to understand and make use of the information contained in the texts . in the case of our survey we did not aim at an expert group in the first place , but focused on a group of potential users of weather forecasts that include the kind of geographical expressions considered in the survey .we have presented an approach that addresses the problem of defining fuzzy geographical descriptors that aggregate the interpretation of different users , as part of the wider problem of generating geographical referring expressions . for this, we ran a survey that collected data about the geographical interpretation that users make of geographical descriptors such as ` north ' and ` south ' in the context of the galician region in spain .based on this dataset , we have proposed a method that builds fuzzy models of the geographical descriptors . as future work , we intend to run the survey here described to collect the interpretation of expert weather forecasters. this will allow us to obtain deeper insights about the differences between both groups of subjects and study the possibility of merging them into a single model .we will also explore other alternatives for building the fuzzy geographical descriptors , such as the conceptual spaces paradigm .afterwards , we will proceed with the remaining tasks described in the methodology proposed in . specifically , our objective will be to use the primitive fuzzy geographical descriptors to study the generation of actual geographical referring expressions .we plan to incorporate the resulting referring expression generation algorithm into an actual d2 t system that produces real - time descriptions of the weather state in the galician region . in the longer termour aim is to generalize and fully establish this methodology as a standard guideline for the application of fuzzy sets in d2 t contexts .this work has been funded by tin2014 - 56633-c3 - 1-r and tin2014 - 56633-c3 - 3-r projects from the spanish `` ministerio de economa y competitividad '' and by the `` consellera de cultura , educacin e ordenacin universitaria '' ( accreditation 2016 - 2019 , ed431g/08 ) and the european regional development fund ( erdf ) . the authors would also like to thank jos m. 
ramos gonzlez for setting up the survey at the i.e.s .a xunqueira i , as well as all anonymous subjects who contributed to this study .r. turner , s. sripada , e. reiter , and i. p. davy , `` using spatial reference frames to generate grounded textual summaries of georeferenced data , '' in _ proceedings of the fifth international natural language generation conference _ ,inlg 08.1em plus 0.5em minus 0.4em stroudsburg , pa , usa : association for computational linguistics , 2008 , pp .1624 .a. gatt and f. portet , `` multilingual generation of uncertain temporal expressions from data : a study of a possibilistic formalism and its consistency with human subjective evaluations , '' _ fuzzy sets and systems _ , vol .285 , pp . 73 93 , 2016 ,special issue on linguistic description of time series .a. gatt , n. marn , f. portet , and d. snchez , `` the role of graduality for referring expression generation in visual scenes , '' in _ proceedings of the 16th international conference on information processing and management of uncertainty in knowledge - based systems ( ipmu ) _ , 2016 , pp .191203 .a. ramos - soto , a. bugarn , s. barro , and j. taboada , `` linguistic descriptions for automatic generation of textual short - term weather forecasts on real prediction data , '' _ ieee transactions on fuzzy systems _ ,23 , no . 1 , pp . 44 57 , 2015 .r. de oliveira , y. sripada , and e. reiter , _ proceedings of the 15th european workshop on natural language generation ( enlg)_.1em plus 0.5em minus 0.4emassociation for computational linguistics , 2015 , ch .designing an algorithm for generating named spatial references , pp .127135 .a. ramos - soto , n. tintarev , r. de oliveira , e. reiter , and k. van deemter , `` natural language generation and fuzzy sets : an exploratory study on geographical referring expression generation , '' in _ proceedings of the 2016 ieee international conference on fuzzy systems ( fuzz - ieee ) _ , july 2016 , pp .
we present a novel heuristic approach that defines fuzzy geographical descriptors using data gathered from a survey with human subjects . the participants were asked to provide graphical interpretations of the descriptors ` north ' and ` south ' for the galician region ( spain ) . based on these interpretations , our approach builds fuzzy descriptors that are able to compute membership degrees for geographical locations . we evaluated our approach in terms of efficiency and precision . the fuzzy descriptors are meant to be used as the cornerstones of a geographical referring expression generation algorithm that is able to linguistically characterize geographical locations and regions . this work is also part of a general research effort that intends to establish a methodology which reunites the empirical studies traditionally practiced in data - to - text and the use of fuzzy sets to model imprecision and vagueness in words and expressions for text generation purposes .
sudden cardiac death , attributable to unexpected ventricular arrhythmias , is one of the leading causes of death in the us and kills over 300,000 americans each year .the induction and maintenance of ventricular arrhythmias has been linked to single - cell dynamics . in response to an electrical stimulus , cardiac cells fire an action potential , which consists of a rapid depolarization of the transmembrane voltage ( v ) followed by a much slower repolarization process before returning to the resting value ( fig . [fig : action_potential ] ) .the time interval during which the voltage is elevated is called the action potential duration ( apd ) .the time between the end of an apd and the beginning of the next one is called the diastolic interval ( di ) . the time interval between two consecutive stimuli is called the basic cycle length ( bcl ) . when the pacing rate is slow , a periodic train of electrical stimuli produces a phase - locked steady - state response , where each stimulus gives rise to an identical action potential ( 1:1 pattern ) .when the pacing rate becomes sufficiently fast , the 1:1 pattern may be replaced by a 2:2 pattern , so - called electrical alternans , where the apd alternates between short and long values .recent experiments have established a causal link between alternans and the risk for ventricular arrhythmias .therefore , understanding mechanism of alternans is a crucial step in detection and prevention of fatal arrhythmias. cellular mechanisms of alternans have been much studied .summaries on this topic can be found in recent review articles by shiferaw et al . and weiss et al . . at the cellular level ,cardiac dynamics involves bidirectional coupling between membrane voltage ( v ) dynamics and intracellular calcium ( ca ) cycling . during an action potential ,the elevation of v activates l - type ca currents to invoke the elevation of [ ca ] , which in turn triggers ca release from the sarcoplasmic reticulum ( sr ) , a procedure known as calcium - induced - calcium release ( cicr ) .the v coupling satisfies graded release , where a larger di leads to an increase in the ca release at the following beat since it allows more time for l - type ca channels to recover . on the other hand , ca release from the sr affects the apd in two folds : to curtail the apd by enhancing the inactivation of l - type ca currents ; and to prolong the apd by intensifying na/ca exchange currents .therefore , depending on the relative contributions of and , an increase in ca release may either shorten the apd ( negative ca coupling ) or lengthen the apd ( positive ca coupling ) .there exist two main cellular mechanisms of alternans .firstly , alternans may be attributed to steep apd restitution , which is due to a period - doubling instability in the v dynamics . in this case ,ca transient alternans , as a slave variable , is induced because v regulates [ ca ] via the l - type ca currents and the sodium - calcium exchange currents .secondly , alternans may be caused by a period - doubling instability in ca cycling , which is associated with a steep relationship between the sarcoplasm reticulum ( sr ) release and sr load . in this case ,apd alternans is a secondary effect via ca coupling .for ease of reference , we call the first mechanism _ apd - driven alternans _ and the second _ ca - driven alternans_. interestingly , apd and ca transient alternans can be electromechanically ( e / m ) in phase or out of phase . 
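the first mechanism can be illustrated with a minimal restitution - map iteration : the apd of the next beat is a function of the preceding diastolic interval , apd_{n+1} = f(bcl - apd_n) , and alternans appears once the magnitude of the slope of f at the fixed point exceeds one . the exponential restitution curve and the parameter values below are illustrative assumptions , not the ionic model used later in this paper .

import numpy as np

def restitution(di, apd_max=280.0, amp=180.0, tau=60.0):
    # illustrative exponential apd restitution curve apd = f(di); parameters are made up
    return apd_max - amp * np.exp(-di / tau)

def pace(bcl, n_beats=80, apd0=200.0):
    # iterate apd_{n+1} = f(bcl - apd_n); a period - doubling to alternans occurs
    # once |f'(di*)| > 1 at the fixed point, which here happens for short bcl
    apd, trace = apd0, []
    for _ in range(n_beats):
        di = max(bcl - apd, 1.0)        # diastolic interval, kept positive
        apd = restitution(di)
        trace.append(apd)
    return trace

for bcl in (340.0, 300.0, 280.0):
    print(bcl, [round(a, 1) for a in pace(bcl)[-4:]])   # 1:1 response vs short - long alternation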
in e / m in - phase alternans, a long - short - long apd pattern corresponds to a large - small - large [ ca pattern , see fig .[ fig : em_coupling ] ( a ) .in contrast , in e / m out - of - phase alternans , a long - short - long apd pattern corresponds to a small - large - small [ ca ] pattern , see fig .[ fig : em_coupling ] ( b ) . when alternans happens in isolated cells , the bidirectional coupling between apd and ca transient determines the relative phase of apd and ca transient alternans .in particular , apd - driven alternans always leads to e / m in - phase alternans whereas ca - driven alternans is e / m in phase for positive ca coupling and out of phase for negative ca coupling .the mechanism of alternans in multicellular tissue is more complicated since it involves electrotonic coupling and conduction velocity restitution .of particular interest is a phenomenon called spatially discordant alternans , in which different regions of the tissue alternate out of phase .discordant alternans is arrhythmogenic because it forms a dynamically heterogeneous substrate that may promote wave break and reentry . to study the spatiotemporal patterns of alternans , echebarria and karma derived amplitude equations that are based on apd - driven alternans .these amplitude equations not only are capable of quantitative predictions but also provide insightful understandings on the arrhythmogenic patterns .in a recent article , dai and schaeffer analytically computed the linear spectrum of echebarria and karma s amplitude equations for the cases of small dispersion and long fibers .spatial patterns of alternans have been investigated in experiments . recently , aistrup et al . used single - photon laser - scanning confocal microscopy to measure ca signaling in individual myocytes .they found that ca alternans is spatially synchronized at low pacing rates whereas dyssynchronous patterns , where a number of cells are out of phase with adjoining cells , arise when the pacing rate increases .aistrup et al . also observed subcellular alternans at fast pacing , where ca alternans is spatially dyssynchronous within a cell . using simulations of 1-d homogeneous tissue , sato et al . found that , in cardiac fibers with negative ca coupling , ca alternans reverses phase over a length scale of one cell whereas , in fibers with positive ca coupling , ca alternans changes phase over a much larger length scale .they interpreted this difference by showing that negative ca coupling tends to desynchronize two coupled cells while positive ca coupling tends to synchronize the coupled cells .motivated by the aforementioned experimental and theoretical work , this paper aims to explore spatial patterns of cardiac alternans . through extensive numerical simulations , we find that complex spatial patterns of ca alternans with phase reversals in adjacent cells can happen in homogeneous fibers with both negative and positive ca couplings .most surprisingly , we find that the spatiotemporal pattern of cardiac alternans is not determined by the pacing period alone . 
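a toy caricature of this kind of multistability ( deliberately not the ionic model used below ) : a chain of cells whose local beat - to - beat dynamics already has a stable period - 2 response , here a logistic map , weakly coupled to nearest neighbours . at the same " pacing " , different initial conditions freeze into different spatial phase patterns , which is the qualitative behaviour reported in the remainder of the paper .

import numpy as np

def pace_chain(x0, r=3.3, eps=0.02, n_beats=400):
    # weakly coupled logistic maps: x_i <- (1 - eps) f(x_i) + (eps / 2) (f(x_{i-1}) + f(x_{i+1}));
    # the uncoupled map at r = 3.3 has a stable period - 2 orbit (~0.48 / ~0.82), the
    # beat - to - beat caricature of alternans
    x = np.array(x0, dtype=float)
    for _ in range(n_beats):
        fx = r * x * (1.0 - x)
        left, right = np.roll(fx, 1), np.roll(fx, -1)
        left[0], right[-1] = fx[0], fx[-1]          # crude no - flux ends
        x = (1.0 - eps) * fx + 0.5 * eps * (left + right)
    return x

n = 20
uniform = pace_chain(np.full(n, 0.48))                                   # all cells start in the same phase
split = pace_chain(np.r_[np.full(n // 2, 0.48), np.full(n // 2, 0.82)])  # two halves start out of phase
print(np.round(uniform, 2))
print(np.round(split, 2))   # same map, same coupling, same "pacing": the phase reversal persists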
specifically , when calcium - driven alternans develops in multicellular tissue , there coexist multiple spatiotemporal patterns of alternans regardless of the length of the fiber , the junctional diffusion of ca , and the type of ca coupling .we further investigate the mechanism that leads to the coexistence of multiple alternans solutions .our analysis shows that multiple alternans solutions are induced because of the interaction between electrotonic coupling and an instability in ca cycling .we adopt a model of membrane dynamics that combines the calcium dynamics model developed by shiferaw et al . and the canine ionic model by fox et al . . in the following, we will refer to this model as the shiferaw - fox model .detailed formulations of the model can be found in .the shiferaw - fox model has adopted two sets of parameters in the calcium dynamics to account for negative and positive ca couplings . besides the phase difference, the two sets of parameters also produce alternans at different values of bcl . using shiferaw s default parameters , we find alternans happens at bcl ms for negative ca coupling and bcl ms for positive ca coupling . fig .[ fig : ode_bif ] shows the bifurcation diagrams in apd and peak value of [ ca ] for negative ca coupling .the bifurcation diagrams for positive ca coupling are similar and thus are not shown here .we note that , in simulations of isolated cells using the shiferaw - fox model , alternans solutions do not depend on the initial condition nor on the pacing history .however , as we will show in the following , fibers based on the shiferaw - fox model possess multiple alternans solutions , which are sensitive to the initial condition and the pacing protocol .[ c]cc & + ( a ) & ( b ) we study paced , homogeneous fibers , which can be modeled using the cable equation: where represents v , represents the effective diffusion coefficient of in the fiber , represents the transmembrane capacitance , is the total ionic current , and represents the external current stimulus .the ionic current is computed using the shiferaw - fox model .the current stimulus has duration 1 ms and amplitude 80 / .this paper studies fibers of various lengths . for two coupled cells ,we pace the left cell . for longer fibers, we pace the leftmost few cells to ensure propagation . for example , the leftmost 5 cells are paced in simulating a fiber of 100 cells .the cable equation ( [ eqn : voltage ] ) is solved using the finite difference method with a space step of cm and time step of ms .no - flux boundary conditions are imposed at both ends of the fiber . to study the onset and development of alternans , we pace both single cells and fibers of various lengths with several pacing protocols , which are briefly described below .\(i ) in the _ downsweep protocol _ ,the cell / fiber is paced periodically with period bcl until it reaches steady state .then , the pacing period is reduced by and the procedure is repeated many times . note that this protocol is also known as _ dynamic pacing protocol _ .\(ii ) the _ perturbed downsweep protocol _ , proposed by kalb et al . , can be regarded as a perturbation to the downsweep protocol . at each pacing period bcl ,the cell / fiber is first paced n beats to reach steady state . 
then, a longer pacing period is applied at the n+1st pacing , after which the original pacing period is applied for 10 beats to allow the tissue to recover its previous steady state .next , a shorter pacing period is applied and followed by 10 beats of the original pacing period .finally , the pacing period is reduced by and the procedure is repeated .\(iii ) to explore the possibility for multiple alternans solutions , we set up certain initial condition and pace the tissue with period bcl to reach steady state , a process we call _ direct pacing_. \(iv ) to explore the origin of an alternans pattern , we use the _ upsweep protocol _ , which is a reversed downsweep protocol .we simulate fibers using the shiferaw - fox model with both negative and positive ca couplings under various conditions .default parameters in shiferaw s code are used unless otherwise specified . despite quantitatively significant differences , we find both types of couplings lead to the coexistence of multiple alternans solutions . for clarity ,we start with the results for negative ca coupling and defer the results for positive ca coupling in a later subsection .we first consider a homogeneous fiber of 100 cells with negative ca coupling . to our surprise, numerical simulations show that when the fiber is in alternans , there coexist multiple solutions for a given pacing period .for example , fig .[ fig : dtype100 ] shows 6 selected solutions of alternans for the fiber paced at bcl= ms . here, the steady - state solutions in panels ( a - c ) are obtained using the downsweep protocol with step size =1 ms , 2 ms , and 25 ms , respectively .the pacing protocols are started from bcl=500 ms in ( a ) and ( c ) and from bcl=499 ms in ( b ) .we note that the solution of a downsweep protocol is not influenced by the initial condition at the starting , long bcl ; instead , the solution is sensitive to the step size .the steady - state solutions in panels ( d - f ) are obtained by pacing the fiber at bcl= ms with prescribed initial conditions for beats , the so - called direct pacing .the initial condition of the fiber in panel ( d ) is uniform , i.e. , all cells are assigned the same resting voltage , gating variables , and ionic concentrations . the initial condition in panel ( e )is same as that in ( d ) except that [ ca ] is assigned to be m for the first 35 cells and m for the remaining cells .interestingly , this initial condition leads to a steady - state [ ca ] pattern , which , besides the phase reversal between cells 35 and 36 , has another phase reversal between cells 11 and 12 .the initial condition in panel ( f ) is the same as that in ( d ) except that [ ca ] is randomly assigned for cells on the fiber according to a uniform distribution in the interval of m to m . in all protocols ,we pace the fiber for 200 beats at each bcl and plot the last 10 beats at bcl=375 ms .the simulation results in fig . [ fig : dtype100 ] demonstrate that the alternans on a fiber is not solely determined by the pacing period .instead , the solution is sensitive to the pacing protocol and the initial condition .it is worth noting that , besides the solutions shown in fig .[ fig : dtype100 ] , there exist many other solution patterns .in particular , there exist many complex patterns similar to fig .[ fig : dtype100 ] ( f ) . in the following, we will verify whether the phenomenon is influenced by the length of the fiber , junctional ca diffusion , or ca coupling .a. garfinkel , y.h .kim , o. voroshilovsky , z. 
qu , j.r .kil , m.h .lee , h.s .karagueuzian , j.n .weiss , and p.s .chen , ` preventing ventricular fibrillation by flattening cardiac restitution , ' proc .natl acad sci usa , * 97*:60616066 , 2000 .d. m. bloomfield , j. t. bigger , r. c. steinman , p. b. namerow , m. k. parides , a. b. curtis , e. s. kaufman , j. m. davidenko , t. s. shinn and j. m. fontaine , ` microvolt t - wave alternans andthe risk of death or sustained ventricular arrhythmias in patients with left ventricular dysfunction , ' j. am .cardiol . , * 47*:456463 , 2006 .v. shusterman , a. goldberg and b. london , ` upsurge in t - wave alternans and nonalternating repolarization instability precedes spontaneous initiation of ventricular tachyarrhythmias in human , ' circulation , * 113*:28802887 , 2006 .aistrup , j.e .kelly , s. kapur , m. kowalczyk , i. sysman - wolpin , a.h .kadish , and j.a .wasserstrom , pacing - induced heterogeneities in intracellular ca signaling , cardiac alternans , and ventricular arrhythmias in intact rat heart, circulation research , * 99*:6573 , 2006 .kalb , h.m .dobrovolny , e.g. tolkacheva , s.f .idriss , w. krassowska and d.j .gauthier , ` the restitution portrait : a new method for investigating rate - dependent restitution , ' j. cardiovasc ., * 15*:6987 .f. fenton , ` beyond slope one : alternans suppression and other understudied properties of apd restitution , ' kitp miniprogram on cardiac dynamics , kavli institute for theoretical physics , santa barbara , ca , july 28 , 2006 .d. sato , y. shiferaw , z. qu , a. garfinkel , j.n .weiss , a. karma , ` inferring the cellular origin of voltage and calcium alternans from the spatial scales of phase reversal during discordant alternans , ' biophys j , * 92*:l33l35 , 2007 .jordan , d.j .christini , ` characterizing the contribution of voltage- and calcium - dependent coupling to action potential stability : implications for repolarization alternans , ' am j physiol heart circ physiol , * 293*:h2109h2118 , 2007 .
cardiac alternans , a beat - to - beat alternation in action potential duration ( at the cellular level ) or in ecg morphology ( at the whole heart level ) , is a marker of ventricular fibrillation , a fatal heart rhythm that kills hundreds of thousands of people in the us each year . investigating cardiac alternans may lead to a better understanding of the mechanisms of cardiac arrhythmias and eventually better algorithms for the prediction and prevention of such dreadful diseases . in paced cardiac tissue , alternans develops under increasingly shorter pacing period . existing experimental and theoretical studies adopt the assumption that alternans in homogeneous cardiac tissue is exclusively determined by the pacing period . in contrast , we find that , when calcium - driven alternans develops in cardiac fibers , it may take different spatiotemporal patterns depending on the pacing history . because there coexist multiple alternans solutions for a given pacing period , the alternans pattern on a fiber becomes unpredictable . using numerical simulation and theoretical analysis , we show that the coexistence of multiple alternans patterns is induced by the interaction between electrotonic coupling and an instability in calcium cycling .
both problems , the inverse refractor and the inverse reflector problem , from illumination optics can be formulated in the following framework : let a point - shaped light source and a target area be given , e.g. a wall .then we would like to construct an apparatus that projects a prescribed illumination pattern , e.g. an image or a logo , onto the target .since we aim for maximizing the efficiency , we would like to construct our optical device in such a way that , neglecting losses , it redirects all light emitted by the light source to the target .we focus our attention to the design of such an optical system in the simple case that it either consists of a single free - form lens or of a single free - form mirror , see figure [ fig : parametrization ] for an illustration of the former case .our goal is now to compute the shape of the optically active surfaces , modeled as _ free - form surfaces _ , such that the desired light intensity distribution is generated on the target . since these problems from illumination optics from the mathematical point of view conceptually fall into the class of inverse problems , they are also called _ inverse reflector problem _ and _ inverse refractor problem _ , respectively . in particular , since the size of the optical system is comparable to that of the projected image , we address the case of the near field problems . , while the surrounding has the refractive index . ]there is a variety of technical applications of such optical systems , e.g. spotlights with prescribed illumination patterns used in street lamps or car headlamps , see e.g. .the authors present in a solution method for the inverse reflector problem via numerically solving a strongly nonlinear second - order _ partial differential equation ( pde ) of monge ampre type_. due to the high potential of this approach we now extend this method to the case of illumination lenses .this paper is organized as follows : since the reflector problem has been discussed in detail in we mainly focus on the refractor problem .we start with the state of the art for its solution in section [ sec : state_of_the_art ] .then , we formulate the problem via a partial differential equation of monge ampre type which we discuss in section [ sec : refractor_problem ] for the construction of a refractor . for completeness we also give the monge ampre formulation for the reflector problem in section [ sec : reflector_problem ] .next , the numerical method is explained in section [ sec : collocation ] . since this type of optical design problem raises many difficulties in the solution process we discuss in section [ sec : ma_in_optics ]how these can be resolved .finally , in section [ sec : results ] we look at numerical results for the inverse reflector and refractor problems and end this paper in section [ sec : outlook ] with our conclusions .in this section we discuss the methods available for the solution of the inverse design problems in nonimaging optics , see the monographies by chaves and by winston , miano and bentez for an introduction to nonimaging optics and the paper by patow and pueyo for a survey article on inverse surface design from the graphics community s point of view . for a detailed survey of solution techniques for the inverse reflector problem , we refer the reader to ( * ? ? ?* section 2 ) .focusing on the inverse refractor problem , in the paper by wester an buerle there is a list of approaches , a discussion on practical problems , e.g. 
extended sources and fresnel losses , and examples with led lighting , e.g. a lens for automotive fog light and a lens producing a logo . in the rest of this section ,we first give a short overview of other solution techniques in section [ sect : sota_nopdeapproach ] and then focus on methods based on pdes in section [ sect : sota_pdeapproach ] , which is also our problem formulation of choice .finally , we discuss some advanced topics in section [ sect : sota_advancedtopics ] and draw our conclusions in section [ sect : sota_conclusion ] .we distinguish three different groups of techniques for the solution of inverse problems in nonimaging optics , which are not based on a pde : there are methods resorting from optimization techniques , others built from cartesian ovals and a third group of methods which are geometrical constructions .[ [ optimization - approaches ] ] optimization approaches + + + + + + + + + + + + + + + + + + + + + + + there are methods for the design of optical surfaces , which are based on optimization techniques , see e.g. . starting from an initial guess ,the outline of the iterative optimization process for the determination of the optical surfaces is as follows : first , the current approximation of the optical surfaces is validated by ray tracing . in a second step , using an objective function , which is often closely related to the euclidean norm , the resulting irradiance distribution is compared to the desired one and a correction of the optical surfaces is determined .the process ends , when a suitable quality criterion is fulfilled , otherwise these two steps are repeated .the advantage of this method is that it is very flexible .however , optimization procedures are very costly because of the repeated application of the ray tracing and it is unclear if the iterative methods converge at all .[ [ cartesian - ovals - methods ] ] cartesian ovals methods + + + + + + + + + + + + + + + + + + + + + + + cartesian ovals are curves of fourth order in the plane .they can be associated with two foci such that light emitted at one focus is collected at the other focus . herethe cartesian oval coincides with the interface of two optical media with different refractive indices .cartesian ovals can be extended to surfaces in 3d with the same property . by combining clippings of several of these surfaces in an iterative procedure a new segmented surfacecan be constructed that approximates the solution .this strategy has first been developed by kochengin and oliker for the construction of solutions for the inverse reflector problem . later this has been extended to the inverse refractor problem using cartesian ovals , see also ( * ? ? 
?* section 2 ) for some theoretical background , and for a collimated light beam instead of a point light source using hyperboloids .although this technique has the advantage to permit the construction of continuous but non - differentiable surfaces , the number of clippings required grows linearly in the number of pixels in the image .for example , using ellipsoids of revolution for the construction of a mirror with accuracy , the complexity of the method scales like , see , such that it quickly becomes infeasible for higher resolutions .[ [ geometric - construction - methods ] ] geometric construction methods + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + reflective and refractive free - form surfaces can also be designed by geometric approaches .probably the most famous of these techniques is the simultaneous multiple surfaces ( sms ) method extending the ideas of cartesian - oval methods , see e.g. ( * ? ? ?* chapter 8) and and the references therein .the main idea of the sms method is the simultaneous construction of two optical surfaces , e.g. both surfaces of a lens , which permits to couple two prescribed incoming wave fronts , e.g. coming from two point light sources , with two prescribed outgoing wave fronts .while in its 2d version , the method is used to design rotationally symmetric optical surfaces , in a 3d variant it is also capable to construct free - form optical surfaces .however , the authors could not find any hint on the computational costs in the literature but conjecture that this scheme is expensive especially for complex target illumination patterns . in several publications for the inverse refractor problem a pdeis derived , whose solution models the desired optical free - form surfaces , see e.g. .in these approaches usually the low wavelength limit is assumed to hold , i.e. the problems are formulated using the geometrical optics approximation . some examples for the inverse refractor problem with a more complex target illumination pattern are shown in . however , in all four articles the descriptions and discussions of the numerical methods are incomplete . to the best of the authors knowledgethe solution method is not fully documented in the literature .while we consider the case of a point light source , an interesting and closely related problem is shaping the irradiance distribution of a collimated light beam , see e.g. for the theory including some results on existence and uniqueness of solutions .we refer the reader to the monography by gutirrez for a general overview of monge ampre - type equations . since we are looking for an optical surface which redirects light coming from a source onto a target , one can model this problem in terms of optimal transportation . [[ optimal - transport ] ] optimal transport + + + + + + + + + + + + + + + + + there are also methods which are based on a problem of optimal transport which leads to monge ampre - type equations , see e.g. .first the ray mapping , i.e. the mapping of the incoming light rays onto the points at the target , is computed via an optimal transport approach . at this pointthe optical surface is still unknown but in a next step it is constructed from the knowledge of the target coordinates for each incoming light ray . in 1998parkyn already described a very similar procedure . in the current formulation of the problemonly one single idealized point light source has been used . 
an extension to multiple point light sourcesis discussed by lin where the optical refractors are determined from those calculated for single point light sources by a weighted least - squares approach .more techniques for the case of extended light sources can be found in the papers by bortz and shatz and wester et al . . in particular for the refractor problem , some energy is lost for the illumination of the target because of internal reflections in the lens material .a theoretical discussion of these fresnel losses can be found in the publications by gutirrez ( * ? ? ?* section 5.13 ) and gutirrez and mawi . in losses are minimized by free - form shaping of both refractive surfaces of the lens .our approach is motivated by the fact that even for the special case of a single point light source and the computation of just one surface of the lens we could not find any fully detailed method in the literature which can produce complex illumination patterns on the target area . from the authors point of view , the most promising approach is the one by solving a pde of monge ampre type .this section is devoted to the formulation of the monge ampre equation that models the near field refractor problem as given in the paper by gutirrez and huang .since the full theory is a bit involved , we restrict ourselves to a summary of the most important aspects and refer the reader to ( * ? ? ?* appendix a ) and the paper by karakhanyan and wang for the details .our notation also follows these sources .we now proceed as follows : at first , we fix the geometric setting and the implicit definition of the refracting and the target surfaces in section [ subsec : geo_setting ] .then we apply snell s law of refraction in section [ subsec : snell ] and follow the path of the light ray in section [ subsec : light_path ] . finally , in section [ subsec : ma ] we obtain the desired equation of monge ampre type . since a lens has two surfaces we need to design both of them . for simplicitywe choose a spheric inner surface , i.e. the surface which faces the light source is a subset of a sphere with center at the position of the light source .thus there is no refraction of the incoming light at this interface , the inner surface is optically inactive . it remains to compute the shape of the outer surface facing the target area . to that endlet us define the quotient of the refractive indices of the lens material and the environment .we assume that the light source illuminates a non - empty subset of the northern hemisphere of the unit sphere . the third component of an incoming light ray with direction is then given as .thus we define and parametrize our outer lens surface by the distance function , i.e. the surface is given as .the target is defined as a subset of a hypersurface implicitly given by the zero level set of a continuously differentiable function via note that for the numerical solution procedure in the newton - type method we require that is twice continuously differentiable . while in general much more complicated situations are supported , for simplicity we restrict ourselves to the case where the target is on a shifted --plane such that for a shift . to model the _ luminous intensity _ of the source we define the density function , where .the corresponding density function for the desired illumination pattern on the target is denoted by . 
since we want to redirect all incoming light onto the target the density functions need to fulfill the _ energy conservation _condition note that for simplicity we neglect the loss of reflected light intensity . for a more complicated derivation of a monge ampre - type equation for the refractor problem taking losses into account see .according to snell s law of refraction in vectorial notation ( see e.g. ( * ? ? ?* chapter 4.4 ) or ( * ? ? ? * chapter 12 ) ) , the direction of the light ray after refraction at the point is , where and is the outer unit normal on defined as a function on . as detailed in ( * ? ? ?* ( 2.15 ) ) , for the outer normal unit vector at we find where denotes the gradient of a function and . to ease notation , we define the utility function which represents the denominator in , i.e. next , we consider the line which contains the light ray after refraction , defined by the point and the direction vector .we now turn to finding the point where the refracted light ray hits the target . in order to determine the third component of , we first define the utility point as the intersection point of this line with the plane which is given as for a . for a proof of the existence of see ( * ? ? ?* appendix a.2 ) . using , we confirm that where the utility function is given by after refraction at the point the light ray hits the target at point given as from the third component we know that .we introduce the short notation .let us define and denote its partial derivatives by , and , respectively . a lengthy computation using standard calculus and some tensor identities of sherman morrison type yields where and .note that in a bit more involved computation along the same lines we compute with , and , where the energy conservation clearly also holds if we replace with any arbitrary subset and with , where , . by coordinate transformationthis yields the identity .finally , we can derive the monge ampre equation for the refractor problem where and is computed by see ( * ? ? ?* appendix a ) .[ [ existence - and - uniqueness - of - solutions ] ] existence and uniqueness of solutions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in general , for boundary value problems with monge ampre equations proving well - posedness , i.e. existence and uniqueness of the solution and continuous dependency on the parameters , is a hard problem , e.g. see ( * ? ? ?* section 1.4 ) for an example of a discretized monge ampre equation obtained by finite differences on a grid of cells which has different solutions .some theoretical results for the existence of a solution for the refractor problem under some appropriate conditions can be found in in theorem 5.8 for and theorem 6.9 for .additionally there are results on the uniqueness of the solution if just finitely many single points on the target are illuminated , see ( * ? ? ?* theorem 5.7 ) for and ( * ? ? ?* theorem 6.8 ) for . for proving existence and uniqueness of a solution one typically requires the equation of monge ampre type to be elliptic .a necessary condition is that the right - hand side of is positive .for this reason we demand that or , equivalently , .if this term is positive we can simply replace by .the inverse reflector problem can be modeled as a monge ampre - type equation very similarly to the case of the inverse refractor problem in section [ sec : refractor_problem ] , see . 
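for reference , the refraction step used throughout this construction can be written as a small routine . this is the textbook vectorial form of snell's law with the incident - side normal convention stated in the comments ; the paper's own sign and normal conventions may differ , so this is an illustration rather than a transcription of the equations above .

import numpy as np

def refract(i, n, kappa):
    # vectorial snell law: i is the unit direction of the incoming ray, n the unit normal
    # pointing towards the incoming side (so i . n < 0), kappa the ratio of refractive
    # indices of the incoming medium over the outgoing one (here n_lens / n_env)
    i, n = i / np.linalg.norm(i), n / np.linalg.norm(n)
    cos_i = -np.dot(i, n)
    radicand = 1.0 - kappa**2 * (1.0 - cos_i**2)
    if radicand < 0.0:
        return None                      # total internal reflection, cf. the stabilised square root discussed later
    t = kappa * i + (kappa * cos_i - np.sqrt(radicand)) * n
    return t / np.linalg.norm(t)

# a ray leaving a glass - like lens (kappa = 1.5) through a surface whose outward direction is +z:
print(refract(np.array([0.2, 0.0, 0.98]), np.array([0.0, 0.0, -1.0]), 1.5))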
using the same definitions and notation as in section [ sec : reflector_problem ] and introducing the substitution , we first define we assume that , i.e. , , and .then the monge ampre equation for the inverse reflector problem reads see and for the details .the numerical solution of strongly nonlinear second - order pdes , including those of monge ampre type , is a highly active topic in current mathematical research .there are many different approaches available on the market , see the review paper by feng , glowinski and neilan and also for an overview .however , most methods are not well - suited for all equations of monge ampre type such that it remains unclear if a particular method can be successfully applied to our problems . in authors propose to use a spline collocation method which turns out to provide an efficient solution strategy for monge ampre equations arising in the inverse reflector problem . in section [ subsec : collocation ]we explain the idea of a collocation method , which reduces the problem to finding an approximation of the solution within a finite dimensional space .then we discuss the choice of appropriate basis functions in section [ subsec : splines ] .as discretization tool for the monge ampre equations arising in the reflector and refractor problem , we propose a collocation method , see e.g. bellomo et al . for examples of collocation methods applied to nonlinear problems .let the pde in and constraints on be given . in thissetting we approximate in a finite - dimensional trial subspace of , i.e. for some finite set and basis functions we choose the ansatz .next , we only require that the pde holds true on a collocation set which contains only finitely many points .so our approximation of the solution of our pde satisfies this discrete nonlinear system of equations is solved by a quasi - newton method , which uses trust - region techniques for ensuring global convergence of the method , see chapter 4.2.1 in and the references cited therein for the details and the proofs .we choose to apply a space of spline functions as ansatz space because of their advantageous properties , see e.g. for details on the theory of splines . for a given interval ] is the characteristic function of the interval \subset { \ensuremath{\mathbb{r}}} ] be a matrix . for a penalty parameter define the modified determinant \end{aligned}\ ] ] which we use instead of the determinant in the left - hand side of the monge ampre equation . for an elliptic solution of this equationthe left - hand side is exactly the same for the determinant and the modified determinant , see ( * ? ? ?* lemma 4.2 ) .furthermore , each non - elliptic solution of is not a solution of this equation , when the determinant is replaced by the modified determinant . up to nowthe refractor is at most uniquely determined up to its size .therefore we define our initial guess of the problem appropriately and search for a solution of same size requiring that holds true , see also ( * ? ? ?* section 4.5 ) . note that this condition is taken account of by the additional unknown introduced in section [ subsec : boundary_condition ] . 
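to make the collocation idea concrete on a toy problem ( not the monge ampre system of the paper ) : cubic b - splines , greville collocation points , dirichlet boundary conditions appended as extra equations , and a black - box root finder standing in for the trust - region quasi - newton method . this is a minimal sketch under our own choice of toy equation and grid .

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import root

# toy problem: u''(x) = exp(u(x)) on [0, 1] with u(0) = u(1) = 0
k = 3                                                           # cubic b - splines
knots = np.r_[np.zeros(k), np.linspace(0.0, 1.0, 9), np.ones(k)]
n_basis = len(knots) - k - 1
greville = np.array([knots[j + 1:j + k + 1].mean() for j in range(n_basis)])

def spline(c):
    return BSpline(knots, c, k)

def residual(c):
    s = spline(c)
    pde = s.derivative(2)(greville[1:-1]) - np.exp(s(greville[1:-1]))   # collocation equations
    bc = [s(0.0), s(1.0)]                                               # boundary conditions
    return np.concatenate([pde, bc])

sol = root(residual, np.zeros(n_basis), method='hybr')   # stands in for the quasi - newton solver
print(sol.success, float(spline(sol.x)(0.5)))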
in case that is possible that a ray of light exceeds the critical angle and total internal reflection occurs , such that this light ray does not reach the target .of course we know that this is not true for the solution , because we require that all light rays hit the target .however during the iteration process of our nonlinear solver this phenomenon can appear .if this is the case the argument of the square root in the definition of in section [ subsec : snell ] is negative at this position . to overcome this instability we replace by its stabilized counterpart .then the situation of total internal reflection is treated like the case when the light ray hits the surface exactly at the critical angle .the refracted light ray most likely also misses the target and therefore this intermediate step can not satisfy the monge ampre equation such that further iterations are performed . if total internal reflection does nt occur , which is the case we intend to have for our solution , we have and therefore obtain an equivalent problem .the convergence of newton - type methods sensitively depends on the choice of an initial guess that is close enough to the solution .we apply a nested iteration strategy in order to largely increase the stability of the solver but also in order to accelerate the solution procedure .we start with a coarse grid for the spline surface and a blurred version of the image for the illumination pattern .the blurring process is necessary because a coarse grid can not produce a very detailed image on the target area . for this reason we convolve the image , which is given as a raster graphic in our case , with a discrete version of the _ standard mollifier function _ if and zero otherwise , namely with for and indices for the pixel coordinates .if our grid has nodes we alternately increase the resolution of the grid and decrease the strength of blurring , i.e. we solve the problem for different pairs of , see also ( * ? ? ?* sections 4.3 and 5.2 ) . for the refractor problem we simply use the surface of a sphere with center at the position of the light source and a prescribed radius as initial guess . for the reflector problemwe start with a reflective surface producing a homogeneous illumination pattern on the target .we obtain this reflector by first using the method of supporting ellipsoids and our collocation technique afterwards , see also ( * ? ? ?* section 5.2.3 ) .the density function corresponds to the target illumination on and is given by bit digital grayscale images ( integer gray values in the range from to ) .since we divide by in right - hand side of the monge ampre equation , the function should be bounded away from zero . to guarantee this lower bound we adjust the image and use the modified function with , see also ( * ? ? ?* ( 5.9 ) ) .numerical experiments show that the value leads to good results . in order to satisfy the energy conservation condition the function needs to be scaled accordingly .in this section we discuss some numerical simulation results obtained by the collocation method for the inverse reflector and refractor problems . for both optical problems and all of our simulations we use the domain and a light source with a lambertian - type emission characteristics .its emitted luminous intensity is rotationally symmetric , shows a fast decay and is proportional to , where ] . 
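a sketch of the image preprocessing described above : the discrete mollifier blur used in the nested iteration , the gray - value floor that keeps the right - hand side bounded away from zero , and the rescaling that enforces a discrete analogue of the energy conservation condition . the kernel construction follows the standard mollifier quoted in the text ; the concrete floor value , blur radii and flux normalisation used by the authors are not reproduced here , so the parameters below are placeholders .

import numpy as np
from scipy.ndimage import convolve

def mollifier_kernel(radius):
    # discretised standard mollifier: exp(1 / ((|x| / radius)^2 - 1)) inside |x| < radius, else 0
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = (x**2 + y**2) / float(radius**2)
    k = np.zeros_like(r2, dtype=float)
    inside = r2 < 1.0
    k[inside] = np.exp(1.0 / (r2[inside] - 1.0))
    return k / k.sum()

def preprocess(image, radius, floor, total_flux):
    # blur for the current stage of the nested iteration, bound the density away from zero,
    # and rescale so the blurred target carries the prescribed total flux;
    # 'floor' and 'total_flux' are placeholders, not the paper's actual values
    g = convolve(np.asarray(image, dtype=float), mollifier_kernel(radius), mode='nearest')
    g = np.maximum(g, floor)
    return g * (total_flux / g.sum())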
for the refractor problemthe dimensions including the size of the optical surfaces are chosen very similarly to the case of the reflector problem to have a comparable situation , see figure [ fig : parametrization_lens ] . herewe use a part of the surface of a sphere with radius as initial guess , see also section [ sec : initial_guess ] , and \times [ -4,4 ] \times \ { 20 \}$ ] .as refractive indices we use for the lens representing an average glass material and for the environment .the calculated reflector or lens is verified using the ray tracing software pov - ray . in the nested iterationwe successively solve the nonlinear systems of equations for the following pairs of grid resolutions : , , , , , , , , , and , see section [ sect : nestediteration ] for the details .the newton - type method ends after at most iterations .the regularization parameter in the modified determinant as defined in section [ sec : ellipticity ] is set to which turns out to be an appropriate choice for all examples .( 0.09,-4.66 ) rectangle ( 12.76 , 4.84 ) ; the results of the numerical simulations are depicted in figure [ fig : refractor_results1 ] . in the first rowthe original test images are shown .the first three of them are chosen to examine different characteristics within the images , like thin straight lines and lettering as in the image `` boat '' , see figure [ fig : reflector_results1_a ] .different patterns of high and low contrast are present in the image `` goldhill '' , see figure [ fig : reflector_results1_b ] , in particular at the windows and roofs of the houses and the surrounding landscape in the background .the image `` mandrill '' in figure [ fig : reflector_results1_c ] shows the face of a monkey with a lot of fine details like the whiskers .the fourth and most challenging of our test pictures is the logo of our institute in figure [ fig : reflector_results1_d ] because it shows the highest possible contrast and contains jumps in the gray value from black to white. the iteration counts and timings for the numerical experiments are given in tables [ tab : reflectors_a ] and [ tab : reflectors_b ] ..number of iterations of the newton - type method for each of the ten nested iterations and overall computing time in seconds for the standard test images in figure [ fig : reflector_results1_a ] boat `` , figure [ fig : reflector_results1_b ] ' ' goldhill `` , and figure [ fig : reflector_results1_c ] ' ' mandrill . [ cols=">,^,^,^,<,^,^,^ " , ] [ tab : reflectors_b ] first , we notice that for a given original image the output images obtained by forward simulation for the reflector and refractor problem look very similar . in comparison to the original imagesthe output images are slightly blurred and have a little less contrast but visually they only differ locally at very few locations .major deviations can be observed in the background of the institute s logo which is not completely black after the forward simulation of the mirror and the lens .this is because of the minimal gray value needed to avoid the division by zero , see section [ sec : min_gray ] .we see that all of these characteristics of the first three test images are well preserved by our method .the computing time for the refractor is approximately twice as long as for the reflector but still acceptable with about minutes . 
for the fourth test image we had to adjust the parameters in the nested iteration process to handle the sharp edges and work with a finer grid ,see table [ tab : reflectors_b ] .we also raised the minimal gray value in section [ sec : min_gray ] from to obtaining a proportion between black and white of .these parameters lead to results showing also a very sharp logo for both the inverse refractor and reflector problems .note that nevertheless in two stages of the nested iteration for the refractor problem the quasi - newton method was stopped because the maximal number of iterations was reached without meeting the required tolerances , see table [ tab : reflectors_b ] .this happened only for two intermediate steps of the nested iteration process while we observe convergence in the last iteration , which shows us that this does not affect the overall method . in the case of the refractor problem the gray line below the lettersis irregularly illuminated and slightly too bright .nevertheless , the shape of this line is reproduced very precisely .the optically active surface of the lens for the projection of the institute s logo is displayed in figure [ fig : refractor_results2 ] .note that the characters used in the logo can be recognized on the surface .we observe that they cover about the half of the lens surface while this is not the case in the original image .of course this is what we expect because we want to redirect a maximal amount of incoming light onto these letters .for the efficient and stable solution of the inverse reflector and refractor problems we propose a numerical b - spline collocation method which is applied to the formulation of the inverse optical problems as partial differential equations of monge ampre type and appropriate boundary conditions .several challenges for the construction of a stable numerical solution method have been met , e.g. we detailed how to enforce ellipticity constraints to ensure uniqueness of the solution and how to handle the involved boundary conditions .a nested iteration approach simultaneously considerably improves the convergence behavior and speeds up the numerical procedure . for the inverse refractor problemour algorithm provides a reliable and fast method to compute one of the two surfaces of the lens under the assumption of a point - shaped light source . shaping the second surface of the lens , e.g. to minimize fresnel losses , and exploring possible solution strategies for the problem for extended real light sources are topics of upcoming research .the authors are deeply indebted to professor dr .wolfgang dahmen for many fruitful and inspiring discussions on the topic of solving equations of monge ampre type .we thank elisa friebel , silke glas , and gudula kmmer for proofreading the manuscript ., _ generalized functional method of nonimaging optical design _ , in nonimaging optics and efficient illumination systems iii , 2006 , p. 633805 .doi : http://dx.doi.org/10.1117/12.678600[10.1117/12.678600 ] ., _ solving the monge - ampre equations for the inverse reflector problem _ , math .models methods appl ., 25 ( 2015 ) , pp .doi : http://dx.doi.org/10.1142/s0218202515500190[10.1142/s0218202515500190 ] ., _ recent developments in numerical methods for fully nonlinear second order partial differential equations _ , siam rev . , 55 ( 2013 ) , pp .doi : http://dx.doi.org/10.1137/110825960[10.1137/110825960 ] ., _ a numerical method for the elliptic monge - ampre equation with transport boundary conditions _ , siam j. sci .comput . 
, 34 ( 2012 ) , pp .doi : http://dx.doi.org/10.1137/110822372[10.1137/110822372 ] ., _ the monge ampre equation _44 of progress in nonlinear differential equations and their applications , birkhuser , basel , 2001 .doi : http://dx.doi.org/10.1007/978-1-4612-0195-3[10.1007/978-1-4612-0195-3 ] ., _ fully nonlinear pdes in real and complex geometry and optics _ , vol . 2087 of lecture notes in math . ,springer , heidelberg , 2014 , ch .refraction problems in geometric optics , pp. 95150 .doi : http://dx.doi.org/10.1007/978-3-319-00942-1_3[10.1007/978-3-319-00942-1_3 ] ., _ determination of reflector surfaces from near - field scattering data .ii : numerical solution _ , numer . math ., 79 ( 1998 ) , pp .doi : http://dx.doi.org/10.1007/s002110050351[10.1007/s002110050351 ] ., _ designing freeform lenses for intensity and phase control of coherent light with help from geometry and mass transport _ , arch . ration ., 201 ( 2011 ) , pp . 10131045 .doi : http://dx.doi.org/10.1007/s00205-011-0419-x[10.1007/s00205-011-0419-x ] ., _ differential equations for design of a freeform single lens with prescribed irradiance properties _ , opt . engrg ., 53 ( 2014 ) , p. 031302 .doi : http://dx.doi.org/10.1117/1.oe.53.3.031302[10.1117/1.oe.53.3.031302 ] ., _ illumination lenses designed by extrinsic differential geometry _ , in proceedings of spie , vol .3482 , spie , bellingham , wa , 1998 , pp .doi : http://dx.doi.org/10.1117/12.322042[10.1117/12.322042 ] ., _ a survey of inverse surface design from light transport behavior specification ._ , computer graphics forum , 24 ( 2005 ) , pp. 773789 .doi : http://dx.doi.org/10.1111/j.1467-8659.2005.00901.x[10.1111/j.1467-8659.2005.00901.x ] ., _ tailoring freeform lenses for illumination _ , in proceedings of spie , j. m. sasian and p. k. manhart , eds .4442 of novel optical systems design and optimization iv , spie , bellingham , wa , 2001 , pp .doi : http://dx.doi.org/10.1117/12.449957[10.1117/12.449957 ] ., _ efficient optimal design of smooth optical freeform surfaces using ray targeting _ , opt .commun . , 300 ( 2013 ) , pp .doi : http://dx.doi.org/10.1016/j.optcom.2013.02.067[10.1016/j.optcom.2013.02.067 ] ., _ freeform illumination design : a nonlinear boundary problem for the elliptic monge ampre equation _ , opt .lett . , 38 ( 2013 ) , pp .doi : http://dx.doi.org/10.1364/ol.38.000229[10.1364/ol.38.000229 ] .
we consider the inverse refractor and the inverse reflector problem . the task is to design a free - form lens or a free - form mirror that , when illuminated by a point light source , produces a given illumination pattern on a target . both problems can be modeled by strongly nonlinear second - order partial differential equations of monge ampre type . in [ math . models methods appl . sci . 25 ( 2015 ) , pp . 803837 , doi : http://dx.doi.org/10.1142/s0218202515500190[10.1142/s0218202515500190 ] ] the authors have proposed a b - spline collocation method which has been applied to the inverse reflector problem . now this approach is extended to the inverse refractor problem . we explain in depth the collocation method and how to handle boundary conditions and constraints . the paper concludes with numerical results of refracting and reflecting optical surfaces and their verification via ray tracing . inverse refractor problem , inverse reflector problem , elliptic monge ampre equation , b - spline collocation method , picard - type iteration 35j66 , 35j96 , 35q60 , 65n21 , 65n35 * ocis . * ( 000.4430 ) numerical approximation and analysis , ( 080.1753 ) computation methods , ( 080.4225 ) nonspherical lens design , ( 080.4228 ) nonspherical mirror surfaces , ( 080.4298 ) nonimaging optics , ( 100.3190 ) inverse problems
recent advancements in nanotechnology have offered several practical approaches for the realization of physical , biological or even hybrid nanomachines . typically , such nanomachines are a few tens nanometers in size and are able to perform simple tasks such as computing , storage , sensing and actuation .nanomachines with communication capabilities can be interconnected to form a nanonetwork through which complex tasks that may be relevant and necessary for realizing different biomedical , environmental and industrial applications can be executed in a collaborative manner .one of the most promising paradigms to interconnect nanomachines , especially the biological and bio - physical hybrid ones , to set up a nanonetwork is molecular communication .molecular communication involves the transmission of information encoded using molecules that physically travel from a transmitter nanomachine to a receiver nanomachine .several types of molecular transport mechanisms have been studied so far .broadly , they involve either passive transport of molecules utilizing free particle diffusion dynamics or active transport of molecules using bacterial chemotaxis and molecular motors that generate motion . in this work , we consider the former diffusion - based passive molecular transport mechanism .recently , kuran et al .proposed two different modulation schemes for realizing molecular communication via diffusion : concentration shift keying ( csk ) and molecule shift keying ( mosk ) .they are analogous to amplitude shift keying and frequency shift keying , respectively , which have been popularly used in electromagnetic communication for many decades .csk uses different concentrations ( or simply number ) of molecules to uniquely represent different information symbols while mosk uses different types of molecules for such purpose . for the transmission of information bits in one mosk - modulated symbol , different types of molecules are required .hence , the complexity of transmitter and receiver nanomachines increases as increases and thus mosk may not be practical . consider serial transmission of csk - modulated information symbols over a time - slotted diffusion channel as described in section ii .in such a diffusion channel , for a transmitted information symbol , molecular concentration ( or alternately number of messenger molecules ) observed at the receiver nanomachine is initially ( i.e. at the start of a slot ) zero and it quickly increases until reaching its maximum .then , the molecular concentration slowly decreases over time resulting in a long - tailed molecular concentration signal that may stretches over several slot durations .messenger molecules belonging to the tail of the molecular concentration signal of _ previous _ information symbols thus act as the source of interference to the molecular concentration signal of the _ current _ information symbol .such interference is called inter symbol interference ( isi ) . 
in and , authors have shown that isi significantly deteriorates the decoding performance of the receiver nanomachine .motivated with this problem , in this letter , we present a robust modulation scheme , zebra - csk , that attempts to reduce isi as much as possible .we consider a molecular communication system consisting of a pair of transmitter and receiver nanomachines .the transmitter nanomachine is fixed at the origin of an unbounded three - dimensional stationary fluidic environment while the receiver nanomachine is distance apart from the transmitter nanomachine .furthermore , we consider the molecular communication system is time - slotted with a slot duration of , and the transmitter and receiver nanomachines are perfectly synchronized . at the transmitter nanomachine ,a binary csk modulation is employed ; impulse of molecules are released at the start of the slot for binary symbol while no molecules are released for binary symbol .the encoded information symbols will be conveyed to the receiver nanomachine through molecular diffusion .once a molecule arrives at the receiver nanomachine , it will be removed from the communication medium .we assume that all molecules are homogeneous and diffuses independently with respect to other molecules with a common diffusion coefficient found using the einstein relation where is the boltzmann constant ( ) , is the temperature in kelvin , is the viscosity of the fluidic medium , and is the common radius of the molecules .the receiver nanomachine counts the total number of received molecules ( denoted by a random variable ) at the end of each time slot and applies the following decision rule to determine the transmitted binary symbol : where is a pre - specified threshold .following two classes of molecules are used in the proposed zebra - csk : ( i ) messenger molecules for encoding information symbols , and ( ii ) inhibitor molecules for supressing residual messenger molecules from the previous information symbol .[ pe1 ] fig .1 depicts the high - level flowchart of zebra - csk . in zebra - csk , as the name suggests , types of messenger molecules in subsequent information symbols are altered from _ type a _ messenger molecules ( denoted as m ) to _ type b _ messenger molecules ( denoted as m ) , or vice versa , while the information encoding mechanism is similar with that of the conventional csk .furthermore , each type of messenger molecules are accompanied by inhibitor molecules of the other type . in other words ,m molecules are accompanied with the inhibitor of m molecules ( denoted as i ) while m molecules are accompanied with the inhibitor of m molecules ( denoted as i ) .[ pe1 ] fig .2 illustrates temporal molecular - type alternation of both messenger and inhibitor molecules in the proposed zebra - csk . in a given slot duration ,when one type of messenger molecules are released along with inhibitor of the other type messenger molecules , the late - arriving residual messenger molecules from the previous symbol ( which are of the other type ) are acted upon by the inhibitor molecules . as a consequence, isi is reduced. however , it is noteworthy to mention that magnitude of the reduction in isi depends on the efficiency of the used inhibitor molecules .the impulse of messenger molecules released from the transmitter nanomachine spread out through the fluid medium via diffusion and hence the random motion of the messenger molecules can be represented as brownian motion . 
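for orientation , the diffusion coefficient entering the model follows from the einstein relation quoted above ; the stokes - einstein form below is the usual one in this literature , and the numerical values for temperature , viscosity and molecule radius are illustrative assumptions rather than values taken from the letter .

import math

def diffusion_coefficient(temperature=310.0, viscosity=1.0e-3, radius=2.5e-9):
    # stokes - einstein: d = k_b * t / (6 * pi * eta * r), in m^2 / s
    k_b = 1.380649e-23                    # boltzmann constant, j / k
    return k_b * temperature / (6.0 * math.pi * viscosity * radius)

print(diffusion_coefficient())            # roughly 9e-11 m^2 / s for these illustrative values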
following the well-developed literature on the analysis of brownian motion, the probability density function of the time a messenger molecule requires to reach the receiver nanomachine (referred to as the absorbance time of a brownian particle in the literature) can be written as . utilizing (3), the probability that a molecule reaches the receiver nanomachine within a current slot of can be obtained as . define to be a binary random variable indicating whether the -th molecule among the molecules released by the transmitter nanomachine at the beginning of a slot arrives at the receiver nanomachine before the slot ends, i.e., if the molecule arrives within and otherwise. thus, considering , we have , where is a random variable denoting the total number of molecules received at the receiver nanomachine within . given the number of transmitted molecules , from (5) it can be seen that follows the binomial distribution, i.e., . for large , the distribution of can be approximated as a gaussian distribution with the knowledge of its mean and variance as . note that the distribution of does not consider the late-arriving messenger molecules from the previous symbols. let be a random variable denoting the total number of messenger molecules received at the receiver nanomachine among the molecules released in the previous slot. it is evident that is also binomially distributed as , and can be approximated as , where for a given inhibition efficiency of the inhibitor molecules. based on (6) and (7), the distribution of a random variable representing the number of messenger molecules that may cause inter-symbol interference, denoted by , can thus be expressed as . utilizing the distributions of and in (6) and (9), we next calculate the channel capacity and also derive the symbol error performance of the receiver nanomachine for the proposed modulation scheme. consider serial transmission of information symbols where binary symbols and occur with _a priori_ probabilities equal to and , respectively. in such a serial transmission, the probabilities of correct detection of a transmitted symbol at the receiver nanomachine (the probability that is received when is transmitted and is received when is transmitted) can be expressed as and , where is a binary random variable indicating the information symbol transmitted in the previous slot and is the tail probability of the gaussian probability distribution function. on the other hand, the probabilities of erroneous detection of a transmitted symbol at the receiver nanomachine (the probability that is received when is transmitted and is received when is transmitted) can be expressed as $\cdots + q^2\big[1-Q(a_3)\big]$. the total probability that a symbol is decoded erroneously at the receiver nanomachine, denoted as , is the sum of and ; thus, using (12) and (13), $\cdots + q^2\big[1-Q(a_3)\big]$. next, we determine the capacity of the considered molecular communication system, i.e., the maximum rate of transmission between the transmitter nanomachine and the receiver nanomachine.
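before turning to that capacity calculation, here is a minimal numerical sketch of the arrival statistics and of the threshold scan described above; the first-passage form used in `p_hit` is an assumed one-dimensional expression standing in for eq. (3), `eff` is the inhibition efficiency, and `transition_for_threshold` is a placeholder for the transition probabilities of eqs. (10)-(13), so none of these names come from the letter itself.

```python
import math

def p_hit(d, D, T):
    """probability that a molecule released a distance d from the receiver is
    absorbed within a slot of length T (an assumed 1-d first-passage form,
    standing in for the letter's eq. (3))."""
    return math.erfc(d / math.sqrt(4.0 * D * T))

def arrival_moments(n, p):
    """mean and variance of the binomial arrival count N ~ Bin(n, p); for large n
    these feed the gaussian approximation used in eq. (6)."""
    return n * p, n * p * (1.0 - p)

def isi_moments(n, p_prev, eff):
    """late arrivals from the previous slot, with a fraction `eff` of them removed
    by the inhibitors (an assumption about how the inhibition efficiency enters)."""
    p = (1.0 - eff) * p_prev
    return n * p, n * p * (1.0 - p)

def mutual_information(p1, trans):
    """I(X;Y) of a binary channel; trans[x][y] = P(Y=y | X=x), p1 = P(X=1)."""
    px = {0: 1.0 - p1, 1: p1}
    py = {y: sum(px[x] * trans[x][y] for x in (0, 1)) for y in (0, 1)}
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = px[x] * trans[x][y]
            if pxy > 0.0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

def best_threshold(p1, transition_for_threshold, thresholds):
    """scan candidate detection thresholds and keep the one maximizing the
    mutual information (the quantity plotted against the threshold in fig. 3)."""
    return max(((tau, mutual_information(p1, transition_for_threshold(tau)))
                for tau in thresholds), key=lambda item: item[1])
```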
this can be calculated utilizing shannon's formula, which defines the capacity as the maximum mutual information between the transmitted symbol and the received symbol as , where . note that can be calculated using (10)-(13), while the marginal probabilities and can be obtained using and . fig. 3 shows the mutual information between the zebra-csk modulated binary symbols transmitted by the transmitter nanomachine and the binary symbols received by the receiver nanomachine. for all three considered cases of inhibition efficiency of the inhibitor molecules (viz. and ), the mutual information increases with the detection threshold value until reaching its maximum, but starts to fall with a further increase in the detection threshold. note that zebra-csk with null inhibition efficiency (i.e., ) is logically equivalent to the conventional csk. from the figure it is evident that higher mutual information can be achieved between zebra-csk modulated transmitted symbols and received symbols by selecting a relatively smaller detection threshold than the one that maximizes the mutual information between csk modulated symbols and the received symbols. the higher mutual information in zebra-csk thus results in higher channel capacity. for instance, for the considered , , and values, the channel capacity of zebra-csk is and higher than that of csk when and , respectively. fig. 4 shows the symbol error performance of the receiver nanomachine while demodulating zebra-csk modulated binary symbols. the symbol error probability of zebra-csk is significantly lower than that of the conventional csk for most of the possible detection threshold values that are smaller than the optimum detection threshold value for csk. interestingly, the higher the inhibition efficiency of the inhibitor molecules, the larger the reduction in the symbol error probability. for instance, for the considered , , and values, the minimum achievable symbol error probability of zebra-csk (i.e., at the optimum detection threshold) is when and when . these are clearly and improvements over the minimum achievable symbol error probability of in the conventional csk. fig. 5 shows the symbol error performance of the receiver nanomachine located at various distances from the transmitter nanomachine. for a given detection threshold, the symbol error probability increases as the distance between the transmitter and receiver nanomachines increases. the symbol error probability of zebra-csk is always lower than that of the conventional csk regardless of the distance between the transmitter and receiver nanomachines. it is noteworthy that, except for very high inhibition efficiencies close to , the symbol error performance gain of zebra-csk over csk (i.e., the difference between the symbol error probabilities of csk and zebra-csk) increases with an increase in the distance between the transmitter and receiver nanomachines. in this paper, we have proposed a new modulation technique (named zebra-csk) for transmitting information symbols over a diffusive channel in nanoscale molecular communication networks. the proposed modulation technique is robust in the sense that it reduces inter-symbol interference among subsequently transmitted information symbols. the reduction in the inter-symbol interference depends on the efficiency of the molecules used to inhibit the messenger molecules.
through numerical analysis, we have shown that the symbol detection error probability of a receiver nanomachine is lower when demodulating zebra-csk modulated binary information symbols than when demodulating the conventional csk modulated binary information symbols. as future work, we plan to analyze and quantify the performance of the proposed zebra-csk for quaternary and even higher-order modulations.
diffusion-based molecular communication over nanonetworks is an emerging communication paradigm that enables nanomachines to communicate by using molecules as the information carrier. for such a communication paradigm, concentration shift keying (csk) has been considered one of the most promising techniques for modulating information symbols, owing to its inherent simplicity and practicality. csk-modulated subsequent information symbols, however, may interfere with each other due to the random amount of time that the molecules of each modulated symbol take to reach the receiver nanomachine. to alleviate the inter-symbol interference (isi) problem associated with csk, we propose a new modulation technique called zebra-csk. the proposed zebra-csk adds _inhibitor molecules_ to the csk-modulated molecular signal to selectively suppress isi-causing molecules. numerical results from our newly developed probabilistic analytical model show that zebra-csk not only enhances the capacity of the molecular channel but also reduces the symbol error probability observed at the receiver nanomachine. diffusion, inter-symbol interference, modulation, molecular communications, nanonetworks
this methodology to codify and extract symbolic knowledge from a nn is very simple and efficient for the extraction of comprehensible rules from medium-sized data sets. it is, moreover, very sensitive to attribute relevance. from a theoretical point of view, it is particularly interesting that restricting the values assumed by neuron weights restricts the information propagation in the network, thus allowing the emergence of patterns in the neuronal network structure. for the case of linear neuronal networks, having as activation function the identity truncated to 0 and 1, these structures are characterized by the occurrence of patterns in the neuron configuration that are directly presentable as formulas in logic.
this work describes a methodology that combines logic-based systems and connectionist systems. our approach uses finite truth-valued łukasiewicz logic, wherein every connective can be defined by a neuron in an artificial network. this allows the injection of first-order formulas into a network architecture, and also simplifies symbolic rule extraction. for that we trained neural networks using the levenberg-marquardt algorithm, where we restricted the knowledge dissemination in the network structure. this procedure reduces neural network plasticity without drastically damaging the learning performance, thus making the descriptive power of the produced neural networks similar to the descriptive power of the łukasiewicz logic language and simplifying the translation between symbolic and connectionist structures. we used this method for reverse engineering truth tables and for extracting formulas from real data sets. there are essentially two representation paradigms, usually regarded as very different. on one hand, symbolic-based descriptions are specified through a grammar that has fairly clear semantics. on the other hand, the usual way to see information presented using a connectionist description is its codification in a neural network (nn). artificial nns, in principle, combine, among other things, the ability to learn and robustness or insensitivity to perturbations of input data. nns are usually taken as black boxes, thereby providing little insight into how the information is codified. it is natural to seek a synergy integrating the _white-box_ character of symbolic-based representation and the learning power of artificial neuronal networks. such neuro-symbolic models are currently a very active area of research: for the extraction of logic programs from trained networks see . our approach to neuro-symbolic models and knowledge extraction is based on a language comprehensible for humans, representable directly in a nn topology and able to be used. this is done with knowledge-based networks, to generate the initial network architecture from crude symbolic domain knowledge. in the other direction, the hardest problem, the neural language can be translated into a symbolic language. however, this is usually done by identifying the most significant determinants of a decision or classification; hence, any individual unit must be associated with a single concept or feature of the problem domain. in this work we used a first-order language wherein formulas are interpreted as nns. in this framework, formulas are simple to inject into a multilayer feed-forward network, and the system is free from the need to give an interpretation to hidden units in the problem domain. our approach to the generation of neuro-symbolic models uses łukasiewicz logic. this type of many-valued logic has a very useful property motivated by the linearity of its logic connectives: every logic connective can be defined by a neuron in an artificial network having, as activation function, the identity truncated to zero and one. this allows the direct codification of formulas into the network architecture, and simplifies the extraction of rules. multilayer feed-forward nns having this type of activation function can be trained efficiently using the levenberg-marquardt (lm) algorithm, and the generated network can be simplified quickly using the ``optimal brain surgeon'' algorithm proposed by b. hassibi, d. g. stork and g. j. wolff.
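the central device here, a neuron whose activation is the identity truncated to zero and one, can be made concrete in a few lines; the sketch below (function names are mine) shows how single neurons with suitably chosen weights and biases realize łukasiewicz connectives on truth values in [0, 1].

```python
def clip01(x):
    """identity truncated to zero and one - the activation function used throughout."""
    return max(0.0, min(1.0, x))

def neuron(weights, bias, inputs):
    """one linear neuron with the truncated-identity activation."""
    return clip01(sum(w * v for w, v in zip(weights, inputs)) + bias)

def luk_disjunction(x, y):
    """łukasiewicz bounded sum min(1, x + y): weights (1, 1), bias 0."""
    return neuron((1.0, 1.0), 0.0, (x, y))

def luk_conjunction(x, y):
    """łukasiewicz t-norm max(0, x + y - 1): weights (1, 1), bias -1."""
    return neuron((1.0, 1.0), -1.0, (x, y))

def luk_negation(x):
    """łukasiewicz negation 1 - x: weight -1, bias 1."""
    return neuron((-1.0,), 1.0, (x,))
```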
this strategy has good performance when applied to the reconstruction of formulas from truth tables. in this type of reverse engineering problem we presuppose no noise; however, the process is stable under the introduction of gaussian noise. this motivates its application to extract comprehensible symbolic rules from real data. classical propositional logic is one of the earliest formal systems of logic. the algebraic semantics of this logic are given by boolean algebra. both the logic and the algebraic semantics have been generalized in many directions. many-valued logic is one of these generalizations, and can be conceived as a set of formal representation languages that have proven to be useful for both real-world and computer science applications. in applications of many-valued logic, like fuzzy logic, the properties of boolean conjunction are too rigid; this is overcome by extending the language with a new binary connective, usually called _fusion_. the generalization of boolean algebra can be based on the relationship between conjunction and implication given by . these equivalences can be used to present implication as a generalized inverse of conjunction. these two operators are defined on a partially ordered set of truth values, , thereby extending the two-valued set of a boolean algebra. if has more than two values, the associated logics are called _many-valued logics_. a many-valued logic having $[0,1]$ as its set of truth values is called a _fuzzy logic_. in this type of logic, a continuous fusion operator is known as a _t_-norm. the following are examples of continuous t-norms: 1. the _łukasiewicz_ t-norm: . 2. the product t-norm: the usual product of real numbers. 3. the gödel t-norm: . the fuzzy logic defined using the _łukasiewicz_ t-norm is called łukasiewicz logic (ł logic), and the corresponding propositional calculus has a nice complete axiomatization. in this type of logic the implication is called the _residuum_ operator, and is given by . as in first-order languages, in ł logic sentences are usually built from a (countable) set of propositional variables, the fusion operator , implication , and the truth constant 0. further connectives are defined as follows: . the extracted formula is -similar, with , to the original nn. the formula misses the classification for 40 cases. note that the symbolic model is stable: the bad performance of the representation does not affect the model.
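for reference, the standard definitions of the three continuous t-norms listed above and of the łukasiewicz residuum can be written as follows; these are the usual textbook forms, a sketch rather than a quotation of the paper's own equations.

```python
def t_lukasiewicz(x, y):
    """łukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def t_product(x, y):
    """product t-norm: the usual product of real numbers."""
    return x * y

def t_goedel(x, y):
    """gödel t-norm: the minimum."""
    return min(x, y)

def luk_residuum(x, y):
    """łukasiewicz residuum (implication): min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)
```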
the universe after the last scattering can be probed only through observing the distribution of luminous objects , either directly or indirectly via the weak lensing effect .this is why several wide - field and/or deep surveys of galaxies , clusters and quasars are planned and operating at various wavelengths .the purpose of such cosmological surveys is two - fold ; to understand the nature of the astronomical objects themselves , and to extract the cosmological information . from the latter point of view , which we will pursue throughout this paper ,the objects serve merely as tracers of dark matter in the universe .since such _ luminous _ objects should have been formed as a consequence of complicated astrophysical processes in addition to the purely gravitational interaction , it is quite unlikely that they faithfully trace either the spatial distribution of dark matter or its redshift evolution .rather it is natural to assume that they sample the dark matter distribution in a biased manner . to describe the _ biasing _ more specifically ,define the density contrasts of galaxies and mass at a position and redshift smoothed over the scale as here and in what follows we use the words , `` mass '' and `` dark matter '' , interchangeably , and `` galaxies '' to imply luminous objects ( galaxies , clusters , quasars , etc . ) in a collective sense .the simplest , albeit most _ unlikely _ , possibility is that they are proportional to each other : while the proportional coefficient was assumed to be constant when the concept of the biasing was first introduced in the cosmology community ( kaiser 1984 ; davis et al . 1985 ; bardeen et al.1986 ) , it has been subsequently recognized that should depend on and ( e.g. , fry 1996 ; mo & while 1996 ) . as a matter of fact, it is more realistic to abandon the simple linear biasing ansatz ( [ eq : linbias ] ) completely , and formulate the biasing in terms of the conditional probability function of at a given . then equation ( [ eq : linbias ] ) is rephrased as obviously the relation between and is neither linear nor deterministic .this general concept nonlinear and stochastic biasing was introduced and developed in a seminal paper by dekel & lahav ( 1999 ) , which inspired numerous recent activities in this field ( e.g. , pen 1998 ; taruya , koyama & soda 1999 ; blanton et al .1999 ; matsubara 1999 ; taruya 2000 ; somerville et al . 2000 ; taruya et al .2000 ) .then the crucial question is the physical interpretation of .there are ( at least ) two different interpretations for its physical origin .the first is based on the fact that is meaningless unless one specifies many _ hidden _ variables characterizing the galaxies in the catalogue , for instance , their luminosity , mass of dark matter and gas , temperature , physical size , formation epoch and merging history , among others .this list is already long enough to convince one for the inevitably broad distribution of . in this spirit ,blanton et al . ( 1999 ) proposed that the gas temperature of a local patch is the important variable to control on the basis of cosmological hydrodynamical simulations .the other adopts the view that our universe is fully specified by the primordial density field of dark matter . 
according to this interpretation, all the properties of any galaxy should be in principle computable given an initial distribution of dark matter field in the entire universe , at least in a statistical sense .actually this is exactly what the cosmological hydrodynamical simulations attempt to do .the gas temperature of a local patch , for instance , should be determined by a non - local attribute of the dark matter fluctuations .clearly the above interpretations are not conflicting , but rather stress the two different aspects of the physics of galaxy formation which is poorly understood at best . in this paper , we present an analytical model for nonlinear stochastic biasing by combining the above two interpretations in a sense .specifically we derive the joint probability function of and , , from the distributions of the formation epoch and mass of halos on the basis of the extended press - schechter theory .in contrast to previous work which were based on the results of numerical simulations ( kravtsov & klypin 1999 ; blanton et al .1999 ; somerville et al .2000 ) , our model provides , for the first time , an analytic expression for . thus one can compute the various biasing parameters in an arbitrary cosmological model at a given redshift and a smoothing length in a straightforward fashion .we derive the joint probability function assuming the primordial random - gaussianity of the dark matter density field , and thus the results are sufficiently general .we note here , however , that our primary purpose is to present a general formulation to predict biasing properties of halos , and not to make detailed predictions at this point .in fact we adopt a few approximations with limited validity in presenting specific examples , but this is not essential in our paper and the resulting predictions can be improved in a straightforward manner if other analytical / numerical approximations become available .nevertheless we would like to emphasize that our simple analytical prescription largely explains the basic features of the biasing parameters reported in the previous numerical simulations ( kravtsov & klypin 1999 ; somerville et al .thus our prescription is supposed to capture the most important processes in the halo biasing .we organize the paper as follows . in [ sec : formalism ] , we describe a general formalism for the one - point statistics of the galaxy and the mass distributions from a point of view of the hidden variable interpretation of the nonlinear and stochastic biasing theory . applying this general formalism , we develop a model for dark matter halo biasing treating the halo mass and formation epoch as the hidden variables in [ sec : model ] .the resulting expression for the conditional probability function , , can be numerically evaluated using the extended press - schechter theory , and we show various predictions for the halo biasing in [ sec : results ] .in particular , we pay attention to their scale - dependence and redshift evolution , and compare our model predictions to previous simulation results .finally section [ sec : conclusions ] is devoted to the conclusions and discussion .in this section , we present a general formalism of biasing for the one - point statistics of the galaxies and the mass smoothed with the radius . 
while we specifically focus on the second order statistics and discuss their nonlinearity and stochasticity in the present paper , the formalism is readily applicable to higher - order statistics .recall that the fluctuations of galaxies and the dark matter density field are given by where the variables with over - bar denote the homogeneous mean over the entire universe .we evaluate these quantities smoothed with a spherical symmetric filter function : which corresponds to quantities in equation ( [ eq : dgdm ] ) .the one - point statistics of galaxies and the mass are calculated from equation ( [ eq : smoothed - delta ] ) . in general , fluctuations of the biased objects are specified by multi - variate functions of and other observable and unobservable variables , , , characterizing the sample of objects .then one can formally write where we use to denote the galaxy number density contrast at a position and redshift smoothed over a scale as a function of , , and . in the above expression , and should be also regarded as functions of ( smoothed over the size . in practice ,galaxies in redshift surveys are identified and/or classified according to their magnitude , spectral and morphological type .the spatial clustering of galaxies should naturally depend on those observable quantities , . since any sample of galaxiesis selected over a range of values for , the distribution of leads to the stochasticity of the clustering biasing of the sample .furthermore , the unobservable quantities or the _ hidden variables _ , , which characterize an individual galaxy reflecting the different history of gravitational clustering and merging , radiative cooling , and environment effects , should provide additional stochasticity . although these processes could be related to the dark matter density fluctuation in a `` non - local '' fashion , we intend to incorporate those effects into our biasing model by a set of local functions such as the gas temperature , mass of the hosting halos , and the formation epoch of galaxies . while the distinction between and is _ conceptually _ important , it may not be easy or straightforward in reality .nevertheless it is not essential in our prescription below as long as their probability distribution function ( pdf ) : is specified .it should be noted that the above expression implicitly depends on the smoothing radius and the redshift for the given classes .the statistical information of galaxy biasing is obtained by averaging over the joint pdf . to be more specific ,the joint average of a function is defined by where we use so as to explicitly denote the joint average over the two stochastic variables , and . in our prescription , however , it is more convenient to perform the averaging over instead of : where the variable in the argument of should be regarded as a function of and ( eq.[[delta - g ] ] ) . of course the two expressions ( [ ensemble - average2 ] ) and ( [ ensemble - average ] ) should give the identical result , and thus one obtains where the region of integration is defined as for a given .equation ( [ pdf - hidden ] ) implies that the unobservable information represented by serves as a source for stochasticity between and . in other words , equations ( [ pdf - hidden ] ) and ( [ constraint ] ) can be regarded as to the definition of the joint pdf .once is specified , equation ( [ pdf - hidden ] ) can be computed numerically in a straightforward manner . 
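as a concrete illustration of that numerical route, the following monte carlo sketch estimates a joint average over the hidden variables; `sample_hidden` (a draw of the mass density contrast together with the hidden variables from their joint pdf) and `delta_gal` (the mapping of eq. [delta-g]) are user-supplied placeholders, not quantities defined in the paper.

```python
import numpy as np

def joint_average(f, sample_hidden, delta_gal, n_draws=100_000, seed=0):
    """monte carlo estimate of <f(delta_gal, delta_mass)>: draw (delta_mass, hidden)
    from their joint pdf, map them to delta_gal, and average f over the draws
    (the averaging of eq. [ensemble-average])."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_draws):
        delta_mass, hidden = sample_hidden(rng)   # one realization of the smoothed fields
        total += f(delta_gal(delta_mass, hidden), delta_mass)
    return total / n_draws
```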
in the next section, we will develop a simple analytical model for the dark halo biasing and explicitly calculate the joint pdf according to this prescription assuming that the formation epoch of halos is the major variable in .finally when the joint pdf is given , it is straightforward to calculate the conditional pdf of galaxies for a given overdensity : where the one - point pdf of the mass is related to turn next to the statistical quantities characterizing the pdf . for this purpose , we consider the second - order statistics , variance of galaxies , variance of mass , and covariance of galaxies and mass , which are defined ( joint average over and ) even in the definition of and , it can be replaced by the single average over and , respectively .] respectively as it should be also noted here that the last quantity , covariance of galaxies and mass , is not positive definite unlike the first two . as we will see below , however , this is always positive for a range of model parameters of cosmological interest which we survey .thus we use for the notational convenience .their ratios represent the degree of the biasing and stochasticity : which are sometimes quoted as `` the '' biasing parameter and the cross correlation coefficient ( dekel & lahav 1999 ) . in this paper, we use the subscripts , var and corr , for the above parameters in order to avoid possible confusions with other parameters introduced in previous papers .the above definitions of and do not yet fully distinguish the nonlinear and stochastic nature of the biasing in a clear manner .thus we introduce more convenient statistical measures , and , which quantify the two effects separately . for this purpose , the conditional mean of for a given ( dekel & lahav 1999 ): plays a key role .note that the average of over vanishes from definition ( [ density_field ] ) : { { \delta_{\rm gal}}}\cr & = & \int d{{\delta_{\rm gal}}}\ , p({{\delta_{\rm gal } } } ) \ , { { \delta_{\rm gal}}}= 0 .\end{aligned}\ ] ] the nonlinearity of biasing refers to the departure from the linear proportional relation between and .this can be best quantified by the following measure : the second equality in the above comes from the fact that . from the schwartz inequality ,one show that the right - hand - side of the above equation is non - negative and vanishes only if the linear coefficient is independent of .the stochasticity of biasing corresponds to the scatter or dispersion of around its conditional mean .averaging this scatter over with proper normalization , we define the following measure for the stochasticity of biasing : } { \sigma^4_{\rm gm } } .\ ] ] since the galaxy density field ( eq.[[delta - g ] ] ) depends on many variables other than , does not vanish in general . in turn, corresponds to the unlikely case that is uniquely determined by and thus . the galaxy biasing still exists even when .in fact , a simple linear and deterministic biasing ( [ eq : linbias ] ) falls into this category .this effect can be separated out from the covariance or linear regression of and ( dekel & lahav 1999 ) as follows : this quantity is equivalent to the coefficient of the leading order in the taylor expansion , , in a perturbative regime , ( taruya & soda 1999 ) .the biasing parameters that we introduced are related to the more conventional biasing coefficients ( eq.[[eq : ratio1 ] ] ) as these relations clearly indicate that and separate the stochastic and nonlinear effects which are somewhat degenerate in the definitions of and . 
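a minimal numerical sketch of these second-order measures, assuming paired samples of the galaxy and mass density contrasts (for instance the monte carlo draws above); the estimator names are mine and the definitions follow the ratios quoted in the text.

```python
import numpy as np

def second_order_biasing(delta_g, delta_m):
    """variance ratio b_var, cross-correlation coefficient r_corr and the
    covariance (linear-regression) coefficient b_gm = <dg*dm>/<dm^2>,
    estimated from paired samples of (delta_gal, delta_mass)."""
    dg, dm = np.asarray(delta_g), np.asarray(delta_m)
    var_g = np.mean(dg ** 2)          # variance of galaxies/halos
    var_m = np.mean(dm ** 2)          # variance of mass
    cov_gm = np.mean(dg * dm)         # covariance (positive for the models surveyed)
    b_var = np.sqrt(var_g / var_m)
    r_corr = cov_gm / np.sqrt(var_g * var_m)
    b_gm = cov_gm / var_m
    return b_var, r_corr, b_gm
```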
also it may be useful to express our biasing parameters in terms of those introduced by dekel & lahav ( 1999 ) : of course the two sets of choice are essentially equivalent , we hope that our notation characterizes the physical meaning of nonlinear and stochastic biasing more clearly . finally , while the present paper is focused on the analyses of the second - order statistics , it is fairly straightforward to extend the above formalism to the higher - order statistics .the previous section describes the general formalism for the nonlinear stochastic biasing in terms of the hidden variable interpretation . in this sectionwe present a specific model for halo biasing and discuss its predictions according to the general formalism . before proceeding to the technical details ,it is useful to explain first the basic picture of our model in a qualitative manner .as illustrated schematically in figure [ fig : ps_halos ] , we consider the mass and galaxy density fields at redshift smoothed over the top - hat _eulerian _ proper radius .the mass density contrast computed in the eulerian coordinate relates with its lagrangian coordinate counterpart assuming the spherical collapse .then the mass in the sphere is simply given by , where is the physical mass density at .also the linearly extrapolated mass density contrast in the sphere can be evaluated from on the basis of the nonlinear spherical collapse model ( e.g. , mo & white 1996 ) .each sphere of the eulerian radius should contain a number of gravitationally virialized objects i.e. , _ dark matter halos_. given and , their conditional mass function can be predicted by extended press - schechter theory ( e.g. , bower 1991 ; bond et al .such halos are conventionally characterized by their mass and linearly extrapolated mass density contrast assuming that their formation epoch is equivalent to the current redshift .kitayama & suto ( 1996a ) pointed out that this approximation often significantly changes the predictions for x - ray cluster abundances on the basis of the press - schechter theory , and proposed a phenomenological prescription to compute the formation epoch .in fact , the halo biasing derived by mo & white ( 1996 ) is fairly sensitive to the difference of and as noticed by kravtsov & klypin ( 1999 ) .thus we extend the biasing model of mo and white ( 1996 ) , in which should be specified _ a priori _ , by considering explicitly the dependence on and averaging according to the formation epoch distribution function of lacey & cole ( 1993 ) and kitayama & suto ( 1996b ) .this is a major and important improvement of our model over the original proposal of mo and white ( 1996 ) . in our model , therefore , the formation epoch and the mass of halo constitute the _ hidden _ variables ( 2 ) , and their pdfs generate the nonlinear and stochastic behavior in the resultant halo biasing . in what follows ,we assume that the primordial mass density field obeys the random - gaussian statistics ( e.g. , bardeen et al .1986 ) . in this case , the ( unconditional ) mass function of dark halos ( press & schechter 1974 ) : } \left|\frac{d\sigma^2(m_{1},z)}{dm_{1}}\right|dm_{1}\ ] ] proves to be in reasonable agreement with results from -body simulations ( e.g. , efstathiou et al . 1988 ; suto 1993 ; lacey & cole 1993,1994 ) . 
equation ( [ universalmf ] ) corresponds to the comoving number density of halos exceeding the critical density threshold and of mass between and .the value for will be specified when we consider the one - point function or the conditional pdf of the dark halos below ( see eqs . [ [ eq : delta - halo ] ] and [ [ weight ] ] ) .the mass variance is defined from the _ linear _ power spectrum of mass density fluctuations at present ( ) : where is the linear growth factor normalized to unity at and the top - hat window function : with the lagrangian radius is adopted .since our one - point statistics of halos is evaluated within a sample of spheres of the eulerian radius , we need the conditional mass function for halos within a sphere . for this purpose, we use the extended press - schechter theory which predicts the conditional mass function for halos of and in the background region of as ^{3/2 } } \cr & \times & \exp{\left[-\frac{(\delta_1-\delta_0)^2 } { 2\{\sigma^2(m_1,z)-\sigma^2(m_0,z)\}}\right ] } \left|\frac{d\sigma^2(m_1,z)}{dm_1}\right|dm_1 .\label{conditionalmf}\end{aligned}\ ] ] then the biased density field for halos of mass , which formed at and are observed at within the sampling sphere of in eulerian coordinates , is derived by mo & white ( 1996 ) : in the above , the critical threshold for those halos is given as again on the basis of the spherical collapse model .the remaining task is to compute the mass , , and the _ linearly extrapolated _ mass density contrast , , of the sampling sphere from its eulerian radius and density contrast . since , the former is simply given as with being the physical mass density at .finally we adopt the following fitting formula obtained in the spherical collapse model ( mo & while 1996 ) to compute in terms of : while equations ( [ delc ] ) and ( [ del0-del ] ) were originally derived in the einstein - de sitter model , we numerically checked that they provide sufficiently accurate approximations for our present purpose even in the and model that we consider later .thus we use the expressions ( [ delc ] ) and ( [ del0-del ] ) irrespectively of the cosmological models in the subsequent analysis .figure [ fig : dhdm ] illustrates the dependence of on and as a function of smoothed over at . given a halo mass, is very sensitive to the formation epoch especially in the range of .as increases , the dependence and and becomes significant ; this reflects the fact that the larger mass halos preferentially form in the denser regions than the average since the typical halo mass that can be collapsed and virialized decreases at higher .such and dependence of convolved with the pdf of and leads to the nonlinear stochastic behavior of the biasing of dark halos .while we regard halo mass and its formation epoch as the _ hidden variables _ in equation ( [ eq : delta - halo ] ) , they may not be entirely unobservable .one may infer the halo mass and the formation epoch for an individual galaxy by combing the observed luminosity , color and metallicity with a galaxy evolution model . in this case ,their probability distribution functions need to be convolved with such observational selection functions with our prior distribution . except for this correction ,our methodology presented below remains the same . as indicated in equation ( [ eq : delta - halo ] ) , the amplitude of halo biasing is explicitly dependent on its formation epoch . 
thus the simple approximation of may lead to even qualitatively incorrect predictions for the biasing .in fact this was shown to be the case in recent n - body simulations by kravtsov & klypin ( 1999 ) .the importance of the distribution of has been emphasized by kitayama & suto ( 1996a ) in a different context , and a model for its pdf was proposed by lacey & cole ( 1993 ) .incidentally catelan et al .( 1998a , b ) also proposed a different model of halo biasing considering the -dependence .their model simply treats as a free parameter and does not properly take account of its distribution function .our model presented here incorporates the distribution function of the formation epoch explicitly . adopting the excursion set approach ( bond et al .1991 ) and _ defining _ the formation redshift of a particular halo of mass at as the epoch when the mass of its most massive progenitor exceeds for the first time , lacey & cole ( 1993 ) derived the differential distribution of the halo formation epoch .their result is expressed as where the function in the integrand of equation ( [ dp - domegaf ] ) can be obtained from equation ( [ ss ] ) . while the above expressions are rather complicated , practical fitting formulae for the mass variance in cold dark matter ( cdm ) models and for the formation epoch distribution were obtained in kitayama & suto ( 1996b ) which we adopt throughout the analysis . those are summarized in appendices a and b for convenience .it should be noted that the definition of halo formation is somewhat ambiguous in the framework of the extended ps theory .this aspect is explored in kitayama & suto ( 1996a ) , and their figure 1 explicitly shows how the result is dependent on the adopted ratio of the current halo mass and the progenitor mass at the formation epoch .the figure implies that the resulting formation rate is fairly insensitive to the value around 0.5 that we adopt here .figure [ fig : dpdzf ] plots the formation epoch distribution for halos selected at in cdm models .specifically we choose ( lambda cdm ; hereafter lcdm ) and ( standard cdm ; hereafter scdm ) .the top - hat mass fluctuation amplitude at , , is normalized to the cluster abundance ( kitayama & suto 1997 ) .we show results for halos of (dashed ) , (solid ) and (dot - dashed ) in lcdm , and (dotted ) in scdm .the shape of those distribution functions is quite similar , and characterized by a peak around . 
the peak redshift becomes closer to the observed one , , and the distribution around the peak becomes narrower as the halo mass increases , both of which are easily understood in the hierarchical clustering picture like the present models .note that the scdm model generally predicts a more sharply peaked distribution closer to than the lcdm model ( compare solid and dotted lines in fig.[fig : dpdzf ] ) .this is also reasonable from the fact that the growth of fluctuations is rapid in scdm and thus halos form only recently .thus the formation epoch distribution is fairly sensitive to the cosmological parameters .now we are in a position to explicitly construct the conditional probability distribution of the dark halo for a given , and the joint probability distribution .basically we follow the prescription described in [ subsec : pdfinhidden ] , but in slightly different order .we first compute applying equation ( [ pdf - hidden ] ) : where the region of the integration is determined from the following conditions : the normalization factor is defined as the integrand in equation ( [ weight ] ) just corresponds to the joint pdf for a given .since the joint pdf is simply computed according to one needs a reliable model for the one - point pdf of dark matter density contrast , .fortunately it has been known that this can be empirically approximated by the log - normal distribution function to a good accuracy ( e.g. , coles & jones 1991 ; kofman et al .1994 ; bernardeau & kofman 1995 ; taylor & watts 2000 ) : \frac{d{{\delta_{\rm mass}}}}{1+{{\delta_{\rm mass } } } } , \ ] ] where and is defined in equation ( [ variance - covariance ] ) .note that equation ( [ lognormalpdf ] ) reduces to the gaussian distribution for , and thus this model again assumes the primordial random - gaussian density field implicitly as our entire analysis .finally we adopt the fitting formula ( peacock & dodds 1996 ) for the nonlinear cdm power spectrum in computing the mass variance : with being the top - hat smoothing function ( eq.[[top - hat ] ] ) .the validity of the log - normal approximation for the one - point pdf is examined by bernardeau & kofman ( 1995 ) ; their figure 10 indicates that equation ( [ lognormalpdf ] ) reproduces the simulation results very accurately at least for .although the accuracy on smaller scales is not shown quantitatively , it would be reasonable to assume that the approximation is acceptable up to .also our statistical results are not sensitive to the tail of such pdf in any case . substituting the analytical expressions for and discussed in [ subsec : formation epoch ] into equation ( [ weight ] ) , one may numerically compute the conditional pdf , and the joint pdf from equation ( [ eq : jointpdf ] ) .then all the statistical quantities can be evaluated using equation ( [ ensemble - average2 ] ) . in practice , however , it is more convenient and even accurate to use ( [ ensemble - average ] ) which in the present case is written explicitly as .\end{aligned}\ ] ] we use the above expression in evaluating the various biasing parameters below except in presenting the pdfs directly .we present several specific predictions applying our nonlinear stochastic halo biasing model to representative cdm models mentioned in [ subsec : formation epoch ] . throughout the analyses , we consider the range of halo mass between and unless otherwise stated .the general formulation described in the previous section should work in principle even on fully nonlinear scales . 
in practice , however , the results presented below are limited by the halo exclusion effect ( due to the finite size of halos ) and our approximation , equation ( [ lognormalpdf ] ) , for the one - point pdf .the validity of both effects should be carefully checked on small scales .since a typical virial radius of a halo of mass in lcdm model is mpc , the exclusion effect can not be neglected below but is not so strong for even for our largest mass considered ( ) . also the validity of of the log - normal approximationis already remarked in subsection 3.4 .thus we expect that our predictions below are fairly reliable up to . since the conditional pdf plays a central role in the dekel & lahav ( 1999 ) description of the nonlinear stochastic biasing , we first present predicted from our model .for this purpose , we start with equations ( [ weight ] ) and ( [ eq : constraint ] ) .specifically we divide the plane in a mesh , and accumulate the integrand of equation ( [ weight ] ) satisfying the constraint ( [ eq : constraint ] ) on each grid .the resulting pdfs are plotted in figure [ fig : prob_dh ] for a given mass density contrast ; ( solid ) , ( dotted ) , ( dashed ) and ( dot - dashed ) .the upper and lowers panels show the results at and , respectively , with the top - hat smoothing radius of ( _ left _ ) and ( _ right _ ) .the ticks on the upper axis in each panel indicate the corresponding conditional mean ( eq.[[eq : conditional_mean ] ] ) .the peak position of the distribution is in reasonable agreement with .as figure [ fig : dhdm ] indicates , given and is fairly monotonically dependent on and .thus the peak in the conditional pdf reflects that of the formation epoch distribution .the width of the distribution around the peak , on the other hand , is dominated by the mass distribution since becomes more sensitive to the halo mass in a denser environment ( fig.[fig : dhdm ] ) .once is given , the joint pdf is simply obtained by multiplying the one - point pdf of the mass density , in our case , the log - normal model ( eq.[[lognormalpdf ] ] ) .the resulting contour on plane is illustrated in figure [ fig : dhdm_lincont ] .this example shows the result with the top - hat smoothing radius of at different redshifts .solid lines in each panel indicate the conditional mean .the number density of halos of mass exceeding the current threshold become progressively smaller as increases .such halos naturally reside in higher density regions , and therefore are strongly biased with respect to mass .the biasing of those halos gradually decreases as time since they simply follow the gravitational field of the background mass after formation ( fry 1996 ; tegmark & peebles 1998 ; taruya , koyama & soda 1999 ) .in addition , new halos with form more easily later and can be found even at moderately dense environment .for both reasons , the mean bias as a function of decreases at lower redshifts . at ,the joint pdf shows slightly anti - biasing behavior , i.e , .this is partly due to the fact that a fraction of halos with are merged into a part of larger mass halos since the typical virialized halo mass at approaches the mass scale , corresponding to our adopted smoothing radius itself . in other words ,our halo model generally predicts the positive - biasing except for those halos of for given and . 
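the grid construction described above can be sketched in a few lines; `cond_pdf` stands in for the conditional pdf of eq. ([weight]), and the log-normal form written here is the standard one usually quoted for eq. ([lognormalpdf]), with `sigma1` the dispersion of ln(1 + delta); both names are placeholders rather than the paper's notation.

```python
import numpy as np

def lognormal_pdf(delta_m, sigma1):
    """standard log-normal model for the one-point pdf of the mass density contrast."""
    x = np.log(1.0 + delta_m) + 0.5 * sigma1 ** 2
    norm = np.sqrt(2.0 * np.pi) * sigma1 * (1.0 + delta_m)
    return np.exp(-x ** 2 / (2.0 * sigma1 ** 2)) / norm

def joint_pdf_on_grid(cond_pdf, delta_h_grid, delta_m_grid, sigma1):
    """tabulate P(delta_halo, delta_mass) = P(delta_halo | delta_mass) * P(delta_mass)
    on a mesh, as in the construction of the contour plots described above."""
    dh, dm = np.meshgrid(delta_h_grid, delta_m_grid, indexing="ij")
    return cond_pdf(dh, dm) * lognormal_pdf(dm, sigma1)
```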
while figure [ fig : dhdm_lincont ] elucidates the global feature of the joint pdf ,the statistical weights are practically dominated by the relatively narrow regions around .those regions are illustrated better in logarithmic scales rather than in linear scales .for this purpose , we recompute the joint pdf from equations ( [ weight ] ) and ( [ eq : constraint ] ) using the mesh with the logarithmically equal bin on the plane . note that the resulting pdf sampled in this way , , satisfies thus we decide to plot the joint pdf . in figure[ fig : dhdm_logcont ] , the dispersion around the mean biasing decreases at higher and/or larger in contrast to the conditional pdf plotted in figure [ fig : prob_dh ] .this is basically because the pdf in linear regime becomes toward gaussian and sharply peaked .we remark that the contours of our joint pdf plotted in figures [ fig : dhdm_lincont ] and [ fig : dhdm_logcont ] seem to be very narrow around .this can be understood by the fact that the positive halo density contrast preferentially developed on the over - dense dark matter environment .in other words , this is a natural consequence of our bias model in which the signs of and are almost the same as illustrated in figure [ fig : dhdm ] .incidentally this feature may be visually exaggerated by the contours of very small probabilities ; if one focuses only on the contours of , the effect does not look so strong . in reality , and thus in numerical simulations , additional stochastic processes other than the mass and formation epoch distribution ( including the dynamical motion of halos ) should further increase the scatter which makes the contour rounder than our predictions .we will return to these contour plots in understanding the behavior of the biasing parameters later .the previous two subsections show that the joint and the conditional pdfs are dependent on both and .this dependence is translated to the scale - dependence and redshift evolution of the second order statistics defined in [ subsec : second - order ] .figure [ fig : sigma_hh ] shows , and at different redshifts as a function . we compute those statistics directly integrating equation ( [ average ] ) over , and , instead of using .while their scale- and time - dependence is noticeable even from those panels , the biasing parameters ( , , , and plotted in figures [ fig : bias_r ] and [ fig : bias_z ] are more suitable in understanding the origin of the behavior .consider first the scale - dependence ( fig.[fig : bias_r ] ) . while and are generally a decreasing function of ,this behavior is significant only up to in this model .this feature is more quantitatively exhibited by and ( _ lower - right _ panel ) .therefore in practice the linear biasing provides a good approximation on linear and quasi - linear regimes .the biasing is non - deterministic especially on smaller scales almost independently of ( see also fig.[fig : bias_z ] ) .in addition , does not approach unity even on large scales , implying that non - deterministic nature still exists there to some extent . 
as the lower - right panel in figure[ fig : bias_r ] indicates , on all scales and the above feature should be ascribed to the stochasticity due to the distribution of and .in fact , this stochasticity on large scales is expressed explicitly in terms of the linear biasing approximation by mo & white ( 1996 ) : with .it should be noted that is often regarded as a function of and assuming , leading to the linear _ deterministic _ model .thus once a halo mass is specified , . in equation ( [ eq : mwbias ] ) , however , we explicitly keep the -dependence which adds the stochastic nature in the model .more specifically , the definition ( [ eq : scatt ] ) with equations ( [ average ] ) and ( [ eq : mwbias ] ) reduces to where denotes the average over and .this accounts for the scale - independent non - vanishing exhibits in figure [ fig : bias_r ] . in conclusion ,our model implies that the halo biasing does _ not _ become fully deterministic even on large scales where its nonlinearity is negligible .this result is not surprising since we take into account the stochastic processes which do not vanish on large scales , but the overall effect is quite small ( ) .next discuss the redshift dependence of our biasing model .figure [ fig : bias_z ] shows that and strongly evolve in time . in fact , this is in marked contrast with predictions on the basis of phenomenological _ linear deterministic _ models .for example , a model of fry ( 1996 ) leads to evolution of a form : .\ ] ] this implicitly assumes that all the objects of interest form at the same and that their biasing parameters at are independent of the mass . since neither the above assumptions apply to our model , the prediction ( [ eq : linearbz ] ) is quite different from ours even in the linear regime .our results generally show much stronger evolution as increases despite the fact that is very close to unity .the recent compilation of the various galaxy catalogs also indicates that the prediction ( [ eq : linearbz ] ) does not reasonably describe the behavior at ( magliocchetti et al .thus , the proper modeling in the framework of the nonlinear stochastic biasing is important even in predicting and . in our halo biasing model ,the degree of stochasticity is almost constant in time because it is determined by the effective widths of the probability distribution functions of and .incidentally the nonlinearity does not evolve monotonically ( thin lines in the _ lower - right _ panel ) . at an intermediate redshift, reaches at a minimum .this behavior is qualitatively explained from the curvature of the conditional mean as a function of , i.e. , its second derivative . at ,halos of the mass that we adopt here exhibit stronger positive biasing ( ) on average at and mildly anti - biasing ( ) at . since at by definition ( cf ., eq.[[eq : delta - halo ] ] ) , the dependence results in positive and negative curvatures of , respectively at and , especially around where the contribution of the joint pdf is significant .this feature is clearly visible in figure [ fig : dhdm_lincont ] . in turn, the curvature should be minimum somewhere between and .when higher - order correction is neglected , is dominated by the curvature or the second derivative of properly averaged over ( eq.[[eq : nonl ] ] ) , and it should become minimum at the same redshift . 
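for orientation, the scale-independent linear bias referred to in eq. ([eq:mwbias]) reduces, when the formation redshift is identified with the observation redshift, to the familiar mo & white (1996) expression b = 1 + (nu^2 - 1)/delta_c with nu = delta_c/sigma(M, z); the sketch below encodes that limiting form only (the paper itself keeps the explicit formation-epoch dependence and averages over it), with sigma(M, z) supplied by the user.

```python
def mo_white_linear_bias(sigma_Mz, delta_c=1.686):
    """mo & white (1996) linear halo bias in the z_f = z limit:
    b = 1 + (nu**2 - 1) / delta_c, with nu = delta_c / sigma(M, z)."""
    nu = delta_c / sigma_Mz
    return 1.0 + (nu ** 2 - 1.0) / delta_c
```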
it is interesting to note that the qualitatively similar evolutionary feature was found from the numerical simulations of somerville et al .( 2000 ; their fig.17 ) .so far we have presented model predictions for halos averaged over in lcdm model taking into account the appropriate distribution .figures [ fig : dhdm_lincont_diff ] and [ fig : bias_diff ] compare those fiducial results with predictions based on the different model assumptions .first we address the question of the origin of the stochasticity . since our biasing model has two _hidden _ parameters , and , we attempt to separate the two sources by fixing or while keeping the other parameters exactly the same .the upper - panels and the lower - left panel of figure [ fig : dhdm_lincont_diff ] suggest that the distribution dominates the stochasticity at low redshift , while the effect of the distribution becomes significant at higher redshift .the same behaviors can be seen in the lower - right panel of figure [ fig : bias_diff ] . since our model relies on the hierarchical picture of structure formation , the result is simply deduced from the merging history of halos .thus , in general , the major contribution to the joint pdf can become the formation epoch distribution .this is also indicated in the scale - dependence of the stochasticity in the lower - left panel of figure [ fig : bias_diff](thick - dashed and thick - dotted lines ) .next consider the cosmological model dependence .for this purpose , we plot the result in the scdm model with the same mass range .the joint pdf at and ( _ lower - right _ panel of fig.[fig : dhdm_lincont_diff ] ) is confined in a slightly narrower regime compared with that in lcdm .this comes from the fact that the formation epoch in scdm shows a bit more sharply peaked distribution in than that in lcdm with the same halo mass ( fig.[fig : dpdzf ] ) . as a result , the stochasticity in scdm is smaller ( i.e. , is smaller and is closer to unity ) , but at is almost insensitive to such small changes . rather the major difference between lcdm and scdm is the redshift evolution of which increases more rapidly in scdm reflecting the faster growth rate of density fluctuations .dark matter halos are quite natural and likely sites for galaxy and cluster formation .thus there are many previous papers to discuss different aspects of the halo biasing on the basis of different assumptions and modeling ( catelan et al .1999a , b ; blanton et al .among others , kravtsov & klypin ( 1999 ) and somerville et al .( 2000 ) analyzed the nonlinearity and stochasticity in halo and galaxy biasing using numerical simulations . in this sensetheir work is complementary to our analytical modeling , and deserves quantitative comparison with our results .kravtsov & klypin ( 1999 ) performed high - resolution n - body simulations employing particles in a periodic box so as to overcome the halo over - merging .in particular , their figure 4 plotting the joint pdf is quite relevant for the comparison with our figure [ fig : dhdm_lincont ] . strictly speaking, their simulated halo catalogue is based on slightly different identification scheme ( klypin et al .1999 ) ; the _ bound density maxima _algorithm , a selected mass range ] . 
throughout the paper we adopt the above formula ([eq:cdmfit]) combined with the cluster normalization for (kitayama & suto 1997). the distribution function of the halo formation epoch (eq. [dp-domegaf]) plays a central role in our model, but it requires a time-consuming numerical integration and inversion. thus in the present paper we use the following fitting formulae of kitayama & suto (1996b): , where is the complementary error function, , and $c(\alpha) \equiv 1 - \frac{1-\alpha}{25}$. the parameter is related to the spectral index of the mass variance. kitayama & suto (1996b) showed that in the cdm model this parameter should be replaced by , where the effective spectral index is computed from the fitting formula ([eq:cdmfit]) and its derivative.
we propose a physical model for the nonlinear stochastic biasing of one-point statistics resulting from the formation epoch distribution of dark halos. in contrast to previous works based on extensive numerical simulations, our model provides for the first time an analytic expression for the joint probability function. specifically, we derive the joint probability function of halo and mass density contrasts from the extended press-schechter theory. since this function is derived in the framework of the standard gravitational instability theory assuming the random-gaussianity of the primordial density field alone, we expect that the basic features of the nonlinear and stochastic biasing predicted from our model are fairly generic. as representative examples, we compute the various biasing parameters in cold dark matter models as a function of redshift and smoothing length. our major findings are (1) the biasing of the variance evolves strongly with redshift while its scale-dependence is generally weak, and a simple linear biasing model provides a reasonable approximation roughly at , and (2) the stochasticity exhibits moderate scale-dependence especially on , but is almost independent of . comparison with previous numerical simulations shows good agreement with the above behavior, indicating that the nonlinear and stochastic nature of the halo biasing is essentially understood by taking into account the distribution of the halo mass and the formation epoch.
in several previous papers we have argued for a global and non-entropic approach to the problem of the arrow of time, according to which the arrow is only a metaphorical way of expressing the geometrical time-asymmetry of the universe. we have also shown that, under definite conditions, this global time-asymmetry can be transferred to local contexts as an energy flow that points to the same temporal direction all over the spacetime. however, many relevant irreversible local phenomena were still unexplained by our approach. accounting for them is necessary to reach a full answer to the problem of the arrow of time, since they have been traditionally considered as the physical origin of such an arrow. the aim of this paper is to complete the global and non-entropic program by showing that our approach is able to account for those local irreversible phenomena. for this purpose, the paper is organized as follows. in section ii we introduce the precise definitions of the basic concepts involved in the discussion: time-reversal invariance, irreversibility and arrow of time. in section iii we summarize our global and non-entropic approach to the problem of the arrow of time, according to which the arrow is given by the time-asymmetry of spacetime. in this section we also explain how the global arrow is transferred to local contexts as an energy flow defined all over the spacetime, which, as a consequence, represents a relevant physical magnitude in local theories. section iv is devoted to showing how the energy flow breaks the time-symmetry of the pair of solutions, one the temporal mirror image of the other, resulting from different time-reversal invariant fundamental laws. in particular, we consider quantum mechanics, quantum field theory, and the case of feynman graphs and quantum measurements. in section v, irreversibility at the phenomenological level is discussed: we show that, when phenomenological theories are analyzed in fundamental terms, a second irreversible solution evolving towards the past can always be identified; the energy flow is what breaks the just discovered time-symmetry of the pair. finally, in section vi we draw our conclusions. it is surprising that, after so many years of debates about irreversibility and time's arrow, the meanings of the terms involved in the discussion are not yet completely clear: the main obstacle to agreement is conceptual confusion. for this reason, we begin by disentangling the basic concepts of the problem.
even a formal concept such as time-reversal invariance is still an object of controversy (see albert's recent book, and earman's criticisms). we define it as follows:

*definition 1:* a dynamical equation is _time-reversal invariant_ if it is invariant under the application of the time-reversal operator, which performs the transformation $t \rightarrow -t$ and reverses all the dynamical variables whose definitions as functions of $t$ are non-invariant under that transformation.

on the basis of this definition, we can verify by direct calculation that the dynamical equations of fundamental physics are time-reversal invariant. let us see some examples:

* *classical mechanics:* in ordinary classical mechanics, the basic magnitudes (position, velocity and acceleration) change as $x \rightarrow x$, $v \rightarrow -v$, $a \rightarrow a$. in general, mass and force are conserved magnitudes, that is, they are not functions of $t$; then $m \rightarrow m$ and $\mathbf{F} \rightarrow \mathbf{F}$. since $\mathbf{F} = -\nabla V(x)$, where $V(x)$ is a potential, the energy is invariant under the action of the transformation (a numerical check of this invariance is sketched after these examples). analogously, in hamiltonian classical mechanics, the position and the momentum change as $q \rightarrow q$, $p \rightarrow -p$.

* *electromagnetism:* here the charge is not a function of $t$ since it is also a conserved magnitude; the charge density and the current density then change as $\rho \rightarrow \rho$, $\mathbf{j} \rightarrow -\mathbf{j}$. since the lorentz force is defined as $\mathbf{F} = e(\mathbf{E} + \mathbf{v} \times \mathbf{B})$, from eqs. ([2]), ([5]) and ([6]) the electric field and the magnetic induction change as $\mathbf{E} \rightarrow \mathbf{E}$, $\mathbf{B} \rightarrow -\mathbf{B}$.

* *quantum mechanics:* in order to apply the time-reversal operator to quantum mechanics, the configuration representation has to be used. since we want to obtain $x \rightarrow x$ and $p \rightarrow -p$ as in the classical case, we impose that the wave function change as $\psi(x,t) \rightarrow \psi^{*}(x,-t)$. but this requirement makes the quantum time-reversal operator _antilinear_ and _antiunitary_ (by contrast with its linearity in classical mechanics). in order to express this difference, a distinct symbol is used for the time-reversal operator in quantum mechanics. in fact, if $E_{n}$ are the eigenvalues of the hamiltonian, a linear time-reversal operator would reverse the sign of the hamiltonian and would therefore yield eigenvalues $-E_{n}$, which would lead to unacceptable negative energies. on the contrary, the antilinear operator leaves the hamiltonian, and hence its spectrum, unchanged.

* *quantum field theory:* in this chapter of physics, the linear and unitary operator corresponding to space-inversion and the antilinear and antiunitary operator corresponding to time-reversal apply to the quadri-momentum in such a way that its spatial components are reversed while its energy component keeps its sign (we will return to this point in section iv.b). as a consequence of the definition of time-reversal invariance, given a time-reversal invariant equation, if a function is a solution of the equation, then its temporal mirror image is also a solution.
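as a purely illustrative aside (not part of the original argument), the time-reversal invariance of the classical example above can be verified numerically: evolve a trajectory forward, apply the transformation $v \rightarrow -v$, evolve again for the same interval, and reverse the velocity once more to recover the initial state. the sketch below assumes a simple pendulum with unit constants and a symmetric (leapfrog) integrator; all names and parameter values are ours.

```python
import numpy as np

def leapfrog(theta, omega, dt, steps, g_over_l=1.0):
    """integrate theta'' = -(g/l) sin(theta) with a time-symmetric scheme."""
    for _ in range(steps):
        omega_half = omega - 0.5 * dt * g_over_l * np.sin(theta)
        theta = theta + dt * omega_half
        omega = omega_half - 0.5 * dt * g_over_l * np.sin(theta)
    return theta, omega

# forward evolution from an arbitrary initial condition
theta0, omega0 = 0.3, 0.8
theta1, omega1 = leapfrog(theta0, omega0, dt=1e-3, steps=20000)

# apply the time-reversal transformation (omega -> -omega), evolve for the
# same interval, and reverse the velocity once more
theta2, omega2 = leapfrog(theta1, -omega1, dt=1e-3, steps=20000)
theta2, omega2 = theta2, -omega2

# the initial state is recovered up to round-off, as expected for a
# time-reversal invariant dynamical equation
print(abs(theta2 - theta0), abs(omega2 - omega0))
```

the forward trajectory and its velocity-reversed mirror are precisely the two solutions whose pairing is discussed next.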
in previous papers we have called these two mathematical solutions _time-symmetric twins_: they are twins because, without presupposing a privileged direction of time, they are only conventionally different; they are time-symmetric because one is the temporal mirror image of the other. the traditional example of time-symmetric twins is given by electromagnetism, where the dynamical equations always have advanced and retarded solutions, respectively related to incoming and outgoing states in scattering as described by lax-phillips theory. the two twins are identical and cannot be distinguished at this stage since, up to now, there is no further criterion than the time-reversal invariant dynamical equation from which they arise. conventionally we can give a name to each solution: advanced and retarded, incoming and outgoing, etc. but these names are just conventional labels and certainly do not establish a non-conventional difference between the two time-symmetric solutions. in general, the dynamical equations of _fundamental_ physics are time-reversal invariant, e.g. the dynamical equation of classical mechanics, the maxwell equations of electromagnetism, the schrödinger equation of quantum mechanics, the field equations of quantum field theory, the einstein field equations of general relativity. however, not all axioms of fundamental theories are time-reversal invariant; this is the case of postulate iii of quantum field theory (see section iv.b) and the measurement postulate of quantum mechanics (see section iv.c). on the other hand, many non-fundamental laws are not time-reversal invariant, such as the phenomenological second law of thermodynamics (see section v). one of the purposes of this paper is to explain these apparent exceptions.

although the concepts of reversibility and irreversibility have received many definitions in the literature on the subject, from a very general viewpoint a reversible evolution is usually conceived as a process that can occur in the opposite temporal order according to the dynamical law that rules it: the occurrence of the opposite process is not excluded by the law. the typical irreversible processes studied by physics are decaying processes, that is, time evolutions that tend to a final equilibrium state from which the system cannot escape: the irreversibility of the process is due precisely to the fact that an evolution leaving the equilibrium state is not possible.
for these cases, reversibility can be defined as:

*definition 2:* a solution of a dynamical equation is _reversible_ if it does not reach an equilibrium state (namely, a state where the system remains forever).

for instance, according to this definition, in classical mechanics a solution of a dynamical equation is reversible if it corresponds to a closed curve in phase space (even if these curves are closed through a point at infinity); if not, it is irreversible. it is quite clear that time-reversal invariance and reversibility are different concepts to the extent that they apply to different mathematical entities: time-reversal invariance is a property of dynamical equations and, _a fortiori_, of the set of their solutions; reversibility is a property of a single solution of a dynamical equation. furthermore, they are not even correlated, since both properties can combine in the four possible cases (see castagnino, lara and lombardi). in fact, besides the usual cases of time-reversal invariance with reversibility and non time-reversal invariance with irreversibility, the remaining two combinations are also possible:

* *time-reversal invariance and irreversibility.* let us consider the pendulum: its dynamical equations are time-reversal invariant, since the hamiltonian is invariant under the reversal of the momentum; therefore, the set of trajectories in phase space is symmetric with respect to the position axis. however, not all the solutions are reversible. in fact, when the energy takes its separatrix value, the solution is irreversible since it tends to the unstable equilibrium point when $t \rightarrow \infty$ ($t \rightarrow -\infty$) (see tabor): it corresponds to the pendulum reaching its unstable equilibrium state. for lower energies (oscillating pendulum) and higher energies (rotating pendulum), the evolutions are reversible (a numerical illustration of the separatrix case is sketched below).

* *non time-reversal invariance and reversibility.* let us now consider a modified oscillator whose frequency takes one constant value for negative times and a different constant value for positive times, so that the hamiltonian is not invariant under $t \rightarrow -t$. as a consequence, if the two values differ, the dynamical equations are non time-reversal invariant. nevertheless, the solutions are oscillations with the corresponding frequency on each side of $t = 0$, and the constants change from one cycle to the next in such a way that the solutions turn out to be continuous. in fig. [osc] we display the time-asymmetric solutions for this example. it is clear that these solutions have no limit for $t \rightarrow \pm\infty$: each trajectory is reversible since it is a closed curve in phase space.

once both concepts are elucidated in this way, _the problem of irreversibility_ can be clearly stated: _how to explain irreversible evolutions in terms of time-reversal invariant laws_.
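the first combination (time-reversal invariance with irreversibility) can be illustrated numerically. the sketch below uses our own unit-constant parametrization of the pendulum, $h = p^{2}/2 - \cos\theta$, and scipy's standard integrator; the integration times and tolerances are arbitrary choices. the solution with the separatrix energy creeps up to the unstable equilibrium and stays there, even though the equation of motion is time-reversal invariant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# pendulum with unit constants: h = p**2 / 2 - cos(theta)
def rhs(t, y):
    theta, p = y
    return [p, -np.sin(theta)]

# separatrix energy e = 1: start near the stable point with exactly that energy
theta0 = 0.01
p0 = np.sqrt(2.0 * (1.0 + np.cos(theta0)))   # h(theta0, p0) = 1
sol = solve_ivp(rhs, [0.0, 15.0], [theta0, p0],
                rtol=1e-10, atol=1e-12, dense_output=True)

# the trajectory approaches the unstable equilibrium (theta -> pi, p -> 0)
# and never leaves it: an irreversible solution of a time-reversal invariant law
for t in (5.0, 10.0, 15.0):
    theta, p = sol.sol(t)
    print(f"t = {t:5.1f}   theta = {theta:.6f}   p = {p:.3e}")
```

this is exactly the situation captured by the statement of the problem of irreversibility above: the mirror solution, approaching the same equilibrium towards the past, is equally allowed by the law.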
when explained in these terms , it turns out to be clear that there is no conceptual puzzle in the problem of irreversibility : nothing prevents a time - reversal invariant equation from having irreversible solutions .nevertheless , the solution of the problem of irreversibility does not provide yet an adequate distinction between the two directions of time .in fact , if an irreversible evolution is a solution of a time - reversal invariant law , there will always exist its time - symmetric twin , that is , another irreversible solution that is its temporal mirror image .for instance , if there is an irreversible solution leading to equilibrium towards the future , necessarily there exists another irreversible solution leading to equilibrium towards the past , and there is no non - conventional criterion for selecting one of the temporally opposite evolutions as the physically relevant . in general , a privileged direction of timeis presupposed when irreversible processes are studied .in fact , when we talk about entropy increasing processes , we suppose an entropy increase _ towards the future _ ; or when we consider a process going from non - equilibrium to equilibrium , we implicitly locate equilibrium _ in the future_. in general , any evolution that tends to an attractor is conceived as approaching it towards the future .this means that the distinction between past and future is usually _ taken for granted _ , and this fact usually hides the existence of the second irreversible twin of the pair .however , when the time - reversal invariant theory is developed without projecting our time - asymmetric intuitions , the pair of time - symmetric twins becomes manifest .the problem of the arrow of time owes its origin to the intuitive asymmetry between past and future .we experience the time order of the world as directed : if two events are not simultaneous , one of them is earlier than the other .moreover , we view our access to past and future quite differently : we remember past events and predict future events . on the other hand ,we live in a world full of processes that never occur in the opposite direction : coffee and milk always mix together , we always get older , and regrettably we never see the reversed processes . therefore , if we conceive the problem of the arrow of time as the question does the arrow of time exist ? , we can legitimately solve it on the basis of our best grounded experiences : there is a non merely conventional difference between the two directions of time , and the privileged direction , that we call future , is the direction of those well known processes .however , this is not the problem of the arrow of time as conceived in the foundations of physics since the birth of thermodynamics . in this context, the difficulty consists in finding a _ physical correlate _ of the experienced difference between the two temporal directions .if such a temporal asymmetry did not exist , there would be no need to ask physics for its explanation .it is precisely due to the directedness of our experience of time that we want to find this feature accounted for by physical theories .but , then , we can not project our time - asymmetric experiences and observations into the solution of the problem without begging the question . in this paper, we will address the problem of the arrow of time within the limits of physics : we will not discuss our experiences about time and processes. 
our question will be: _do physical theories pick out a preferred direction of time_? the main difficulty to be encountered in answering this question lies in our anthropocentric perspective: the difference between past and future is so deeply rooted in our language and our thoughts that it is very difficult to shake off these temporally asymmetric assumptions. in fact, traditional discussions about the problem of the arrow of time in physics are usually subsumed under the label "the problem of the direction of time", as if we could find an exclusively physical criterion for singling out the privileged direction of time, identified with what we call future. but there is nothing in the dynamical laws of physics that distinguishes, in a non-arbitrary way, between past and future as we conceive them in our ordinary language and our everyday life. it might be objected that physics implicitly assumes this distinction with the use of temporally asymmetric expressions, like "future light cone", "initial conditions", "increasing time", and so on. however, this is not the case, and the reason lies in the distinction between conventional and substantial.

*definition 3:* two objects are _formally identical_ when there is a permutation that interchanges the objects but does not change the properties of the system to which they belong.

in physics it is usual to work with formally identical objects: the two semicones of a light cone, the two spin directions, etc.

*definition 4:* we will say that we establish a _conventional_ difference between two objects when we call two formally identical objects by two different names.

this is the case when we assign different signs to the two spin directions, or different names to the two semicones of a light cone, etc.
*definition 5:* we will say that the difference between two objects is _substantial_ when we assign different names to two objects that are not formally identical. in this case, although the particular names we choose are conventional, the difference is substantial (see penrose, sachs).

for instance, the difference between the two poles of the theoretical model of a magnet is conventional, since both poles are formally identical; on the contrary, the difference between the two poles of the earth is substantial, because at the north pole there is an ocean and at the south pole there is a continent (and the difference between ocean and continent remains substantial even if we conventionally change the names of the poles). in mathematics, given a segment, if we turn it into an arrow by distinguishing a tail and a head, we are expressing a substantial difference between its two extreme points, since the tail is not formally identical to the head. once this point is accepted, it turns out to be clear that, given the time-reversal invariance of the fundamental laws, physics uses the labels "past" and "future" in a conventional way. therefore, the problem cannot be posed in terms of identifying the privileged direction of time named "future" as we conceive it in our ordinary language: the problem of the arrow of time in physics becomes the problem of finding a _substantial difference between the two temporal directions grounded only on physical theories_.
but if this is our central question, we cannot project our experiences about past and future onto its solution. when we want to address the problem of the arrow of time from a perspective purged of our temporal intuitions, we must avoid the conclusions derived from subtly presupposing temporally asymmetric notions. as huw price claims, it is necessary to stand at a point outside of time, and thence to regard reality in atemporal terms: this is his _view from nowhen_. this atemporal standpoint prevents us from using temporally asymmetric expressions in a non-conventional way: the assumption about the difference between past and future is not yet legitimate in the context of the problem of the arrow of time. but then, what does "the arrow of time" mean when we accept this constraint? of course, the traditional expression coined by eddington has only a metaphorical sense: its meaning must be understood by analogy. we recognize the difference between the head and the tail of an arrow on the basis of its intrinsic properties; therefore, we can substantially distinguish between both directions, head-to-tail and tail-to-head, _independently of our particular perspective and our pretheoretical intuitions._ analogously, we will conceive _the problem of the arrow of time_ in terms of _the possibility of establishing a substantial distinction between the two directions of time on the basis of exclusively physical arguments_.

on the basis of the distinction between conventional and substantial differences, and of the need for an atemporal standpoint, we have proposed and developed a global and non-entropic approach to the problem of the arrow of time in several previous papers. here we will only summarize the main points of our argument. let us begin by explaining in what sense our approach moves away from the traditional local and entropic way of addressing the problem.

*1. why global?* the traditional local approach owes its origin to the attempts to reduce thermodynamics to statistical mechanics: in this context, the usual answer to the problem of the arrow of time consists in defining the future as the direction of time in which entropy increases. however, already in 1912 paul and tatiana ehrenfest noted that, when entropy is defined in statistical terms on the underlying classical dynamics, if the entropy of a closed system increases towards the future, such an increase is matched by a similar one towards the past: if we trace the evolution of a non-equilibrium system back into the past, we obtain states closer to equilibrium. this old discussion can be generalized to the case of any kind of evolution arising from local time-reversal invariant laws. in fact, as we have seen in the previous section, any time-reversal invariant equation gives rise to a pair of time-symmetric twins, which are only conventionally different from each other. of course, the existence of time-symmetric twins is a result of a formal property of the equation. when the equation represents a local dynamical law, the solutions are conceived as representing two possible evolutions relative to that law, because local physics assumes a one-to-one mapping between possible evolutions and solutions of the dynamical law. but since both solutions are only conventionally different, they do not supply a substantial distinction between the two directions of time.
in the face of this problem, one might be tempted to solve it by simply stating that both solutions describe the same process from temporally reversed viewpoints. however, the fact that a single process is described by both solutions means that a time value and its reverse are only two different names for the same temporal point. therefore, the time represented by one label and the time represented by the other would not be conventionally different, but strictly identical. then, time itself would not have the topology of the real line, as in local physical theories, but the topology of a half-line; and since on a half-line the two directions are substantially different, the substantial difference between the two directions of time would turn out to be imposed by hand in local theories, which should include the specification of an absolute origin of time. moreover, this move would break the galilean or the lorentz invariance of those theories; in particular, the non-homogeneity of time would lead them to be non-invariant under time-translation, and this, in turn, would amount to giving up the local principle of energy conservation. summing up, local theories do not offer a non-conventional criterion for distinguishing between the time-symmetric twins and, therefore, between the two directions of time. when this fact is accepted, general relativity comes into play and the approach to the problem of the arrow of time turns out to be global.

*2. why non-entropic?* when, in the late nineteenth century, boltzmann developed the probabilistic version of his theory in response to the objections raised by loschmidt and zermelo (for historical details, see brush), he had to face a new challenge: how to explain the highly improbable current state of our world. in order to answer this question, boltzmann offered the first global approach to the problem; he wrote: _the universe, or at least a big part of it around us, considered as a mechanical system, began in a very improbable state and it is now also in a very improbable state. then if we take a smaller system of bodies, and we isolate it instantaneously from the rest of the world, in principle this system will be in an improbable state and, during the period of isolation, it will evolve towards more probable states_. since that seminal work, many authors have related the temporal direction past-to-future to the gradient of the entropy function of the universe: it has usually been assumed that the fundamental criterion for distinguishing between the two directions of time is the second law of thermodynamics (see, for instance, reichenbach, feynman, davies). the global entropic approach rests on two assumptions: that it is possible to define entropy for a complete cross-section of the universe, and that there is a single time for the universe as a whole. however, both assumptions involve difficulties. in the first place, the definition of entropy in cosmology is still a very controversial issue: there is no consensus regarding how to define a global entropy for the universe. in fact, it is usual to work only with the entropy associated with matter and radiation, because there is not yet a clear idea about how to define the entropy due to the gravitational field.
in the second place ,when general relativity comes into play , time can not be conceived as a background parameter which , as in pre - relativistic physics , is used to mark the evolution of the system .therefore , the problem of the arrow of time can not legitimately be posed , from the beginning , in terms of the entropy gradient of the universe computed on a background parameter of evolution .nevertheless , there is an even stronger argument for giving up the traditional entropic approach . as it is well known , entropy is a phenomenological property whose value is compatible with many configurations of a system .the question is whether there is a more fundamental property of the universe which allows us to distinguish between both temporal directions . on the other hand ,if the arrow of time reflects a substantial difference between both directions of time , it is reasonable to consider it as an intrinsic property of time , or better , of spacetime , and not as a secondary feature depending on a phenomenological property . for these reasons we will follow earman s _ time direction heresy_ , according to whichthe arrow of time is an intrinsic property of spacetime , which does not need to be reduced to non - temporal features . in general relativity ,the universe is a four - dimensional object , physically described by the geometrical properties of spacetime , embodied in the metric tensor , and the distribution of matter - energy throughout the spacetime , embodied in the energy - momentum tensor .both properties are physical , and they are related by the einstein field equations in such a way that the universe can be physically described in geometrical terms or in matter - energy terms .we will use a geometrical language for presenting the conditions for the arrow of time only because it makes the explanation more intuitive . as it is well known , many different spacetimes , of extraordinarily varied topologies , are consistent with the field equations .and some of them have features that do not admit a unique time for the universe as a whole , or even the definition of the two directions of time in a global way .therefore , the possibility of defining a global arrow of time requires two conditions that the spacetime must satisfy : time - orientability and existence of a global time . * 1 .time - orientability * a spacetime is _ time - orientable _ if there exists a continuous non - vanishing vector field on the manifold which is everywhere non - spacelike ( see hawking and ellis ) . by means of this field , the set of all light semicones of the manifold can be split into two equivalence classes , ( semicones containing the vectors of the field ) and ( semicones non containing the vectors of the field ) .it is clear that the names and are completely conventional , and can be interchanged as we wish : the only relevant fact is that , for all the semicones , each one of them belongs to one and only one of the two equivalence classes . 
on the contrary, in a non time-orientable spacetime it is possible to transform a timelike vector into another timelike vector pointing to the opposite temporal direction by means of a continuous transport that always keeps non-vanishing timelike vectors timelike; therefore, the two equivalence classes of semicones cannot be defined in a univocal way.

*2. global time* time-orientability does not yet guarantee that we can talk of _the_ time of the universe: the spacetime may not be globally splittable into spacelike hypersurfaces such that each one of them contains all the events simultaneous with each other. the _stable causality condition_ amounts to the existence of a _global time function_ on the spacetime (see hawking and ellis), that is, a function whose gradient is timelike everywhere. this condition guarantees that the spacetime can be _foliated_ into hypersurfaces of simultaneity, which can be ordered according to the value of the global time function (see schutz); a minimal symbolic check of this condition is sketched below.

*3. time-asymmetry* as grünbaum correctly points out, the mere oppositeness of the two directions of a global time, and even of the two equivalence classes of semicones, does not provide a non-conventional criterion for distinguishing the two temporal directions. such a criterion is given by the time-asymmetry of the spacetime. a time-orientable spacetime with global time is _time-symmetric_ with respect to some spacelike hypersurface if there is a diffeomorphism of the spacetime onto itself which (i) reverses the temporal orientations, (ii) preserves the metric, and (iii) leaves the hypersurface fixed. intuitively, this means that the spacelike hypersurface splits the spacetime into two halves, one the temporal mirror image of the other. on the contrary, in a time-asymmetric spacetime there is no spacelike hypersurface from which the spacetime looks the same in both temporal directions: the properties in one direction are different from the properties in the other direction, and this fact is expressed by the metric. but, according to the einstein field equations, this also means that the matter-energy of the universe is asymmetrically distributed along the global time, and this is expressed by the energy-momentum tensor. therefore, no matter which spacelike hypersurface is used to split a time-asymmetric spacetime into two halves, the physical (geometrical or matter-energy) properties of both halves are substantially different, and such a difference establishes a substantial distinction between the two directions of time. now we can assign different names to the substantially different temporal directions on the basis of that difference. for instance, we can call one of the directions of the global time "positive", together with the class of semicones containing vectors pointing to that direction, and call the other direction "negative", together with the corresponding class. of course, the particular names chosen are absolutely conventional; we can use the opposite convention, or even other names (black and white, or alice and bob): the only relevant fact is that both directions of time are substantially different from each other, and the different names assigned to them express such a substantial difference.
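as an illustration of the stable causality condition (our own minimal example, not taken from the cited texts), one can check symbolically that the coordinate $t$ of a spatially flat frw metric is a global time function: its gradient is timelike everywhere, so the $t = \mathrm{const}$ hypersurfaces foliate the spacetime. the scale factor chosen below is an arbitrary assumption.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
a = sp.exp(t)                       # an arbitrary positive scale factor a(t)

# spatially flat frw metric, signature (-,+,+,+)
g = sp.diag(-1, a**2, a**2, a**2)
g_inv = g.inv()

# candidate global time function f = t; its gradient has components d_mu f
grad = sp.Matrix([sp.diff(t, v) for v in (t, x, y, z)])    # = (1, 0, 0, 0)

# norm of the gradient: g^{mu nu} (d_mu f) (d_nu f)
norm = sp.simplify((grad.T * g_inv * grad)[0])
print(norm)   # -1 < 0 everywhere: the gradient is timelike, so t is a
              # global time function and the t = const hypersurfaces
              # foliate the spacetime
```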
*the meaning of time - reversal invariance in general relativity * the metric and the energy - momentum tensor of a particular spacetime are , of course , a solution of the einstein field equations which , being fundamental laws , are time - reversal invariant .then , there exists another solution given by and , the time - symmetric twin of the previous one .so , the ghost of symmetry threatens again : it seems that we are committed to supplying a non - conventional criterion for picking out one of both solutions , one the temporal mirror image of the other .however , in this case the threat is not as serious as it seems .as it is well known , time - reversal is a symmetry transformation . under the active interpretation, a symmetry transformation corresponds to a change from one system to another ; under the passive interpretation , a symmetry transformation consists in a change of the point of view from which the system is described .the traditional position about symmetries assumes that , in the case of discrete transformations as time - reversal or spatial reflection , only the active interpretation makes sense : an idealized observer can rotate himself in space in correspondence with the given spatial rotation , but it is impossible to rotate in time ( see sklar ) . of course , this is true when the idealized observer is immersed in the same spacetime as the observed system .but when the system is the universe as a whole , we can not change our spatial perspective with respect to the universe : it is as impossible to rotate in space as to rotate in time .however , this does not mean that the active interpretation is the correct one : the idea of two identical universes , one translated in space or in time regarding the other , has no meaning .this shows that both interpretations , when applied to the universe as a whole , collapse into conceptual nonsense ( for a full argument , see castagnino , lombardi and lara ) .in fact , in cosmology symmetry transformations are neither given an active nor a passive interpretation .two models and of the einstein equations are taken to be equivalent if they are _ isometric _ , that is , if there is a diffeomorphism which carries the metric into the metric ( see hawking and ellis ) .since symmetry transformations are isometries , two models related by a symmetry transformation ( in particular , time - reversal ) are considered equivalent descriptions of one of the same spacetime .therefore , by contrast with local theories , when the object described is the universe as a whole , it is not necessary to supply a non - conventional criterion for selecting one solution of the pair of time - symmetric twins .this fundamental difference between general relativity and the local theories of physics is what allows the global approach to the problem of the arrow of time to provide a solution that can not be offered by local approaches . 
*the generic character of the global and non - entropic arrow * as we have seen , the global entropic approach explains the arrow of time in terms of the increasing entropy function of the universe : as a consequence , this position has to posit a low - entropy initial state from which entropy increases .then , the problem of the arrow of time is pushed back to the question of why the initial state of the universe has low entropy .but a low - entropy initial state is extraordinarily improbable in the collection of all possible initial states .therefore , the global entropic approach is committed to supply an answer to the problem of explaining such an improbable initial condition ( see , for instance , the arguments of davies and of penrose and percival ; see also the well known criticisms directed by price to the global entropic approach ) . in our global and non - entropic approachthere are not improbable conditions that require to be accounted for . on the contrary , in previous papers ( castagnino and lombardi )we have proved that the subset of time - symmetric spacetimes has measure zero in ( or is a proper subspace of ) the set of all possible spacetimes admissible in general relativity .this result can be intuitively understood on the basis of the evident fact that symmetry is a very specific property , whereas asymmetry is greatly generic .therefore , in the collection of all the physically possible spacetimes , those endowed with a global and non - entropic arrow of time are overwhelmingly probable : the non existence of the arrow of time is what requires an extraordinarily fine - tuning of all the variables of the universe .these arguments , based on theoretical results , are relevant to the problem of finding a substantial difference between the two directions of time grounded on physical theories .of course , theories are undetermined by empirical evidence , and this underdetermination is even stronger in cosmology , where the observability horizons of the universe introduce theoretical limits to our access to empirical data .in fact , on the basis of the features of the unobservable regions of the universe , it may be the case that our spacetime be time - symmetric , or lacking a global time , or even non time - orientable .of course , this does not undermine the overwhelmingly low probability of time - symmetry .but since probability zero does not amount to impossibility , we can not exclude the case that we live in a time - symmetric , or even in a non time - orientable universe . in that case , the global and non - entropic arrow of time would not exist and , therefore , the explanation of the local time - asymmetries to be presented in the next sections would not apply .it is quite clear that this case can not be excluded on logical nor on theoretical grounds .however , the coherence of our overall explanation of the arrow of time and of the local irreversible phenomena , whose theoretical account we were looking for , counts for its plausibility . on the other hand ,it is difficult to see what theoretical or empirical reasons could be used to argue for the fact that we live in a universe lacking a global arrow of time ; on the contrary , the cosmological models accepted in present - day cosmology as the best representations of our actual universe ( big bang - big rip frw models ) are clearly time - asymmetric ( see caldwell _et al_. ) . 
as always in physics and, in general, in science, there are no irrefutable explanations. in particular, any statistical argument admits probability-zero exceptions. nevertheless, we can make reasonable decisions about accepting or rejecting a particular explanation on the basis of its fruitfulness for explaining empirical evidence and its coherence with the knowledge at our disposal.

*1. from time-asymmetry to energy flow* as we have seen, the time-asymmetry of spacetime establishes a substantial difference between the two directions of time. this time-asymmetry is a physical property of the spacetime that can be equivalently expressed in geometrical terms ($g_{\mu\nu}$) or in terms of the matter-energy distribution ($T_{\mu\nu}$). however, under neither description can it be directly introduced into local theories, which, in principle, do not contain the concepts of metric or of energy-momentum tensor. for this reason, if we want to transfer the global arrow of time to local contexts, we have to translate the time-asymmetry embodied in $g_{\mu\nu}$ and $T_{\mu\nu}$ into a feature that can be expressed by the concepts of local theories. this goal can be achieved by expressing the energy-momentum tensor in terms of the four-dimensional energy flow. as is well known, in the energy-momentum tensor the component $T^{00}$ represents the matter-energy density and the components $T^{0i}$, with $i = 1$ to $3$, represent the spatial energy flow. thus, $T^{0\nu}$ can be viewed as a spatio-temporal matter-energy flow that embodies not only the flow of matter-energy in space but also its flow in time; let us call it "energy flow" for simplicity. in turn, $T^{\mu\nu}$ satisfies the dominant energy condition if, in any orthonormal basis, $T^{00} \geq |T^{ab}|$ for each $a, b = 0$ to $3$. this is a very weak condition, since it holds for almost all known forms of matter-energy (see hawking and ellis), and it can therefore be considered that $T^{\mu\nu}$ satisfies it at almost all the points of the spacetime. the dominant energy condition means that, for any local observer, the matter-energy density is _non-negative_ and the energy flow is _non-spacelike_. therefore, at all the points of the spacetime where the condition holds, the energy flow $T^{0\nu}$ is non-spacelike and points to the same temporal direction. the first point to stress here is that the dominant energy condition is immune to time-reversal. in fact, if we apply the time-reversal operator to $T^{00}$, both temporal indices change sign and we obtain $T^{00} \rightarrow T^{00}$; therefore, time-reversal does not change the sign of $T^{00}$ and, as a consequence, if the condition is satisfied by the original energy-momentum tensor, it is also satisfied by its time-reverse. the second point that has to be emphasized is that, without a substantial difference between the two temporal directions, the term "positive" as it appears in this context is merely conventional. the relevant content of the condition is that the energy flow is _non-spacelike_ (that is, the matter-energy does not flow faster than light), and that it points to the same temporal direction at all the points where the condition is satisfied. therefore, we can choose any temporal direction as positive, and the condition will preserve its conceptual meaning.
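the two clauses of the dominant energy condition just stated can be checked mechanically for a tensor given in a local orthonormal frame. the following sketch is ours; the perfect-fluid test case and the numerical tolerance are assumptions made only for illustration.

```python
import numpy as np

def dominant_energy_condition(T, eta=np.diag([-1.0, 1.0, 1.0, 1.0])):
    """check T^{00} >= |T^{ab}| and that the flow T^{0 nu} is non-spacelike,
    with T given in a local orthonormal frame (signature -,+,+,+)."""
    density_dominates = all(T[0, 0] >= abs(T[a, b]) - 1e-12
                            for a in range(4) for b in range(4))
    flow = T[0, :]                        # energy flow T^{0 nu}
    flow_norm = flow @ eta @ flow         # <= 0 means non-spacelike
    return bool(density_dominates and flow_norm <= 1e-12)

# perfect fluid in its rest frame: T = diag(rho, p, p, p)
rho, p = 1.0, 0.3
T_fluid = np.diag([rho, p, p, p])
print(dominant_energy_condition(T_fluid))          # True while |p| <= rho

T_exotic = np.diag([rho, 2.0 * rho, p, p])         # pressure exceeding rho
print(dominant_energy_condition(T_exotic))         # False
```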
a third point that deserves to be stressed is the application of the condition to spacetimes where the energy flow points to the temporal direction opposite to that of the original spacetime. let us consider a pair of spacetimes whose energy flows point to opposite temporal directions: it is clear that, if the dominant energy condition is satisfied in one of them, it is also satisfied in the other, with the only change being the conventional decision about the positive time direction. this is not surprising, since the two models are isometric and, therefore, they are equivalent descriptions of one and the same universe: to say that the temporal direction in one of them is opposite to the temporal direction in the other is senseless (see subsection iii.c.2). once these delicate points have been understood, it turns out to be clear that the non-conventional content of the dominant energy condition is not the substantial identification of a temporal direction as the direction of the energy flow, but the coordination of the temporal direction of that flow at all the points of the spacetime where the condition holds. in other words, if the energy-momentum tensor satisfies the condition at all (or almost all) the points of the spacetime, we can guarantee that the energy flow is always contained in the semicones belonging to only one of the two classes arising from the partition introduced by time-orientability. at this point, somebody might suppose that the satisfaction of the condition, by itself, solves the problem of the arrow of time: if the energy flow points to the same temporal direction all over the spacetime, let us define that direction as the future, and that's all; we don't need time-asymmetry. but this conclusion forgets the conventionality of the direction selected as positive for the energy flow. in fact, even if the energy flow is always contained in the semicones belonging to, say, one of the two classes, in a time-symmetric spacetime the difference between the two classes is merely conventional. only when we can substantially distinguish between the two temporal directions in terms of the time-asymmetry of the spacetime can we use the energy flow, pointing to the same direction all over the spacetime, to express that substantial difference. in short, the arrow of time is _defined_ by the time-asymmetry of the spacetime, and _expressed_ by the energy flow. up to now we have not used the words past and future. it is clear that, since they are only labels, their application is conventional. now we can introduce the usual convention in physics, which consists in calling the temporal direction of the energy flow the positive direction, or "future"; then, it is said that, under the dominant energy condition, the energy flow is always contained in the semicones belonging to the positive class. with this convention we can say that the energy flows towards the future for any observer and at any point of the spacetime. of course, we could have used the opposite convention and said that the energy always flows towards the past. but, in any case, no matter which terminological decision we make, past is substantially different from future, and the arrow of time consists precisely in such a substantial difference grounded on the time-asymmetry of spacetime.
*2. breaking local time-symmetries* as we have said, any time-reversal invariant equation leads to a pair of time-symmetric twins, that is, two solutions symmetrically related by the time-reversal transformation: each twin, which in some cases represents an irreversible evolution, is the temporal mirror image of the other twin. from the viewpoint of the local theory that produces the time-symmetric twins, the difference between them is only conventional: both twins are nomologically possible with respect to the theory. and, as the ehrenfests pointed out, this is also true for entropy when computed in terms of a time-reversal invariant fundamental law. the traditional arguments for discarding one of the twins and retaining the other invoke temporally asymmetric notions which are not justified in the context of the local theory. for instance, the retarded nature of radiation is usually explained by means of _de facto_ arguments referring to initial conditions: advanced solutions of wave equations correspond to converging waves that require a highly improbable conspiracy, that is, a miraculous cooperative emitting behavior of distant regions of space at the temporal origin of the process (see sachs). it seems quite clear that this kind of argument, even if admissible in the discussions about irreversibility, is not legitimate in the context of the problem of the arrow of time, to the extent that it puts the arrow in by hand by presupposing the difference between the two directions of time from the very beginning. in other words, such arguments violate the "nowhen" requirement of adopting an atemporal perspective purged of our temporal intuitions and our time-asymmetric observations, like those related to the asymmetry between past and future or between initial and final conditions. therefore, from an atemporal standpoint, the challenge consists in supplying a non-conventional criterion, _based only on theoretical arguments_, for distinguishing between the two members of the pair of twins. the desired criterion, which can be legitimately supplied neither by the local theory nor by our time-asymmetric experiences, can be grounded on global considerations. if we adopt the usual terminological convention according to which the future is the temporal direction of the energy flow, then the energy flow is contained in the future semicone at any point of the spacetime. on the other hand, in any pair of time-symmetric twins, the members of the pair involve energy flows pointing to opposite temporal directions, with no non-conventional criterion for distinguishing between them. but once _we have established the substantial difference between past and future on global grounds_ and have decided that energy flows towards the future, we have a substantial criterion for discarding one of the twins and retaining the other as representing the relevant solution of the time-reversal invariant law. for instance, given the usual conventions, in the case of electromagnetism only retarded solutions are retained, since they describe states that carry energy towards the future. another relevant example is the creation and decay of unstable states.
from an equilibrium state, an unstable non-equilibrium state is created by an antidissipative process with evolution factor $e^{\gamma t}$, with $\gamma > 0$. this unstable non-equilibrium state then decays towards an equilibrium state through a dissipative process with evolution factor $e^{-\gamma t}$. when considered locally, the pairs of twins (antidissipative and dissipative, $e^{\gamma t}$ and $e^{-\gamma t}$) are only conventionally different. however, since past is substantially different from future and, according to the usual convention, the energy flow always goes from past to future, the unstable states are always created by energy pumped from the energy flow coming from the past, while unstable states decay by returning this energy to the energy flow pointing towards the future. therefore, the energy flow introduces a substantial difference between the two members of each pair. in the next sections, we will analyze the breaking of the symmetry introduced by the energy flow in different local laws, coming from fundamental theories and from phenomenological theories; in this last case, the first step will be to bring to light the second twin usually hidden in the formalism.

the so-called irreversible quantum mechanics is based on the use of rigged hilbert spaces, due to the ability of this formalism to model irreversible physical phenomena such as exponential decay or scattering processes (see bohm and gadella). the general strategy consists in introducing two subspaces of the hilbert space, whose vectors are characterized by the fact that their projections on the eigenstates of the energy are functions of the hardy class from above and from below, respectively. these subspaces yield two rigged hilbert spaces, built with the corresponding anti-dual spaces. it is quite clear that, up to this point, this general strategy amounts to obtaining two time-symmetric structures from a time-reversal invariant theory. in fact, quantum mechanics formulated on a hilbert space is time-reversal invariant under the antilinear and antiunitary time-reversal operator (see section ii.a). but if quantum mechanics is formulated on these rigged hilbert spaces, it turns out to be a non time-reversal invariant theory, since the time-reversal operator maps each of the two structures onto the other. moreover, in the analytical continuation of the energy spectrum of the system's hamiltonian, there exists at least one pair of complex conjugate poles, one in the lower half-plane and the other in the upper half-plane of the complex plane. such poles correspond to a pair of gamov vectors: these vectors are proposed to describe irreversible processes and are taken to be the representation of the exponentially growing and decaying parts of resonant unstable states, respectively. but the symmetric position of the poles with respect to the real axis in the complex plane is a clear indication of the fact that gamov vectors are a case of time-symmetric twins. in his detailed description of scattering processes, arno bohm breaks the symmetry between the twins by appealing to the so-called preparation-registration arrow of time, expressed by the slogan _no registration before preparation_ (see bohm, antoniou and kielanowski). the key idea behind this proposal is that observable properties of a state cannot be measured until the state acting as a bearer of these properties has been prepared. for instance, in a scattering process, it makes no sense to measure the scattering angle until a state is prepared by an accelerator.
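the pair of gamov vectors attached to the complex-conjugate poles can be pictured with a toy resonance (our own sketch; the pole position is arbitrary and this is not bohm's formalism): the pole in the lower half-plane yields a factor that decays for $t > 0$, its conjugate yields a factor that grows for $t < 0$, and each is the temporal mirror image of the other.

```python
import numpy as np

E_R, gamma = 2.0, 0.4                    # arbitrary resonance parameters
z_decay = E_R - 1j * gamma / 2           # pole in the lower half-plane
z_grow = E_R + 1j * gamma / 2            # its complex-conjugate twin

def evolution_factor(z, t):
    """formal evolution factor exp(-i z t) attached to a pole at z."""
    return np.exp(-1j * z * t)

t_future = np.linspace(0.0, 20.0, 5)     # semigroup domain of the decaying twin
t_past = -t_future                       # semigroup domain of the growing twin

decaying = np.abs(evolution_factor(z_decay, t_future))   # exp(-gamma t / 2), t > 0
growing = np.abs(evolution_factor(z_grow, t_past))       # exp(+gamma t / 2), t < 0

# the two moduli coincide point by point: each twin is the temporal mirror
# image of the other, and nothing in the dynamics singles one of them out
print(np.allclose(decaying, growing))
```

it is the preparation-registration idea just described that bohm uses to break this symmetry.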
on this basis, bohm proposes the following _ interpretational postulate _ : the vectors represent the states of the system and the vectors represent the observables of the system in the sense that observables are obtained as .the time is considered as the time at which preparation ends and detection begins .the preparation - registration arrow imposes the requirement that the energy distribution produced by the accelerator , represented by , be zero for : the time evolution , traditionally represented by the group on the hilbert space , is here represented by two semigroups and : ( i ) is restricted to and , then , it is valid only for , and ( ii ) is restricted to and , then , it is valid only for ( see bohm _et al._ , bohm and wickramasekara ) : as a consequence , the two gamov vectors and turn out to be representations of growing and decaying states respectively : the evolution of the _ growing gamov vector _ can be defined only for , and the evolution of the _ decaying gamov vector _ can be defined only for .as we can see , bohm s approach breaks the symmetry between the time - symmetric twins by means of an interpretational postulate based on the preparation - registration arrow .of course , this strategy supplies a solution to the problem of irreversibility to the extent that it permits the representation of irreversible growing and decaying processes .however , it does not offer a theoretical way out of the problem of the arrow of time , since the preparation - registration arrow is introduced as a postulate of the theory .in fact , such an arrow presupposes the distinction between past and future from the very beginning : in the past ( ) the system is prepared , in the future ( ) the system is measured , and and represent the corresponding growing and decaying processes respectively , both evolving toward the future .it is clear that such a postulate is based on our pretheoretical assumption that preparation is temporally previous than registration .but , from an atemporal viewpoint , we could reverse that interpretational postulate : we could consider that is the space of states and is the space of vectors by means of which the observables are obtained ; in this case we would obtain the temporal mirror image of the original theory , where and represent growing and decaying states respectively , both evolving toward the past . in other words ,the two possible interpretational postulates restore the time - symmetry since they lead to two non time - reversal invariant theoretical structures , one the temporal mirror image of the other .bohm s strategy of choosing the future directed version of the postulate introduces the arrow of time only on the basis of pretheoretical considerations ( for a detailed discussion , see castagnino , gadella and lombardi ) . nevertheless , it is possible to find a theoretical justification for the preparation - registration arrow and , as a consequence , for choosing the future directed version of the interpretational postulate : the breaking of the symmetry between the time - symmetric twins is supplied by the energy flow represented by .the preparation of the states acting as bearers of properties requires energy coming from other processes .since the energy flow comes from the past and goes to the future , the states are prepared by means of energy coming from the past , that is , from previous processes ; the growing gamov vector represents precisely the growing process occurring at , which absorbs the energy coming from the past . 
on the other hand, the registration of the observable properties provides energy to other processes. again, since the energy flow comes from the past and goes to the future, the measurement of observables emits energy toward the future, that is, provides energy to later processes; the decaying gamov vector represents precisely the decaying process occurring at $t > 0$, which emits energy toward the future. summing up, the energy flow coming from the past and directed towards the future supplies the criterion for selecting the future-directed version of bohm's interpretational postulate, and turns quantum mechanics into a non time-reversal invariant theory without the addition of pretheoretical assumptions.

it is quite clear that the two classes of light semicones are a pair of time-symmetric twins. quantum field theory breaks the symmetry of this pair from the very beginning, by introducing non time-reversal invariance as a primitive assumption. in this section we will explain how this symmetry-breaking can be derived from the global time-asymmetry of the universe.

*the non time-reversal invariance of axiomatic qft* in any of its versions, axiomatic qft includes a non time-reversal invariant postulate (see bogoliubov _et al._, roman, and also haag, where it is called postulate iii), which states that the spectrum of the energy-momentum operator is confined to a future light semicone, that is, its eigenvalues $p^{\mu}$ satisfy $p^{\mu}p_{\mu} \geq 0$ and $p^{0} \geq 0$ (in signature $(+,-,-,-)$). this postulate says that, when we measure the observable $p^{\mu}$, we obtain a _non-spacelike classical value contained in a future semicone_, that is, a semicone belonging to the future class. it is clear that this condition selects one of the elements of the pair of time-symmetric twins or, in other words, one of the two semicone classes that would arise from the theory in the absence of the time-reversal invariance breaking postulate. by means of this postulate, qft becomes a non time-reversal invariant theory. in turn, since qft, being both quantum and relativistic, can be considered one of the most basic theories of physics, the choice introduced by this condition is transferred to the rest of physical theories. but such a choice is established from the very beginning, as an unjustified assumption. the challenge is, then, to _justify_ the non time-reversal invariant postulate by means of independent theoretical arguments. let us recall that, in the energy-momentum tensor, $T^{0\nu}$ represents the spatio-temporal matter-energy flow and $T^{\nu 0}$ represents the linear momentum density. since $T^{\mu\nu}$ is a symmetric tensor, $T^{0\nu} = T^{\nu 0}$; in other words, the matter-energy flow is equal to the linear momentum density. this means that, if the energy flow can be used to express the global arrow of time under the dominant energy condition, this is also the case for the linear momentum density. but the linear momentum density is precisely the magnitude corresponding to the classical momentum of qft; thus, at each point of the spacetime, it points to the same temporal direction as the energy flow. in conclusion, the fact that, at each point of the local context and, therefore, for every classical particle, the four-momentum must be contained in the future light semicone turns out to be a consequence of the global time-asymmetry of the spacetime when the dominant energy condition holds everywhere. in other words, the non time-reversal invariant postulate can be justified on global grounds instead of being imposed as a starting point of the axiomatic version of qft.
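the content of the non time-reversal invariant postulate discussed above reduces, for each eigenvalue of the energy-momentum operator, to two numerical checks: the four-momentum must be non-spacelike and its energy component non-negative. the following is a minimal sketch of ours; the signature $(+,-,-,-)$ and the sample four-momenta are assumptions made only for illustration.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # signature (+,-,-,-)

def in_future_semicone(p, tol=1e-12):
    """eigenvalue allowed by the postulate: non-spacelike (p.p >= 0)
    and future-directed (p^0 >= 0)."""
    p = np.asarray(p, dtype=float)
    return bool(p @ eta @ p >= -tol) and bool(p[0] >= -tol)

print(in_future_semicone([2.0, 0.3, 0.1, 0.0]))    # massive particle: True
print(in_future_semicone([1.0, 1.0, 0.0, 0.0]))    # lightlike: True
print(in_future_semicone([-2.0, 0.3, 0.1, 0.0]))   # past semicone: False
print(in_future_semicone([0.5, 2.0, 0.0, 0.0]))    # spacelike: False
```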
*the non time - reversal invariance of ordinary qft * in the ordinary version of qft , the classification of one - particle states according to their transformation under the lorentz group leads to six classes of four - momenta . among these classes, it is considered that only three have physical meaning : these are precisely the cases that agree with the non time - reversal invariant postulate of the axiomatic version of qft . in other words , the symmetry group of qft is the orthochronous group ( see weinberg ) , where space - reversal but not time - reversal is included .this is another way of expressing the non time - reversal invariance of qft . in this case , the non time - reversal invariance is introduced not by means of a postulate , but on the basis of empirical arguments that make physically meaningless certain classes of four - momenta .however , to the extent that special relativity and standard quantum mechanics are time - reversal invariant theories , those arguments give no theoretically grounded justification for such a breaking of time - reversal invariance . nevertheless , as we have seen in the previous subsection , this justification can be given on global grounds .let us make the point in different terms .the quantum field correlates of and , and , are defined as where is a linear and unitary operator and is an antilinear and antiunitary operator ( see section ii.a ) .in fact , if were linear and unitary , we could simply cancel the s and , then , from eq .( [ 32 ] ) , : the action of the operator on the operator would invert the sign of , with the consequence that the spectrum of the inverted energy - momentum operator would be contained in a past light semicone . precisely , for , , where is the energy operator ; then , if were linear and unitary , ( in contradiction with eq .( [ 3 ] ) ) with the consequence that , for any state of energy there would be another state of energy .the antilinearity and the antiunitarity of avoid these anomalous situations , in agreement with the conditions imposed by the non time - reversal invariant postulate and , at the same time , make qft non time - reversal invariant .once again , there are good empirical reasons for making antilinear and antiunitary , but not theoretical justification for such a move . summing up ,in ordinary qft it is always necessary to make a decision about the time direction of the spectrum of the energy - momentum operator .the point that we want to stress here is that , either in the case of the non time - reversal invariant postulate of the axiomatic version of qft or in the case of the usual version of qft , the decision can be justified on global grounds , as a consequence of the time - asymmetry of the spacetime . *3 . weak interactions *finally , it is worth reflecting on the role of weak interactions in the problem of the arrow of time . the cpt theorem states that is the only combination of charge - conjugation , parity - reflection and time - reversal which is a symmetry of qft .in fact , it is well known that weak interactions break the of the cpt theorem . according to a common opinion, it is precisely this empirical fact the clue for the solution of the problem of the arrow of time : since the symmetry is violated by weak interactions , they introduce a non - conventional distinction between the two directions of time ( see visser ) .the question is : is the breaking of what distinguishes both directions of time in qft ? 
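the role of antilinearity can also be illustrated numerically : if time reversal is modeled as complex conjugation ( the simplest antiunitary choice , an assumption made only for this sketch ) , the real spectrum of a hermitian hamiltonian is preserved , whereas the linear alternative that sends the hamiltonian to its negative would produce the anomalous negative energies discussed above . numpy is assumed to be available .

```python
import numpy as np

rng = np.random.default_rng(0)

# a random Hermitian "Hamiltonian" shifted to have a strictly positive spectrum
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h = a @ a.conj().T + np.eye(4)          # positive definite, Hermitian

spec_h = np.sort(np.linalg.eigvalsh(h))

# antiunitary time reversal modeled here as complex conjugation: H -> H*
spec_antiunitary = np.sort(np.linalg.eigvalsh(h.conj()))

# the forbidden unitary alternative discussed in the text: H -> -H
spec_unitary_flip = np.sort(np.linalg.eigvalsh(-h))

print(np.allclose(spec_h, spec_antiunitary))   # True: the spectrum (and its sign) is preserved
print(spec_unitary_flip.max() < 0)             # True: every energy would become negative
```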
as we have seen , the operator was designed precisely to avoid that certain tetra - magnitudes , such as the linear momentum , have the anomalous feature of being contained in a past light semicone : the action of the operator onto the energy - momentum operator preserves the time direction of and , therefore , of its eigenvalues .it is this fundamental fact what makes qft non time - reversal invariant , and not the incidental violation of by weak interactions .this non time - reversal invariance of qft , based on the peculiar features of the operator , distinguishes by itself between the two directions of time , with no need of weak interactions ( see discussion in castagnino and lombardi ) .in other words , even if weak interactions did not exist , qft would be a non time - reversal invariant theory which would distinguish between the two directions of time .the real problem is , then , to justify the non time - reversal invariance of a theory which is presented as a synthesis of two time - reversal invariant theories such as special relativity and quantum mechanics . butthis problem is completely independent of the existence of weak interactions and the breaking of introduced by them . summing up, weak interactions do not play a role as relevant in the problem of the arrow of time as it is usually supposed . *feynman graphs * let us consider the feynman graphs of fig .[ graficof ] , where the horizontal direction ideally corresponds to the time axis , the vertical direction represents a spatial axis , is a two - particle state and is an -particle state .+ as we have argued in the previous sections , at this point there is no substantial criterion to select the past - to - future direction on the time axis .thus , we can not even consider motions ( e.g. convergence to the vertex or divergence from the vertex ) along the lines of the graph because both graphs are formally identical : one is the temporal mirror image of the other .moreover the probabilities of both processes are equal : this fact shows that the probabilities are not affected by the direction of time .the time - symmetry of both graphs results from the time - reversal invariance of the physical laws on which the graphs are based .if we wanted to distinguish them , we should say that in one graph the state is at the left ( i.e. in the past ) of the state , and in the second graph is at the right ( i.e. in the future ) of .but this argument requires a theoretical reason to say which one of the states is at the left of the other . if we want to turn the merely conventional difference between the two graphs of fig .[ graficoff ] into a substantial difference , we have to consider the energy flow trough the process .in fact , if we represent such a flow by means of arrows ( they are not fermion arrows ! ! ! ) , we obtain the following figure : + now both process are substantially different : in the first one two arrows converge to the target point and arrows diverge from it , while in the second one arrows converge and only two diverge .as we can see , the temporal mirror image of one of the graphs is not the other : both graphs are not formally identical because the flow of energy introduces a substantial difference in the pair of time - symmetric twins_. _ on the basis of this substantial difference between the two graphs , now we can define the first one as the typical quantum scattering process and call the prepared state and the detected state . 
only on these theoretical grounds we can say that the arrow of time goes _ from preparation to registration in a quantum scattering process_ . * 2 . von neumann quantum measurements * the argument above can be easily applied to the case of quantum measurement . let us consider the two graphs of fig . [ graficoqm ] , representing a typical von neumann measurement , where being the state that we want to measure , and the eigenstates of the pointer observable . in both cases , the measurement can be performed on the basis of the correlation . as in the case of the feynman graphs , in the measurement situation the arrow of time is usually introduced by saying that it goes from the preparation state to the set of measured states represented by . but , as in the previous subsection , this amounts to putting the arrow in by hand , without theoretical grounds . + once again , if the energy flow through the process is considered , the two members of the pair of graphs of fig . [ graficoqm ] turn out to be substantially different and can be represented as in fig . [ graficoqmm ] . + on the basis of this substantial difference , now we can define the first graph as representing the typical quantum measurement , and call the prepared state and the measured state . analogously to the previous case , the arrow of time goes _ from preparation to registration in a quantum measurement process_ . phenomenological theories are usually non time - reversal invariant ; so , the solution directed towards the future is taken as the physically relevant one . however , in those theories the complexity of the fundamental models is hidden in some phenomenological coefficients that are assumed to be positive ; but if these coefficients are deduced from underlying fundamental theories , a negative counterpart can always be discovered . this means that , when a phenomenological theory is explained in fundamental terms , the hidden time - reversal invariance becomes manifest , and the corresponding pair of time - symmetric twins can be identified . let us give some examples : * * the damped oscillator : * this is the paradigmatic example . let us consider the equation of the damped harmonic oscillator , where is the viscosity term , which is opposed to the motion if the bulk viscosity is . if we make the ansatz , we obtain the solution where and ( we are just considering the damped oscillation case where ) . as a consequence , the resulting evolution is a damped motion towards equilibrium : for , . the energy released by this process is dissipated towards the future , e.g. in the form of heat . this evolution is one of the phenomenological twins . the second twin is the time - reversed version of eq . ( [ 36 ] ) , where and are _ negative . _ this seems strange at first sight , but the existence of this _ antidissipative _ solution is a necessary consequence of the time - reversal invariance of the fundamental laws underlying the process . if in the first twin there was a flow of energy dissipated towards the future , in the second twin the energy that amplifies the oscillations comes _ from the past . _
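a short numerical sketch can make the pair of twins concrete . it assumes the standard under - damped solution x(t ) = x_0 e^{-\gamma t } \cos ( \omega t ) with \omega = \sqrt{\omega_0 ^ 2 - \gamma^2 } ; the numerical values are arbitrary , and the run with a negative damping coefficient is the antidissipative twin , whose oscillations are amplified by energy coming from the past .

```python
import math

def damped_oscillation(t, omega0=2.0, gamma=0.1, x0=1.0):
    """Under-damped solution x(t) = x0 * exp(-gamma*t) * cos(omega*t), with
    omega = sqrt(omega0**2 - gamma**2); gamma < 0 gives the antidissipative twin."""
    omega = math.sqrt(omega0 ** 2 - gamma ** 2)
    return x0 * math.exp(-gamma * t) * math.cos(omega * t)

for t in (0.0, 10.0, 20.0):
    decaying = damped_oscillation(t, gamma=+0.1)    # energy dissipated towards the future
    growing = damped_oscillation(t, gamma=-0.1)     # energy pumped in from the past
    print(f"t = {t:5.1f}   dissipative twin = {decaying:+.4f}   antidissipative twin = {growing:+.4f}")
```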
* * the fourier law : * the fourier law in thermodynamics tells us that the heat transport goes from a higher temperature region to a lower temperature region : + if this equation were deduced from a fundamental theory , it would turn out to be time - reversal invariant . in turn , we know that the temperature and the gradient do not change sign with time reversal , and that for the flow , . therefore , if eq . ( [ 37 ] ) is to turn out time - reversal invariant , then : the negative coefficient will lead to the second time - symmetric twin . * * perturbative master equation in quantum brownian motion : * in paz and zurek , this equation is computed and the coefficient is given in eq . ( 3.13 ) . since this equation is deduced from the time - reversal invariant equations of quantum mechanics , it also has to be time - reversal invariant . it is easy to show that , if we perform the time - reversal , we find . this means that in this case can be either positive or _ negative _ and , therefore , antidissipative processes are as possible as dissipative ones . * * perturbative master equation for a two - level system coupled to a bosonic heat bath : * in paz and zurek , this equation is computed and the coefficient ( which plays the role of ) is given in eq . again , it is easy to verify that , under the time - reversal , we obtain , and also in this case can be either positive or _ negative : _ antidissipative processes are as possible as dissipative ones . * * perturbative master equation for a particle coupled with a quantum field : * in paz and zurek , this equation is computed ( eq . ( 3.25 ) ) and the coefficient ( which plays the role of ) is given in the dipole approximation of eq . once more , it is easy to verify that , under the time - reversal , we obtain , and also in this case can be either positive ( dissipative processes ) or _ negative _ ( antidissipative processes ) . * * quantum field theory in curved spacetime : * an example from a completely different chapter of physics comes from the theory of fields in curved spacetime ( birrell and davies ) . let us consider a flat frw universe and a scalar field , where is the scale factor of the universe , is the linear momentum and is the time evolution factor that satisfies \ddot{\varphi}(t ) + 3h\,\dot{\varphi}(t ) + \left [ \frac{k^{2}}{a^{2 } } + m^{2 } \right ] \varphi ( t ) = 0 , where is the hubble coefficient and is the mass of the scalar field . we see that the last equation is similar to eq . ( [ 35 ] ) if we make the analogy . when , the universe describes a dissipative evolution in such a way that vanishes for ( castagnino _ et al_. ; see also castagnino _ et al_. for the similar case of fluctuations in a frw background ) . but even if in an expanding universe , in a _ contracting _ universe . this shows that the time - reversal invariant equations of general relativity do not exclude a negative viscosity that leads to the second time - symmetric twin . in this subsection we will consider how the pair of time - symmetric twins arises in the case of the equations of a viscous fluid , when they are deduced from the time - reversal invariant equations of classical mechanics . when newton s law is applied to a small fluid volume , the navier - stokes equations are obtained : where is the fluid density , is the velocity , is a potential , the are the external forces , and the are the internal forces , that is , the forces due to the neighboring fluid elements . if we consider the case of a non - rotational fluid , we obtain the following expression for ( see huang ) : where is the pressure and is the viscosity . since eq . ( [ 40 ] ) was obtained exclusively by means of classical mechanics , it is necessarily time - reversal invariant . on this basis , we can infer the behavior of the viscosity under the application of the time - reversal operator . for the l.h.s . of eq . ( [ 40 ] ) , because is a potential that verifies , where the are the internal forces of the fluid . on the other hand , for the first term of the r.h.s .
of eq . ( [ 40 ] ) , since is the pressure , i.e. force per unit area . and for the second term , given eqs . ( [ 41 ] ) and ( [ 42 ] ) , this second term must also be invariant under the application of . therefore , the viscosity must change as . this shows that the time - reversal invariant fundamental laws underlying the phenomenological equations allow for both positive and negative values of . in the usual formulation of the phenomenological equation , only the positive values are considered ; but when such an equation is derived from fundamental laws , the second twin that leads to an antidissipative process becomes manifest . we can go further and track the origin of these negative values down to the microscopic level . let us consider the maxwell - boltzmann distribution : where and , the most probable speed of a molecule of the gas , coincides with the maximum of , . this distribution applies when the gas is in equilibrium . the standard method for studying a gas near equilibrium consists in considering a family of these distributions in the neighborhood of any point of the gas and solving the equations to different orders of corrections to the maxwell - boltzmann function . then , by using the transport equations for a gas with molecules per unit volume to the first order approximation of eq . ( [ 44 ] ) , valid for a gas close to the equilibrium state ( see huang ) , we arrive at a statistical expression for the viscosity and for the thermal conductivity : where is the mean free path . but , again , if we apply the time - reversal operator to , we obtain ; now the most probable speed is , as we can see in fig . [ distri ] . since , where is the kinetic energy of the gas particles , and clearly we verify eq . ( [ 3 ] ) , i.e. ; therefore , . nevertheless , from eqs . ( [ 45 ] ) we can see that and : both and change their sign under the application of the time - reversal operator , a result that again unmasks the second twin of the time - symmetric pair . if the entropy of phenomenological thermodynamics is to be derived from fundamental laws , the equation that makes it grow only towards the future ( the second law ) has to be one element of a pair of time - symmetric twins : as the ehrenfests pointed out many years ago , there must exist the time - reversed twin that makes entropy grow towards the past . the problem consists in discovering the second twin by appealing to the fundamental definitions underlying the phenomenological approach . the entropy balance equation reads , where is the entropy per unit volume , is the entropy flow and is the entropy production per unit volume . if the are the thermodynamic forces or affinities such that , where are the thermodynamic variables ( i.e. the coordinates of the thermodynamic space , being the number of thermodynamic variables ) , and the are the thermodynamic flows , reads . then , the onsager - casimir relations near equilibrium read ( castagnino _ et al_. ) , where is a matrix containing the constant phenomenological coefficients ( as the coefficient of eq . ( [ 36 ] ) , or the bulk viscosity , the shear viscosity , the heat conduction and all the remaining coefficients of the previous subsection ) , such that . therefore , the entropy production results . the phenomenological second law of thermodynamics states that . as a consequence , is a positive definite matrix , that is , all the constant phenomenological coefficients ( ) are positive . this means that the second law describes dissipative processes corresponding to the future - directed twin of a time - symmetric pair . of course , the corresponding antidissipative twin is obtained simply by changing the signs . however , the existence of such a second twin can also be proved by considering the original definition of entropy : in this definition , ( eq . ( [ 3 ] ) ) and , since , ; therefore , . moreover , and , therefore , . and since * , * then . by applying these results to eq . ( [ 47 ] ) , we can conclude that . this means that , when thermodynamics is expressed in terms of fundamental definitions , the time - symmetric twin of the second law comes to light : the evolutions with are nomologically possible and , in some cases , is a negative definite matrix . this mirror image behavior corresponds to the change of signs of the phenomenological coefficients studied in the previous subsection . ( , since it is a flow , and since it is a force or affinity ; then , . and since and are contained in , they also have to change their signs . )
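the sign structure of the entropy production just described can be illustrated with a minimal sketch : near equilibrium the production is a quadratic form in the thermodynamic forces built from the matrix of phenomenological coefficients , so its sign is fixed by the definiteness of that matrix . the matrix and the forces used here are arbitrary illustrative assumptions , and numpy is assumed to be available .

```python
import numpy as np

def entropy_production(forces, coefficients):
    """sigma = sum_i J_i X_i with linear phenomenological laws J = L X, i.e. sigma = X^T L X."""
    flows = coefficients @ forces
    return float(forces @ flows)

L = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive definite: the dissipative, future-directed twin
X = np.array([0.3, -0.7])           # arbitrary thermodynamic forces (affinities)

print(entropy_production(X, L))     # > 0 : the usual phenomenological second law
print(entropy_production(X, -L))    # < 0 : the time-reversed, antidissipative twin
```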
if we use just the first twin , we are in the realm of phenomenological thermodynamics ; if we use both twins , we are in the realm of fundamental thermodynamics . however , usually we only see dissipative processes ; thus , something must break the time - symmetry of the twins . * 1 . breaking the time - symmetry with the second law * let an oscillator be initially in motion in a gas atmosphere at rest . the oscillator gradually loses its energy and finally stops , while the initially motionless molecules get in motion . this is a paradigmatic dissipative process with factor and . but as the fundamental physical equations are time - reversal invariant , the opposite process is also possible according to the laws . if initially the oscillator were at rest but the molecules were in _ exactly the opposite motion to that in the final state of the previous case _ ( i.e. their velocities were inverted by a maxwell demon ) , the evolution would be antidissipative with factor and . it seems obvious that dissipative evolutions are much more frequent than antidissipative ones , because the latter are produced by infrequent conspiracies . however , this is not a consequence of the dynamical laws but of the initial conditions . we might say that non - conspirative initial conditions are easy to produce but conspirative initial conditions ( demon conditions ) are very difficult to obtain . nevertheless , this is just a practical problem : we are macroscopic beings and , for this reason , moving a single oscillator is a simple task for us , whereas endowing a great number of molecules with the precise conspirative motion is extremely difficult ( we need the help of the microscopic maxwell demon ) . this means that the limitation is practical and does not result from some physical law . in fact , sometimes practical limitations can be overcome and the inversion of velocities can be obtained , as in the case of the spin - echo experiments ( see balian , levstein _et al_.
) .some authors base their definition of the arrow of time and their foundation of the second law in these practical reasons , that is , the absence of conspiracies ( see , for instance , sachs ) .however , such a position forces them to face a long list of well known criticisms .in fact , since antidissipative processes are not ruled out by the fundamental laws of physics , _ a violation of irreversibility is not forbidden as a matter of principle but because it is highly improbable ( _ balliant , p.408 ) ; therefore , _ irreversibility is not an absolute concept , but is partially subjective by nature depending in the complexity of the system and on the details and ingenuity of our observations ( _ balliant , p.412 ) .this means that the appeal to practical limitations is a non - theoretical , even a non - objective way to break the symmetry of the phenomenological pair of time - symmetric twins . *breaking the time - symmetry with the energy flow * the future - directed energy flow , expressing the global time - asymmetry of the spacetime , supplies a theoretically grounded mean for breaking the symmetry of the twins .let us begin with the case of the fourier law where , as we have shown , the negative coefficient leads to the second time - symmetric twin .if the energy flow is future - directed all over the spacetime , at each point the quadrivector , representing the heat flow , lies in the future light semicone . in the case of thermodynamics ,the classical limit has to be applied ; as a consequence , the semicone becomes a future semiplane , but this limit does not affect the orientation of . then , although the time - reversal invariant fundamental laws lead to a pair of time - symmetric twins corresponding to the two possible orientations of , the energy flow breaks the symmetry by selecting the quadrivector corresponding to ; therefore , it explains the positive value of the coefficient and , _ a fortiori _ , the second law of thermodynamics . from a more general viewpoint, we have seen that the time - symmetric twin of the second law is : when , the process produces an energy flow directed towards the future ; when , the process requires an energy flow pumped from the past .let us note that , if we did not have a criterion for defining the past - to - future direction of the energy flow , the previous assertions would be senseless : from an viewpoint that does not commits a _ petitio principii _ by presupposing a privileged direction of time , past and future are only conventional labels ( see fig .6 ) . on the contrary , with the energy flow pointing to the same direction all over the spacetime , we can legitimately say that corresponds to a dissipative decaying process evolving from non - equilibrium to equilibrium as , and corresponds to an antidissipative growing process evolving from equilibrium to non - equilibrium as .the two processes , which in principle are only conventionally different , turn out to be substantially different due to the future - directed energy flow that locally expresses the global time - asymmetry of the universe ( see fig .7 ) .finally , let us consider how the energy - momentum tensor and the entropy are related in the case of relativistic imperfect fluids , as a further example of time - symmetric twins whose symmetry can be broken by a future - directed energy flow . 
for a universe containing a relativistic imperfect fluid , the energy - momentum tensor reads where is the pressure , is the metric tensor , is the energy - matter density , is the absolute velocity of the fluid , and is a term due to the imperfection of the fluid . in a comoving ( free falling )frame , , , , and the corresponding eq .( [ 48 ] ) reads where is the absolute temperature , are the spatial indices , and the einstein summation convention is used as before . from this equationwe want to obtain and .but here we will not appeal to the traditional argument , which relies on the second law ( see weinberg ) . on the contrary , we will use a fundamental law as eq .( [ 48 ] ) , which will bring to the light the second element of the pair of time - symmetric twins ; the second law will be a consequence of the symmetry breaking in the pair .let us define and are the two irreducible components of the symmetric tensor under the spatial rotation group , while is an irreducible vector under the same group .the thermodynamic space is flat or , at least , locally flat ( near equilibrium ) .therefore , since the theory must be invariant under the rotation of , the matrix must be spherically symmetric for each irreducible component , and eq .( [ 51 ] ) reads where we have attributed an arbitrary scalar , , and to each component .this equation has to be satisfied by any arbitrary values of , , and ; then , by means of eq .( [ 56 ] ) we obtain having obtained and without appealing to the second law , now we can deduce the expression for from eqs .( [ 56 ] ) and ( [ 59 ] ) : up to this point , we have made no assumptions about the values of the coefficients , and ; then , they can be either positive , leading to , or negative , leading to : both situations , one the temporal mirror image of the other , are nomologically possible according to the fundamental laws .also in this case , the symmetry can be broken by means of the energy flow .once we have established a substantial difference between the two directions of time and we have used , following the traditional convention , the label future for the direction of the energy flow , we can say that : * for _ dissipative _ processes , , and ; as a consequence , since the geometrical factors between parenthesis in eq .( [ 60 ] ) are all non negative , then ( the second law ) . * for _ antidissipative _ processes , , and ; as a consequence , . in the regions of the universe where this condition holds, the second law will be not locally valid .we can summarize the results of this section by saying that , according to the fundamental laws of physics , either dissipative processes with and antidissipative processes with are nomologically possible .the symmetry between both in principle formally identical situations is broken only by an energy flow which points to the same direction all over the universe and expresses the global time - asymmetry of the spacetime .it seems quite clear that this way of breaking the symmetry in the pair of time - symmetric twins is theoretically grounded and not relying on merely contingent practical limitations .it is also completely general , since it can be applied to any pair of time - symmetric twins of physics .furthermore , since the energy flow points to the same direction all over the spacetime , this symmetry breaking accounts for the otherwise unexplained fact that the different arrows of time , defined in the different chapters of physics ( electromagnetic arrow , thermodynamic arrow , cosmological arrow , etc . 
) , all point to the same time direction .this conclusion allows us to assess the status of the second law of thermodynamics .as we have seen , when arguments are based exclusively on fundamental laws , pairs of time - symmetric twins appear in all the chapters of physics . in the particular case of phenomenological thermodynamics, the twin of the second law can also be discovered .so , the traditional second law only arises when the time - symmetry is broken by the future - directed energy flow . butthis way of breaking the symmetry is common to all the pairs of time - symmetric twins , from electromagnetism to quantum field theory .therefore , the second law is not endowed with a privileged character with respect to the arrow of time , as usually supposed : the thermodynamic arrow , as all the other arrows , is a consequence of the global time - asymmetry of the universe . in this sense , _ the second law can be inferred on the basis of global considerations _, in the same way as the irreversible evolutions of quantum mechanics or the non time - reversal invariant postulate of quantum field theory .in this paper we have completed the following tasks : * we have disentangled the concepts of time - reversal invariance , irreversibility and arrow of time , dissipating the usual confusions between the problem of irreversibility and the problem of the arrow of time .* we have defined the arrow of time as the global time - asymmetry of spacetime , which is locally expressed as a future - directed energy flow all over the universe .* we have shown how , in different chapters of physics , the time - reversal invariant fundamental laws lead to pairs of time - symmetric twins whose elements are only conventionally different in the light of such laws . * we have shown how the future - directed energy flow is what breaks the symmetry of all the pairs of time - symmetric twins of physics , giving rise to the different arrows of time traditionally treated in the literature on the subject . with this work we have tried to contribute to the resolution of the problem of the direction of time , one of the most longstanding debates on the conceptual foundations of theoretical physics .we are particularly grateful to huw price , who urged us to complete our work by considering entropy .this work was partially supported by grants of the buenos aires university , the conicet and the foncyt of argentina . 1 . m. castagnino , o. lombardi and l. lara , the global arrow of time as a geometrical property of the universe , _ foundations of physics _ * 33 * , 877 - 912 ( 2003 ) .m. castagnino , l. lara and o. lombardi , the cosmological origin of time - asymmetry , _ classical and quantum gravity _ * 20 * , 369 - 391 ( 2003 ) .m. castagnino , l. lara and o. lombardi , the direction of time : from the global arrow to the local arrow , _ international journal of theoretical physics _ * 42 * , 2487 - 2504 ( 2003 ) .m. castagnino and o. lombardi , the generic nature of the global and non - entropic arrow of time and the double role of the energy - momentum tensor , _journal of physics a ( mathematical and general ) _ * 37 * , 4445 - 4463 ( 2004 ) .m. castagnino and o. lombardi , a global and non - entropic approach to the problem of the arrow of time , in a. reimer ( ed . ) , _ spacetime physics research trends .horizons in world physics _( nova science , new york , 2005 ) . 6 .m. castagnino and o. 
lombardi , the global non - entropic arrow of time : from global geometrical asymmetry to local energy flow ,_ synthese _ , forthcoming .d. albert , _ time and chance _( harvard university press , cambridge ma , 2001 ) . 8 .j. earman , what time reversal invariance is and why it matters , _ international studies in the philosophy of science _ * 16 * , 245 - 264 ( 2002 ) . 9. p. d. lax and r. s. phillips , _scattering theory ( _ academic press , new york , 1979 ) . 10 .m. tabor , _ chaos and integrability in nonlinear dynamics _( john wiley and sons , new york , 1989 ) . 11 .r. penrose , singularities and time asymmetry , in s. hawking and w. israel ( eds . ) , _ general relativity , an einstein centenary survey _( cambridge university press , cambridge , 1979 ) . 12 .r. g. sachs , _ the physics of time - reversal _ ( university of chicago press , chicago , 1987 ) . 13 .h. price , _time s arrow and archimedes point _( oxford university press , oxford , 1996 ) . 14 .p. ehrenfest and t. ehrenfest , _ the conceptual foundations of the statistical approach in mechanics _( cornell university press , ithaca , 1959 , original 1912 ) . 15 .s. brush , _ the kind of motion we call heat _ ( north holland , amsterdam , 1976 ) .l. boltzmann , _ annalen der physik _ * 60 , * 392 - 398 ( 1897 ) . 17. h. reichenbach , _ the direction of time _ ( university of california press , berkeley , 1956 ) .r. p. feynman , r. b. leighton and m. sands , _ the feynman lectures on physics , vol . 1 ( _ addison - wesley _ , _ new york , 1964 )p. c. davies , _ the physics of time asymmetry _ ( university of california press , berkeley , 1974 ) .p. c. davies , stirring up trouble , in j. j. halliwell , j. perez - mercader and w. h. zurek ( eds . ) , _ physical origins of time asymmetry _ ( cambridge university press , cambridge , 1994 ) .j. earman , _ philosophy of science _ * 41 , * 15 - 47 ( 1974 ) . 22 .s. hawking and j. ellis , _ the large scale structure of space - time _( cambridge university press , cambridge , 1973 ) .b. f. schutz , _ geometrical methods of mathematical physics _ ( cambridge university press , cambridge , 1980 )a. grnbaum , _ philosophical problems of space and time _ ( reidel , dordrecht , 1973 ) . 25 .l. sklar , _ space , time and spacetime _( university of california press , berkeley , 1974 ) .o. penrose and i. c. percival , the direction of time , _ proceedings of the physical society _ * 79 , * 605 - 616 ( 1962 ) . 27 .r. caldwell , m. kamionkowski and n. weinberg , phantom energy and cosmic doomsday , _ physical review letters _ * 91 * , 071301 ( 2003 ) .a. bohm and m. gadella , _ dirac kets , gamow vectors , and gelfand triplets ( _ springer - verlag , berlin , 1989 ) .a. bohm , i. antoniou and p. kielanowski , the preparation / registration arrow of time in quantum mechanics , _ physics letters a _ * 189 * , 442 - 448 ( 1994 ) .a. bohm , i. antoniou and p. kielanowski , a quantum mechanical arrow of time and the semigroup time evolution of gamow vectors , _ journal of mathematical physics _ * 36 * , 2593 - 2604 ( 1994 ) .a. bohm , s. maxson , m. loewe , and m. gadella , quantum mechanical irreversibility , _ physica a _ * 236 * , 485 - 549 ( 1997 ) . 32 . a. bohm and s. wickramasekara , the time reversal operator for semigroup evolutions , _ foundations of physics _* 27 * , 969 - 993 ( 1997 ) .m. castagnino , m. gadella and o. 
lombardi , time s arrow and irreversibility in time - asymmetric quantum mechanics , _ international studies in the philosophy of science _ * 19 * , 223 - 243 ( 2005 ) .m. castagnino , m. gadella and o. lombardi , time - reversal , irreversibility and arrow of time in quantum mechanics , _ foundations of physics _* 36 * , 407 - 426 ( 2006 ) . 35 .n. bogoliubov , a. a. logunov and i. t. todorov , _ axiomatic quantum field theory _( benjamin - cummings , reading ma , 1975 ) . 36 .p. roman , _ introduction to quantum field theory _( wiley , new york , 1969 ) . 37 .r. haag , _ local quantum physics .fields , particles , algebras _ ( springer , berlin , 1996 ) .. s. weinberg , _ the quantum theory of fields _ ( cambridge university press , cambridge , 1995 ) .m. visser , _ lorentzian wormholes _( springer - verlag , berlin , 1996 ) .paz and w. h. zurek , environment - induced decoherence and the transition from quantum to classical , in dieter heiss ( ed . ) , _ lecture notes in physics , vol. 587 _ ( springer , heidelberg - berlin , 2002 ) .n. birrell and p. davies , _ quantum fields in curved space _( cambridge university press , cambridge , 1982 ) . 42. m. castagnino , h. giacomini and l. lara , dynamical properties of the conformally coupled flat frw model , _ physical review d _ * 61 * , 107302 ( 2000 ) . 43 .m. castagnino , j. chavarriga , l. lara and m. grau , exact solutions for the fluctuations in a flat frw universe coupled to a scalar field , _ international journal of theoretical physics _ * 41 * , 2027 - 2035 ( 2002 ) . 44 .k. huang , _ statistical mechanics _ ( john wiley and sons , new york , 1987 ) . 45 .r. balliant , _ from microphysics to macrophysics : methods and applications of statistical physics _( springer , heidelberg , 1992 ) .r. levstein , g. usaj and h. m. pastawski , attenuation of polarization echoes in nuclear magnetic resonance : a study of the emergence of dynamical irreversibility in many - body quantum systems , _ journal of chemical physics _ * 108 * , 2718 - 2724 ( 1998 ) .. s. weinberg , _ gravitation and cosmology _( john wiley and sons , new york , 1972 ) .
in several previous papers we have argued for a global and non - entropic approach to the problem of the arrow of time , according to which the arrow is only a metaphorical way of expressing the geometrical time - asymmetry of the universe . we have also shown that , under definite conditions , this global time - asymmetry can be transferred to local contexts as an energy flow that points to the same temporal direction all over the spacetime . the aim of this paper is to complete the global and non - entropic program by showing that our approach is able to account for irreversible local phenomena , which have been traditionally considered as the physical origin of the arrow of time .
a bayesian net ( bn ) is a directed acyclic graph ( probabilistic expert system ) in which every node represents a random variable with a discrete or continuous state . + the relationships among variables , pointed out by arcs , are interpreted in terms of conditional probabilities according to bayes theorem . + the bn implements the concept of conditional independence , which allows the factorization of the joint probability , through the markov property , into a series of local terms that describe the relationships among variables : where denotes the states of the predecessors ( parents ) of the variable ( child ) . this factorization enables us to study the network locally . + a bayesian network requires an appropriate database to extract the conditional probabilities ( parameter learning problem ) and the network structure ( structural learning problem ) . + the objective is to find the net that best approximates the joint probabilities and the dependencies among variables . + after we have constructed the network , one of the common goals of a bayesian network is probabilistic inference , that is , to estimate the state probabilities of nodes given the knowledge of the values of other nodes . the inference can be done from children to parents ( this is called diagnosis ) or , vice versa , from parents to children ( this is called prediction ) . + however , in many cases the data are not available because the examined events can be new , rare , complex or little understood . in such conditions expert opinions are used for collecting information that will be translated into conditional probability values or into a certain joint or prior distribution ( probability elicitation ) . + such problems are more evident in the case in which the expert is requested to define too many conditional probabilities due to the number of the variable s parents . so , when possible , it is worthwhile to reduce the number of probabilities to be specified by assuming some relationships that impose constraints on the interactions between parents and children , as for example the noisy - or and its variations and generalizations . + in the business field , bayesian nets are a useful tool for a multivariate and integrated analysis of the risks , for their monitoring and for the evaluation of intervention strategies ( by decision graphs ) for their mitigation . + enterprise risk can be defined as the possibility that something with an impact on the objectives happens , and it is measured in terms of a combination of the probability of an event ( frequency ) and of its consequence ( impact ) . + enterprise risk assessment is a part of enterprise risk management ( erm ) , where historical data as well as expert opinions are typically used to estimate the frequency and impact distributions . such distributions are then combined to get the loss distribution . + in this context bayesian nets are a useful tool to integrate historical data with data coming from experts , which can be qualitative or quantitative . what we present in this work is the construction of a bayesian net for having an integrated view of the risks involved in the building of an important structure in italy , where the risk frequencies and impacts were collected by an erm procedure using expert opinions . + we have constructed the network by using an already existing database ( db ) where the available information consists of the risks with their frequencies , impacts and the correlations among them .
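as an illustration of this factorization , the following sketch evaluates the joint probability of a toy chain of three binary risk nodes as a product of local conditional terms ; the graph and all numbers are made - up assumptions , not data from the application described below .

```python
# toy three-node chain a -> b -> c with binary states (1 = risk present, 0 = absent);
# the joint factorizes through the Markov property as p(a, b, c) = p(a) * p(b | a) * p(c | b).
p_a = {1: 0.30, 0: 0.70}
p_b_given_a = {1: {1: 0.80, 0: 0.20},   # p(b | a = 1)
               0: {1: 0.10, 0: 0.90}}   # p(b | a = 0)
p_c_given_b = {1: {1: 0.60, 0: 0.40},
               0: {1: 0.05, 0: 0.95}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# the local terms define a proper distribution: the joint sums to one
print(sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)))   # 1.0
print(joint(1, 1, 1))   # probability that all three risks occur together
```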
in total there are about 300 risks . + in our work we have considered only the frequencies of the risks and not the impacts . with our bn we construct the risks joint probability , and the impacts could be used in a later phase of scenario analysis to evaluate the loss distribution under the different scenarios . + table 1 shows the db structure used for network learning , in which each risk is considered as a binary variable ( one if the risk exists _ ( yes ) _ and zero if the risk does not exist _ ( no ) _ ) . therefore , for each considered risk in the network there will be one node with two states ( and ) . the task is , therefore , to find the conditional probability tables by using only the correlations and the marginal frequencies . instead , the net structure is obtained from table 1 by following the node relationships given by the correlations . + the main ideas for finding a way to construct a bn have been : first , to find the joint probabilities as functions of only the correlations and the marginal probabilities ; second , to understand how the correlations are linked with the incremental ratios or the derivatives of the child s probabilities as functions of the parent s probabilities . this choice is due to the fact that parent and child interact through the values of the conditional probabilities ; the derivatives are directly linked to such probabilities and , therefore , to the degree of interaction between the two nodes and , hence , to the correlation . + afterwards we have understood how to create the equations ; for the case with dependent parents we have used the local network topology to set the equations . + we have been able to calculate the cpt with up to three parents for each child . although it is possible to generalize to more than three parents , it is necessary to have more data , which are not available in our db . so when four or more parents are present we have decided to divide and reduce to cases with no more than three parents . to approximate the network we have `` separated '' the nodes that give the same effects on the child ( as for example the same correlations ) by using auxiliary nodes . when there was more than one possible scheme available we have used the mutual information ( mi ) criterion as a discriminating index , selecting the approximation with the highest total mi ; this is the same as choosing the structure with the minimum distance between the network and the target distribution . + we have first analyzed the case with only one parent to understand the framework , then we have examined what happens with two independent parents and then with dependent ones . finally we have used the analogies between the cases with one and two parents for setting the equations for three parents .
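the one - parent construction described in the next subsection can be sketched numerically : for two binary variables , the joint distribution is fixed by the two marginal frequencies and the pearson correlation between the indicators , and the conditional probability table follows by division . this is a reconstruction of the general idea under that assumption , not the authors ' exact equation system ; the feasibility check only verifies that all joint probabilities are non - negative .

```python
import math

def one_parent_cpt(x, y, rho):
    """CPT of a binary child C given a binary parent F, from the marginals
    P(F=1)=x, P(C=1)=y and the Pearson correlation rho between the two indicators.
    Returns {f: P(C=1 | F=f)}; raises if (x, y, rho) is not jointly feasible."""
    cov = rho * math.sqrt(x * (1 - x) * y * (1 - y))
    p11 = x * y + cov                      # P(F=1, C=1)
    joint = {(1, 1): p11, (1, 0): x - p11,
             (0, 1): y - p11, (0, 0): 1 - x - y + p11}
    if any(p < 0 for p in joint.values()):
        raise ValueError("marginals and correlation are not jointly feasible")
    return {1: joint[(1, 1)] / x, 0: joint[(0, 1)] / (1 - x)}

# example: parent risk with frequency 0.2, child with frequency 0.3, correlation 0.4
print(one_parent_cpt(0.2, 0.3, 0.4))
```

a correlation outside the feasible range would raise an error , which mirrors the feasible marginal correlation regions mentioned in the concluding remarks of this paper .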
the case with one parent ( figure 1 ) is the simplest . let p(f ) and p(c ) be the marginal probabilities given by the experts ( as in table 1 ) : * for the parent , f , we have : p(f = y)=x , p(f = n)=1-x ; * for the child , c , we have : p(c = y)=y , p(c = n)=1-y . ( figure 1 : one - parent scheme . ) the equations to find either the conditional probabilities or the joint probabilities follow from these marginals and from the correlation between f and c . so far we have seen that , using the equation systems for conditional and joint probabilities , the cpts can be obtained . the method can be generalized to the case with more than three parents , but there are problems in setting more parameters ( standardized joint moment ) and in looking for more complicated feasible marginal correlation areas . + so , to develop a network , we propose to use , separately , firstly the equations and procedure for one parent ; secondly those for two parents , distinguishing whether they are dependent or not . finally we use the equations and the procedures , when possible , for the three - parent case , distinguishing also in this situation between dependent and independent parents ; otherwise we split one parent from the others by using the mutual information as a splitting index . + we remark that we need to reduce to a simpler case those configurations with more than three parents . we can achieve this by trying to estimate a local approximate structure , with only one , two and three parents , by `` separating '' those that give different effects on the child ( as for instance different incremental ratios ) . if more schemes are available for the substitution , we select the one with the highest mi . + it is important to note that such a method is modular , that is , if we add or delete a node we can use the appropriate system ( one , two or three parents ) according to whether we add or delete a parent or a child . the authors acknowledge financial support from the miur - firb 2006 - 2009 project and the musing project , contract number 027097 . zagorecki a. and druzdzel m. ( 2004 ) . an empirical study of probability elicitation under noisy - or assumption . in _ proceedings of the seventeenth international florida artificial intelligence research society conference ( flairs 2004 ) _ , pp 800 - 885 . wiegmann d.a . ( 2005 ) . developing a methodology for eliciting subjective probability estimates during expert evaluations of safety interventions : application for bayesian belief networks . _ aviation human factors division _ , october , from www.humanfactors.uiuc.edu . onisko a. , druzdzel m. and wasyluk h. ( 2001 ) . learning bayesian network parameters from small data sets : application of noisy - or gates . _ international journal of approximate reasoning _ , vol . 27 , pp 165 - 182 . kleiter g.d . and jirousek r. ( 1996 ) . learning bayesian networks under the control of mutual information . _ proceedings in information processing and management of uncertainty in knowledge - based systems _ , pp 985 - 990 . druzdzel m.j . and van der gaag l.c . elicitation of probabilities for belief networks : combining qualitative and quantitative information . _ proceedings of the eleventh conference on uncertainty in artificial intelligence _ , pp 141 - 148 .
according to different typologies of activity and priority , risks can assume diverse meanings and can be assessed in different ways . + in general , risk is measured in terms of a combination of the probability of an event ( frequency ) and its consequence ( impact ) . to estimate the frequency and the impact ( severity ) , historical data or expert opinions ( either qualitative or quantitative data ) are used . moreover , qualitative data must be converted into numerical values to be used in the model . + in the case of enterprise risk assessment the considered risks are , for instance , strategic , operational , legal and image risks , which are often difficult to quantify . so in most cases only expert data , gathered by scorecard approaches , are available for risk analysis . + the bayesian network is a useful tool to integrate different sources of information and in particular to study the risks joint distribution by using data collected from experts . + in this paper we want to show a possible approach for building a bayesian network in the particular case in which only prior probabilities of node states and marginal correlations between nodes are available , and when the variables have only two states .
widespread deployment of networks of sensors and autonomous vehicles is expected to revolutionize our ability to monitor and control physical environments from remote locations . however , for such networks to achieve their full range of applicability , they must be capable of operating in uncertain and unstructured environments without centralized supervision . realizing the full potential of such systems will require the development of protocols that are fully autonomous , distributed , and adaptive in the face of changing environments . an important problem in this contextis the coverage problem : a network of mobile sensors should distribute itself over a region with a measurable field so that the likelihood of detecting an event of interest is maximized .if the probability distribution of the event is uniform over the area , then the optimal solution will involve a uniform spatial distribution of the agents . on the other handif this probability distribution is not uniform , then the sensors should be more densely positioned in the subregions that have higher event probability .there is a considerable literature on coverage algorithms for groups of dynamic agents ; we refer the reader to and the references therein . in ,uniform coverage algorithms are derived using voronoi cells and gradient laws for distributed dynamical systems .uniform constrained coverage control is addressed in where the constraint is a minimum limit on node degree .virtual potentials enable repulsion between agents to maximize coverage and attraction between agents to enforce the constraint . in ,gradient control laws are proposed to move sensors to a configuration that maximizes expected event detection frequency .local rules are enforced by defining a sensing radius for each agent , which also makes computations simpler .the approach is demonstrated for a nonuniform but symmetric density field with and without communication constraints .further results for distributed coverage control are presented in for a coverage metric defined in terms of the euclidean metric with a weighting factor that allows for nonuniformity . as in ,the methodology makes use of voronoi cells and lloyd descent algorithms .the paper considered the general nonuniform coverage problem with a non - euclidean distance , and it proposed and proved the correctness of a coverage control law in the plane .however , the control law of is only partially distributed , in that it relies on a `` cartogram computation '' step which requires some global knowledge of the domain .our work builds on the results of to design a control law for the nonuniform coverage problem in the one - dimensional case when the agents are positioned on the line .we develop fully distributed coverage control laws for a nonuniform field in this setting , and moreover , we prove quantitative convergence bounds on the performance of these algorithms .interestingly , we find that relatively modest increases in the capabilities and knowledge of each agent can translate into considerable improvements in the global performance .we begin with an introduction to the nonuniform coverage problem in section ii . 
in section iii , we present our first fully distributed control law for the coverage problem . the execution of this control law only requires the agents to be able to measure distances to their neighbors and to measure the field around their location . the main result of this section is theorem 1 , which demonstrates the correctness of the algorithm and gives a quantitative bound on its performance . we show that it takes the agents essentially on the order of n^2 rounds to come close to the optimal configuration regardless of the initial conditions . in section iv , we present another fully distributed control law for coverage . the execution of this control law requires more capabilities on the part of the agents : they store several numbers in memory , communicate these numbers to their neighbors at every round , and moreover , they know approximately ( within a constant factor ) how many agents there are in total . subject to these assumptions , we derive a considerable improvement over the simple static control law of section iii . the main result of this section is theorem 6 , which demonstrates the correctness of the algorithm and gives a quantitative bound on its performance . we show that it takes the agents essentially on the order of n rounds to come close to the optimal configuration regardless of initial conditions . this is an order of magnitude improvement over the control law of section iii . we introduce the nonuniform coverage problem in this section ; our exposition closely follows the expositions of . we consider mobile agents initially situated at arbitrary positions which , for simplicity , we henceforth assume to be located in the interval [ 0,1 ] . the environment is described by a density field , which measures the density of information or resource at each point . the goal is to bring the agents from their initial configuration to a static configuration that allows them to optimally sense in the density field . intuitively , we would like more agents to be positioned in areas where the density is high , and fewer agents positioned in areas where it is low . more formally , for any two points in [ 0,1 ] we define a density - weighted distance d_{\rho } between them . relative to the ordinary distance , this metric expands regions where the density is large and shrinks regions where it is small . following , we define the coverage of a set of points relative to the density field as \phi(x_1,\ldots , x_n , \rho ) = \max_{y \in [ 0,1 ] } \min_{i=1,\ldots , n } d_{\rho}(y , x_i ) . given the positions of the agents and the density field , computing the coverage requires computing the distance from any point in [ 0,1 ] to the nearest agent . we use \phi^{* } to denote the best ( smallest ) possible coverage , \phi^{* } = \min_{(x_1,\ldots , x_n ) \in [ 0,1]^n } \phi(x_1,\ldots , x_n , \rho ) . in this paper , we are concerned with designing control laws which drive the agents towards positions with nearly optimal coverage . as pointed out in , coverage with a nonuniform distance is closely related to information gathering and sensor array optimization problems . a typical problem is to minimize the shortest response time from a collection of vehicles to any point in a terrain of varying roughness . in that case , the non - euclidean distance appears because rougher bits of terrain take longer to traverse . another such problem is the detection of acoustic signals ; the objective is to place sensors so they can detect a source anywhere in an inhomogeneous medium . in that case , the non - euclidean distance appears as a result of the spatially varying refractive index of the environment .
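a minimal numerical sketch of these definitions is given below . it assumes , consistently with the change of variables introduced in the next section , that the density - weighted distance between two points is the integral of the density over the interval between them ; the example density , the agent positions and the grid resolutions are arbitrary assumptions .

```python
import numpy as np

def d_rho(a, b, rho, grid=2001):
    """Density-weighted distance: integral of rho between a and b (assumed form of the metric)."""
    s = np.linspace(min(a, b), max(a, b), grid)
    vals = rho(s)
    return float(np.sum((vals[:-1] + vals[1:]) * np.diff(s)) / 2.0)

def coverage(positions, rho, grid=401):
    """phi = max over y in [0, 1] of the d_rho-distance from y to the nearest agent."""
    ys = np.linspace(0.0, 1.0, grid)
    return max(min(d_rho(y, x, rho) for x in positions) for y in ys)

# an illustrative density, peaked near 0.7 (arbitrary assumption)
rho = lambda s: 1.0 + 5.0 * np.exp(-50.0 * (s - 0.7) ** 2)

equally_spaced = [0.25, 0.50, 0.75]
shifted_to_peak = [0.35, 0.65, 0.78]
print(coverage(equally_spaced, rho))    # larger: the high-density region is under-covered
print(coverage(shifted_to_peak, rho))   # smaller: agents concentrate where rho is large
```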
in that case, the non - euclidean distance appears as a result of the spatially varying refractive index of the environment .we now describe and analyze a simple distributed control law that drives the vehicles towards optimal coverage .first , we need to define the notion of a -weighted median between points .* definition : * the -median is defined as the point which satisfies due to the strict positivity of , it is easy to see that a unique such point exists for any .we can now state the coverage control law .we assume for convenience that agents are labeled from left to right .this makes it easier to state what follows ; however , the actual implementation of the algorithm does not require the use of these labels .* a static coverage control law : * the agents iterate as we first briefly outline how this scheme may be implemented without knowledge of the labels by the nodes. a node will initially check whether it has a left neighbor and a right neighbor , or whether it is a `` border agent '' with a single neighbor .suppose it has two neighbors .then , it will measure the distance to its left neighbor and to its right neighbor , and denoting its position ( which it does not know ) by , it will measure in the interval ] and its closest agent clearly equals .we next argue that is at least this big .consider any set of positions .since + \left [ d_{\rho}(x_2,x_3 ) \right ] + \cdots + \left [ d_{\rho}(x_{n-1},x_n ) \right ] \nonumber + \left [ d_{\rho}(0,x_1 ) + d_{\rho}(x_n,1 ) \right ] \label{sumeq } \end{aligned}\ ] ] and there are terms in brackets , we can conclude that at least one of the bracketed terms is at least .if the only such term is the last term , then one of , has value at least . butthis implies that the distance from either or to the closest agent is at least , which proves that in this case is at least that big . if , on the other hand , it is some term which is at least , then the distance from the median to the closest agent is at least , which again implies is at least this much . *q.e.d . *we next introduce a change of variables which makes our static control law easier to analyze .we define and note that .moreover , for any two points in ] figure 3 shows the results from a simulation with random initial conditions . in this case , is the largest value of random variables all uniform on ] .moreover , we assumed that each agent knows the total number of agents in the system , i.e. . 
figure 3 shows the results from a simulation with random initial conditions . in this case , is the largest value of random variables all uniform on [ 0,1 ] . moreover , we assumed that each agent knows the total number of agents in the system , i.e. . the left figure shows some snapshots from the progress of both algorithms , while the right figure shows the time until the stopping condition holds for the first time . the results show reasonably quick convergence for the random initial conditions . we see that the static control law has a convergence time which grows slower than the quadratic growth proved in theorem 1 , while the dynamic control law has a convergence time which appears to grow somewhat slower than the linear upper bound of theorem 6 . we have investigated distributed control laws for mobile , autonomous agents to position themselves on the line for optimal coverage in a nonuniform field . our main results are stated in theorems 1 and 6 . theorem 1 gives a quantitative upper bound on the convergence time of a relatively simple control law for coverage . theorem 6 discusses a dynamic control law which , while making stronger assumptions on the capabilities of each agent , manages to accomplish the coverage task in a number of rounds that is essentially linear in the number of agents . our work suggests a number of open questions . it is of interest to understand whether the increased capabilities of the agents in section 4 are really necessary to achieve better performance . in addition , it would be interesting to explore whether the results described here extend to two and higher dimensions , and in particular , whether a dynamic control law such as the one in theorem 6 might be useful for speeding up performance in more general settings . the authors would like to thank hari narayanan for useful discussions . a. ahmadzadeh , j. keller , g. j. pappas , a. jadbabaie , v. kumar , an optimization - based approach to time - critical cooperative surveillance and coverage with unmanned aerial vehicles , _ proceedings of the international symposium on experimental robotics , _ 2006 . p. barooah , p. g. mehta , j. hespanha , mistuning - based decentralized control of vehicular platoons for improved closed loop stability , _ ieee transactions on automatic control , _ vol . 54 , no . 9 , pp . 2100 - 2113 , 2009 . a. breitenmoser , m. schwager , j .- c . metzger , r. siegwart , d. rus , voronoi coverage of non - convex environments with a group of networked robots , _ proceedings of ieee international conference on robotics and automation , _ 2010 . j. cortes , s. martinez , f. bullo , spatially - distributed coverage optimization and control with limited - range interactions , _ esaim : control , optimisation , and calculus of variations , _ pp . 691 - 719 , 2005 . a. howard , m. mataric , g. sukhatme , mobile sensor network deployment using potential fields : a distributed , scalable solution to the area coverage problem , _ proceedings of the 6th international symposium on distributed autonomous robotics systems , _ 2002 . i. hussein , d. stipanovic , effective coverage control for mobile sensor networks with guaranteed collision avoidance , _ ieee transactions on control systems technology , _ pp . 642 - 657 , 2007 . t. wong , t. tsuchiya , t. kikuno , a self - organising algorithm for sensor placement in wireless mobile microsensor networks , _ international journal of wireless and mobile computing , _ vol . 3 , no . 1 - 2 , 2008 . y. zou , k. chakrabarty , sensor deployment and target localization based on virtual forces , _ proceedings of the twenty - second annual joint conference of the ieee computer and communications societies , _ 2003 .
this paper investigates control laws allowing mobile, autonomous agents to optimally position themselves on the line for distributed sensing in a nonuniform field. we show that a simple static control law, based only on local measurements of the field by each agent, drives the agents close to the optimal positions after a number of sensing/movement rounds that is essentially quadratic in the number of agents. further, we exhibit a dynamic control law which, under slightly stronger assumptions on the capabilities and knowledge of each agent, drives the agents close to the optimal positions after a number of sensing/communication/movement rounds that is essentially linear in the number of agents. crucially, both algorithms are fully distributed and robust to unpredictable loss and addition of agents.
during the late quaternary period , after the last glacial maximum ( lgm ) , from 18 kyears ago till present , a global warming was responsible for the melting of the glaciers leading to a fast increase in the sea level . in approximately 13 kyears , the sea level rised up to about 120 meters , reaching the actual levelhowever , the sea level did not go up in a continuous fashion , but rather , it has evolved in a pulsatile way , leaving behind a signature of what actually happened , the continental shelf , i.e. the seafloor .continental shelves are located at the boundary with the land so that they are shaped by both marine and terrestrial processes .sea - level oscillations incessantly transform terrestrial areas in marine environments and vice - versa , thus increasing the landscape complexity .the presence of regions with abnormal slope as well as the presence of terraces on a continental shelf are indicators of sea level positions after the last glacial maximum ( lgm ) , when large ice sheets covered high latitudes of europe and north america , and sea levels stood about 120 - 130 m lower than today .geomorphic processes responsible for the formation of these terraces and discontinuities on the bottom of the sea topography are linked to the coastal dynamics during eustatic processes associated with both erosional or depositional forcing ( wave cut and wave built terraces respectively ) . the irregular distribution of such terraces and shoreface sediments is mainly controlled by the relationship between shelf paleo - physiography and changes on the sea level and sediment supply which reflect both global and local processes .several works have dealt with mapping and modeling the distribution of shelf terraces in order to understand the environmental consequences of climate change and sea level variations after the lgm . in this period of time the sea - level transgression was punctuated by at least six relatively short flooding events that collectively accounted for more than 90 m of the 120 m rise .most ( but not all ) of the floodings appear to correspond with paleoclimatic events recorded in greenland and antarctic ice - cores , indicative of the close coupling between rapid climate change , glacial melt , and corresponding sea - level rise . in this work ,we analyze data from the southeastern brazilian continental shelf ( sbcs ) located in a typical sandy passive margin with the predominance of palimpsests sediments .the mean length is approximately 250 km and the shelfbreak is located at 150 m depth .it is a portion of a greater geomorphologic region of the southeastern brazilian coast called so paulo bight , an arc - shaped part of the southeastern brazilian margin .the geology and topography of the immersed area are very peculiar , represented by the mesozoic / cenozoic tectonic processes that generated the mountainous landscapes known as `` serra do mar '' . these landscapes ( with mean altitudes of 800 m ) have a complex pattern that characterize the coastal morphology , andleads to several scarps intercalated with small coastal plains and pocket beaches .this particular characteristic determines the development of several small size fluvial basins and absence of major rivers conditioning low sediment input , what tends to preserve topographic signatures of the sea - level variations . 
for the purpose of the present study , we select three parallel profiles acquired from echo - sounding surveys , since for all the considered profiles ,the same similar series of sequences of terraces were found .these profiles are transversal to the coastline and the isobaths trend , and they extend from a 20 m to a 120 m depth .the importance of understanding the formation of these ridges is that it can tell us about the coastal morphodynamic conditions , inner shelf processes and about the characteristics of periods of the sea level regimes standstills ( paleoshores ) . in particular , the widths of the terraces are related to the time the sea level `` stabilized '' .all this information is vital for the better understanding of the late quaternary climate changes dynamic .we find relations between the widths of the terraces that follow a self - affine pattern description .these relations are given by a mathematical model , which describes an order of appearance for the terraces .our results suggest that this geomorphological structure for the terraces can be described by a devil s staircase , a staircase with infinitely many steps in between two steps .this property gives the name `` devil '' to the staircase , once an idealized being would take an infinite time to go from one step to another .so , the seafloor morphology is self - affine ( fractal structure ) as reported in ref . , but according to our findings , it has a special kind of self - affine structure , the devil s staircase structure .a devil s staircase as well as other self - affine structure are the response of an oscillatory system when excited by some external force .the presence of a step means that while varying some internal parameter , the system preserves some averaged regular behavior , a consequence of the stable frequency - locking regime between a natural frequency of the system and the frequency of the excitation .this staircase as well as other self - affine structures are characterized by the presence of steps whose widths are directly related to the rational ratio between the natural frequency of the system and the frequency of the excitation . in a similar fashion , we associate the widths of the terraces with rational numbers that represent two hypothetical frequencies of oscillation which are assumed to exist in the system that creates the structure of the sbcs , here regarded as the sea level dynamics ( sld ) , also known as the sea level variations .then , once these rational numbers are found , we show that the relative distances between triples of terraces ( associated with some hypothetical frequencies ) follow similar scalings found in the relative distance between triples of plateaus ( associated with these same frequencies ) observed in the devil s staircase .the seafloor true structure , apart from the dynamics that originated it , is also a very relevant issue , specially for practical applications .for example , one can measure the seafloor with one resolution and then reconstruct the rest based on some modeling . as we show in this work ( sec .[ model ] ) , a devil s staircase structure fits remarkably well the experimental data .our paper is organized as follows . in sec .[ data ] , we describe the data to be analyzed . in sec .[ devil ] , we describe which kind of dynamical systems can create a devil s staircase and how one can detect its presence in experimental data based on only a few observations . 
in sec .[ devil_in_data ] , we show the evidences that led us to characterize the sbcs as a devil s staircase , and in sec . [ model ] we show how to construct seafloor profiles based on the devil s staircase geometry . finally , in sec .[ conclusao ] , we present our conclusions , discussing also possible scenarios for the future of the sea level dynamics under the perspective of our findings .the data consists of the tree profiles given in fig . [ meco_fig1](a - b ) .the profile considered for our analyzes is shown in fig .[ meco_fig1](b ) , where we show the continental shelf of the state of so paulo , in a transversal cut in the direction : inner shelf ( `` cost '' ) shelfbreak ( `` open sea '' ) .the horizontal axis represents the distance to the cost and the vertical axis , the sea level ( depth ) , .we are interested in the terraces widths and their respective depths .the profiles shown in fig .[ meco_fig1 ] were the result of a smoothing ( filtering ) process from the original data collected by sonar .the smoothing process is needed to eliminate from the measured data the influence of the oscillations of the ship where the sonar is located and local oscillations on the sea floor probably due to the stream flows .smaller topographic terraces could be smoothed or masked due to several processes such as : coastal dynamic erosional during sea - level rising , holocene sediment cover , erosional processes associated with modern hidrodynamic pattern ( geostrophic currents ) .for that reason we only consider the largest ones , as the ones shown in fig .[ meco_fig2 ] , ( located at with the width of ) .as one can see , the edges of the terraces are not so sharp as one would expect from a staircase plateau .again , this is due to the action of the sea waves and stream flows throughout the time . to reconstruct what we believe to be the original terrace, we consider that its depth is given by the depth of the middle point , and its width is given by the minimal distance between two parallel lines placed along the scarps of the terrace edges. using this procedure , we construct table [ table1 ] with the largest and more relevant terraces found . we identify a certain terrace introducing a lower index in and , according to their chronological order of appearance .more recent appearance ( closer to the cost , less deep ) smaller is the index .we consider the more recent data to have a zero distance from the cost , but in fact , this data is positioned at about 15 km away from the shore , where the bottom of the sea is not affected by the turbulent zone caused by the break out of the waves . 
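for readers who wish to reproduce this step, the rough sketch below extracts candidate terraces from a smoothed (distance, depth) profile by flagging low-slope runs and reporting the horizontal extent of each run as the terrace width and the depth of its midpoint as the terrace depth. this is only a simple proxy for the parallel-line construction described above, and the slope and width thresholds are illustrative values, not ones used in the paper.

```python
import numpy as np

def find_terraces(dist_km, depth_m, slope_thresh=0.5, min_width_km=1.0):
    # slope magnitude in metres of depth per kilometre of distance
    slope = np.abs(np.gradient(depth_m, dist_km))
    flat = slope < slope_thresh          # candidate terrace points
    terraces = []
    i = 0
    while i < len(flat):
        if flat[i]:
            j = i
            while j + 1 < len(flat) and flat[j + 1]:
                j += 1                   # extend the low-slope run
            width = dist_km[j] - dist_km[i]
            if width >= min_width_km:    # keep only the larger terraces
                mid = (i + j) // 2
                terraces.append((width, depth_m[mid]))  # (L_i, d_i)
            i = j + 1
        else:
            i += 1
    return terraces
```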
the profile of fig .[ meco_fig1](b ) was the one chosen among the other tree profiles because from it we could more clearly identify the largest number of relevant terraces ..terrace widths and depths .while the depths present no representative deviation , the deviation in the widths become larger for deeper terraces .the deviation in the widths is estimated by calculating the widths assuming many possible configurations between the placement of the two parallel lines used to calculate the widths .[ cols="^,^,^",options="header " , ] we do not expect to have eq .( [ regra_terrace_2 ] ) satisfied .we only require that the difference between the left and right hand sides of this equation , regarded as , is the lowest possible , among all possible values for and ( with ) , for a given , with the restriction that the considered largest terraces are related to largest plateaus of eq .( [ circle_map ] ) , and thus = and = , and .doing so , we find the rationals associated with the terraces , which are shown in table [ table2 ] .the minimal value of , denoted by }]=0.032002 , with = for the terrace 1 , and = , for the terrace 3 .we also find that } ] and $ ] , we find that .we have assumed that they could be either a daughter or a parent . from now on ,when convenient , we will drop the index and represent each terrace by the associated frequency ratio .so , the terrace 1 , for =1 , is represented as the terrace with .table [ table2 ] can be represented in the form of the farey tree as shown in fig .[ meco_fig3 ] .the branch of rationals in the farey tree in the form belongs to the most stable branch , which means that the observed terraces should have the largest widths .we believe that the other less important branches of the complete devil s staircase present in the data were smoothed out by the action of the waves and the flow streams throughout the time , and at the present time can not be observed .notice that as the time goes by , the frequency ratios are increasing their absolute value , which means that if this tendency is preserved in the future , we should expect to see larger terraces . in the following , we will try to recover in the experimental profile , the universal scaling laws of eqs .( [ scaling_1 ] ) and ( [ scaling_2 ] ) .regarding eq .( [ scaling_1 ] ) , we find that scales as , as shown in fig .[ meco_fig5 ] , which is the expected global universal scaling for a complete devil s staircase . regarding eq .( [ scaling_2 ] ) , and calculating , , and using the triple of terraces with widths , , and , as represented in fig . 
[ meco_fig6_1 ] , we find =0.89 .using the triple of terraces ( =3,=4,=5 ) , we find that =0.87 .both results are very close from the universal fractal dimension , found for a complete devil s staircase .motivated by our previous results , we fit the observed shelf as a complete devil s staircase , using eq .( [ circle_map ] ) .notice that the only requirement for eq .( [ circle_map ] ) to generate a complete devil s staircase is that the function has a cubic inflection point at the critical parameter .whether eq .( [ circle_map ] ) is indeed an optimal modeling for the shelf is beyond the scope of the present study .we only chose this map because it is a well known system and it captures most of the relevant characteristic a dynamical systems needs to fulfill in order to create a devil s staircase .we model the sbcs as a complete devil s staircase , but we rescale the winding number into the observed terrace depth .so , we transform the complete devil s staircase of fig .[ meco_fig7 ] as good as possible into the profile of fig .[ meco_fig1](b ) , by rescaling the vertical axis of the staircase in fig .[ meco_fig7 ] .we do that by first obtaining the function ( see fig .[ meco_fig4 ] ) whose application into the terrace depth gives the frequency ratio associated with the terrace .for the triple of terraces =(1/8,2/17,1/9 ) , we obtain )=0.14219 + 0.00057853d[km ] , \label{function_f1}\ ] ] and for the triple of terraces =(1/17,2/35,1/18 ) , we obtain )=0.080941 + 0.00029786d[km ] .\label{function_f2}\ ] ] therefore , we assume that , locally , the frequency ratios are linearly related to the depth of the terraces . then , we rescale the vertical axis of the staircases in figs .[ meco_fig7](a - b ) and calculate an equivalent depth , , for the winding number by using we also allow tiny adjustments in the axes for a best fitting .the result is shown in fig .[ meco_fig8](a ) for the triple of terraces =(1/8,2/17,1/9 ) and in fig .[ meco_fig8](b ) for the triple of terraces =(1/17,2/35,1/18 ) .we see that locally , for a short time interval , we can have a good agreement of the terrace widths and positions , with the rescaled devil s staircase . however , globally , the fitting in ( a ) does not do well , as it is to be expected since the function is only locally well defined and it changes depending on the depths of the terraces .notice however that this short time interval is not so short since the time interval correspondent to a triple of terraces is of the order of a few hundred years .the assumption made that is also supported from eq .( [ cutoff ] ) . using this equation, we can obtain an estimation of the maximum value of from a terrace with a frequency ratio that has the largest denominator . in our case , we observed . using =35 in eq .( [ cutoff ] ) , we obtain . in fig .[ meco_fig8](a ) , we see a 1/7 plateau positioned in the zero sea level . 
that is the current level .thus , the model predicts that nowadays we should have a large terrace , which might imply in an average stabilization of the sea level for a large period of time .however , this prediction might not correspond to reality if the sea dynamics responsible for the creation of the observed continental shelf suffered structurally modifications .we have shown some experimental evidences that the southern brazilian continental shelf ( sbcs ) has a structure similar to the devil s staircase .that means that the terraces found in the bottom of the sea are not randomly distributed but they occur following a dynamical rule .this finding lead us to model the sbcs as a complete devil s staircase , in which , between two real terraces , we suppose an infinite number of virtual ( smaller ) ones .we do not find these later ones , either because they have been washed out by the stream flow or simply due to the fact that the time period in which the sea level dynamics ( sld ) stayed locked was not sufficient to create a terrace . by our hypothesis, the sld creates a terrace if it is a dynamics in which two relevant frequencies are locked in a rational ratio .this special phase - locked dynamics possesses a critical characteristic : large changes in some parameter responsible for a relevant natural frequency of the sld might not destroy the phase - locked regime , which might imply that the averaged sea level would remain still . on the other hand , small changes in the parameter associated with an external forcing of the sld could be catastrophic , inducing a chaotic sld , what would mean a turbulent averaged sea level rising / regression . in order to interpret the shelf as a devil s staircase, we have shown that the terraces appear in an organized way according to the farey mediant , the rule that describes the way plateaus appear in the devil s staircase .that allow us to `` name '' each terrace depth , , by a rational number , , regarded as the hypothetical frequency ratio .arguably , these ratios represent the ratio between real frequencies that are present in the sld .it is not the scope of the present work to verify this hypothesis , however , one way to check if the hypothetical frequency ratios are more than just a mathematical artifact would be to check if the sld has , nowadays , two relevant frequencies in a ratio 1/7 , as predicted . the newly proposed approach to characterize the sbcs rely mainly on the ratios between terraces widths and between terraces depths . while single terrace widths and depths are strongly influenced by local properties of the costal morphology and the local sea level variations , the ratios between terrace widths and depths should be a strong indication of the global sea level variations .therefore , the newly proposed approach has a general character and it seems to be appropriated as a tool of analysis to other continental shelves around the world . 
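the farey-mediant organization of the terraces discussed above can be checked directly on the frequency ratios reported earlier: each "daughter" ratio is the mediant of its two neighbors on the tree. a minimal check, using python's standard fractions module (the two triples below are the ones quoted in the text):

```python
from fractions import Fraction

def mediant(a, b):
    # farey mediant of two rationals p/q and p'/q'
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

print(mediant(Fraction(1, 9), Fraction(1, 8)))    # 2/17, daughter of 1/9 and 1/8
print(mediant(Fraction(1, 18), Fraction(1, 17)))  # 2/35, daughter of 1/18 and 1/17
```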
recalling that the local morphology of the studied area, the `` serra do mar '', does not have a strong impact on the formation of the shelf, and assuming that the local sld is not directly involved in the formation of the large terraces considered in our analyses, our results should reflect mainly the action of the global sld. if the characteristics observed locally in the so paulo bight indeed reflect the effect of the global sld, then the global sld might be a critical system. hopefully, the environmental changes caused by modern humans have not yet produced any significant change in a relevant parameter of this global system. the profiles used in this work were obtained by fitting polynomial splines on a two-dimensional grid of points 10 km apart, which also limits the identification of terraces with small widths.
we show evidence that the southeastern brazilian continental shelf ( sbcs ) has a devil's staircase structure, with a sequence of scarps and terraces whose widths obey fractal formation rules. since the formation of these features is linked with sea-level variations, we conclude that the sea level changes in an organized, pulsating way. although the proposed approach was applied to a particular region of the earth, it is suitable to be applied in an integrated way to other shelves around the world, since the analysis favors the recovery of the global sea-level variations.
the conventional quantum theory has been tested very well only for nonrelativistic physical phenomena of microcosm .the quantum theory is founded on the nonrelativistic quantum principles .application of the quantum theory to relativistic phenomena of microcosm meets the problem of join of the nonrelativistic quantum principles with the principles of the relativity theory .many researchers believe that such a join has been carried out in the relativistic quantum field theory ( qft ) .unfortunately , it is not so , and the main difficulty lies in the fact that we do not apply properly the relativity principles . writing dynamic equations in the relativistically covariant form , we believe that we have taken into account all demands of the relativity theory . unfortunately , it is not so .the relativistic invariance of dynamic equations is the necessary condition of true application of the relativistic principles , but it is not sufficient .besides , it is necessary to use the relativistic concept of the state of the considered physical objects . for instance , describing a particle in the nonrelativistic mechanics , we consider the pointlike particle in the three - dimensional space to be a physical object , whose state is described by the particle position and momentum . the world line of the particle is considered to be a history of the particle , but not a physical object .however , in the relativistic mechanics the particle world line is considered to be a physical object ( but not its history ) . in this casethe pointlike particle is an intersection of the world line with the hyperplane : .the hyperplane is not invariant in the sense , that the set of all hyperplanes is not invariant with respect to the lorentz transformations .if we have several world lines , , ... , their intersections with the hyperplane form a set of particles in some coordinate system .the set of particles depends on the choice of the coordinate system . in the coordinate system ,moving with respect to the coordinate system we obtain another set of particles , taken at some time moment .if we have only one world line , we may choose the constant in such a way , that coincides with .however , if we have many world lines coincidence of sets and is impossible at any choice of the constant . in other words ,the particle is not an invariant object , and it can not be considered as a physical object in the relativistic mechanics . in the case of one world linewe can compensate the noninvariant character of a particle by a proper choice of the constant , but at the statistical description , where we deal with many world lines , such a compensation is impossible . in the nonrelativistic mechanicsthere is the absolute simultaneity , and the set hyperplanes is the same in all inertial coordinate systems . in this case intersections of world lines , , ... with the hyperplane form the same set of events in all coordinate systems , and particles are invariant objects , which may be considered as physical objects . 
strictly, the world line should be considered as a physical object also in the nonrelativistic physics , as far as the nonrelativistic physics is a special case of the relativistic one .but , in this case a consideration of a particle as a physical object is also possible , and this consideration is simpler and more effective , as far as the pointlike particle is a simpler object , than the world line .the above statements are not new .for instance , v.a .fock stressed in his book , that concept of the particle state is different in relativistic and nonrelativistic mechanics . as a rule researchers do not object to such statements , but they do not apply them in practice . the nonrelativistic quantum theory has been constructed and tested in many experiments . it is a starting point for construction of the relativistic quantum theory . in this paperwe try to investigate different strategies of the relativistic quantum theory construction , in order to choose the true one .however , at first we consider interplay between the fundamental physical theory and the truncated physical theory .the scheme of this interplay is shown in the figure .the fundamental theory is a logical structure .the fundamental principles of the theory are shown below .the experimental data , which are to be explained by the theory are placed on high . between themthere is a set of logical corollaries of the fundamental principles .it is possible such a situation , when for some conditions one can obtain a list of logical corollaries , placed near the experimental data .it is possible such a situation , when some circle of experimental data and of physical phenomena may be explained and calculated on the basis of this list of corollaries without a reference to the fundamental principles . in this casethe list of corollaries of the fundamental principles may be considered as an independent physical theory .such a theory will be referred to as the truncated theory , because it explains not all phenomena , but only a restricted circle of these phenomena ( for instance , only nonrelativistic phenomena ) .examples of truncated physical theories are known in the history of physics .for instance , the thermodynamics is such a truncated theory , which is valid only for the quasi - static thermal phenomena .the thermodynamics is an axiomatic theory .it can not be applied to nonstationary thermal phenomena . in this caseone should use the kinetic theory , which is a more fundamental theory , as far as it may be applied to both quasi - static and nonstationary thermal phenomena . besides , under some conditions the thermodynamics can be derived from the kinetic theory as a partial case .the truncated theory has a set of properties , which provide its wide application . 1 .the truncated theory is simpler , than the fundamental one , because a part of logical reasonings and mathematical calculations of the fundamental theory are used in the truncated theory in the prepared form . besides, the truncated theory is located near experimental data , and one does not need long logical reasonings for application of the truncated theory .2 . 
the truncated theory is a list of prescriptions , and it is not a logical structure in such extent , as the fundamental theory is a logical structure .the truncated theory is axiomatic , it contains more axioms , than the fundamental theory , as far as logical corollaries of the fundamental theory appear in the truncated theory as fundamental principles ( axioms ) .being simpler , the truncated theory appears before the fundamental theory .it is a reason of conflicts between the advocates of the fundamental theory and advocates of the truncated theory , because the last consider the truncated theory to be the fundamental one .such a situation took place , for instance , at becoming of the statistical physics , when advocates of the axiomatic thermodynamics oppugn against gibbs and boltzmann .such a situation took place at becoming of the doctrine of copernicus - galileo - newton , when advocates of the ptolemaic doctrine oppugn against the doctrine of copernicus - galileo - newton .they referred that there was no necessity to introduce the copernican doctrine , as far as the ptolemaic doctrine is simple and customary .only discovery of the newtonian gravitation law and consideration of the celestial phenomena , which can not be described in the framework of the ptolemaic doctrine , terminated the contest of the two doctrines .4 . constructing the truncated theory before the fundamental one, the trial and error method is used usually .in other words , the truncated theory is guessed , but not constructed by a logical way .the main defect of the truncated theory is an impossibility of its expansion over wider circle of physical phenomena .for instance , let the truncated theory explain nonrelativistic physical phenomena .it means , that the basic propositions of the truncated theory are obtained as corollaries of the fundamental principles and of nonrelativistic character of the considered phenomena . to expand the truncated theory on relativistic phenomena, one needs to separate , what in the principles of the truncated theory is a corollary of fundamental principles and what is a corollary of nonrelativistic character of the considered phenomena .a successful separation of the two factors means essentially a perception of the theory truncation and construction of the fundamental theory .if the fundamental theory has been constructed , the expansion of the theory on the relativistic phenomena is obtained by an application of the fundamental principles to the relativistic phenomena .the obtained theory will describe the relativistic phenomena correctly .it may be distinguished essentially from the truncated theory , which is applicable for description of only nonrelativistic phenomena .the conventional nonrelativistic quantum theory is a truncated theory , which is applicable only for description of nonrelativistic phenomena .it has formal signs of the truncated theory ( long list of axioms , simplicity , nearness to experimental data ) .truncated character of the nonrelativistic quantum theory is called in question usually by researchers working in the field of the quantum theory .the principal problem of the relativistic quantum theory is formulated usually as a problem of unification of the nonrelativistic quantum principles with the principles of the relativity theory ._ conventionally the nonrelativistic quantum theory is considered to be a fundamental theory_. 
the relativistic quantum theory is tried to be constructed without puzzling out , what in the nonrelativistic quantum theory is conditioned by the fundamental principles and what is conditioned by its nonrelativistic character .it is suggested that the linearity is the principal property of the quantum theory , and it is tried to be saved .however , the analysis shows that the linearity of the quantum theory is some artificial circumstance , which simplifies essentially the description of quantum phenomena , but it does not express the essence of these phenomena . the conventional approach to construction of the relativistic quantum theoryis shown by the dashed line in the scheme .following this line , the construction of the true relativistic quantum theory appears to be as difficult , as a discovery of the newtonian gravitation law on the basis of the ptolemaic conception , because in this case only the trial and errors method can be used .besides , even if we succeeded to construct such a theory , it will be very difficult to choose the valid version of the theory , because it has no logical foundation .in other words , the conventional approach to construction of the relativistic quantum theory ( invention of new hypotheses and fitting ) seems to lead to blind alley , although one can not eliminate the case that it appears to be successful .( the trial and error method appeared to be successful at construction of the nonrelativistic quantum mechanics ) .alternative way of construction of the relativistic theory of physical phenomena in the microcosm is shown in figure by the solid line .it supposes a derivation of fundamental principles and their subsequent application to the relativistic physical phenomena .elimination of the nonrelativistic quantum principles is characteristic for this approach .this elimination is accompanied by elimination of the problem of an unification of the nonrelativistic quantum principles with the relativity principles .simultaneously one develops dynamic methods of the quantum system investigation , when the quantum system is investigated simply as a dynamic system .these methods are free of application of quantum principles .they are used for investigation of both relativistic and nonrelativistic quantum systems .a use of logical constructions is characteristic for this approach .one does not use an invention of new hypotheses and fitting ( the trial and error method ) .it is assumed usually that quantum systems contain such a specific nonclassical object as the wave function .quantum principles is a list of prescriptions , how to work with the wave functions . in realitythe wave function is not a specific nonclassical object .the wave function is a complex hydrodynamic potential .any ideal fluid can be described in terms the hydrodynamic potentials ( clebsch potentials ) .in particular , it can be described in terms of the wave functions .prescriptions for work with description in terms of wave functions follows directly from definition of the wave function and from prescriptions for work with the dynamic systems of hydrodynamic type .quantum systems are such dynamic systems of hydrodynamic type , for which the dynamic equations are linear , if they are written in terms of the wave function .statistical ensemble ] become linear , if they are written in terms of the wave function . 
in this case the statistical ensemble ] .thus , the quantum systems are not enigmatic systems , described by a specific nonclassical object ( wave function ) .quantum systems are a partial case of dynamic systems , which may and must be investigated by conventional dynamic methods , applied in the fluid dynamics .the classical principles of dynamics and those of statistical description are fundamental principles of any dynamics and , in particular , of the quantum mechanics , considered to be a dynamics of stochastic particles . in other words ,the nonrelativistic quantum theory is truncated theory with respect to dynamics of the stochastic systems .transition to relativistic quantum mechanics means that one should apply the general principles of mechanics to the statistical ensembles of stochastic particles , whose regular component of velocity is relativistic .( stochastic component of velocity is always relativistic , even in the case , when the regular component is nonrelativistic ) .such a statistical description can be carried out in terms of the wave function .however , we can not state previously that dynamic equations will be linear , because in the relativistic case there is such a phenomenon as the particle production , which is absent in the classical relativistic mechanics and in the nonrelativistic quantum theory . at first sight, the direct way of transition from nonrelativistic quantum theory to the relativistic one seems to be more attractive , because it is simpler and it does not need a discovery of fundamental concepts . besides , it seems to be an unique way , if we believe that the nonrelativistic quantum theory is a fundamental theory ( but not a truncated one ) .unfortunately , following the quantum principles and this way , we come to a blind alley .this circumstance forces us to question , whether the nonrelativistic quantum theory is a fundamental theory ( but not a truncated one ) .we shall refer to the path , shown by the dashed line as the direct path ( direct approach ) .the path , shown by the solid line will be referred to as the logical path ( logical approach ) .investigation of the two approaches and of investigation strategies connected with them is the main goal of this paper .the logical path seems to be more adequate , but at the same time it seems to be more difficult .there are two different methods of presentations of our investigation : ( 1 ) description of problems of the direct path from the viewpoint of the logical one , ( 2 ) description of those problems of the direct path which have lead to a refusal from the direct path in favour of the logical one . in this paperwe prefer to use the second version .the particle production is the physical phenomenon , which is characteristic only for quantum relativistic physics .this phenomenon has no classical analog , because it is absent in the classical relativistic physics .this phenomenon is absent in the nonrelativistic quantum physics . at the classical descriptionthe particle production is a turn of the world line in the time direction .according to such a conception the particles are produced by pairs particle antiparticle . in classical physicsthere is no force field , which could produce or annihilate such pairs .if the world line describes the pair production , some segment of this world line is to be spacelike . 
at this segmentwe have where is the metric tensor and is the equation of the world line .on the other hand , the action for the free classical relativistic particle has the form = -\int mc\sqrt{g_{ik}\frac{dx^{i}}{d\tau } \frac{dx^{k}}{d\tau } } d\tau \label{b1.2}\]]relations ( [ b1.1 ] ) and ( [ b1.2 ] ) are incompatible .they become compatible , if there is such a force field , which changes the particle mass . for instance , if instead of the action ( [ b1.2 ] ) we have = \int l\left ( x,\dot{x}\right ) d\tau , \qquad l =- m_{\mathrm{eff}}c\sqrt{g_{ik}\frac{dx^{i}}{d\tau } \frac{dx^{k}}{d\tau } } , \qquad m_{\mathrm{eff}}=m\sqrt{1+f } \label{b1.3}\]]where is the effective particle mass , and is some external force field , which changes the effective particle mass .if , the effective mass is imaginary , the condition ( [ b1.1 ] ) takes place in the region , where , and the interdict on the pair production , or on the pair annihilation is violated .further we shall use the special term wl for the world line considered as a physical object .the term `` wl '' is the abbreviation of the term `` world line '' . along with the termwl we shall use the term `` emlon '' , which is the perusal of russian abbreviation `` ml '' , which means world line .investigation of the emlon , changing its direction in the time direction and describing pair production ( or annihilation ) , shows , that some segments of the emlon describe a particle , whereas another segments describe an antiparticle .the particle and antiparticle have opposite sign of the electric charge .the energy of the particle and that of the antiparticle is always positive . the time components , of the canonical momentum of the particle and that of the antiparticle have opposite sign , if the world line ( wl ) is considered as a single physical object ( single dynamic system ) .they may have the same sign and coincide with , if different segments of the emlon are associated with different dynamic systems ( particles and antiparticles ) .description of the annihilation process as an evolution of two different dynamic systems ( particle and antiparticle ) , which cease to exist after collision , is incompatible with the conventional formalism of classical relativistic dynamics , where dynamic systems may not disappear .however , description of this process as an evolution of some pointlike object swl moving along the world line in the direction of increase of the evolution parameter is possible from viewpoint of the conventional formalism of the relativistic physics .the object swl is the abbreviation of the term `` section of world line '' .along with the term `` swl '' we shall use also the term `` esemlon '' .it is the perusal of russian abbreviation `` sml '' , which means `` section of the world line '' .the esemlon is the collective concept with respect to concepts of particle and antiparticle . in the process of evolutionthe esemlon may change its state ( particle or antiparticle ) .such an approach is compatible with the relativistic kinematics .the investigation shows that the energy and the temporal component of the canonical momentum are different quantities , which may coincide , only if there is no pair production . 
in the presence of pair production the equality for the whole world line is possible also in the case , when the whole world line is cut into segments , corresponding to particles and antiparticles , and each segmentis considered to be a single dynamic system .it is generally assumed that the perturbation theory and the divergencies are the main problems of qft . in reality, it is only a vertex of iceberg .the main problem lies in the definition of the commutation relations .we demonstrate this in the example of the dynamic equation is the scalar complex field and is the hermitian conjugate field , is the self - action constant .there are two different schemes of the second quantization : ( 1 ) -scheme , where particle and antiparticle are considered as different physical objects and ( 2 ) -scheme , where world line ( wl ) is considered as a physical object .a particle and an antiparticle are two different states of swl ( or wl ) .these two schemes distinguish in the commutation relations , imposed on the operators and ( see for details ) . in the -schemethere is indefinite number of objects ( particles and antiparticles ) which can be produced and annihilated .the commutators _ { -} ] vanish _ { -}=0,\qquad \left [ \varphi \left ( x\right ) , \varphi ^{\ast } \left ( x^{\prime } \right ) \right ] = 0,\qquad \left\vert x - x^{\prime } \right\vert ^{2}<0 \label{b3.2}\]]if interval between the points and is spacelike .the -scheme tried to describe the entire picture of the particle motion and their collision .it is a very complicated picture .it can be described only in terms of the perturbation theory , because the number of physical objects ( objects of quantization ) is not conserved .the commutation relation , which are used in the -scheme are _ incompatible with the dynamic equations_. as a result the -scheme of the second quantization appears to be inconsistent . in the -scheme of the second quantizationthe number of objects of quantization ( wl ) is conserved , and one can divide the whole problem into parts , containing one wl , two wls , three wls etc . each of parts may be considered and solved independently .the statement of the problem reminds that of the nonrelativistic quantum mechanics , where the number of particles is conserved . as a resultthe whole problem may be divided into one - particle problem , two - particle problem , etc , and each problem can be solved independently . according to such a division of the whole problem into several simpler problems ,the problem of the second quantization in -scheme is reduced to several simpler problems . as a resultit may be formulated without a use of the perturbation theory ( see for details r2001 ) .commutation relations in the -scheme do not satisfy the condition ( [ b3.2 ] ) .this circumstance is connected with the fact that the objects of quantization ( wls ) are lengthy objects .if there is the particle production , wls are spacelike in the sense that they may contain points and , separated by the spacelike interval .there are such dynamic variables at and at , lying on the same wl , for which the commutator does not vanish , and it is a reason for violation of conditions ( [ b3.2 ] ) in the -scheme of quantization .the commutation relations in -scheme are compatible with dynamic equations . 
besides, simultaneous commutation relations depend on the self - action constant .the -scheme of the second quantization is consistent and compatible with dynamic equations .it can be solved by means of nonperturbative methods .however , the pair production is absent in the -scheme , even if the self - action constant .one believes , that there is the pair production in the -scheme .however , the -scheme is inconsistent , and the pair production is a corollary of this inconsistency .thus , neither -scheme nor -scheme of quantization can derive the pair production effect .it is connected with the fact , that the self - action of the form ( [ b3.1 ] ) , as well as other power interactions can not generate pair production . to generate the pair production, one needs interaction of special type .advocates of the -scheme state that in the -scheme the causality principle is violated , because the conditions ( [ b3.2 ] ) are not fulfilled .it is not so , because the causality principle has the form ( b3.2 ) only for pointlike objects . for lengthy objects ( spacelike world lines )the causality principle has another form .the condition ( [ b3.2 ] ) states simply that the dynamic variables of different dynamic systems commutate .but in the case of spacelike wl the points and , separated by the spacelike interval , may belong to the same dynamic system . in this casethe condition ( [ b3.2 ] ) has to be violated .but independently of whether or not the advocates of the -scheme are right , the dynamic equation ( [ b3.1 ] ) does not describe the pair production , and appearance of the pair production is a result of incompatibility of the commutation relations with the dynamic equation .conventionally one considers the commutation relations as a kind of initial conditions for the dynamic equations . as a result onedoes not see a necessity to test the compatibility of the commutation relations with the dynamic equations .in reality the analogy between the commutation relations and initial conditions is not true .the commutation relations are additional constraints imposed on the dynamic variables ._ compatibility of additional constraints with the dynamic equations is to be tested . _dependence of the simultaneous commutation relations on the self - action constant in the -scheme , where such a compatibility takes place , confirms the necessity of such a test .thus , a direct application of the quantum mechanics formalism to a relativistic dynamic systems leads to impossibility of the particle production description .it means that we should understand the essence of the quantum mechanics formalism and revise its form in accordance with the revised understanding of the quantum mechanics .to show that the linearity of quantum mechanics formalism is not an essential inherent property of the fundamental theory , we consider the schrdinger particle , which is the dynamic system described by the action = \int \left\ { \frac{i\hbar } { 2}\left ( \psi ^{\ast } \partial _ { 0}\psi -\partial _ { 0}\psi ^{\ast } \cdot \psi \right ) -\frac{\hbar ^{2}}{2m}\mathbf{\nabla } \psi ^{\ast } \mathbf{\nabla } \psi \right\ } d^{4}x \label{c2.1}\]]where is a complex wave function .the meaning of the wave function ( connection between the particle and the wave function ) is described by the relations . 
define the mean value of any function of position and momentum .we shall refer to the relation ( [ c2.4 ] ) together with the restrictions imposed on its applications as the quantum principles , because von neumann has shown , that all propositions of quantum mechanics can be deduced from relations of this type .thus , the action ( [ c2.1 ] ) describes the quantum mechanics formalism ( dynamics ) , whereas the relation ( [ c2.4 ] ) forms a basis for the conventional interpretation of the quantum mechanics .dynamic equation by the action ( [ c2.1 ] ) is linear , and one believes that this linearity together with the linear operators , describing all observable quantities , is the inherent property of the quantum mechanics . the quantum constant is supposed to describe the quantum properties in the sense , that setting in the quantum description , we are to obtain the classical description .however , setting in the action ( [ c2.1 ] ) , we obtain no description .all terms in the action contain , and it seems that the description by means of the action ( [ c2.1 ] ) is quantum from the beginning to the end .in reality the principal part of the dynamic system is classical , and the quantum description forms only a part of the general description . in other words , description in terms of the action ( [ c2.1 ] )is an artificial description . to show this , we transform the wave function , changing its phase ( [ c3.1 ] ) in the action ( [ c2.1 ] ), we obtain = \int \left\ { \frac{ib}{2}\left ( \psi _ { b}^{\ast } \partial _ { 0}\psi _ { b}-\partial _ { 0}\psi _ { b}^{\ast } \cdot \psi _ { b}\right ) -\frac{b^{2}}{2m}\mathbf{\nabla } \psi _ { b}^{\ast } \mathbf{\nabla } \psi _ { b}\right.\]] change of variables leads to the replacement and to appearance of two nonlinear terms which compensate each other , if .the change of variable does not change the dynamic system , although the dynamic equation becomes nonlinear , if however , the description in terms of appears to be natural in the sense , that after setting , the action ] of free classical particles .the action } ] . the statistical ensemble ] ( [ c3.6a ] ) is a partial case ( irrotational flow ) of the dynamic system ] to the action ] is an arbitrary constant of integration ( gauge constant ) .arbitrary integration functions are `` hidden '' inside the wave function .thus , the limit of schrdinger particle ( [ c3.2 ] ) at is a statistical ensemble ] of free classical ( deterministic ) particles suggests the idea , that the schrdinger particle is in reality a statistical ensemble ] of free deterministic particles we obtain the action } \left [ \mathbf{x}\right ] = \int \frac{m}{2}\left ( \frac{d\mathbf{x}}{dt}\right ) ^{2}dtd\mathbf{\xi } \label{c3.7}\]]where is a 3-vector function of independent variables .the variables ( lagrangian coordinates ) label particles of the statistical ensemble ] is a dynamic system of hydrodynamic type . the statistical ensemble ] of free stochastic relativistic particles , which is the dynamic system described by the action } \left [ x , u\right ] = -\int m_{\mathrm{eff}}c\sqrt{g_{ik}\dot{x}^{i}\dot{x}^{k}}d\tau d\mathbf{\xi , \qquad } \dot{x}^{k}\equiv \frac{dx^{k}}{d\tau } \label{c5.2}\]]where , , . here the effective mass is obtained from the mass of the deterministic ( classical ) particle by means of the change the mean value of the 4-velocity stochastic component . 
using the relation is convenient to introduce the 4-velocity , with having dimensionality of the length .the action ( [ c5.2 ] ) takes the form } \left [ x,\kappa \right ] = -\int mck\sqrt{g_{ik}\dot{x}^{i}\dot{x}^{k}}d\tau d\mathbf{\xi , \qquad } k\mathbf{=}\sqrt{1+\lambda ^{2}\left ( g_{ik}\kappa ^{i}\kappa ^{k}+\partial _ { k}\kappa ^{k}\right ) } \label{c5.5}\]]where is the compton wave length of the particle and the metric tensor . in the nonrelativistic approximationthe action ( [ c5.5 ] ) turns in the action ( [ c4.1 ] ) , which takes the form = \int \left\ { -mc^{2}+\frac{m}{2}\left ( \frac{d\mathbf{x}}{dt}\right ) ^{2}+\frac{m}{2}\mathbf{u}^{2}-\frac{\hbar } { 2}\mathbf{\nabla u}\right\ } dtd\mathbf{\xi } \label{c5.6}\]]deriving ( [ c5.6 ] ) , we choose the parameter , take into account the relation ( [ c5.4 ] ) and neglect as compared with . in the general case ( [ c5.5 ] )the -field may be also represented in the form of gradient as well as in the case ( [ c4.5]) is the scalar field of such a form , that satisfies the inhomogeneous wave equation . using proper change of variables, one can introduce the wave function , satisfying the klein - gordon equation .at such a change of variables the -field appears to be hidden in the wave function and its remarkable properties appear to be hidden . as well as the diffusion velocity in ( [ c4.1a ] ) the -field describes the mean value of the stochastic component of the particle velocity . in the non - relativistic case ( [ c4.1a ] ) the 3-velocity determined uniquely by its source ( the density of particles ) .but the -field is a relativistic field , which may escape from its source and exist separately from its source .besides , the -field can change the effective particle mass , as one can see from the expression ( [ c5.5 ] ) for the action .in particular , if particle mass becomes imaginary . in this case the mean world line of the particle is spacelike , and the pair production becomes to be possible .in other words , the -field can produce pairs .the property of pair production is a crucial property of the quantum relativistic physics .the classical fields ( electromagnetic , gravitational ) do not possess this property . as we have seen in the second section the description in framework of the conventional qfthas problems with the pair production description .there is a hope , that the proper statistical description of several relativistic stochastic particle will admit one to obtain the pair production effect .for instance , maybe , two colliding relativistic particles can produce pairs by means of their common -field .we hope that such a program may appear to be successful , provided the proper formalism of the statistical ensemble will be developed .today we have only the developed formalism for statistical description of stochastic system consisting of one emlon ( wl ) .formalism for statistical description of stochastic system consisting of several emlons ( wls ) is not yet developed properly .construction of the relativistic quantum theory is a very difficult problem .but solution of this problem depends not only on the difficulty of the problem in itself .it depends also on qualification of researchers , investigating this problem , on effectiveness of the applied investigation methods , on capability of researchers to logical reasonings and on other factors . 
in this section we try to analyze the character of these difficulties. we separate them into two parts: objective difficulties and subjective difficulties. the objective difficulties have been discussed above; here we discuss the subjective difficulties and mistakes. discovery and correction of these mistakes is difficult because of their subjective character. in our opinion, the main difficulty is a deficit of logic (a predominance of the trial and error method over logical reasoning) in the construction of the relativistic quantum theory. in particular, this deficit of logic shows up as a disregard of the demands imposed by the relativity principles. let us consider briefly the history of the question. at the beginning of the 20th century there were attempts to construct nonrelativistic quantum mechanics as a statistical description of stochastic particles. in these attempts the statistical description was taken to be a probabilistic description. the incompatibility of the probabilistic description with the relativity principles was not recognized, because one ignored the circumstance that the world line (and not the particle) is the physical object. because of this mistake the statistical conception of quantum mechanics could not be constructed. instead, the axiomatic conception of quantum mechanics was constructed by means of the trial and error method. after this success the trial and error method became the principal method of investigation in quantum theory. it became predominant because it is insensitive to mistakes in the fundamental physical principles, whereas the classical method of investigation, which runs back to isaac newton, is founded on logic; a method founded on logic cannot lead to correct results if the fundamental principles contain mistakes or are applied incorrectly. in the first half of the 20th century there were still classics of physics who knew and used the classical logical method of investigation .
in the last half of the 20th century, there were only researchers , using the trial and error method .the predominancy of the trial and error method is explained by two factors .firstly , it appeared to be successful in application to construction of the nonrelativistic quantum theory .secondly , the classical logical method appeared to be forgotten , because new generations of the researcher were educated on the example of the successful application of the trial and error method , which was perceived as the only possible method of investigation .any ambitious theorist dreamed to invent such hypothesis ( maybe , very exotic ) , which could be explain at once the mass spectrum of elementary particles and solve other problems of qft .development of the microcosm physics turned into competition of such hypotheses .the circumstance , that the correct application of the relativity principles ( the correct application of the fact that the world line is a physical object ) may be important from the practical viewpoint became to be clear for the author of this paper after investigation of the world line properties .two important results followed from this paper : ( 1 ) the quantum mechanics as a statistical theory can be constructed , if one uses the relativistic concept of the state and construct the statistical description without a use the probability theory , ( 2 ) the perturbation theory at the second quantization may be eliminated , if the conservation law of physical objects ( world lines ) is taken into account ) .the paper was reported at the seminar of the theoretical department of the lebedev physical institute .relation to the paper was sceptical as far as results were obtained without any additional suppositions ( and this was unusual ) .besides , many researchers did not believe , that it was possible a classical description of the world line , making a zigzag in the time direction .further the calculation were tested , and all objections were eliminated .it was clear to the author of paper , that the first and the second results were incompatible .he believed that the quantum mechanics is the statistical description of random world lines and the quantum principles are to be corollaries of this description .however , in that time the integration of hydrodynamic equations was not known , and from the mathematical viewpoint the statistical description could not be considered as a starting point of the quantum description .the second result admitted one to separate the problem of the second quantization in to parts and to solve exactly the one - emlon problem and the two - emlon problem without a use of the perturbation theory .the author expected that the further development of the second quantization led the problem into the blind alley , provided the fitting is not used . from his viewpoint it should prove that development of qft in the direction of the second quantization were erroneous .no additional hypotheses were used , to avoid a charge in a use of erroneous hypotheses , leading to a strange result ( absence of the pair production ) . in particular, one uses neither the perturbation theory , nor cut off the self - action at the time tending to infinity .( usually the two hypotheses are always used ) . 
under these conditions the absence of pair production meant that _ the strategy of second quantization is in itself erroneous _, since a relativistic quantum field theory in which there is no pair production cannot be true. when the corresponding paper was submitted to a scientific journal, it was rejected on the basis of the referee's review, which read: ``i do not recommend the paper for publication, because the author himself stated that in his method of quantization the pair production is absent.'' (the paper has not been published, and now it can be found only at the site.) we see here a sample of the logic based on the trial and error method, which does not accept papers with a negative result. the referee does not admit the existence of investigation methods other than the trial and error method. indeed, since in the trial and error method all hypotheses are random, tests leading to a negative result are of no interest. in the logical method of investigation a negative result means that the starting premises are false (provided, of course, that there are no mistakes in the investigation itself). unfortunately, the approach of the referee is typical. most researchers are apt to use only the trial and error method and cannot imagine anything other than this method. over thirty years the author of this paper had occasion to discuss the correctness of the second quantization procedure with colleagues dealing with qft. some of them agreed that the commutation relations may perhaps be incompatible with the dynamic equations, but at the same time they stated that this means nothing, because qft agrees with the experimental data very well. the circumstance that the experimental data are explained by means of an inconsistent theory raised no objections from them. such an approach is a corollary of the predominant trial and error method. we think that this method is the main obstacle on the path to construction of the relativistic quantum theory. _ one can find and correct mistakes in the theory, but a change of mentality needs some time. this time may be rather long. _ we have seen that the nonrelativistic quantum theory can be presented as a statistical description of stochastic particles, if we apply the relativity principles correctly and use the _ dynamic conception _ of the statistical description. in fact, the nonrelativistic quantum theory was developed mainly by the trial and error method, and the appearance of the quantum principles was a result of this method. the trial and error method is effective for the investigation of single physical phenomena of unknown nature, because it allows one to discover new concepts adequate to the phenomenon considered. however, it is inadequate for the construction of a fundamental theory, because a fundamental theory is a logical structure that systematizes our knowledge. this systematization requires long chains of logical reasoning, because it is based on a few fundamental propositions. the systematization, and the fundamental theory itself, is very sensitive to possible mistakes in the logical reasoning and in the mathematical calculations: any mistake must be analyzed and eliminated. on the contrary, the trial and error method is insensitive to mistakes.
it is a multiple-path method. usually, before a correct solution is obtained, one suggests and tests many different hypotheses, and only one of them may turn out to be true. since the hypotheses are suggested at random, it is useless to analyze the erroneous ones: such an analysis gives nothing, because the hypotheses are connected neither with one another nor with the correct result finally obtained. if instead we use logical reasoning based on the fundamental principles and obtain an incorrect result, it means that either we have made a mistake or the fundamental principles are incorrect. thus, in the logical approach we must discover and analyze our mistakes; this is useful for correcting our reasoning. in the engineering approach to the construction of a fundamental theory, when the trial and error method is used, the discovery and analysis of a possible mistake is useless. furthermore, a case is possible where the obtained result is incorrect although it agrees with the experimental data. let us illustrate this with an example relating to the problem of constructing the relativistic quantum theory. we discuss the problem of whether the dirac particle is pointlike or has an internal structure. the dirac particle is the dynamic system described by the action (denoting the action functional by $\mathcal{A}_{\mathrm{D}}$)
\[
\mathcal{A}_{\mathrm{D}}[\bar{\psi},\psi]=c^{2}\int \left( -mc\bar{\psi}\psi +\frac{i}{2}\hbar \bar{\psi}\gamma ^{l}\partial _{l}\psi -\frac{i}{2}\hbar \partial _{l}\bar{\psi}\,\gamma ^{l}\psi -\frac{e}{c}A_{l}\bar{\psi}\gamma ^{l}\psi \right) d^{4}x, \label{b5.1}
\]
where $m$ and $e$ are respectively the mass and the charge of the dirac particle, and $c$ is the speed of light. here $\psi$ is the four-component complex wave function, $\psi^{*}$ is the hermitian conjugate wave function, and $\bar{\psi}$ is the dirac conjugate one. the quantities $\gamma^{l}$, $l=0,1,2,3$, are complex constant matrices satisfying the relation $\gamma ^{l}\gamma ^{k}+\gamma ^{k}\gamma ^{l}=2g^{lk}I$, where $I$ is the identity matrix and $g^{lk}$ is the metric tensor. the quantity $A_{l}$, $l=0,1,2,3$, is the electromagnetic potential. the action ([b5.1]) generates the dynamic equation of the dynamic system, known as the dirac equation,
\[
\gamma ^{l}\left( i\hbar \partial _{l}-\frac{e}{c}A_{l}\right) \psi -mc\,\psi =0, \label{b5.3}
\]
together with the expressions for the physical quantities: the 4-flux of particles, $j^{l}=\bar{\psi}\gamma ^{l}\psi$, and the energy-momentum tensor. if the dirac particle is not pointlike and has an internal structure described by some additional degrees of freedom, this structure must also be present in the nonrelativistic approximation. conventionally one assumes that the pauli particle is the nonrelativistic approximation of the dirac particle (see, for instance, ). in the particular case when the electromagnetic potential vanishes, the pauli particle is the dynamic system described by the pauli dynamic equation ([b5.5]), written in terms of the two-component complex wave function, the levi-civita pseudotensor and the pauli matrices. the pauli particle is a pointlike particle which has no internal structure. this fact agrees with the experimental data. hence, if the pauli particle is the nonrelativistic approximation of the dirac particle, the dirac particle is also pointlike and has no internal structure. the pauli equation ([b5.5]) is of lower order (four first-order real equations) than the dirac equation ([b5.3]) (eight first-order real equations). this means that at the transition from the dirac equation to the pauli equation the order of the system of dynamic equations is reduced, and several degrees of freedom are lost. the equation ([b5.5]) is obtained from equation ([b5.3]) in the nonrelativistic limit.
some temporal derivatives in the dirac equation ([b5.3]) enter with small coefficients. these terms are usually neglected, and the order of the system of dynamic equations is thereby reduced. however, the neglected terms are terms with a small parameter multiplying the highest derivative. one may not neglect such terms, because at very high frequencies (of the order of $mc^{2}/\hbar$) they become of the same order as the remaining terms. neglecting these terms, we neglect the high-frequency degrees of freedom. in reality, the states of the dirac particle that are linear superpositions of the conventional low-frequency state with a high-frequency state are unstable, because in this case the 4-current oscillates with a frequency of the order of $mc^{2}/\hbar$. the dirac particle is charged; as a result the energy of the high-frequency excitation is radiated in the form of electromagnetic waves, and the dirac particle ends up in a state where the 4-current is constant. thus, from the experimental viewpoint the additional high-frequency degrees of freedom of the dirac particle do not exist, because they are not observable. can one discover these degrees of freedom theoretically, from an analysis of the dirac particle? yes, one can discover them by a scrupulous analysis within the framework of conventional quantum mechanics. but they have not been discovered. we are not sure whether the theory of differential equations with a small parameter multiplying the highest derivative was known in the first half of the 20th century, but it was definitely known in the second half. nevertheless the necessary analysis has not been carried out, and the dirac particle was considered pointlike. it is true that the high-frequency degrees of freedom are not observable at low energies and give no contribution to the description of the dirac particle in the nonrelativistic case. one may assume that these degrees of freedom are absent in the nonrelativistic case, and this assumption agrees with the experimental data. however, in high-energy collisions of dirac particles these degrees of freedom may be excited and may make an essential contribution to the description of the collision process. why has one not discovered these degrees of freedom theoretically? because researchers used the trial and error method, where the only criterion of validity of a theory is agreement with experiment; logical reasoning and mistakes in the derivations are of no importance, provided they do not violate agreement with experiment. if we take into account that the nonrelativistic quantum theory was created by means of the trial and error method, and that three generations of microcosm researchers have been educated on the example of this method, we recognize that the internal degrees of freedom of the dirac particle could not have been discovered before the execution of proper high-energy experiments with dirac particles. the internal degrees of freedom of the dirac particle were discovered in an investigation of the dirac particle by dynamic methods, which use the logical approach to investigation. the dynamic methods are attentive to logic and to mistakes of investigation; they are not oriented toward the trial and error method and toward agreement with experiment.
besides, investigation of the dirac equation by the dynamic methods has shown that the description of the internal degrees of freedom is nonrelativistic. this means that the whole dirac equation ([b5.1]) is a nonrelativistic equation, although it is written in relativistically covariant form. mathematically, the nonrelativistic character of the dirac equation means that the set of all its solutions is not invariant, in general, under some poincaré transformations. from the viewpoint of researchers who believe only in experimental tests (and not in logical reasoning) the dirac equation is relativistic, because only the internal degrees of freedom are described nonrelativistically, and these degrees of freedom are ignored in the conventional approach. publication of the corresponding papers in refereed journals proved impossible, because the referee declared that he could not believe that the dirac equation is nonrelativistic; this declaration appeared to be sufficient for rejection of the paper. i think that the opinion of the referee reflects the viewpoint of the statistically average researcher, and this opinion is erroneous. the problem of the relativistic invariance of the dirac equation is discussed in detail in ; here we shall not go into details. we refer only to the theorem formulated by anderson, which states: _ the symmetry group of dynamic equations, written in relativistically covariant form, coincides with the symmetry group of the absolute objects. _ the absolute objects are quantities which are the same for all solutions of the dynamic equations. in the dirac equation ([b5.3]) the constant $\gamma$-matrices are absolute objects; considering the free dirac equation for simplicity, they single out a constant unit timelike 4-vector (denote it $f^{k}$). the symmetry group of this 4-vector and of the constant matrices is the group of translations together with the group of rotations around the direction defined by the unit 4-vector $f^{k}$. this 7-parametric group is a subgroup of the 10-parametric poincaré group. hence, the dirac equation is not relativistic. the dirac equation differs from the klein-gordon equation in the sense that the absolute objects of the klein-gordon equation have the 10-parametric poincaré group as their symmetry group. as an illustration of the role of a unit constant vector in a relativistically covariant equation, consider the dynamic equation for a free nonrelativistic classical particle, written in relativistically covariant form with the help of the unit timelike constant 4-vector $f^{k}$. instead of the noncovariant equation ([b5.6]) one obtains a covariant equation ([b5.7]) for the world line, with $\dot{x}^{i}\equiv dx^{i}/d\tau$, $i=0,1,2,3$, in which $f^{k}$ enters explicitly. indeed, choosing the frame where $f^{k}=\{1,0,0,0\}$ in ([b5.7]), the spatial components reproduce the equation ([b5.6]), while the temporal component reduces to an identity. the newtonian space of events contains two invariants (the time interval and the spatial distance), whereas the minkowski space-time contains only one invariant (the world function of the interval). introduction of the constant unit 4-vector $f^{k}$ allows one to construct the two newtonian invariants from the one minkowskian invariant and the 4-vector $f^{k}$ (one possible realization of these relations is sketched below). thus, if dynamic equations written in relativistically covariant form contain a unit timelike constant vector, we should suspect that the dynamic equations are not relativistic. thus, quantum mechanics can be founded as a mechanics of stochastic particles.
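to make the last construction concrete, one hedged illustration (not reproduced in the source, and assuming the common normalization of the world function as half the squared interval, with metric signature $(+,-,-,-)$ and $f_{k}f^{k}=1$) is
\[
c\,t_{12}=f_{k}\,(x_{2}^{k}-x_{1}^{k}),\qquad r_{12}^{2}=\bigl(f_{k}\,(x_{2}^{k}-x_{1}^{k})\bigr)^{2}-2\,\sigma _{\mathrm{M}}(x_{1},x_{2}),\qquad \sigma _{\mathrm{M}}(x_{1},x_{2})=\tfrac{1}{2}\,g_{ik}\,(x_{2}^{i}-x_{1}^{i})(x_{2}^{k}-x_{1}^{k}),
\]
so that the newtonian time interval $t_{12}$ and spatial distance $r_{12}$ are both expressed through the single minkowskian invariant $\sigma _{\mathrm{M}}$ and the constant vector $f^{k}$. with these relations in mind we return to the question of the origin of the stochasticity.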
however, it is not known why the motion of free particles is stochastic and where the quantum constant comes from. there are two variants of answer to these questions. (1) the stochasticity of the free particle motion is explained by the space-time properties, and the quantum constant is a parameter describing these space-time properties. (2) the stochasticity of the free particle motion is explained by a special quantum nature of particles. the motion of such a particle differs from the motion of a usual classical particle, and there is a series of rules (the quantum principles) determining the quantum particle motion. the universal character of the quantum constant is explained by the universality of the quantum nature of all particles and other physical objects; as to the event space, it remains the same as it was for isaac newton. it is quite clear that the first version of explanation is simpler and more logical, as it supposes _ only a change of the space-time geometry _; everything else, including the principles of classical physics, remains unchanged. the main problem of the first version was the absence of a space-time geometry with such properties; in general, one could not even imagine that such a space-time geometry could exist. as a result, at the beginning of the 20th century the second version was chosen. after a large amount of work the necessary set of additional hypotheses (the quantum principles) was invented, and one succeeded in explaining all nonrelativistic quantum phenomena. however, the attempt to extend the quantum theory to relativistic phenomena led to the problem formulated as the _ join of the nonrelativistic quantum principles with the principles of the relativity theory _. in general, the question of why the motion of microparticles is stochastic does not relate directly to the problem of the relativistic quantum theory construction. it relates to it only in the sense that explanation of the stochasticity by the space-time properties creates an entire picture of the world where the good old classical principles rule, and only the space-time properties are slightly changed. it is clear that an explanation of quantum properties by a slight correction of the space-time properties is more attractive than the substitution of enigmatic quantum principles, which are incompatible with the relativity principles, for the principles of classical physics. besides, the correction of the space-time properties is very simple: it does not demand the introduction of exotic space-time properties such as a space-time stochasticity, or a noncommutativity of coordinates in the space-time. the correction of the space-time properties means a change of the world function of the space-time. it consists of three points (r90, r2002, r2005d). 1. one proves that the proper euclidean geometry has the σ-immanence property: the proper euclidean geometry is described entirely by its world function, and all euclidean prescriptions for the construction of geometrical objects and of relations between them can be expressed in terms, and only in terms, of the euclidean world function. 2. it is supposed that any space-time geometry has the σ-immanence property: all prescriptions of the geometry for the construction of geometrical objects and of relations between them can be obtained from the euclidean prescriptions by a proper deformation of the euclidean geometry, i.e.
by replacing the euclidean world function by the world function of the space-time geometry in all euclidean prescriptions. 3. the world function of the space-time geometry is chosen in the form of the minkowski world function plus a distortion term, $\sigma =\sigma _{\mathrm{M}}+d(\sigma _{\mathrm{M}})$, where $\sigma _{\mathrm{M}}$ is the world function of the minkowski space, $c$ is the speed of light, and $b$ (measured in g/cm) is the constant describing the connection between the geometric mass and the usual mass. in a space-time with nonvanishing distortion the particle mass is geometrized, and the motion of free particles is stochastic. the distortion function describes the character of the quantum stochasticity. the form of the distortion function is determined by the demand that the stochasticity generated by the distortion be the quantum stochasticity, i.e. that the statistical description of the free stochastic particle motion be equivalent to the quantum description in terms of the schrödinger equation. we have considered two possible strategies of the relativistic quantum theory construction. the first strategy, founded on the application of the conventional quantum technique to relativistic systems, leads either to an inconsistent conception or to a consistent theory in which pair production does not appear for interactions of the degree type. the second strategy is founded on the construction of a fundamental theory, which relates to the conventional nonrelativistic quantum theory approximately in the way that statistical physics relates to axiomatic thermodynamics. the fundamental theory is the conventional relativistic classical theory in a space-time whose geometry is slightly modified in such a way that the motion of free particles is primordially stochastic and the particle mass is geometrized. the quantum constant appears as a parameter of the space-time geometry. the statistical description of the stochastic nonrelativistic particle motion appears to be equivalent to the conventional quantum description; there is no necessity to postulate the quantum principles, because they may be obtained as corollaries of such a statistical description of nonrelativistic stochastic particles. there is hope that the direct application of the statistical description to relativistic stochastic systems will allow one to construct the relativistic quantum theory. the fundamental theory allows one to use only the logical investigation method of isaac newton; it is free of the trial and error method, which is the main obstacle on the path of the relativistic quantum theory construction. the predominance of the trial and error method in the 20th century generated a specific mentality of contemporary researchers, in which the researcher tries to suggest new hypotheses and to guess the result, rather than to derive it logically from the fundamental physical principles. this mentality is a very serious obstacle on the path of the relativistic quantum theory construction. rylov y. a., on connection between the energy-momentum vector and canonical momentum in relativistic mechanics, _teoreticheskaya i matematicheskaya fizika_ *2*, 333-337 (1970) (in russian); theor. math. phys. (usa) *5*, 333 (1970) (translated from russian).
two different strategies for the construction of the relativistic quantum theory are considered and evaluated. the first strategy is the conventional one, based on application of the quantum-mechanical technique to relativistic systems. this approach cannot solve the problem of pair production. the apparent success of qft in solving this problem is conditioned by the inconsistency of qft, whose commutation relations are incompatible with the dynamic equations (an inconsistent theory ``can solve'' practically any problem, including the problem of pair production). the second strategy is based on application of the fundamental principles of classical dynamics, and of the statistical description, to relativistic dynamic systems. it seems to be more reliable, because this strategy does not use the quantum principles, and the main problem of qft (the join of nonrelativistic quantum principles with the principles of relativity) is thereby eliminated.
superconducting magnetic energy storage (smes) has traditionally been considered for power conditioning applications, where instantaneous high power can be delivered in a matter of milliseconds. among the advantages offered exclusively by superconductors we can mention: _i_) a higher energy storage efficiency than other existing energy storage systems, and _ii_) almost-immediate charge/discharge characteristics. certain applications, such as the electrical power grid and electromagnetic rail launchers, depend heavily on the efficiency and rapid charge/discharge response of a smes. currently, it is one of the few superconductivity-based applications adaptable to the needs of the electric utilities markets. recently, a renewed interest in smes technologies has been motivated by the search for means of improving the stability of the future power grid, which will incorporate a large number of intermittent energy sources, such as wind and solar. historically, the concept of applying a smes to a power system was originally developed in 1969 in order to balance the variations in the supply and demand of electricity. low-temperature superconductors, such as nb-ti, have been used successfully for smes; however, issues with the reliability of 4.2 k cryogenics, the efficiency of power electronics, and the relatively low energy density of nb-based smes have limited the applicability of the technology to a few cases where electric power quality is at a premium. since the magnetic energy stored in a smes is proportional to $B^{2}$, doubling the operating field can quadruple the stored energy, an advantage unmatched by other energy storage solutions. in contrast to the superconducting transition temperature, $T_{c}$, and the upper critical field, $H_{c2}$, the current-carrying capability, measured by the critical current density, $J_{c}$, is governed by the vortex pinning strength, which depends fundamentally on the intrinsic properties of the superconducting state and is determined by the ability of defects in the superconductor to pin the vortices carrying magnetic flux. so far, the second generation (2g) high temperature superconductor (hts) yba$_{2}$cu$_{3}$o$_{7}$ (ybco) has offered the greatest hope for implementation, since it exhibits all-around superior properties to all other classes of superconductors, especially in light of the fact that it offers the possibility of operation at a temperature much higher than that of liquid helium. however, other promising candidates have emerged throughout the years as well, such as magnesium diboride, mgb$_{2}$, and the recently discovered iron-based superconductors. even though mgb$_{2}$ and the fe-based superconductors typically have lower $T_{c}$'s, $H_{c2}$'s and $J_{c}$'s compared to the hts materials, they exhibit much lower magnetic field anisotropies and can be processed by a variety of methods. magnet design has become a process of paramount importance to smes performance, necessitating better and faster analytical tools, since magnetic field calculations in and around a smes demand sizeable cpu time and memory. the finite element method (fem) has been the most common tool currently utilized for computations of magnetic fields in superconducting magnetic energy storage devices. in this report, we present an alternative approach towards building an algorithmic solution for simulating and optimizing a smes device based on a selection of superconducting materials, such as second-generation ybco tape or mgb$_{2}$.
this method is based on the _radia_ software package, which is written in object-oriented c++ and interfaced to mathematica via mathlink. a substantial portion of the algorithmic processing in the scope of this work is implemented in the mathematica language. here, _radia_ is used to assess the fields created by 3d volumes carrying constant current density, based on the biot-savart law (a schematic illustration of this type of field evaluation is sketched below). additionally, we present results on realistic ac loss assessment in a smes, and suggest that for energies in the joule range ac losses are tolerable and will not significantly impact smes performance. in this section we present the analytical considerations behind our smes design/optimization algorithm. first, we give an overview of the geometrical considerations for a realistic smes. then, we describe _radia_, an analytical tool used here as an alternative to finite element method (fem) calculations. in the subsequent subsection we outline the computational considerations and compare our initial outputs with those of a fem. in the final subsection, we revisit the topic of magnetic field scanning inside a coil as a necessary ingredient for a smes optimization algorithm. generally, several types of geometrical arrangements exist for building a smes, but the toroidal design offers the advantage of a reduced stray (perpendicular) magnetic field on the tape or wire, mostly confining the field inside the coil. such an arrangement is especially critical for second generation (2g) ybco tape, since ybco is marked by a high critical current density anisotropy. in the case of ybco, the in-plane critical current density at 4.2 k in field, 10.8 ma/cm$^{2}$, can exceed the out-of-plane one, 1.8 ma/cm$^{2}$, by roughly an order of magnitude. the smes simulation reported here has been built on top of the _radia_ software package using _mathematica_ by wolfram research. the _radia_ package was designed by scientists at the european synchrotron radiation facility (esrf) for solving the physical and technical problems one encounters during the development of insertion devices for synchrotron light sources. however, it can also be used in other branches of physics where efficient solutions of 3d boundary problems of magnetostatics are needed. the _radia_ package is essentially a 3d magnetostatics computer code optimized for undulators and wigglers. the code has been extensively benchmarked against a commercial finite element code, and all esrf insertion devices built since 1992 have been designed using this code or its earlier versions. a large number of predictions made by _radia_ concerning the magnetic field and field integrals were verified on real insertion devices after manufacture. thus, one can design nearly any permanent magnet undulator or wiggler, including the central field as well as the extremities. creating and linking the objects properly is the first step in describing the magnetostatics problem. contrary to the fem approach, _radia_ does not mesh the vacuum (an example of a fem geometrical segmentation and field generation is shown in fig. 1), but rather solves boundary magnetostatic problems with magnetized and current-carrying volumes, using the boundary integral approach. the current-carrying elements can be straight or curved blocks, and planar boundary conditions are simulated via sets of space transformations, such as translations, rotations, or plane symmetry inversions.
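to make the field-evaluation step concrete, the following is a minimal, illustrative sketch in python/numpy (not the mathematica/_radia_ environment used in this work, and not the _radia_ api) of a biot-savart evaluation for a toroidal arrangement of coils, with each pancake coil idealized as a filamentary circular loop approximated by straight current segments; the coil dimensions, currents and helper names are assumptions chosen only for illustration.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (t*m/a)

def segment_field(p, a, b, current):
    """biot-savart field at point p from a straight segment a->b carrying 'current' (a)."""
    da, db = p - a, p - b
    u = b - a
    u = u / np.linalg.norm(u)
    d_vec = da - np.dot(da, u) * u          # perpendicular vector from the segment axis to p
    d = np.linalg.norm(d_vec)
    if d < 1e-12:                            # point on the axis: skip (field direction undefined)
        return np.zeros(3)
    cos1 = np.dot(da, u) / np.linalg.norm(da)
    cos2 = np.dot(db, u) / np.linalg.norm(db)
    bmag = MU0 * current / (4 * np.pi * d) * (cos1 - cos2)
    return bmag * np.cross(u, d_vec) / d

def circular_coil(center, normal, radius, current, nseg=50):
    """approximate a circular coil by nseg straight segments; returns a list of (a, b, i)."""
    normal = normal / np.linalg.norm(normal)
    e1 = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-9:
        e1 = np.array([1.0, 0.0, 0.0])
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(normal, e1)
    phi = np.linspace(0.0, 2.0 * np.pi, nseg + 1)
    pts = [center + radius * (np.cos(t) * e1 + np.sin(t) * e2) for t in phi]
    return [(pts[i], pts[i + 1], current) for i in range(nseg)]

def toroid_field(point, n_coils=16, lr=1.0, cr=0.2, current=1.0e3):
    """total field at 'point' from n_coils identical coils on a torus of large radius lr (m);
    each coil axis is tangential to the torus centerline, as in the toroidal smes design."""
    b = np.zeros(3)
    p = np.asarray(point, float)
    for k in range(n_coils):
        ang = 2.0 * np.pi * k / n_coils
        center = np.array([lr * np.cos(ang), lr * np.sin(ang), 0.0])
        normal = np.array([-np.sin(ang), np.cos(ang), 0.0])
        for a, bseg, cur in circular_coil(center, normal, cr, current):
            b += segment_field(p, a, bseg, cur)
    return b

# example: field at the geometric center of one of the coils
print(toroid_field([1.0, 0.0, 0.0]))

real pancake coils are of course extended current-carrying volumes rather than filaments; the point of the sketch is only the summation over elementary current sources that a boundary-integral code performs instead of meshing the vacuum.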
applying transformations with a multiplicity can be understood as an efficient use of the symmetries in the model being solved. this results in a minimum number of degrees of freedom and therefore dramatically reduces the memory requirements and cpu time needed to obtain a solution. the reduction in the number of elements necessary for precision comparable to fem approaches (see fig. 2) leads to a drastic reduction of the required cpu time (typically on the order of 20 times smaller). this time efficiency is crucial when it comes to creating smes optimization algorithms, where, as we will show in the discussion section, for each point in configuration space one needs to carry out the entire operation multiple times, from creating the entire smes toroid with its associated coils, to computing the field distributions and stored energies, and calculating the optimum design parameters. in the current work, we employed _radia_ to simulate and optimize a realistic smes device, which takes into account the temperature- and magnetic-field-dependent critical current density, $J_{c}(T,B)$, of the superconducting wire of choice used in the simulation (second generation ybco tape or mgb$_{2}$). the significant reduction in computation time was the primary motivation behind this endeavor. our algorithm starts by specifying the actual geometry of the device, which is built according to given specifications, such as _i_) coil radius, _ii_) coil thickness, _iii_) coil width, _iv_) toroidal radius, and _v_) number of coils. the coil radius is defined as the mean of the inner and outer radii (with respect to the coil's axis of symmetry) of each coil, while the coil thickness is the difference between these two (see the 16-module toroidal system generated by _radia_ in fig. 3). the large (toroidal) radius is the distance from the geometric center of the smes to the geometric center of each individual coil, and the typical toroidal radii that we employed were on the order of 1-2 meters. a realistic toroidal-type smes design must include the frame and conduction support bars, as well as insulating spacers between the coils. _radia_ gives the freedom of choosing the degree of segmentation of each coil; the more segments are chosen, the closer the geometry is to an ideal cylindrical shape, but this also results in a longer computation time. we ``built'' coils with 50 segments for the actual simulations; a significant increase in the segmentation tends to lead to an undue increase in the computation time without the benefit of a significant increase in precision. in order to test the validity of _radia_-based simulations, we compared the outputs of our _radia_-generated smes design with those of a fem simulation by lee _et al._. we computed the maximum magnetic fields parallel and perpendicular to the test coil, as well as the total energy stored in the device, using the same geometry and operating current density as used by lee _et al._, and observed a discrepancy of less than 1% between the two methods. the comparison between the two approaches is shown in table 1.
table 1: comparison between the present _radia_ simulation and the fem results of lee _et al._
\begin{tabular}{lcc}
specification & in this work & from lee _et al._ \\
operating current density (a/mm$^{2}$) & 194.5 & 194.5 \\
max parallel magnetic field (t) & 8.8 & 8.1 \\
max perpendicular magnetic field (t) & 1.12 & 1.16 \\
stored energy (mj) & 2.6 & 2.6 \\
total length of hts conductor (km) & 19.58 & 19.55 \\
\end{tabular}
the highest field inside a coil is the bottleneck that limits the current density the wire can reasonably handle before it quenches and subsequently incapacitates the device. we present magnetic field analysis results for a toroidal-type smes magnet calculated via _radia_ (see fig. 4). a characteristic magnetic flux density pattern of the center plane of the toroid is exhibited in fig. 4a, while typical perpendicular, $B_{\perp}$, and tangential, $B_{\parallel}$, flux density profiles along the radial direction of a single pancake coil are shown in fig. 4b. for this particular simulation the operating current of the magnet is taken to be 960 a, and the maximum perpendicular magnetic flux density, tangential magnetic flux density and stored energy of the simulated device obtained from the simulation are 1.01 t, 9.00 t and 2.68 mj, respectively. the discrepancies between the maxima of the _i_) perpendicular magnetic flux density and _ii_) tangential magnetic flux density obtained via fem and _radia_ were shown to be on the order of 2%, confirming _radia_'s validity and reliability as an alternative tool to fem for magnetic field analyses of smes devices. one of the conclusions one can immediately draw from the analysis of the field distribution, as evidenced in fig. 4a, is that the field is mostly confined inside the coil. being able to confine the magnetic field inside the coil is important for a number of reasons: _i_) large field gradients near the coil edges (which would degrade the $J_{c}$ of the wire) can be avoided, and _ii_) stray fields are generally detrimental to electronic devices located in the vicinity of the magnet. thus, smes designs with large numbers of coils may be preferred as long as the geometrical and critical current limitations are considered and met. not surprisingly, the highest fields are found at the inner rim of an individual coil, nearest the center of the torus along the _z_ = 0 plane, as evidenced in fig. 4; _i.e._, the field does not exhibit a _y_-axis mirror symmetry through the center of the coil, which is clearly due to the higher density of current-carrying elements closer to the torus center. on the other hand, $B_{\perp}$ is shown to be several times smaller than $B_{\parallel}$. $B_{\perp}$ is supposed to vanish in the case of a perfect toroidal smes with a continuum of coils; however, there is always a finite $B_{\perp}$ for a smes magnet built up from a discrete number of coils. it should be noted that although $B_{\perp}$ is several times smaller than $B_{\parallel}$, ybco tapes are marked by a large anisotropy in the critical current density, and therefore the maximum $B_{\perp}$ determines the current density cutoff that ought to be compared against the current density used in the smes design. the ac loss calculations are performed using a simplified algorithm. the loss originating from the field over the penetration threshold is calculated using the accepted brandt-indenbom equation, which takes into account the flat tape geometry of 2g wires as well as demagnetization effects. once the smes geometry is specified (see section 2.4), we feed the critical current density curves for _i_) mgb$_{2}$ and _ii_) 2g ybco tape, for different temperatures and field ranges, into the code.
the algorithm fits all $J_{c}(B)$ curves for a given superconductor type at the specified temperature, and the fitting parameters are stored for later use. subsequently, the magnetic field is scanned inside the coil and the maximal field is used in the calculation of the minimal $J_{c}$, which serves as the smes design bottleneck. the ratio of a trial value of the current density, $J$, to $J_{c}$ is an indicator of the stability of the system and is known as the ``load factor''. depending on the value of the load factor, our algorithm chooses whether the coil thickness ought to increase or decrease in order for the requirement to be met (a schematic sketch of this adjustment loop is given at the end of this subsection). our load factor is set at 70% of $J_{c}$, so if the trial $J$ exceeds this value, the algorithm increases the coil thickness and reduces $J$ by a proportionate amount; since the total energy is proportional to the square of the current, as long as the current is maintained at the same value, so is the energy, to first order of approximation. analogously, if the trial $J$ falls below the 70% level, the code ``removes'' turns (and proportionately increases $J$) until the load factor requirement is met. the increase of the coil thickness is essential in order to avoid a quench when the critical current density of the superconducting wire would otherwise be exceeded. the design flow of the entire optimization process is shown in fig. 5. once the smes configuration is optimized, the algorithm performs the same operation as a function of _i_) coil radius (_cr_) and _ii_) coil width (_w_). _cr_ is a continuous variable for the purposes of the simulation, and therefore it can be varied in steps of arbitrary size. on the other hand, _w_ is a multiple of either the diameter of the wire (in the case of mgb$_{2}$) or the width of the tape (in the case of 2g wire). then, _i_) the total energy of the optimized configuration is computed, _ii_) the ac losses are calculated as well, and _iii_) the total energy is divided by the total wire mass of the smes, resulting in the energy density. upon designing the smes geometry based on a given toroidal radius and coil dimensions, we specify the wire characteristics of two different conductors: _i_) a mgb$_{2}$ wire, which we assume in our calculation to be a monofilamentary strand of mgb$_{2}$ manufactured by hyper tech research, inc., as discussed by li _et al._, and _ii_) standard second generation (2g) tape by american superconductor, inc. the rapid advances in applied superconductivity, such as the successful implementation of pulsed-laser deposition techniques to grow iron-chalcogenide superconducting films on metal substrates, will hopefully lead to the development of commercial fe-based superconducting wire in the near future as well. the mgb$_{2}$ wire our simulation is based upon was manufactured via an internal magnesium diffusion (imd) method. the particular wire included in our simulation was researched and discussed by li _et al._; it consists of an annulus of mgb$_{2}$ enclosed in a nb chemical barrier and an outer monel sheath. the other choice of superconducting wire used in our simulation, 2g ybco tape, is standard american superconductor second-generation ybco tape. subsequently, we specify the critical current densities of second generation (2g) ybco tape and mgb$_{2}$ wire at a spread of temperatures. in the case of mgb$_{2}$ we use data for $J_{c}$ versus $B$ at $T$ = 4.2 k, 10 k, 15 k, 20 k, and 25 k, and the 2g data is available to us at $T$ = 4.2 k, 20 k, 30 k, 40 k, 50 k, 65 k and 77 k.
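the load-factor loop described above can be sketched schematically as follows (python, purely illustrative and not the authors' mathematica implementation; `compute_bmax` and `jc_of_b` stand for the field-scanning and $J_{c}$-fit routines discussed in the text and are assumed to be supplied by the caller).

def optimize_thickness(geom, j_trial, compute_bmax, jc_of_b,
                       load_target=0.70, tol=0.02, max_iter=100):
    """adjust coil thickness and current density until j_trial ~ load_target * jc(bmax).

    geom         : dict with at least the key 'thickness' (other entries untouched)
    j_trial      : trial operating current density
    compute_bmax : callable geom -> maximal field inside a coil
    jc_of_b      : callable b -> critical current density at the working temperature
    """
    for _ in range(max_iter):
        load = j_trial / jc_of_b(compute_bmax(geom))
        if abs(load - load_target) <= tol:
            return geom, j_trial                 # converged near the 70% load factor
        scale = load_target / load
        # scale < 1 (overloaded): thicken the coil and lower j; scale > 1: thin it and raise j.
        # j * thickness stays constant, so the total current, and hence the stored energy
        # to first order, is preserved during the adjustment.
        geom['thickness'] /= scale
        j_trial *= scale
    raise RuntimeError('load-factor loop did not converge')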
$J_{c}$ decays as a function of the magnetic field. empirically, $J_{c}(B)$ can be fitted via a stretched exponential or a power-law form (depending on the model of choice). therefore, it is very important to parse the data according to the magnetic field range. when stretched-exponential or power-law experimental data are fitted with the appropriate function, the fitting algorithm typically follows a least-squares approach, _i.e._, the effective fit is the one that minimizes the sum of the squares of the differences between the fit and the data points. consequently, the part of the function most susceptible to errors is the tail, since, in our case, small changes in the shape of the curve would lead to big changes in the critical current density. therefore, we produced a series of fitted curves relevant in different field ranges (a schematic sketch of this range-parsed fitting is given below): when the maximal scanned field lies below 1 t we used the entire magnetic field range; when it lies between 1 t and 2 t, we used the data above 1 t; when it lies between 2 t and 3 t, we used the full critical current data set above 2 t; and so on. if the highest scanned field exceeds the highest field for which experimental data are available, the whole process is automatically aborted. we noticed a dramatic improvement in the field consistency, which is most clearly manifest in the relatively smooth energy density surfaces exhibited in the results and discussion section. analytical fits of double-bending functions were used to obtain functional forms for $J_{c}$ at different temperatures for both 2g ybco and mgb$_{2}$ wires. in principle, double-bending functions have been shown to be very effective in fitting the front and tail ends of $J_{c}(B)$, as the two data curvatures are treated distinctly, very much in the spirit of the data parsing used in the current research effort; however, being able to discern between the two functional forms needed for such a fit would require a much greater density of data points, which would be a necessary requirement for realistic smes engineering. the correct assessment of the minimal $J_{c}$ is subsequently used by the algorithm to ``thicken'' or ``thin'' the coil in order to achieve the optimal coil thickness. in section 2.5, we discussed the methodology and importance of field scanning inside a typical smes coil. a number of prospective smes applications require large stored energies, and most of our efforts have been focused predominantly on that energy range, particularly when considering overall dimensions on the 1-2 meter scale. the dimensional considerations of such a device are very important, especially if it is to be transportable and/or integrated with other technological instruments. in order to devise the algorithm that assesses the maximal internal magnetic field, we consider two smes design cases at 4.2 k: _i_) a smes built of mgb$_{2}$ wire, and _ii_) a smes built of 2g tape, both of which contain 20 turns, a long radius _lr_ = 1000 mm, coil radius _cr_ = 200 mm, coil width _w_ = 96 mm, and coil thickness _t_ = 100 mm. then, we scan the magnetic field of the mgb$_{2}$ smes coil transversely, going through the middle of the coil and assessing the absolute value of the field in the mgb$_{2}$ case, where the field is expected to be the highest (see the inset of fig. 6a). in the case of 2g wire, we scan along the edge of the coil, as shown in the inset of fig. 6b, and assess only the perpendicular component, since this is the application bottleneck, as discussed in section 2.5.
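the range-parsed fitting can be illustrated with the following python sketch; the stretched-exponential ansatz, the cutoff values and the function names are illustrative assumptions, not the exact functional forms used in this work.

import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, j0, b0, beta):
    """illustrative stretched-exponential ansatz jc(b) = j0 * exp(-(b/b0)**beta)."""
    return j0 * np.exp(-(b / b0) ** beta)

def fit_jc_by_range(b_data, jc_data, b_cutoffs=(0.0, 1.0, 2.0, 3.0)):
    """one least-squares fit per field window: each fit uses only the data above the
    window's lower cutoff, so the tail is not distorted by the much larger low-field values."""
    fits = {}
    for b_min in b_cutoffs:
        mask = b_data >= b_min
        popt, _ = curve_fit(stretched_exp, b_data[mask], jc_data[mask],
                            p0=(jc_data[mask].max(), 1.0, 0.7), maxfev=20000)
        fits[b_min] = popt
    return fits

def jc_at(bmax, fits, b_data_max):
    """evaluate jc at the scanned maximal field using the fit whose window contains it;
    abort if bmax exceeds the measured range, as in the actual algorithm."""
    if bmax > b_data_max:
        raise ValueError('scanned field exceeds the measured jc(b) range')
    b_min = max(b for b in fits if b <= bmax)
    return stretched_exp(bmax, *fits[b_min])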
as one can see from fig. 6a, _radia_ tends to produce an unphysical singularity in the vicinity of the conductor boundary, as evidenced by the points at 50 mm and 350 mm from the inner edge of the coil at the stored energy used in the simulation. choosing either of these two points (``artifacts'' on the conductor boundary) could lead to erroneous results. the excessively high field at the conductor boundary does not disappear as we change the level of segmentation of each coil. _radia_ assumes a constant current density over the entire coil and does not perform any ``relaxation'' with respect to the current density/field in different areas of the conductor, which suggests the origin of the excessively high field at the edge and points to an area of future improvement. thus, instead of picking the maximal point from the scan (the weakest point), we identify its location, fit the data points on either side of the singularity with tenth-order polynomials, and take the mean of the two functions at the location of the intersection (the maximal point); a schematic sketch of this procedure is given below. we obtain a maximal field of 10.68 t using this algorithm, which is consistent with the cusp at 50 mm from the inner edge of the coil. this procedure has led to smooth analytic energy density surfaces, to be discussed later. analogously, in order to avoid any _radia_-generated artifacts, we have imposed a 90% cutoff on the scanned field in the ybco case, _i.e._, only field values below 90% of the scanned maximum are considered. the truncated data are subsequently fitted with a tenth-order polynomial and the maximal value is extracted from this fit. the resulting maximal fields are used to calculate the (lowest) critical current densities found in a single smes coil for a given geometry and energy storage requirement, as outlined in the previous subsection. every time an input parameter changes, the code scans for the maximal field _de novo_. after the assessment of the maximal field and the evaluation of its corresponding $J_{c}$, the algorithm executes a comparative loop on the trial current density, which (along with the coil thickness) is varied until the loading factor requirement is met. finally, after the optimal configuration is derived and recomputed, we proceed to calculate the associated ac losses. the total ac loss per cycle in a ybco superconductor is evaluated from the brandt-indenbom result as a function of the magnetic field perpendicular to the tape, the thickness of the tape, _tw_, and the critical current. the same analysis can be undertaken for mgb$_{2}$, but due to its essentially insignificant anisotropy, the perpendicular field can be replaced with the total field. for the purposes of the calculation, we took a coil, segmented it into small pieces and scanned the field inside each one, in order to obtain the local loss contribution. because of the symmetry of the coil, we analyzed only a quarter of the test coil, focusing on azimuthal angles in the first quadrant (going from 0 to 90 degrees in ten steps). in the dimension along the coil's symmetry axis, _i.e._, the $z$-axis, we scanned from the middle of the coil to the edge, _i.e._, from $z$ = 0 to half the coil width, using the wire thickness as the step size in the mgb$_{2}$ case, while in the ybco case we arbitrarily broke the scan into 5 pieces.
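a minimal python sketch of the two-sided polynomial procedure for removing the boundary artifact is given here; it assumes the spurious spike lies in the interior of the scan and that enough points exist on each side for a tenth-order fit.

import numpy as np

def bmax_without_artifact(x, b_abs, order=10):
    """estimate the physical field maximum from a transverse scan b_abs(x) containing a
    spurious spike at the conductor boundary: locate the spike, fit the points on each
    side with a polynomial of the given order, and average the two fits at that location."""
    i_spike = int(np.argmax(b_abs))            # location of the unphysical maximum
    left = slice(0, i_spike)                    # the spike itself is excluded from both fits
    right = slice(i_spike + 1, len(x))
    p_left = np.polyfit(x[left], b_abs[left], order)
    p_right = np.polyfit(x[right], b_abs[right], order)
    x0 = x[i_spike]
    return 0.5 * (np.polyval(p_left, x0) + np.polyval(p_right, x0))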
along the radial axis, the segmentation ran from the coil inner radius to its outer radius, using the wire/coil width of the mgb$_{2}$ wire and of the 2g tape as the natural step size for the assessment of the field inside each small cube. subsequently, we evaluated the loss contribution for each cube in every shell (constant-radius surface) of the coil, averaged it over every shell and multiplied this average by the differential volume of the shell, obtaining the areal energy term; the contributions of all shells are then added (a schematic sketch of this bookkeeping is given below). finally, we calculate the total ac loss per cycle for the mgb$_{2}$ case from the number of coils, $n$, the thickness of the monocore wire, _tm_, and the total volume displaced by a coil; a factor of 4 in the numerator is necessary to account for the fact that we actually scan over the volume of only a quarter of a coil. analogously, in the case of 2g ybco tape, the total ac loss per cycle is calculated with the thickness of the 2g tape, _tt_, in place of the monocore thickness. simulations were run for both 2g tape and mgb$_{2}$ wire at 4.2 k. in the case of the (2g) ybco device the simulation was run for a smes comprising 20 coils, initial coil radius = 400 mm, initial coil width = 96 mm, initial coil thickness = 100 mm, and constant _lr_ = 1500 mm. in the case of the mgb$_{2}$ wire-based smes, 32 coils were considered, along with an initial coil radius of 400 mm, initial coil width of 96 mm, initial coil thickness of 100 mm and _lr_ = 2000 mm. for both simulations the number of segments was chosen to be 50, and the number of points considered in the configuration space of the project (the coil-radius/coil-width plane) was fixed at 1,500. for both simulations the coil radius was increased from its initial value of 400 mm to its final value of 700 mm, in steps of 2 mm. the coil widths for the 2g and mgb$_{2}$-based devices, however, were varied differently, since in the former case the natural step size is the width of the tape (12 mm), while in the latter case it is the thickness of the monocore strand. therefore, for the ybco-based smes the coil width was varied in steps of 12 mm, while in the mgb$_{2}$ case we picked the step size to be 12.933 mm (monocore thickness = 0.933 mm).
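the shell-averaging bookkeeping can be sketched as follows (python, illustrative only; `loss_per_volume` stands in for the brandt-indenbom-type loss density, which is not reproduced in the source, and `b_perp` for the field-scanning routine, both assumed to be supplied):

import numpy as np

def coil_ac_loss(b_perp, loss_per_volume, r_in, r_out, half_width, dr,
                 n_phi=10, n_z=5, n_coils=20):
    """accumulate the ac loss of one coil from a quarter-coil scan, shell by shell.

    b_perp(r, phi, z)  : perpendicular field at a sample point inside the winding
    loss_per_volume(b) : loss density per cycle for that local field
    the quarter coil (phi in [0, pi/2], z in [0, half_width]) is sampled on a grid;
    each constant-r shell contributes its average loss density times the shell volume.
    """
    phis = np.linspace(0.0, np.pi / 2.0, n_phi)
    zs = np.linspace(0.0, half_width, n_z)
    q_quarter = 0.0
    for r in np.arange(r_in, r_out, dr):
        samples = [loss_per_volume(b_perp(r, phi, z)) for phi in phis for z in zs]
        shell_volume = 2.0 * np.pi * r * dr * (2.0 * half_width)   # full shell of the coil
        q_quarter += np.mean(samples) * shell_volume / 4.0         # quarter of that shell
    q_coil = 4.0 * q_quarter       # restore the full coil from the quarter scan
    return n_coils * q_coil        # total loss per cycle for the whole smes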
larger coil radii and widths were purposely omitted from the simulation, since we noticed that at larger values the coils would start to physically ``overlap'', leading to unphysical solutions. both sets of devices were simulated under the initial provision that they would store a fixed target energy. the energy settings were subject to small changes, subordinating a stringent energy requirement to the objective of creating an optimized (70% load factor) smes. the optimization was performed at every point in the specified segment of configuration space, and the _i_) energy, _ii_) energy density, _iii_) coil thickness _t_, _iv_) coil radius _cr_, _v_) coil width _w_, _vi_) ac losses, _vii_) total smes mass, _m_, and _viii_) total wire length needed, _l_, were assessed at each of the 1,500 points (see fig. 7). for higher values of _cr_ and _w_, the value of the total energy actually ends up somewhat above the nominal target, most likely due to the non-linear effects one can expect a geometric change to bring about. larger coils, in radius and width, sit in closer proximity to one another (we never change the values of _lr_ once fixed in the beginning), suggesting that fewer field lines ``leak'' into the spaces between them, improving the overall efficiency of the device. the whole reason why we could not extend the simulation to arbitrarily large _cr_ and _w_ is the unphysical overlap of coils that our optimization would bring about. the values of several smes parameters were obtained at the highest energy densities in our configuration space (see table 2); from the comparison between the simulations of the 2g ybco and mgb$_{2}$ based smes devices, we quickly notice that while the overall dimensions and stored energies are very comparable, a ybco-based smes weighs less than half as much as its mgb$_{2}$ counterpart. also, the ybco device has more than double the energy density of the one made of mgb$_{2}$ wire at the present energy-storage capacity.
table 2: smes parameters at the highest energy densities in the explored configuration space.
\begin{tabular}{lcccccc}
smes type & energy density (mj/kg) & total mass (kg) & stored energy (mj) & coil radius (mm) & coil width (mm) & coil thickness (mm) \\
ybco & 0.327 & 3,416 & 1,117.07 & 662 & 204 & 68.7 \\
mgb$_{2}$ & 0.155 & 7,537 & 1,168.26 & 698 & 196.8 & 50.3 \\
\end{tabular}
an interesting qualitative difference between the two systems can be observed in their ac losses (figs. 7b and 7f). ac losses are proportional both to the magnetic field (the total field for mgb$_{2}$ and the perpendicular component for ybco, respectively) and to the critical current. consequently, we would expect analytical solutions for ac losses in a smes to be virtually impossible to obtain, which shows why realistic simulations are truly indispensable in order to assess ac losses and their precise geometrical functional forms. here, we see that while ac losses in mgb$_{2}$ decrease monotonically with _cr_ and _w_ (fig. 7b), in the case of ybco they increase until reaching a maximum (fig. 7f). the critical current decreases with increasing field, suggesting that the evolution of the loss depends on the balance of the two. the wire in a coil carries transport current and simultaneously experiences an applied magnetic field generated by the neighboring turns and by the rest of the smes, so the ac losses in a coil are expected to be a combination of both transport and magnetization losses. an example of the complexity of the ac losses can be observed in their behavior as a function of the coil dimensions (figs. 7b and 7f).
while in the case of mgb$_{2}$ the losses decrease monotonically as a function of _cr_ and _w_ (fig. 7f), the opposite is observed in the case of ybco, and furthermore a maximum is seen to form in the _cr_-_w_ plane. the conserved quantity in our simulation is the load factor, and not the energy, which increases with the coil dimensions. that would mean that, to first order, we should expect comparable transport losses across the _cr_-_w_ plane. however, the reduced proximity of the coils in geometric space leads to a reduction in the inductive losses, which in the ybco case likely dominate the ac loss contribution. therefore, identifying ac loss maxima is particularly useful for 2g-based smes devices, particularly for cases where the large radius is fixed. we can conclude from the simulation results that ac losses for large energy storage devices (at the energies simulated here and beyond) are quite irrelevant: the highest losses observed are smaller than 0.5 mj and 2 mj per cycle for ybco and mgb$_{2}$ (see figs. 7 and 8), respectively, whereas the lowest stored energy is 100 mj for both smes varieties. the ac loss is approximately given by the ratio between the filament size and the coil radius; since these are large coils (of order one meter) and the filament is 12 mm, we would expect an ac loss on the order of 1%, which is roughly what the calculation shows. the toroidal geometry may reduce this figure ever so slightly. however, we note that for devices that store much lower energies the ac losses will tend to have a very profound effect, and they must be duly subtracted from the stored energy values for a proper accounting of device efficiency. given the complicated dependencies of the ac losses on device geometry, we can see how at lower stored energies the losses will most likely exhibit a non-monotonic behavior in the _cr_-_w_ plane, where our _radia_-based algorithm will have an even greater relevance in locating design ``sweet spots''. finally, we would like to address the cost aspect of the two types of smes devices we are considering. after optimizing each of them at every point in configuration space, we were able to obtain an estimate of the overall length of wire needed (see the corresponding figure). one thing to notice is that the mgb$_{2}$-based device would require significantly more wire, in absolute length, for a comparable amount of stored energy when compared to its ybco counterpart (see table 3).
table 3: total conductor length (km) needed at the smallest and largest coil dimensions explored.
\begin{tabular}{lcc}
smes type & length (km) at coil dimensions (400 mm, 100 mm) & length (km) at coil dimensions (700 mm, 200 mm) \\
ybco & 130 & 341 \\
mgb$_{2}$ & 332 & 1,018 \\
\end{tabular}
therefore, in order to build a 2g wire-based smes intended to store 100 mj of energy, on the order of several million dollars would need to be allocated for 2g wire purchase at today's market prices. in terms of the sheer benefit offered by the reduced weight and the amount of materials used, 2g wire is currently the obvious choice for a large-stored-energy, large-energy-density smes device. in this paper we demonstrated the viability of _radia_ as a cpu-efficient semi-analytical method for optimizing prospective superconducting energy-storage devices. by altering various device parameters, such as coil radius and thickness, we were able to calculate the optimal coil thickness and simulate the total stored energy and ac losses in ybco and mgb$_{2}$-based devices. this work was primarily supported by the us department of energy, office of basic energy science, materials sciences and engineering division, under contract no. additional support from the air force research lab (v.f.s. and i.k.d.) and nyserda (v.f.s.
) are also acknowledged .this effort was also supported by the system evaluation division of the institute for defense analyses .i.k.d . wishes to thank steve warner for interest taken in the current work as well as for critical reading of the manuscript , and michael ambroso for technical assistance .9 our definition of is essentially the engineering current density , , which is the critical current density scaled by the overall thickness of the conductor ( _ i.e. _ , ( superconducting layer thickness)/(total conductor thickness ) ) .we use _ radia _ convenient units throughout the simulation , _i.e. _ , all dimensions , fields and energies are given in millimeters , tesla and joules , and therefore we only use _ radia _ units throughout the simulation and discussion .a. p. malozemoff , s. fleshler , m. rupich , c. thieme , x. li , w. zhang , a. otto , j. maguire , d. folts , j. yuan , h .-kraemer , w. schmidt , m. wohlfart , and h .- w .neumueller , supercond .* 21 * , 034005 ( 2008 ) .martin w. rupich , xiaoping li , cees thieme , srivatsan sathyamurthy , steven fleshler , david tucker , elliot thompson , jeff schreiber , joseph lynch , david buczek , ken demoranville , james inch , paul cedrone and james slack , supercond .* 23 * , 014015 ( 2010 ) . for the purposes of the mgb-based smes calculation, we have extracted the dimensions of the mgb wire from the scanning electron micrograph from li _ et al .we take the outer radii of the mg , mgb , nb and monel layers to be 0.1 mm , 0.146 mm , 0.325 mm and 0.417 mm , respectively , with a 50 m kapton sheath on the outside ( total monocore thickness = 0.933 mm ) .the mg , mgb , nb and monel densities are taken to be 1.738 g / mm , 2.62 g / mm , 8.57 g / mm and 8.857 g / mm , respectively . the critical current density , , is extracted from the data published by li __ . for the purposes of the ybco - based smes calculationwe take that the ybco tape comprises a 2 m ybco , 50 m copper , 45 m stainless steel and 50 m kapton layers , respectively ( see section 3.3 ) .we take the tape width to be 12 mm , and the densities of ybco , copper , stainless steel and kapton to be 6.3 g / mm , 8.92 g / mm , 8.03 g / mm and 1.42 g / mm , respectively v. braccini , a. xu , j. jaroszynski , y. xin , d. c. larbalestier , y. chen , g. carota , j. dackow , i. kesgin , y. yao , a. guevara , t. shi , and v. selvamanickam , supercond .* 24 * , 035001 ( 2011 ) .
recent advances in second generation (ybco) high temperature superconducting wire could potentially enable the design of super high performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. however, the high aspect ratio and considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for both the critical current density and the ac losses in type ii superconductors. here, we report on the novel application and results of a cpu-efficient semi-analytical computer code based on the _radia_ 3d magnetostatics software package. our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. the rapid performance of the code rests on analytical calculations of the magnetic field based on an efficient implementation of the biot-savart law for a large variety of 3d ``base'' geometries in the _radia_ package. the significantly reduced cpu time and simple data input, in conjunction with the consideration of realistic input variables, such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the _radia_-based algorithm to outperform finite element approaches with a twenty-fold reduction in cpu time at the same accuracy levels. comparative simulations of mgb$_{2}$- and ybco-based devices are performed at 4.2 k, and finally, calculations of the ac losses are carried out in order to ascertain the realistic efficiency of the design configurations. _keywords_: energy storage, superconductivity, smes, 2g ybco tape, mgb$_{2}$ wire
for the greater part of a century , the special brand of mathematics called game theory has been used to understand the behaviour of social animals , including humans . much research today still focuses on mechanisms that can push short - sighted , self - rewarding behaviour towards behaviours less costly to conspecifics and hence more pareto efficient for the population . while much , but far from all , cooperation and coordination among non - human species seems to coincide with kin relations , human interactions seem more culturally loaded with elements of punishment , reputation and normative behavioural protocols . by providing mathematical clarity combined with recognizable narratives , game theory has played a central role in describing the nature and emergence of cooperative behaviour within groups and populations . in the majority of these studies , the population being studied is assumed to have no structure , meaning that all individuals may interact directly with all others , at random , and the interactions themselves are assumed to be pairwise , rather than true multi - participant dynamics . these are all useful simplifications , as already pointed out in . however , with the increase in internet - based interactions ( such as social media , self - organized collaborative communities , the sharing economy , etc . ) , graph theory seems to re - emerge under the headline of social networks , now with the added advantage of large amounts of empirical data . comparing the system behaviour in the fully connected graph ( i.e. the mean field model or _ panmixia _ ) with more realistic spatial interaction models , including those with population structure , can reveal the effect of the aforementioned simplifying assumptions on the level of cooperation , see e.g. . in this paper we study perhaps the simplest version of a true multi - player dynamic : the threshold game ( this claim is elaborated on below ) . inspired by , we focus on -player games , in which out of participants have to decide to cooperate for anyone to receive a reward . thus , is the number of people interacting in a given match , and is the threshold , describing the minimal number of cooperators necessary for pay - off to take place . additionally , we follow by placing the population on a network , in which the -sized groups are picked based on the connections between individuals . this is inspired by the fact that in everyday life in both animal and human systems , an individual may participate in several different groups , with either a high or low degree of overlapping members . in either case , it is quite possible that the experiences gained in one setting are applied when deciding upon a strategy in another . note that we have not implemented any kind of load sharing , meaning that cooperation has the same cost irrespective of the number of cooperators . it is worth noting that we still recover the qualitative results shown in , e.g. , , where cost sharing is included . for those who are thus inclined , one can visualize each -player match as a party in which a certain number of guests have to volunteer for food preparation for it to be done in time . if too few people volunteer , the food is never ready and no one gets anything , hence the threshold . if too many people volunteer , chaos ensues , reducing efficiency , hence the lack of load sharing ( i.e. the cost of cooperation does not go down with an increasing number of cooperators ) .
the precise and more rigorous description of the model is given in the next section .the game has been chosen as the simplest truly multiplayer game .the criterion for being a true multiplayer game is that it should not be possible to describe the payoff from a match as a sum of two - player interactions ; i.e. the payoff from a match should behave nonlinearly as a function of cooperators .we have chosen to focus on this property because we expect that a linear payoff - dependence in an n - player game would be equivalent to a 2-player game in which fitness was calculated as an average over all interactions .as the explicit target of this investigation is n - player dynamics , we believe nonlinearity to be crucial .possibly the simplest , interesting functional form meeting this nonlinearity requirement is a step function ( fig [ fig : nonlinearschematic ] ) .it is worthwhile to note here that has shown convincingly that all multiplayer games with payoff structures depending nonlinearly on the number of cooperators have attractors qualitatively similar to what is studied here ; i.e. the step function is a suitable starting point for a more general investigation .this also indicates that that this model has the potential to exhibit behaviour different from what has already been extensively described for pair - wise models .we think that this finding by supports our intuition regarding nonlinearity very well .thus , the game studied here is a threshold game in which payoff only takes place when the number of cooperators reaches a certain threshold , . one could envision the dinner party described above , or group hunting carnivores where the prey is only brought down when a certain number of individuals participate , and where the cost , in the form of energy spent on running , depends on prey rather than participants . on another levelone could imagine a collection of interacting cells or bacteria producing growth factors or drug resistance compounds , and only if a sufficient number of cellular units contribute would the common good be manifested .we consider a finite population of individuals , each with the choice of either cooperating ( c ) or defecting ( d ) .the individual is represented by her probability , , of cooperating . is called the `` strategy '' of the individual .the individuals participate in -player games , in which each participant of a given match is rewarded if at least players cooperated , and none otherwise . in this wayit is a simplification of the game described in .irrespective of the number of cooperators , the cost of cooperating for a given individual is .the game is a generalization of other games in which multiple players must coordinate their behaviour . for ,this game is reminiscent of the classic snowdrift or hawk - dove game . for ,it is the volunteers dilemma . cooperators . ]the simulation proceeds in a number of rounds . at the beginning of each round , all individuals decide on playing either c or d , according to their respective strategies . during the course of a round , matches are played , in each of which individuals are picked from the population , and their shared payoff according to the rules of the game is calculated before returning them to the pool . the manner in which the players are matchedis described below . 
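to make the match - level bookkeeping concrete , here is a minimal sketch of a single threshold - game match . the original simulations were written in matlab ; this illustration uses python , and the names threshold , reward and cost stand in for the paper s symbols , which are not reproduced in this excerpt . note also that in the model above each individual fixes her action once per round and then plays it in every match of that round ; the function below shows one match in isolation .

```python
import numpy as np

def play_match(p_group, threshold, reward, cost, rng):
    """One N-player threshold-game match.

    p_group   : cooperation probabilities (strategies) of the N participants
    threshold : minimal number of cooperators needed for anyone to be rewarded
    reward    : benefit paid to every participant if the threshold is met
    cost      : cost paid by each cooperator, independent of how many cooperate
    """
    cooperated = rng.random(len(p_group)) < np.asarray(p_group)
    payoffs = np.where(cooperated, -cost, 0.0)     # cooperating always costs
    if cooperated.sum() >= threshold:              # step-function (threshold) payoff
        payoffs = payoffs + reward
    return payoffs, cooperated

# toy usage: a 5-player match with threshold 3 (all parameter values illustrative)
rng = np.random.default_rng(0)
payoffs, acts = play_match([0.2, 0.5, 0.5, 0.8, 0.9], threshold=3,
                           reward=1.0, cost=0.4, rng=rng)
```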
at the end of a round ,each individual calculates the average payoff per match of all c - players and all d - players that the individual has encountered during the round , .we assume that the player has full knowledge of the fortunes of these players , irrespective of how many of the matches they actually shared .the strategies of the individuals are then updated as follows : where is a discretization time step , equal to , while are generally of order 1 .the update takes place after each round of matches , such that these are in effect simultaneous .if a player does not meet both cooperators and defectors during a round , then ( or ) has the same value as in the previous round .all simulations are run for iterations , such that the amount of simulated time is . at the beginning of the simulation , the drawn from a uniform distribution on $ ] .simulations were run in matlab , and extensive summaries of the results have been made available as a mysql database on dryad .the model described thus far has been studied with a mean field approach in .there it was found that there are up to 4 solutions of in the mean field model : and the roots of note that in this paper will refer to the population average of strategies , evaluated instantaneously. the lower root of ( [ eq : meanfield ] ) and 1 are repellers any deviation however small will lead the system away from the solution , while 0 and the upper root are attractors , or evolutionary stable states , as they are also called .these circumstances turn out to be important to our findings , which will be discussed below . defining the parameter , i.e. the ratio between cost and reward of cooperation, it was furthermore shown in that ( [ eq : meanfield ] ) only has real roots when because of this , we will henceforth use as the principal means of describing -variation .we implement population structure by requiring that an individual is only matched with people with whom they are connected ( befriended ) . for all networks , , and , inall but the first network type , each group of is picked as the individual and its neighbours , for .this matching rule is similar to what was used in .we consider five different types of population structures , or networks : fully mixed : : : this is the standard case , in which the players for each match are picked randomly from the entire population . in network terms , we may think of this as the `` fully connected '' case . 
as such, it is the only network type which diverges in its matching rule , in that an individual will not necessarily be matched with all her connections in a given round .the situation is depicted in fig .[ fig : notopo ] .random , regular network : : : in the random , regular network , the population is placed on an undirected , regular graph , each vertex representing an individual .the degree of each vertex is .an example is given in fig .[ fig : randomgraph ] .ringlike , regular network , short range connections : : : in this network , we imagine the individuals placed on a circle .if is even , each individual is connected to the individuals closest on the circle .if is odd , the individual is connected to the closest neighbours , and the last connection is made to the first non - connected individual in either clock- or counter clockwise direction .it is attempted to alternate between the two directions .it is , unfortunately , not possible to create this network for all combinations of and if is even , which may be noticed by careful examinations of the following figures .an example of the layout is given in fig [ fig : nolonggraph ] .ringlike , regular , added long range connections : : : very similar to the previous type , but with added connections to the opposite side of the ring .if n is even , one long - range connection is made , if odd , two are , to avoid the above - mentioned problems concerning even .it is worth noting that these networks are closely related to the small - world networks studied in .an example is seen in fig [ fig:1long ] .social network : : : as an attempt at a realistic case , we follow the algorithm of to synthesize networks with high clustering and varying degree . asdescribed in the appendix , the two attachment probabilities can be adjusted to obtain networks with a wide range of structures . an example plot is shown in fig [ fig : socialnet ] . .see the appendix for further details on the generation and .all connections between individuals are drawn .proximity in space reflects connectedness .plot created using gephi . ]unless otherwise stated , the size of the populations are .we have made this choice after verifying that a function of for is indistinguishable from the case , meaning that any above 200 is most likely sufficient for the ranges of studied here .all reported values of the result of averaging over at least 5 different initial conditions .somewhat surprisingly , we find that population behaviour does not become representative of general multiplayer games until is at least 4 , and not 3 as would have been naively expected .an example of this behaviour is given as part of the discussion of , and is , we suspect , due to the fact that for the only permitted network structure is a collection of rings , which are very different from all other networks here studied . because of this , we have decided to focus on in the remainder of the paper . as was shown in , in the mean field model , there is an abrupt transition between a cooperative regime in which two collective states are possible , a mixed and a fully defecting , and a regime in which only the fully defecting state is possible .this transition takes place as the cost - to - reward fraction , , changes .as will be detailed below , we reproduce this qualitative behaviour for regular networks , with some adjustments depending on the details of the network . to simplify things , we will often focus exclusively on the critical value , , at which the transition takes place . 
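before looking at the transition itself , note that the regular interaction structures listed above are easy to generate with standard graph tools . the sketch below uses the networkx library ( our choice for illustration , not the authors tooling ) and ignores the odd - degree balancing rules spelled out in the appendix , so it only covers the even - degree cases .

```python
import networkx as nx

M, degree = 200, 4                                   # illustrative population size and degree

G_full = nx.complete_graph(M)                        # fully mixed limit
G_ring = nx.watts_strogatz_graph(M, degree, p=0)     # ring lattice, short-range links only
G_rand = nx.random_regular_graph(degree, M, seed=1)  # random regular graph, same degree

# ring lattice plus one long-range connection per node, to the opposite side of the ring
G_long = nx.watts_strogatz_graph(M, degree, p=0)
for v in range(M):
    G_long.add_edge(v, (v + M // 2) % M)

def match_group(G, focal):
    """A match consists of the focal individual and all of her neighbours."""
    return [focal] + list(G.neighbors(focal))
```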
to keep things simple ,we define in a numerical model as the -value for which crosses 0.1 , and in the mean field model as the point where the mixed state disappears ( i.e. where the discriminant of ( [ eq : meanfield ] ) becomes negative ) .the choice of as the point of transition stems from the observed shapes of . , see later figures for examples .we have studied -dependence in two cases ; static , and constant . in the static case ,we focus on unless otherwise stated . in fig[ fig : ndepend ] is shown for both setups , and we see an -dependence in both cases . in the case of the static ,this is somewhat surprising , since it means that even though more individuals are available to solve the same problem , the chances of a sufficient amount of cooperators decrease .we can understand this low- behaviour by taking into consideration that tend to decrease as a function of , as also predicted by the mean field model ( it is beneficial for the individual to do less when more people are there to share the burden ) , combined with the fact that the number of cooperators in any given match is stochastic : let be the number of cooperators in a given game . for a proportion of c -players of the entire population , evenly distributed across the graph , the distribution of is given by a bernstein polynomial of degree : we can now substitute the predicted value of the mean field model ( upper root of ( [ eq : meanfield ] ) ) . fig [ fig : mfpred ] shows the result of this , and we see that the exercise predicts the probability of in a given game to increase for increasing .taking into consideration that the lower root of ( [ eq : meanfield ] ) is repelling , and that full defection ( ) is a completely stable state ( there will be no fluctuations in ) , we find that the -dependence seen in fig [ fig : ndepend ] can be understood as arising from the fluctuations in in the mixed state .this conclusion is further supported by the fact that when this fluctuation analysis is repeated for constant , a similar breakdown is not predicted for large .indeed , it is predicted that decreases .this matches our observations from the simulations . , both predicted from the mean field theory as well as measured from the model implemented on regular , random networks .we see that for both static as well as relative , there is an -dependence . however , in the static case , when the dependence is not predicted by the mean field theory . ] , for , assuming that equal to the upper root of ( [ eq : meanfield ] ) , as predicted by the mean field theory .note that the different extents of the lines ( i.e. that covers a smaller part of the -axis than ) stem from the limitations of the mean field model . ]this effect resembles what was reported in , in that it reports decreasing likelihood of the population as a whole to meet the threshold , for increasing . in more general terms, it is reminiscent of the bystander effect : the chance of the necessary number of cooperators appearing _ decreases _ for an increasing number of participants .a similar effect was reported in . 
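the `` bernstein polynomial '' statement above is simply the binomial distribution of the number of cooperators when a fraction p of a well - mixed population cooperates . as a small sketch , the probability that a match reaches the threshold can therefore be evaluated directly ; substituting the upper mean - field root for p gives a quantity closely related to what is plotted in fig [ fig : mfpred ] ( the mean - field equation itself is not reproduced in this excerpt , so the numbers below use arbitrary illustrative values ) .

```python
from scipy.stats import binom

def p_threshold_met(N, k, p):
    """Probability that at least k of N participants cooperate, each cooperating
    independently with probability p:  P(X >= k) for X ~ Binomial(N, p)."""
    return binom.sf(k - 1, N, p)

# static threshold (k fixed) versus relative threshold (k proportional to N)
print(p_threshold_met(N=10, k=5, p=0.5))
print(p_threshold_met(N=40, k=5, p=0.5))
print(p_threshold_met(N=40, k=20, p=0.5))
```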
in fig[ fig : observations ] is shown the behaviour of different network topologies , as described in fig [ fig : cont ] .we find that while the qualitative behaviour of the same , the details depend on the topology of the network .we find that decreases when the network is wired such that the connections become as local as possible .this result is closely related to that reported by , who found that adding spatial structure to a population reduces cooperation in two - person snow drift games .however , it is worth noting that what is demonstrated here is a considerably more general result , in that we here go beyond dyadic interactions , and grid - based populations .we propose that this topology dependence comes about through the way in which each individual experiences the rest of the population - that in fact clustering leads to the individuals behaving as if the population had an effective population size is not directly relatable to the similar term in population genetics , as also pointed out in the conclusion , below .] , , making the topology - effect a finite - size effect induced by clustering .the first observation to make in this regard is fig .[ fig : effl_visual ] , where we see . for both a network with only short - range connections but large , and a fully mixed small- network .we see here both that indeed small populations are prone to smaller , but also that for a given and , an can be found such that the fully mixed model matches .this matching works as our definition of .it is in this connection worth noticing that by defining the based on comparisons to networks with the same and , any variation in must be due to effects other than the -dependence already discussed .considering the cause of , we may consider the number of individuals that a given player can meaningfully be said to interact with .we know that the individual interacts with the immediate neighbours , of which there are .however , due to the matching rules as laid out in the previous section , the player also interacts with each of their neighbours ( discounting herself ) , of which there are almost .of these , are duplicates , being the clustering coefficient of the network . in short , we may imagine the number of direct and indirect interactions of an individual to be on the form where the are weights used to signify that the interactions with neighbour s neighbours and so on are weaker than with direct neighbours . as ( [ eq : l ] )is too complicated for a direct comparison to , we have instead tried to use the same basic intuition behind ( [ eq : l ] ) to propose a much simpler expression : here is a function of both and , and the denominator is there to provide additional dependence on , since the effect appears to be marked . using ( [ eq : ltilde ] ) we may test our basic intuition about the system by checking the relationship between and . looking to fig .[ fig : effl_comp ] , we see that indeed , is well correlated with ( correlation coeff . : 0.7 ) . we interpret this to mean that our qualitative explanation of how topology influences cooperation is correct . as to why small leads to low cooperation , it seems plausible that it is related to small populations being more susceptible to fluctuations in , which will be much more pronounced in the mixed state than in the state . 
in other words , that the observed decrease in cooperation is driven by the differences in -fluctuations within the two evolutionary stable states , like the case was for the above discussed -dependence .an interesting question is to what extent the findings on regular networks generalize to the arguably more relevant case of non - regular networks , such as that depicted in fig [ fig : socialnet ] . to accommodate the spread in the number of players in a given match on such a network, we have here decided to use a relative , such that , with being a number between 0 and 1 .this choice was made because most choices of would result in many matches having , for networks such as the one shown in fig [ fig : socialnet ] . in fig[ fig : socialnet_crit ] is shown as a function of and ( defined as average number of neighbours + 1 ) .we see that a range of is possible , and also that the primary cause of variation seems to be the -dependence discussed above .note that the peculiar shape of the coloured region is due to the fact that we are unable to sample the -space directly , but are instead exploring the -space , for which maps to the depicted region . see the appendix for an explanation of .we see that all studied deviations from the mean field model appears to reduce cooperation , in particular , to some extent .this can be understood by considering the attractors in the mean field model : as was discussed in sec .[ sec : meanfield ] , the mean field model has been shown to have , for , two attractors and two repellers in -space .what this means is that if the system is brought to cross the repeller dividing the two attractors ( the lower root of ( [ eq : meanfield ] ) , the dynamics of the mean field model will dictate a transition from one state to the other .what the mean field model does not include is that the actual number of cooperators in a given match is stochastic , except when 0 or 1 .this leads to a certain noisiness in the simulation results , or fluctuations in perceived by the individual , in the mixed state , but not in the state .hence , since all perturbations to the initial model , studied here , have had the consequence of exacerbating the effects of the fluctuations , a higher rate of transition from mixed state to fully - defective state is observed , most critically close to , where the downwards jump in to clear the repeller is smallest .a consequence of this mechanism is that we would expect similar dependence on both and clustering for any game where two attractive states had this difference in inherent fluctuations .we note that , as shown in , the existence of the mixed state requires for the threshold game .this requirement goes a long way to explain why the structure - related behaviour described here has not already been extensively described in the literature , which has primarily been focused on 2-player games .in this paper we study the simplest truly multiplayer game , the threshold game , in structured populations . 
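before turning to the conclusions , here is a small sketch of the ingredients behind the effective population size argument of the preceding passages . the closed - form expression proposed above is not reproduced in this excerpt , so the code only measures the quantities the argument relies on : each individual s degree , her local clustering coefficient , and the number of distinct individuals she meets directly or indirectly within two steps . on a highly clustered ring lattice this two - step reach is much smaller than on a random regular graph of the same degree , which is the intuition behind a reduced effective population size .

```python
import networkx as nx

def interaction_stats(G):
    """Per-node ingredients of the effective-population argument: degree,
    local clustering coefficient, and the number of distinct individuals
    within two steps (neighbours plus neighbours-of-neighbours, no duplicates)."""
    clustering = nx.clustering(G)
    out = {}
    for v in G:
        neigh = set(G.neighbors(v))
        reach = set(neigh)
        for u in neigh:
            reach |= set(G.neighbors(u))
        reach.discard(v)
        out[v] = (len(neigh), clustering[v], len(reach))
    return out

G_clustered = nx.watts_strogatz_graph(200, 6, p=0)   # ring lattice: clustering 0.6
G_random = nx.random_regular_graph(6, 200, seed=1)   # same degree, negligible clustering
print(interaction_stats(G_clustered)[0])             # (6, 0.6, 12)  -> small two-step reach
print(interaction_stats(G_random)[0])                # typically (6, ~0, ~36) -> large reach
```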
we find that the average behaviour of the players as a function of the cost - to - reward fraction , , is highly dependent on the topology of the network describing the population , both in terms of number of neighbours as well as higher level effects such as clustering .we also find that structure appears to primarily decrease cooperation , by destabilizing the mixed pseudo - equilibrium in favour of the fully defecting state ; at least to the extent that the resulting system can be compared to the mean field or fully mixed models .the observation that spatial structure can inhibit the evolution of cooperation was made earlier for the more restrictive case of dyadic interactions and only for a two - dimensional grid implementation of space exclusively with nearest neighbour interactions .here we substantiate the speculation that this evolutionary behaviour is observed in a more general context , i.e. -player scenarios and network structures representing realistic social networks .we find that , irrespective of the threshold for payoff , larger numbers of players in each match ( ) result in less cooperation .in particular we find that for very small , meaning a relatively low threshold ( ) , the stochastic nature of the game destabilizes cooperation further . given the good correspondence between the theoretical predictions and numerical observations in this paper , we suggest that related scenarios ( with payoff - functions that can be approximated by a step - function ) give rise to similar evolutionary dynamics . additionally , we find that the behaviour on highly clustered networks is similar to the behaviour observed in very small , fully mixed , populations , leading us to suggest a working notion of an effective population size , , to describe the behaviour for a given population structure with respect to the transition .it is shown in fig [ fig : effl_comp ] that to a large extent can be predicted based solely on the degree and average clustering of each vertex in the network .it should be mentioned that this effective network population size bears little resemblance to the family of well known genetic effective population sizes , except maybe the panmixia assumption .it is interesting to note that we do not seem to echo the findings of , who , by studying pair - wise interactions on graphs , found a marked increase in cooperation for heterogeneous networks . while a direct comparison is not possible in this study ( since the rules of the game , and hence the expected results , depend on the degree of the vertex ) , fig .[ fig : socialnet_crit ] does not lead us to believe that a similar effect is at play here .presumably this , like most of our other findings , is due to the bistable nature of the threshold game .the results presented in this paper show , perhaps not surprisingly , that within multiplayer games on structured populations , there is an intricate interplay between the details of the game and the structure of the population. however , it also bears noting that much of the behaviour observed can be seen to stem from properties of the mean field model - especially regarding the relative stability of the mixed and full - defection states .it is important to point out that as was shown in and , the nonlinearity of the payoff function as well as the larger number of players ( ) are both necessary requirements for the existence of the mixed state . as such , models lacking these featureswould not be expected to have a similar dependence on population structure . 
finally , it is worthwhile to remember that while the present model is surely quite simple - it has no reputation , no kin - selection , no outside forcing - all refined models taking these concepts into account , but retaining population structure and payoff non - linearity , will most likely exhibit behaviour related to , or at least moulded by , the mechanisms uncovered in this paper . as such , this simplified model is relevant for all such other more specialized takes on the subject of cooperation in structured populations , as was also argued in .an example of a slightly more advanced model is given in the appendix , where players are allowed to also make suboptimal choices in their update strategies .it is found that the general results are still valid .furthermore , it is relevant to note that uses slightly different payoff structures and update rules , but still present results showing that the cooperative ( mixed ) state becomes less stable when structure is introduced .this project has been supported by the seed funding program from the interacting minds centre , aarhus university .furthermore , the authors are grateful for valuable feedback from andreas roepstorff , chris and uta frith as well as our anonymous reviewers .when generating regular networks , we have chosen the quite straightforward algorithm of always starting with a ring - like network with an even number of close connections .the connection matrix for this case is trivial to create , based on the diagram in fig [ fig : nolonggraph ] .if an additional short - range connection had to be added to this , we would go about it in the following systematic manner : 1 .pick 1 vertex on the circle 2 . if it is not of sufficient degree , connect it to the first non - neighbour in clockwise / counter clockwise direction .3 . proceed to the next vertex in the clockwise direction .we would then repeat steps 2 3 alternating the clockwise / counter clockwise decision in step 2 . as has already been mentioned, this procedure is not guaranteed to work for all , even if is required to be even .we have chosen not to study those sets of where the above method did not succeed .when a random , regular network was needed , we have created a ring - like network , if needed with a single long - range connection , depending on the desired degree . to then obtain a random network , it is sufficient to repeatedly pick 2 edges at random : a b c d , and if the two pairs of vertexes happen to be unconnected , to mix the pairs such that the they become connected as a c b d .this switching was done times for each random network . 1 .pick on average random vertices as initial contacts 2 .pick on average neighbours of each initial contact as secondary contacts 3 .connect a new vertex to the initial and secondary contacts in the above , are random variables , re - evaluated for each new vertex , by using the expressions and . here, are random variables uniformly distributed between 0 and 1 , is uniformly distributed on the set , and the is the rounding operator . please note that these expressions reduce to those mentioned in for . to investigate whether the results in this paper , principally the structure dependence , depend crucially on the chosen update rule ( [ eq : update ] ) , we have conducted a smaller study in which ( [ eq : update ] ) had an additional noise term : this was chosen to mimic other update rules such as the fermi update rule , in which suboptimal decisions are possible . 
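returning to the network generation described earlier in this appendix , the randomization step can be sketched as follows : two edges a b and c d are picked at random and , when doing so creates no self - loop or duplicate edge , they are reconnected as a c and b d . the number of attempted swaps below is an arbitrary choice , since the exact count used in the simulations is not reproduced in this excerpt .

```python
import random

def randomize_regular(edges, nswaps, seed=0):
    """Degree-preserving randomization of an edge list by repeated double
    edge swaps: (a-b, c-d) -> (a-c, b-d) whenever the swap is legal."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    for _ in range(nswaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if i == j or len({a, b, c, d}) < 4:            # avoid self-loops
            continue
        if frozenset((a, c)) in present or frozenset((b, d)) in present:
            continue                                   # avoid duplicate edges
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, c)), frozenset((b, d))}
        edges[i], edges[j] = (a, c), (b, d)
    return edges

# usage: scramble a degree-4 ring lattice on 20 vertices into a random regular graph
ring = [(v, (v + 1) % 20) for v in range(20)] + [(v, (v + 2) % 20) for v in range(20)]
shuffled = randomize_regular(ring, nswaps=10 * len(ring))
```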
by varying , it is possible to investigate what amount of noise is critical for our findings . in fig [ fig : noisy ] is shown as a function of noise amplitude ( ) for two different network types and 3 different -values . we see that the structure - induced difference persists until complete model breakdown ( when ceases to be important , meaning that the relationship between cost and reward no longer has any influence ) . we also see that this breakdown occurs around , which seems very reasonable given that . ( figure caption : as a function of noise amplitude ( in ( [ eq : update_noisy ] ) ) ; the value differs depending on the network structure until the noise becomes so strong that the model collapses , as indicated by the disappearance of the -dependence . )

santos fc , pacheco jm , lenaerts t . proceedings of the national academy of sciences of the united states of america . 2006 feb;103(9):3490 - 3494 .
the study investigates the effect on cooperation in multiplayer games , when the population from which all individuals are drawn is structured i.e. when a given individual is only competing with a small subset of the entire population . to optimize the focus on multiplayer effects , a class of games were chosen for which the payoff depends nonlinearly on the number of cooperators this ensures that the game can not be represented as a sum of pair - wise interactions , and increases the likelihood of observing behaviour different from that seen in two - player games . the chosen class of games are named `` threshold games '' , and are defined by a threshold , , which describes the minimal number of cooperators in a given match required for all the participants to receive a benefit . the model was studied primarily through numerical simulations of large populations of individuals , each with interaction neighbourhoods described by various classes of networks . when comparing the level of cooperation in a structured population to the mean - field model , we find that most types of structure lead to a decrease in cooperation . this is both interesting and novel , simply due to the generality and breadth of relevance of the model it is likely that any model with similar payoff structure exhibits related behaviour . more importantly , we find that the details of the behaviour depends to a large extent on the size of the immediate neighbourhoods of the individuals , as dictated by the network structure . in effect , the players behave as if they are part of a much smaller , fully mixed , population , which we suggest an expression for . * * + kaare b. mikkelsen^1,2,*^ , lars a. bach^1,3^ , * 1 interacting minds center , aarhus university , dk-8000 aarhus c , denmark + * 2 department of engineering , aarhus university , dk-8000 aarhus c , denmark + * 3 interdisciplinary center for organizational architecture ( icoa ) , aarhus university , dk-8210 aarhus v , denmark * * * * mikkelsen.kaare.com * keywords : * game theory ; network ; threshold ; cooperation ; volunteer s dilemma * highlights : * * observed behaviour depends on the size of each player s immediate interaction neighbourhood . * when the number of players is much larger than the number of required cooperators , average payoff decreases . * most network structures lead to a decrease in cooperation compared to the fully mixed case .
networks are intrinsic to a vast number of complex forms observed in the natural and man - made world . networks repeatedly arise in the distribution and sharing of information , stresses and materials . complex networks give rise to interesting mathematical and physical properties as observed in the internet , the `` small - world '' phenomenon , the cardiovascular system , force chains in granular media , and the wiring of the brain . branching , hierarchical geometries make up an important subclass of all networks . our present investigations concern the paradigmatic example of river networks . the study of river networks , though more general in application , is an integral part of geomorphology , the theory of earth surface processes and form . furthermore , river networks are held to be natural exemplars of allometry , i.e. , how the dimensions of different parts of a structure scale or grow with respect to each other . the shapes of drainage basins , for example , are reported to elongate with increasing basin size . at present , there is no generally accepted theory explaining the origin of this allometric scaling . the fundamental problem is that an equation of motion for erosion , formulated from first principles , is lacking . the situation is somewhat analogous to issues surrounding the description of the dynamics of granular media , noting that erosion is arguably far more complex . nevertheless , a number of erosion equations have been proposed ranging from deterministic to stochastic theories . each of these models attempts to describe how eroding surfaces evolve dynamically . in addition , various heuristic models of both surface and network evolution also exist . examples include simple lattice - based models of erosion , an analogy to invasion percolation , the use of optimality principles and self - organized criticality , and even uncorrelated random networks . since river networks are an essential feature of eroding landscapes , any appropriate theory of erosion must yield surfaces with network structures comparable to those of the real world . however , no model of eroding landscapes or even simply of network evolution unambiguously reproduces the wide range of scaling behavior reported for real river networks . a considerable problem facing these theories and models is that the values of scaling exponents for river network scaling laws are not precisely known . one of the issues we address in this work is universality . do the scaling exponents of all river networks belong to a unique universality class , or is there a set of classes obtained for various geomorphological conditions ? for example , theoretical models suggest a variety of exponent values for networks that are directed versus non - directed , created on landscapes with heterogeneous versus homogeneous erosivity , and so on . clearly , refined measurements of scaling exponents are imperative if we are to be sure of any network belonging to a particular universality class . moreover , given that there is no accepted theory derivable from simple physics , more detailed phenomenological studies are required .
motivated by this situation , we perform here a detailed investigation of the scaling properties of river networks . we analytically characterize fluctuations about scaling , showing that they grow with system size . we also report significant and ubiquitous deviations from scaling in real river networks . this implies surprisingly strong restrictions on the parameter regimes where scaling holds and cautions against measurements of exponents that ignore such limitations . in the case of the mississippi basin , for example , we find that although our study region spans four orders of magnitude in length , scaling may be deemed valid over no more than 1.5 orders of magnitude . furthermore , we repeatedly find the scaling within these bounds to be only approximate , and that no exact , single exponent can be deduced . we show that scaling breaks down at small scales due to the presence of linear basins and at large scales due to the inherent discreteness of network structure and correlations with overall basin shape . significantly , this latter correlation imprints upon river network structure the effects and history of geology . this paper is the first of a series of three on river - network geometry . having addressed scaling laws in the present work , we proceed in the second and third articles to consider river network structure at a more detailed level . in the second article , we examine the statistics of the `` building blocks '' of river networks , i.e. , segments of streams and sub - networks . in particular , we analytically connect distributions of various kinds of stream length . part of this material is employed in the present article and is a direct generalization of horton s laws . in the third article , we proceed from the findings of the second to characterize how these building blocks fit together . central to this last work is the study of the frequency and spatial distributions of tributary branches along the length of a stream , itself a generalization of the descriptive picture of tokunaga . in addressing these broader issues of scaling in branching networks , we set as our goal to understand the river network scaling relationship between basin area and the length of a basin s main stream : known as hack s law , this relation is central to the study of scaling in river networks . hack s exponent is empirically found to lie in the range from 0.5 to 0.7 . here , we postulate a generalized form of hack s law that shows good agreement with data from real world networks . we focus on hack s law because of its intrinsic interest and also because many interrelationships between a large number of scaling laws are known and only a small subset is understood to be independent . thus , our results for hack s law will in principle be extendable to other scaling laws . with this in mind , we will also discuss probability densities of stream length and drainage area . hack s law is stated rather loosely in equation ( [ eq : dev.hackslaw ] ) and implicitly involves some type of averaging which needs to be made explicit . it is most usually considered to be the relationship between _ mean _ main stream length and drainage area , i.e. , here , denotes ensemble average and is the mean main stream length of all basins of area . typically , one performs regression analysis on against to obtain the exponent . in seeking to understand hack s law , we are naturally led to wonder about the underlying distribution that gives rise to this mean relationship .
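as a minimal sketch of the regression just alluded to , the hack exponent can be estimated by binning drainage areas logarithmically , averaging main stream lengths within each bin , and fitting a straight line to the log - transformed means . the binning choice here is ours , and , as discussed later in the paper , the fitted value depends on the range of areas included .

```python
import numpy as np

def hack_exponent(areas, lengths, nbins=30):
    """Estimate h (and the prefactor) in  <l> ~ a^h  from samples of basin
    area and main-stream length, by regressing log10(<l>) on log10(a)."""
    log_a = np.log10(areas)
    bins = np.linspace(log_a.min(), log_a.max(), nbins + 1)
    idx = np.digitize(log_a, bins)
    xs, ys = [], []
    for b in range(1, nbins + 1):
        sel = idx == b
        if sel.any():
            xs.append(0.5 * (bins[b - 1] + bins[b]))
            ys.append(np.log10(np.mean(np.asarray(lengths)[sel])))
    h, intercept = np.polyfit(xs, ys, 1)
    return h, 10 ** intercept

# synthetic check: data generated with h = 0.57 should be recovered approximately
rng = np.random.default_rng(0)
a = 10 ** rng.uniform(1, 6, 20000)
l = 1.7 * a ** 0.57 * rng.lognormal(0.0, 0.2, a.size)
print(hack_exponent(a, l))          # roughly (0.57, 1.7)
```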
by considering fluctuations ,we begin to see hack s law as an expression of basin morphology .what shapes of basins characterized by are possible and with what probability do they occur ?an important point here is that hack s law does not exactly specify basin shapes .an additional connection to euclidean dimensions of the basin is required .we may think of a basin s longitudinal length and its width . the main stream length is reported to scale with as where typically , .hence , we have .all other relevant scaling laws exponents can be related to the pair of exponents which therefore characterize the universality class of a river network . if we have that basins are self - similar whereas if , we have that basins are elongating .so , while hack s law gives a sense of basin allometry , the fractal properties of main stream lengths need also be known in order to properly quantify the scaling of basin shape .in addition to fluctuations , complementary insights are provided by the observation and understanding of deviations from scaling .we are thus interested in discerning the regularities and quirks of the joint probability distribution .we will refer to as the _ hack distribution_. hack distributions for the kansas river basin and the mississippi river basin are given in figures [ fig : dev.hack3dpcmax_kansas](a ) and [ fig : dev.hack3dpcmax_kansas](b ) .fluctuations about and deviations from scaling are immediately evident for the kansas and to a lesser extent for the mississippi .the first section of the paper will propose and derive analytic forms for the hack distribution under the assumption of uniform scaling with no deviations . here , as well as in the following two papers of this series , we will motivate our results with a random network model originally due to scheidegger .we then expand our discussion to consider deviations from exact scaling . in the case of the kansas river ,a striking example of deviations from scaling is the linear branch separated from the body of the main distribution shown in figure [ fig : dev.hack3dpcmax_kansas](a ) .this feature is less prominent in the lower resolution mississippi data .note that this linear branch is not an artifact of the measurement technique or data set used .this will be explained in our discussion of deviations at small scales in the paper s second section .we then consider the more subtle deviations associated with intermediate scales . at first inspection , the scaling appears to be robust .however , we find gradual drifts in `` exponents '' that prevent us from identifying a precise value of and hence a corresponding universality class .both distributions also show breakdowns in scaling for large areas and stream lengths and this is addressed in the final part of our section on deviations .the reason for such deviations is partly due to the decrease in number of samples and hence self - averaging , as area and stream lengths are increased .however , we will show that the direction of the deviations depends on the overall basin shape .we will quantify the extent to which such deviations can occur and the effect that they have on measurements of hack s exponent . 
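one way to expose the gradual drifts in `` exponents '' mentioned above is a running logarithmic derivative , i.e. the local slope of log of the mean stream length against log area computed over a sliding window of area bins : a constant value would indicate a single well - defined hack exponent , whereas the drifts reported here show up as a slowly varying curve . this is only a sketch ; the window size is our choice .

```python
import numpy as np

def local_hack_exponent(log_a_bins, log_mean_l, window=5):
    """Centered running slope of log<l> versus log a over `window` bins.
    A drifting result means no single exponent describes the whole range."""
    x, y = np.asarray(log_a_bins), np.asarray(log_mean_l)
    half = window // 2
    slopes = np.full(x.size, np.nan)
    for i in range(half, x.size - half):
        sel = slice(i - half, i + half + 1)
        slopes[i] = np.polyfit(x[sel], y[sel], 1)[0]
    return slopes

# toy illustration: a slightly curved log-log relation gives a drifting local slope
x = np.linspace(1, 6, 40)
y = 0.50 * x + 0.01 * x**2
print(local_hack_exponent(x, y)[5:10])   # local slopes increase slowly along the range
```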
throughout the paper, we will return to the hack distributions for the kansas and the mississippi rivers as well as data obtained for the amazon , the nile and the congo rivers .to provide some insight into the nature of the underlying hack distribution , we present a line of reasoning that will build up from hack s law to a scaling form of .first let us assume for the present discussion of fluctuations that an exact form of hack s law holds : where we have introduced the coefficient which we discuss fully later on .now , since hack s law is a power law , it is reasonable to postulate a generalization of the form the prefactor provides the correct normalization and is the `` scaling function '' we hope to understand .the above will be our notation for all conditional probabilities .implicit in equation ( [ eq : dev.p(l|a ) ] ) is the assumption that all moments and the distribution itself also scale .for example , the moment of is which implies where .also , for the distribution it follows from equation ( [ eq : dev.p(l|a ) ] ) that we note that previous investigations of hack s law consider the generalization in equation ( [ eq : dev.p(l|a ) ] ) .et al . _ also examine the behavior of the moments of the distribution for real networks . here, we will go further to characterize the full distribution as well as both and . along these lines ,rigon _ et al . _ suggest that the function is a `` finite - size '' scaling function analogous to those found in statistical mechanics , i.e. : as and as .however , as we will detail below , the restrictions on can be made stronger and we will postulate a simple gaussian form . more generally , should be a unimodal distribution that is non - zero for an interval ] . again , while approximates the derivative throughout this intermediate range of hack s law , we can not claim it to be a precise value .we observe the same drifts in in other datasets and for varying window size of the running average .the results suggest that we can not assign specific hack exponents to these river networks and are therefore unable to even consider what might be an appropriate universality class .the value of obtained by regression analysis is clearly sensitive to the the range of used .furthermore , these results indicate that we should maintain healthy reservations about the exact values of other reported exponents .we turn now to deviations from hack s law at large scales . as we move beyond the intermediate region of approximate scaling , fluctuations in begin to grow rapidly .this is clear on inspection of the derivatives of hack s law in figures [ fig : dev.diffmeanhack_kansas](a ) and [ fig : dev.diffmeanhack_kansas](b ) .there are two main factors conspiring to drive these fluctuations up .the first is that the number of samples of sub - basins with area decays algebraically in .this is just the observation that as per equation ( [ eq : dev.papl ] ) .the second factor is that fluctuations in and are on the order of the parameters themselves .this follows from our generalization of hack s law which shows , for example , that the moments of grow like .thus , the standard deviation grows like the mean : .so as to understand these large scale deviations from hack s law , we need to examine network structure in depth . 
one way to dothis is by using horton - strahler stream ordering and a generalization of the well - known horton s laws .this will naturally allow us to deal with the discrete nature of a network that is most apparent at large scales .stream ordering discretizes a network into a set of stream segments ( or , equivalently , a set of nested basins ) by an iterative pruning .source streams ( i.e. , those without tributaries ) are designated as stream segments of order .these are removed from the network and the new source streams are then labelled as order stream segments .the process is repeated until a single stream segment of order is left and the basin itself is defined to be of order .natural metrics for an ordered river network are , the number of order stream segments ( or basins ) , , the average area of order basins , , the average main stream length of order basins , and , the average length of order stream segments .horton s laws state that these quantities change regularly from order to order , i.e. , where , or .note that all ratios are defined to be greater than unity since areas and lengths increase but number decreases .also , there are only two independent ratios since and .horton s laws mean that stream - order quantities change exponentially with order .for example , ( [ eq : dev.horton ] ) gives that .returning to hack s law , we examine its large scale fluctuations with the help of stream ordering .we are interested in the size of these fluctuations and also how they might correlate with the overall shape of a basin .first , we note that the structure of the network at large scales is explicitly discrete. figure [ fig : dev.al_ge10e10_mispi10 ] demonstrates this by plotting the distribution of without the usual logarithmic transformation .hack s law is seen to be composed of linear fragments .as explained above in figure [ fig : dev.hackjump ] , areas and length increase in proportion to each other along streams where no major tributaries enter .as soon as a stream does combine with a comparable one , a jump in drainage area occurs .thus , we see in figure [ fig : dev.al_ge10e10_mispi10 ] isolated linear segments which upon ending at a point begin again at , i.e. , the main stream length stays the same but the area is shifted .we consider a stream ordering version of hack s law given by the points .the scaling of these data points is equivalent to scaling in the usual hack s law .also , given horton s laws , it follows that ( using ) . along the lines of the derivative we introduced to study intermediate scale fluctuations in equation ( [ eq : hackslawlog_deriv ] ), we have here an order - based difference : we can further extend this definition to differences between non - adjacent orders : this type of difference , where , may be best thought of as a measure of trends rather than an approximate discrete derivative .using these discrete differences , we examine two features of the order - based versions of hack s law .first we consider correlations between large scale deviations within an individual basin and second , correlations between overall deviations and basin shape .for the latter , we will also consider deviations as they move back into the intermediate scale .this will help to explain the gradual deviations from scaling we have observed at intermediate scales . since deviations at large scales are reflective of only a few basins , we require an ensemble of basins to provide sufficient statistics . 
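given the stream - ordering averages introduced above , the horton ratios follow from the slopes of the corresponding log - linear plots against order ; a simple least - squares estimate is sketched below . the order - by - order values used here are hypothetical , and the paper does not commit to a particular fitting recipe , so this is only one convention .

```python
import numpy as np

def horton_ratio(per_order_values):
    """Ratio R such that the quantity changes by a factor R per unit order,
    estimated from the slope of log(value) against stream order; defined > 1."""
    order = np.arange(1, len(per_order_values) + 1)
    slope = np.polyfit(order, np.log(per_order_values), 1)[0]
    return float(np.exp(abs(slope)))

# hypothetical per-order stream-segment counts and mean basin areas
n_omega = [5500, 1250, 290, 65, 15, 3, 1]
a_omega = [1.0, 4.6, 21.0, 97.0, 450.0, 2100.0, 9700.0]
print(horton_ratio(n_omega), horton_ratio(a_omega))   # estimates of R_n and R_a
```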
as an example of such an ensemble , we take the set of order basins of the mississippi basin . for the dataset used here where the overall basin itself is of order , we have 104 order sub - basins .the horton averages for these basins are km , km , and km . for each basin, we first calculate the horton averages .we then compute , the hack difference given in equation ( [ eq : dev.hackdiff ] ) . to give a rough picture of what is observed ,figure [ fig : dev.orderhack7_mispi10_2 ] shows a scatter plot of for all order basins .note the increase in fluctuations with increasing .this increase is qualitatively consistent with the smooth versions found in the single basin examples of figures [ fig : dev.diffmeanhack_kansas](a ) and [ fig : dev.diffmeanhack_kansas](b ) . in part ,less self - averaging for larger results in a greater spread in this discrete derivative .however , as we will show , these fluctuations are also correlated with fluctuations in basin shape . inwhat follows , we extract two statistical measures of correlations between deviations in hack s law and overall basin shape .these are , the standard linear correlation coefficient and , the spearman rank - order correlation coefficient . for observations of data pairs , is defined to be where is the covariance of the s and s , and their means , and and their standard deviations .the value of spearman s is determined in the same way but for the and replaced by their ranks . from , we determine a two - sided significance via student s t - distribution .we define , a measure of basin aspect ratio , as long and narrow basins correspond to while for short and wide basins , we have we now examine the discrete derivatives of hack s law in more detail . in order to discern correlations between large scale fluctuations within individual basins ,we specifically look at the last two differences in a basin : and . for each of the mississippis 104 order basins , these values are plotted against each other in figure [ fig : dev.hackorder_correls ] .both our correlation measurements strongly suggest these differences are uncorrelated .the linear correlation coefficient is and , similarly , we have .the significance implies that the null hypothesis of uncorrelated data can not be rejected .thus , for hack s law in an individual basin , large scale fluctuations are seen to be uncorrelated .however , correlations between these fluctuations and other factors may still exist .this leads us to our second test which concerns the relationship between trends in hack s law and overall basin shape . figure [ fig : dev.orderhack7_asp5_7log_mis ] shows a comparison of the aspect ratio and for the order basins of the mississippi .the measured correlation coefficients are and , giving a significance of .furthermore , we find the differences ( , and ) and ( , and ) are individually correlated with basin shape .we observe this correlation between basin shape and trends in hack s law at large scales , namely , and , repeatedly in our other data sets . 
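both correlation measures used above , together with their two - sided significances , are available in scipy ; the sketch below applies them to hypothetical per - basin values , since the aspect ratios and hack differences of the 104 sub - basins are not reproduced in this excerpt .

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(3)
aspect = rng.lognormal(0.0, 0.4, 104)                   # hypothetical basin aspect ratios
hack_diff = 0.55 + 0.10 * np.log(aspect) + rng.normal(0.0, 0.05, aspect.size)

r_lin, p_lin = pearsonr(hack_diff, aspect)              # linear correlation coefficient
r_s, p_s = spearmanr(hack_diff, aspect)                 # spearman rank-order coefficient
print(f"pearson r = {r_lin:.2f} (p = {p_lin:.1e}), "
      f"spearman r_s = {r_s:.2f} (p = {p_s:.1e})")
```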
in some cases, correlations extend further to .since the area ratio is typically in the range 45 , hack s law is affected by boundary conditions set by the geometry of the overall basin down to sub - basins one to two orders of magnitude smaller in area than the overall basin .these deviations are present regardless of the absolute size of the overall basin .furthermore , the origin of the basin boundaries being geologic or chance or both is irrelevant large scale deviations will still occur .however , it is reasonable to suggest that particularly strong deviations are more likely the result of geologic structure rather than simple fluctuations .hack s law is a central relation in the study of river networks and branching networks in general .we have shown hack s law to have a more complicated structure than is typically given attention .the starting generalization is to consider fluctuations around scaling . using the directed , random network model , a form for the hack distribution underlying hack s lawmay be postulated and reasonable agreement with real networks is observed .questions of the validity of the distribution aside , the hack mean coefficient and the hack standard deviation coefficient should be standard measurements because they provide further points of comparison between theory and other basins . with the idealized hack distribution proposed , we may begin to understand deviations from its form . as with any scaling law pertaining to a physical system, cutoffs in scaling must exist and need to be understood . for small scales ,we have identified the presence of linear sub - basins as the source of an initial linear relation between area and stream length . at large scales , statistical fluctuations and geologic boundaries give rise to basins whose overall shape produces deviations in hack s laws .both deviations extend over a considerable range of areas as do the crossovers which link them to the region of intermediate scales , particularly the crossover from small scales . finally , by focusing in detail on a few large - scale examples networks ,we have found evidence that river networks do not belong to well defined universality classes .the relationship between basin area and stream length may be approximately , and in some cases very well , described by scaling laws but not exactly so .the gradual drift in exponents we observe suggests a more complicated picture , one where subtle correlations between basin shape and geologic features are intrinsic to river network structure .this work was supported in part by nsf grant ear-9706220 and the department of energy grant de fg02 - 99er 15004 .
this article is the first in a series of three papers investigating the detailed geometry of river networks . branching networks are a universal structure employed in the distribution and collection of material . large - scale river networks mark an important class of two - dimensional branching networks , being not only of intrinsic interest but also a pervasive natural phenomenon . in the description of river network structure , scaling laws are uniformly observed . reported values of scaling exponents vary suggesting that no unique set of scaling exponents exists . to improve this current understanding of scaling in river networks and to provide a fuller description of branching network structure , here we report a theoretical and empirical study of fluctuations about and deviations from scaling . we examine data for continent - scale river networks such as the mississippi and the amazon and draw inspiration from a simple model of directed , random networks . we center our investigations on the scaling of the length of sub - basin s dominant stream with its area , a characterization of basin shape known as hack s law . we generalize this relationship to a joint probability density and provide observations and explanations of deviations from scaling . we show that fluctuations about scaling are substantial and grow with system size . we find strong deviations from scaling at small scales which can be explained by the existence of linear network structure . at intermediate scales , we find slow drifts in exponent values indicating that scaling is only approximately obeyed and that universality remains indeterminate . at large scales , we observe a breakdown in scaling due to decreasing sample space and correlations with overall basin shape . the extent of approximate scaling is significantly restricted by these deviations and will not be improved by increases in network resolution .
in each multicellular organism a single cell proliferates to produce and maintain tissues comprised of large populations of differentiated cell types .the number of cell divisions in the lineage leading to a given somatic cell governs the pace at which mutations accumulate .the resulting somatic mutational load determines the rate at which unwanted evolutionary processes , such as cancer development , proceed . in order to produce differentiated cells from a single precursor cellthe theoretical minimum number of cell divisions required along the longest lineage is . to achieve this theoretical minimum, cells must divide strictly along a perfect binary tree of height ( fig . [ fig1]a ) . in multicellular organismssuch differentiation typically takes place early in development .it is responsible for producing the cells of non - renewing tissues ( e.g. , primary oocytes in the female germ line ) and the initial population of stem cells in self - renewing tissues ( e.g. , hematopoietic stem cells or the spermatogonia of the male germ line ) . in self - renewing tissues , which require a continuous supply of cells ,divisions along a perfect binary tree are unfeasible . strictly followinga perfect binary tree throughout the lifetime of the organism would require extraordinarily elaborate scheduling of individual cell divisions to ensure tissue homeostasis , and would be singularly prone to errors ( e.g. , the loss of any single cell would lead to the loss of an entire branch of the binary tree ) . instead , to compensate for the continuous loss of cells , mechanisms have evolved to replenish the cell pool throughout the organism s lifetime . in most multicellular organismshierarchically organized tissue structures are utilized . at the root of the hierarchyare a few tissue - specific stem cells defined by two properties : self - replication and the potential for differentiation . during cell proliferation cellscan differentiate and become increasingly specialized toward performing specific functions within the hierarchy , while at the same time losing their stem cell - like properties ( fig .[ fig1]b ) .a classic example is the hematopoietic system , but other tissues such as skin or colon are also known to be hierarchically organized .identifying each level of the hierarchy , however , can be difficult , especially if the cells at different levels are only distinguished by their environment , such as their position in the tissue ( e.g. , the location of the transit - amplifying cells along intestinal crypts ) . as a result ,information on the details of differentiation hierarchies is incomplete .mature cells from a single precursor with a minimum number of cell divisions , , strict division along a perfect binary tree is necessary . in multicellular organismssuch `` non - renewable '' differentiation typically takes place early in development .b ) however , in self - renewing tissues , where homeostasis requires a continuous supply of cells , a small population of self - replicating tissue - specific stem cells sustain a hierarchy of progressively differentiated and larger populations of cell types , with cells of each type being continuously present in the tissue . 
nonetheless , in a recent paper , tomasetti and vogelstein gathered available information from the literature and investigated the determinants of cancer risk among tumors of different tissues . examining cancers of 31 different tissues , they found that the lifetime risk of cancers of different types is strongly correlated with the total number of divisions of the normal self - replicating cells . their conclusion that the majority of cancer risk is attributable to bad luck arguably results from a misinterpretation of the correlation between the logarithms of two quantities . however , regardless of the interpretation of the correlation , the data display a striking tendency : the dependence of cancer incidence on the number of stem cell divisions is sub - linear , i.e. , a 100 - fold increase in the number of divisions only results in a 10 - fold increase in incidence . this indicates that tissues with a larger number of stem cell divisions ( typically larger ones with rapid turnover , e.g. , the colon ) are relatively less prone to develop cancer . this is analogous to the roughly constant cancer incidence across animals with vastly different sizes and life - spans ( peto s paradox ) , which implies that large animals ( e.g. , elephants ) possess mechanisms to mitigate their risk relative to smaller ones ( e.g. , mice ) . what are the tissue - specific mechanisms that explain the differential propensity to develop cancer ? it is clear that stem cells that sustain hierarchies of progressively differentiated cells are well positioned to provide a safe harbor for genomic information . qualitative arguments suggesting that hierarchically organized tissues may be optimal in reducing the accumulation of somatic mutations go back several decades . as mutations provide the fuel for somatic evolution ( including not only the development of cancer , but also tissue degeneration , aging , germ line deterioration , etc . ) it is becoming widely accepted that tissues have evolved to minimize the accumulation of somatic mutations during the lifetime of an individual . the potential of hierarchical tissues to limit somatic mutational load simply by reducing the number of cell divisions along cell lineages , however , has not been explored in a mathematically rigorous way . here , we discuss this most fundamental mechanism by which hierarchical tissue organization can curtail the accumulation of somatic mutations . we derive simple and general analytical properties of the divisional load of a tissue , which is defined as the number of divisions its constituent cells have undergone along the longest cell lineages , and is expected to be proportional to the mutational load of the tissue . models conceptually similar to ours have a long history , going back to loeffler and wichman s work on modeling hematopoietic stem cell proliferation , and several qualitative arguments have been made suggesting why hierarchically organized tissues may be optimal in minimizing somatic evolution . in a seminal contribution nowak et al . showed that tissue architecture can contribute to the protection against the accumulation of somatic mutations . they demonstrated that the rate of somatic evolution will be reduced in any tissue where geometric arrangement or cellular differentiation induces structural asymmetries such that mutations that do not occur in stem cells tend to be washed out of the cell population , slowing down the rate of fixation of mutations . here , we begin where nowak et al .
left off : aside of structural asymmetry , we consider a second and equally important aspect of differentiation , the dynamical asymmetry of tissues , i.e. , the uneven distribution of divisional rates across the differentiation hierarchy .more recently a series of studies have investigated the dynamics of mutations in hierarchical tissues with dynamical asymmetry and found that hierarchical tissue organization can ( i ) suppress single as well as multiple mutations that arise in progenitor cells , and ( ii ) slow down the rate of somatic evolution towards cancer if selection on mutations with non - neutral phenotypic effects is also taken into account .the epistatic interactions between individual driver mutations are , however , often unclear and show large variation among cancer types .the fact that the majority of cancers arise without a histologically discernible premalignant phase indicates strong cooperation between driver mutations , suggesting that major histological changes may not take place until the full repertoire of mutations is acquired .for this reason , here we do not consider selection between cells , but rather , focus only on the pace of the accumulation of somatic mutations in tissues , which provide the fuel for somatic evolution .the uneven distribution of divisional rates considered by werner et al. followed a power law , however , this distribution was taken for granted without prior justification .their focus was instead on `` reproductive capacity '' , an attribute of a single cell corresponding to the number of its descendants , which is conceptually unrelated to our newly introduced `` divisional load '' , which characterizes the number of cell divisions along the longest cell lineages of the tissue .here we show mathematically , to the best of our knowledge for the first time , that the minimization of the divisional load in hierarchical differentiation indeed leads to power law distributed differentiation rates .more generally , evolutionary thinking is becoming an indispensable tool to understand cancer , and even to propose directions in the search for treatment strategies .models that integrate information on tissue organization have not only provided novel insight into cancer as an evolutionary process , but have also produced direct predictions for improved treatment .the simple and intuitive relations that we derive below have the potential to further this field of research by providing quantitative grounds for the deep connection between organization principles of tissues and disease prevention and treatment . 
according to our results ,the lifetime divisional load of a hierarchically organized tissue is independent of the details of the cell differentiation processes .we show that in self - renewing tissues hierarchical organization provides a robust and nearly ideal mechanism to limit the divisional load of tissues and , as a result , minimize the accumulation of somatic mutations that fuel somatic evolution and can lead to cancer .we argue that hierarchies are how the tissues of multicellular organisms keep the accumulation of mutations in check , and that populations of cells currently believed to correspond to tissue - specific stem cells may in general constitute a diverse set of slower dividing cell types .most importantly , we find that the theoretical minimum number of cell divisions can be very closely approached : as long as a sufficient number of progressively slower dividing cell types towards the root of the hierarchy are present , optimal self - sustaining differentiation hierarchies can produce terminally differentiated cells during the course of an organism s lifetime from a single precursor with no more than cell divisions along any lineage .intermediate levels of partially differentiated cells .b ) five microscopic events can occur with a cell : ( i ) symmetric cell division with differentiation , ( ii ) asymmetric cell division , ( iii ) symmetric cell division without differentiation , ( iv ) single cell differentiation , and ( v ) cell death . to the right of each type of event present in optimal hierarchies we give the corresponding per cell rate that is used to derive eq .[ dotd ] . ] to quantify how many times the cells of self - renewing tissues undergo cell divisions during tissue development and maintenance , we consider a minimal generic model of hierarchically organized , self - sustaining tissue .according to the model , cells are organized into hierarchical levels based on their differentiation state .the bottom level ( level ) corresponds to tissue - specific stem cells , higher levels represent progressively differentiated progenitor cells , and the top level ( level ) is comprised of terminally differentiated cells ( fig .[ fig2]a ) .the number of cells at level in fully developed tissue under normal homeostatic conditions is denoted by . during homeostasis cells at levels can differentiate ( i.e. , produce cells for level ) at a rate , and have the potential for self - replication . at the topmost level of the hierarchyterminally differentiated cells can no longer divide and are expended at the same rate that they are produced from the level below .the differentiation rates are defined as the total number of differentiated cells produced by the cells of level per unit time .the differentiation rate of a single cell is , thus . in principlefive microscopic events can occur with a cell : ( i ) symmetric cell division with differentiation , ( ii ) asymmetric cell division , ( iii ) symmetric cell division without differentiation , ( iv ) single cell differentiation , and ( v ) cell death ( fig . 
[ fig2]b ) .our goal is to determine the optimal tissue organization and dynamics that minimize the number of cell divisions that the cells undergo until they become terminally differentiated .for this reason cell death , except for the continuous expenditure of terminally differentiated cells , is disallowed as it can only increase the number of divisions .we note , however , that cell death with a rate proportional to that of cell divisions would simply result in a proportionally increased divisional load and , thus , would have no effect on the optimum .similarly , we also disregard single cell differentiation , because if it is rare enough ( i.e. , its rate is smaller than the asymmetric cell division rate plus twice the rate of symmetric cell division without differentiation ) then it can be absorbed in cell divisions with differentiation ; otherwise it would merely delegate the replication burden down the hierarchy towards the less differentiated and supposedly less frequently dividing cells , and would be sub - optimal .two of the remaining three microscopic events involve differentiation .if we denote the fraction of differentiation events that occur via symmetric cell division at level by , then the rate of symmetric cell division at level can be written as ( the division by accounts for the two daughter cells produced by a single division ) , while the rate of asymmetric cell division is .symmetric cell division with differentiation leaves an empty site at level , which will be replenished either ( i ) by differentiation from the level below or ( ii ) by division on the same level .assuming the first case and denoting the fraction of replenishment events that occur by differentiation from the level below by , the combined rate of the contributing processes ( asymmetric cell division and symmetric cell division with differentiation from the level below ) can be written as . by definitionthis is equal to , the differentiation rate from level , leading to the recursion relation alternatively , if replenishment occurs by cell division on the same level , i.e. , as a result of symmetric cell division without differentiation , the corresponding rate is . to keep track of how cell divisions accumulate along cell lineages during tissue renewal , we introduce the divisional load for each level separately defined as the average number of divisions that cells at level have undergone by time since the stem cell level was created at time zero . using the rates of the microscopic events ( also shown in fig .[ fig2]b ) , considering that each division increases the accumulated number of divisions of both daughter cells by one , and taking into account the divisional loads that the departure of cells take and the arrival of cells bring , the following mean - field differential equation system can be formulated for the time evolution of the total divisional load ( ) of levels of a fully developed tissue : \ , .\label{dotd}\end{aligned}\ ] ] because stem cells can not be replenished from below we have . 
the terminal level can be included in the system of equations by specifying and formally defining .the above equations are valid when each level contains the prescribed number of cells of a fully developed , homeostatic tissue and , therefore , do not directly describe the initial development of the tissue from the original stem cells .this shortcoming can , however , be remedied by introducing virtual cells that at the initial moment ( ) fill up all levels .as the virtual cells gradually differentiate to higher levels of the hierarchy , they are replaced by the descendants of the stem cells .tissue development is completed when the non - virtual descendants of the initial stem cell population fill the terminally differentiated level for the first time , expelling all virtual cells . using this approach the initial development of the tissueis assumed to follow the same dynamics as the self - renewal of the fully developed tissue . even though cell divisions in a developing tissue might occur at an elevated pace , such differences in the overall pace of cell divisions ( along with any temporal variation in the tissue dynamics ) are irrelevant ,as long as only the relation between the number of cell divisions and the number of cells generated are concerned . using the recursion relation the above differential equation system simplifies to revealing that the average number of cell divisions is independent of both the fraction of symmetric division in differentiation , and the fraction of differentiation in replenishment . from any initial condition converges to the asymptotic solution which shows that the divisional load of the entire tissue grows linearly according to the differentiation rate of the stem cells ( ) , and the progenitor cells at higher levels of the hierarchy have an additional load ( ) representing the number of divisions having led to their differentiation . by definition ,the additional load of the stem cells ( ) is zero .the convergence involves a sum of exponentially decaying terms , among which the slowest one is characterized by the time scale which can be interpreted as the transient time needed for the cells at level to reach their asymptotic behavior . can also be considered as the transient time required for the initial development of the tissue up to level .the rationale behind this is that during development the levels of the hierarchy become populated by the descendants of the stem cells roughly sequentially , and the initial population of level takes about time after level has become almost fully populated .plugging the asymptotic form of into the system of differential equations and prescribing , the constants can be determined , and expressed as where we have introduced the ratios between any two subsequent differentiation rates .the asymptotic solution then becomes this simple formula , which describes the accumulation of the divisional load along the levels of a hierarchically organized tissue , is one of our main results .the number of mutations that a tissue allows for its constituent cells to accumulate can be best characterized by the expected number of mutations accumulated along the longest cell lineages . 
on average, the longest lineage corresponds to the last terminally differentiated cell that is produced by the tissue at the end of the lifetime of the organism .therefore , as the single most important characteristics of a hierarchically organized tissue , we define its lifetime divisional load , , as the divisional load of its last terminally differentiated cell .if the total number of terminally differentiated cells produced by the tissue during the natural lifetime of the organism per stem cell is denoted by , then the lifetime of the organism can be expressed as , where the first term is the development time of the tissue up to level , and the second term is the time necessary to generate all the terminally differentiated cells by level at a rate of . because the last terminally differentiated cell is the result of a cell division at level , its expected divisional load , , is the average divisional load of level increased by : note that the complicated term drops out of the formula .a remarkable property of is that it depends only on two structural and two dynamical parameters of the tissue .the two structural parameters are the total number of the terminally differentiated cells produced by the tissue per stem cell , , and the number of the hierarchical levels , .the two dynamical parameters are the product and sum of the ratios of the differentiation rates , .the lifetime divisional load neither depends on most of the microscopic parameters of the cellular processes , nor on the number of cells at the differentiation levels . for fixed and ratios of the differentiation rates that minimize the lifetime divisional load can be easily determined by setting the derivatives of with respect to the ratios to zero , resulting in this expression shows that is identical for all intermediate levels ( ) and , therefore , can be denoted by without a subscript .this uniform ratio can then be expressed as as long as the condition holds , i.e. , when . for , however , the ratio has to take the value of plugging into eq .( [ d ] ) results in for and for .( [ ds ] ) is a monotonically decreasing function of , while eq .( [ dss ] ) has a minimum at levels .this together with the ratio represent the optimal tissue - structure in the sense that it minimizes the lifetime divisional load of a self - renewing tissue , yielding note that under this optimal condition the divisional rate of the stem cell level is very low : in a mature tissue ( i.e. , after the tissue has developed ) the expected number of divisions of a stem cell , which is equivalent to the expected number of differentiation to level per stem cell is only . ) show the lower limit of the lifetime divisional load of a tissue , , as a function of the number of hierarchical levels , , for and , respectively .the theoretical minimum , , achievable by a series of divisions along a perfect binary tree characteristic of non - renewing tissues , is displayed with a dashed line .here we have assumed roughly corresponding to the number of cells shed by a few square millimeters of human skin that is sustained by a single stem cell . ]remarkably , corresponds to less than two cell divisions in addition to the theoretical minimum of , achievable by a series of divisions along a perfect binary tree characteristic of non - renewing tissues . 
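the closed - form expressions referred to above ( eqs . [ ds ] and [ dss ] ) are not reproduced here , but the statement that the optimum lies within two divisions of the binary - tree minimum can be illustrated numerically . the sketch below compares three quantities for an illustrative number of terminally differentiated cells : the binary - tree minimum , the ` minimum plus two ' bound quoted above for an optimal hierarchy with sufficiently many levels , and , purely as a contrast of our own construction ( not a formula from this work ) , the load of a hypothetical single - compartment tissue in which one stem cell replaces every shed terminal cell by an asymmetric division .

```python
import math

def binary_tree_minimum(n_terminal: float) -> float:
    # theoretical minimum number of divisions along the longest lineage (perfect binary tree)
    return math.log2(n_terminal)

def near_optimal_hierarchy_bound(n_terminal: float) -> float:
    # bound quoted in the text for an optimal self-renewing hierarchy with
    # sufficiently many levels: within two divisions of the binary-tree minimum
    return math.log2(n_terminal) + 2.0

def single_compartment_load(n_terminal: float) -> float:
    # illustrative contrast (an assumption for comparison, not a result of the paper):
    # one stem cell replacing every shed terminal cell by an asymmetric division
    # accumulates about one division on its own lineage per terminal cell produced
    return n_terminal

N = 3e7  # illustrative number of terminal cells produced per stem cell over a lifetime
print(f"binary-tree minimum      : {binary_tree_minimum(N):.1f}")
print(f"optimal hierarchy (bound): {near_optimal_hierarchy_bound(N):.1f}")
print(f"single compartment       : {single_compartment_load(N):.0f}")
```

the point of the contrast is only that the hierarchical bound grows logarithmically with the number of terminal cells , whereas a naive single - compartment strategy grows linearly .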
in other words , in terms of minimizing the number of necessary cell divisions along cell lineages , a self - renewing hierarchical tissue can be almost as effective as a non - renewing one .consequently , hierarchical tissue organization with a sufficient number of hierarchical levels provides a highly adaptable and practically ideal mechanism not only for ensuring self - renewability but also keeping the number of cell divisions near the theoretical absolute minimum .an important result of our mathematical analysis is that it provides a simple and mathematically rigorous formula ( eqs .[ ds ] and [ dss ] , and fig .[ fig3 ] ) for the lower limit of the lifetime divisional load of a tissue for a given number of hierarchical levels and a given number of terminally differentiated cells descending from a single stem cell .this lower limit can be reached only with a power law distribution of the differentiation rates ( i.e. , with a uniform ratio between the differentiation rates of any two successive differentiation levels ) , justifying the assumptions of the models by werner et al. . in the optimal scenario , where , the recursion relation imposes , thereby , all cell divisions must be symmetric and involve differentiation .this is a shared feature with non - renewable differentiation , which is the underlying reason , why the number of cell divisions of the optimal self - renewing mechanism can closely approach the theoretical minimum . as a salient example of self - renewing tissues ,let us consider the human skin .clonal patches of skin are of the order of square millimeters in size , the top layer of skin , which is renewed daily , is composed of approximately a thousand cells per square millimeter .if we assume that a mm patch is maintained by a single stem cell for years , this corresponds to about cells . as fig .[ fig3 ] demonstrates , the vs. 
curve becomes very flat for large values of , indicating that in a real tissue the number of hierarchical levels can be reduced by at least a factor of from the optimal value , without significantly compromising the number of necessary cell divisions along the cell lineages .it is a question how the total number of terminally differentiated cells ( ) produced by the tissue during the natural lifetime of the organism can be best partitioned into the number of tissue - specific stem cells ( ) and the number of terminally differentiated cells per stem cell ( ) .the initial generation of the stem cells along a binary tree requires divisions .the production of the terminally differentiated cells in a near - optimal hierarchy requires about divisions .their sum , which is about , depends only on the total number of terminally differentiated cells , irrespective of the number of stem cells .this means , that the minimization of the divisional load poses no constraint on the number of stem cells .however , since both maintaining a larger number of differentiation levels and keeping the differentiation hierarchy closer to optimum involve more complicated regulation , we suspect that a relatively large stem cell pool is beneficial , especially as a larger stem cell population can also be expected to be more robust against stochastic extinction , population oscillation , and injury .in general , how closely the hierarchical organization of different tissues in different organisms approaches the optimum described above depends on ( i ) the strength of natural selection against unwanted somatic evolution , which is expected to be much stronger in larger and longer lived animals ; and ( ii ) intrinsic physiological constraints on the complexity of tissue organization and potential lower limits on stem cell division rate .neither the strength of selection nor the physiological constraints on tissue organization are known at present . however , in the case of the germ line mutation rate , which is proportional to the number of cell divisions in lineages leading to the gametes , current evidence indicates that physiological constraints are not limiting . across species , differences in effective population size , which is in generalnegatively correlated with body size and longevity , indicate the effectiveness of selection relative to drift . as a result, differences in effective population size between species determine the effectiveness of selection in spreading of favorable mutations and eliminating deleterious ones and , as such , can be used as indicator of the efficiency of selection .this implies that , in contrast to somatic tissues , we expect germ line differentiation hierarchies to be more optimal for smaller animals with shorter life spans as a result of their increased effective population sizes . for species forwhich information is available , the number of levels across species indeed follows an increasing trend as a function of the effective population size , ranging from in humans with relatively small effective population size of approximately and correspondingly less efficient selection , in macaque with intermediate effective population size of the order of , and in mice with the largest effective population size of approximately .a qualitative examination of fig . [ fig3 ] suggests that a similar number of levels , of the order of may be present in most somatic tissues , because the vs. 
curve becomes progressively flatter after it reaches around twice the optimal value of at , and the reduction in the divisional load becomes smaller and smaller as additional levels are added to the hierarchy and other factors are expected to limit further increase in .alternatively , if we consider for example the human hematopoietic system , where approximately hematopoietic stem cells ( hscs ) produce a daily supply of blood cells , we can calculate that over years each stem cell produces a total of terminally differentiated cells . for this larger value of vs. curve reaches twice the optimal value of at after which , similarly to fig .[ fig3 ] , it becomes progressively flatter and the reduction in divisional load diminishes as additional levels are added .this rough estimate of levels is consistent with explicit mathematical models of human hematopoiesis that predict between and levels .active or short term hscs ( st - hscs ) are estimated to differentiate about once a year , whereas a quiescent population of hscs that provides cells to the active population is expected to be characterized by an even lower rate of differentiation .this is in good agreement with our prediction about the existence of a heterogeneous stem cell pool , a fraction of which consists of quiescent cells that only undergo a very limited number of cell cycles during the lifetime of the organism . indeed , recently busch et al. found that adult hematopoiesis in mice is largely sustained by previously designated st - hscs that nearly fully self - renew , and receive rare but polyclonal hsc input .mouse hscs were found to differentiate into st - hscs only about three times per year . for most somatic tissues the differentiation hierarchies that underpin the development of most cellular compartments remain inadequately resolved , the identity of stem and progenitor cells remains uncertain , and quantitative information on their proliferation rates is limited .however , synthesis of available information on tissue organization by tomasetti and vogelstein , as detailed above , suggests that larger tissues with rapid turnover ( e.g. , colon and blood ) are relatively less prone to develop cancer .this phenomenon , as noted in the introduction , can be interpreted as peto s paradox across tissues with the implication that larger tissues with rapid turnover rates have hierarchies with more levels and stem cells that divide at a slower pace .accumulating evidence from lineage - tracing experiments is also consistent with a relatively large number of hierarchical levels .populations of stem cells in blood , skin , and the colon have begun to be resolved as combinations of cells that are long - lived yet constantly cycling , and emerging evidence indicates that both quiescent and active cell subpopulations may coexist in several tissues , in separate yet adjoining locations .lineage - tracing techniques are rapidly developing , and may be used for directly testing the predictions of our mathematical model about the highly inhomogeneous distributions of the differentiation rates in the near future . 
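returning to the hematopoietic estimate above , a back - of - the - envelope version can be written down ; all input numbers below are illustrative assumptions rather than the values used in the text .

```python
import math

# all inputs are illustrative assumptions, not values taken from the text
n_stem_cells = 1e4      # hypothetical number of active hematopoietic stem cells
daily_output = 1e11     # hypothetical daily production of terminally differentiated blood cells
years = 80              # hypothetical lifetime of the organism

terminal_per_stem = daily_output * 365 * years / n_stem_cells
print(f"terminal cells produced per stem cell over a lifetime: {terminal_per_stem:.2e}")
print(f"binary-tree minimum divisions for that output        : {math.log2(terminal_per_stem):.1f}")
```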
in the context of estimates of the number of stem cells in different tissues that underlie tomasetti and vogelstein s results ,the potential existence of such unresolved hierarchical levels suggests the possibility that the number of levels of the hierarchy are systematically underestimated and , correspondingly , that the number of stem cells at the base of these hierarchies are systematically overestimated .independent of the details of the hierarchy the dynamics of how divisional load accumulates in time is described by two phases : ( i ) a transient development phase during which each level of the hierarchy is filled up and ( ii ) a stationary phase during which homeostasis is maintained in mature tissue .the dynamic details and the divisional load incurred during the initial development phase depend on the details of the hierarchy ( cf.eqs .( [ dk ] ) and ( [ tau ] ) ) .in contrast , in the stationary phase , further accumulation of the mutational load is determined solely by the rate at which tissue - specific stem cells differentiate at the bottommost level of the hierarchy .such biphasic behavior has been observed in the accumulation of mutations both in somatic and germ line cells . in both casesa substantial number of mutations were found to occur relatively rapidly during development followed by a slower linear accumulation of mutation thereafter .general theoretical arguments imply that the contribution of the mutational load incurred during development to cancer risk is substantial , but this has been suggested to be in conflict with the fact that the majority of cancers develop late in life . resolving this question andmore generally understanding the development of cancer in self - renewing tissues will require modeling the evolutionary dynamics of how the hierarchical organization of healthy tissues breaks down . spontaneously occurring mutations accumulate in somatic cells throughout a person s lifetime , but the majority of these mutations do not have a noticeable effect . a small minority , however , can alter key cellular functions and a fraction of these confer a selective advantage to the cell , leading to preferential growth or survival of a clone .hierarchical tissue organization can limit somatic evolution at both these levels : ( i ) at the level of mutations , as we demonstrated above , it can dramatically reduce the number of cell divisions required and correspondingly the mutational load incurred during tissue homeostasis ; and ( ii ) at the level of selection acting on mutations with non - neutral phenotypic effects , as demonstrated by nowak et al . and later by pepper et al . , tissues organized into serial differentiation experience lower rates of such detrimental cell - level phenotypic evolution . extending the seminal results of nowak et al . and pepper et al ., we propose that in addition to limiting somatic evolution at the phenotypic level , hierarchies are also how the tissues of multicellular organisms keep the accumulation of mutations in check , and that tissue - specific stem cells may in general correspond to a diverse set of slower dividing cell types . in summary , we have considered a generic model of hierarchically organized self - renewing tissue , in the context of which we have derived universal properties of the divisional load during tissue homeostasis . 
in particular ,our results provide a lower bound for the lifetime divisional load of a tissue as a function of the number of its hierarchical levels .our simple analytical description provides a quantitative understanding of how hierarchical tissue organization can limit unwanted somatic evolution , including cancer development . surprisingly , we find that the theoretical minimum number of cell divisions can be closely approached ( cf .[ fig3 ] , where the theoretical minimum corresponds to the dashed horizontal line ) , demonstrating that hierarchical tissue organization provides a robust and nearly ideal mechanism to limit the divisional load of tissues and , as a result , minimize somatic evolution .this work was supported by the hungarian science foundation ( grant k101436 ) .the authors would like to acknowledge the comments of anonymous reviewers on a previous version of the manuscript , as well as discussion with and comments from bastien boussau , mrton demeter , mte kiss , and dniel grajzel .no data was generated as part of this study .the authors declare no conflict of interest .i.d . and sz.g .designed the study , carried out research , and wrote the paper .katrin busch , kay klapproth , melania barile , michael flossdorf , tim holland - letz , susan m schlenner , michael reth , thomas hfer , and hans - reimer rodewald .fundamental properties of unperturbed haematopoiesis from stem cells in vivo . , 518(7540):5426 ,feb 2015 .benjamin werner , fabian beier , sebastian hummel , stefan balabanov , lisa lassay , thorsten orlikowsky , david dingli , tim h brmmendorf , and arne traulsen . reconstructing the in vivo dynamics of hematopoietic stem cells from telomere length distributions ., 4:e08687 , 2015 . nick barker , johan h van es , jeroen kuipers , pekka kujala , maaike van den born , miranda cozijnsen , andrea haegebarth , jeroen korving , harry begthel , peter j peters , and hans clevers .identification of stem cells in small intestine and colon by marker gene lgr5 ., 449(7165):10037 , oct 2007 .louis vermeulen , edward morrissey , maartje van der heijden , anna m nicholson , andrea sottoriva , simon buczacki , richard kemp , simon tavar , and douglas j winton .defining stem cell dynamics in models of intestinal tumor initiation ., 342(6161):9958 , nov 2013 .hitoshi takizawa , roland r regoes , chandra s boddupalli , sebastian bonhoeffer , and markus g manz .dynamic variation in cycling of hematopoietic stem cells in steady state and inflammation ., 208(2):27384 , feb 2011 .min tang , rui zhao , helgi van de velde , jennifer g tross , constantine mitsiades , suzanne viselli , rachel neuwirth , dixie - lee esseltine , kenneth anderson , irene m ghobrial , jess f san miguel , paul g richardson , michael h tomasson , and franziska michor .myeloma cell dynamics in response to treatment supports a model of hierarchical differentiation and clonal evolution ., 22(16):420614 , aug 2016 .benjamin werner , jacob g scott , andrea sottoriva , alexander r a anderson , arne traulsen , and philipp m altrock .the cancer stem cell fraction in hierarchically organized tumors can be estimated using mathematical modeling and patient - specific treatment trajectories ., 76(7):170513 , apr 2016 .iigo martincorena , amit roshan , moritz gerstung , peter ellis , peter van loo , stuart mclaren , david c wedge , anthony fullam , ludmil b alexandrov , jose m tubio , lucy stebbings , andrew menzies , sara widaa , michael r stratton , philip h jones , and peter j campbell. tumor evolution . 
high burden and pervasive positive selection of somatic mutations in normal human skin ., 348(6237):8806 , may 2015 .benoit nabholz , nicole uwimana , and nicolas lartillot . reconstructing the phylogenetic history of long - term effective population size and life - history traits using patterns of amino acid replacement in mitochondrial genomes of mammals and birds . , 5(7):127390 , 2013 .augustine kong , michael l frigge , gisli masson , soren besenbacher , patrick sulem , gisli magnusson , sigurjon a gudjonsson , asgeir sigurdsson , aslaug jonasdottir , adalbjorg jonasdottir , wendy s w wong , gunnar sigurdsson , g bragi walters , stacy steinberg , hannes helgason , gudmar thorleifsson , daniel f gudbjartsson , agnar helgason , olafur th magnusson , unnur thorsteinsdottir , and kari stefansson . rate of de novo mutations and the importance of father s age to disease risk . , 488(7412):4715 , aug 2012 .raheleh rahbari , arthur wuster , sarah j lindsay , robert j hardwick , ludmil b alexandrov , saeed al turki , anna dominiczak , andrew morris , david porteous , blair smith , michael r stratton , uk10k consortium , and matthew e hurles .timing , rates and spectra of human germline mutation ., 48(2):12633 , feb 2016 .
* abstract * how can tissues generate large numbers of cells , yet keep the divisional load ( the number of divisions along cell lineages ) low in order to curtail the accumulation of somatic mutations and reduce the risk of cancer ? to answer the question we consider a general model of hierarchically organized self - renewing tissues and show that the lifetime divisional load of such a tissue is independent of the details of the cell differentiation processes , and depends only on two structural and two dynamical parameters . our results demonstrate that a strict analytical relationship exists between two seemingly disparate characteristics of self - renewing tissues : divisional load and tissue organization . most remarkably , we find that a sufficient number of progressively slower dividing cell types can be almost as efficient in minimizing the divisional load , as non - renewing tissues . we argue that one of the main functions of tissue - specific stem cells and differentiation hierarchies is the prevention of cancer .
multilayer neural networks have achieved state - of - the - art performances in image recognition , speech recognition , and even natural language processing .this impressive success is based on a simple powerful stochastic gradient descent ( sgd ) algorithm , and its variants .this algorithm estimates gradients of an error function based on mini - batches of an entire dataset .gradient noises caused by mini - batches help exploration of parameter space to some extent .the parameter space is highly non - convex for a typical deep network training , and finding a good path for sgd to improve generalization ability of deep neural networks is thus challenging .as found in standard spin glass models of neural networks , a non - convex error surface should be accompanied by exponentially many local minima , which hides the ( isolated ) global minima and thus makes any local search algorithms easily get trapped . in addition , the error surface structure of deep networks might behave similarly to random gaussian error surface , which demonstrates that critical points ( defined as zero - gradient points ) of high error have a large number of negative eigenvalues of the corresponding hessian matrix .consistent with this theoretical study , empirical studies on deep network training showed that sgd is slowed down by a proliferation of saddle points with many negative curvatures and even plateaus ( eigenvalues close to zero in many directions ) .the prevalence of saddle points poses an obstacle to attain better generalization properties for a deep network , especially for sgd based on first - order optimization , while computational complexity of second - order optimization relying on hessian - vector products does not scale well in training large deep networks .many heuristic strategies were proposed to overcome the difficulty sgd encounters .for example , adding noise to gradients corresponds to randomly perturbing the -spin interaction spherical glass model by adding an external magnetic field .regularization techniques such as dropout can also be explained in this framework , although it relies on unrealistic assumptions from a practical deep network perspective ( e.g. , input independence of active paths from input to output ) .another strategy is the use of local entropy to bias sgd towards flat regions on the error surface , where a low test error is reached .the deep network shaped by the parameters in the flatter regions is less prone to over - fitting . in this paper , we show another heuristic strategy to overcome the plateaus obstacle for sgd learning .we call this strategy reinforced backpropagation ( r - backprop ) , which provides a new effective strategy to use the gradient information , i.e. , not only the current gradient information but also the previous gradient information during training are used to update model parameters , with the property that the previous gradient information is used with a reinforcement probability that grows with the number of iterations. 
the growth of the reinforcement probability is characterized by two different time scales : one is at the mini - batch level , and the other is at the epoch level , which we shall describe in detail in the following sections .the excellent performance of r - backprop is verified first on training a toy fully - connected deep network model to learn a simple non - linear mapping generated by a two - layer feedforward network , and then on a benchmark handwritten digits dataset , in comparison to both standard backpropagation ( backprop ) and state - of - the - art algorithm adam .we consider a toy deep network model with layers of fully - connected feedforward architecture .each layer has neurons ( so - called width of layer ) .we define the input as -dimensional vector , and the weight matrix specifies the symmetric connections between layer and layer .the same connections are used to backpropagate the error during training .a bias parameter can also be incorporated into the weight matrix by assuming an additional constant input .the output at the final layer is expressed as : where is an element - wise sigmoid function for neurons at layer , defined as .the network is trained to learn the target mapping generated randomly as , where the input is generated from a standard normal distribution with zero mean and unit variance , and the target label is generated according to the non - linear mapping , in which each entry of the data - generating matrix follows independently a standard normal distribution as well .the deep network is trained to learn this non - linear mapping from a set of examples .we generate a total of examples , in which the first examples are used for training and the last examples are used for testing to evaluate the generalization ability of the learned model .in simulations , we use deep network architecture of layers to learn the target non - linear mapping , in which the network is thus specified by --- , with indicating the dimension of the input data and the dimension of the output .we first introduce the standard backprop for training the deep network defined in sec .we use quadratic loss function defined as , where denotes a vector ( matrix ) transpose operation , and defines the difference between the target and actual outputs as . to backpropagate the error, we also define two associated quantities : one is the state of neurons at -th layer defined by ( e.g. , , ) , and the other is the weighted input to neurons at -th layer defined by .accordingly , we define two related gradient vectors : [ grad ] which will be used to derive the propagation equation based on the chain rule . it is straightforward to derive , where indicates the element - wise multiplication , and is the derivative of the non - linear transfer function with respect to its argument . by applying the chain rule , we obtain the weight update equation for the top layer as where is the learning rate , and the remaining part is the gradient information , which indicates how a small perturbation to the weight affects the change of the error computed at the top ( output ) layer . to update the weight parameters at lower layers , we first derive the propagating equations for gradient vectors as follows : [ backprop ] where . using the above backpropagation equation ,the weight at lower layers is updated as : where .the neuron state used to update the weight parameters comes from a forward pass from the input vector to the output vector at the top layer . 
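the forward and backward passes just described can be summarized in a short sketch ; the layer widths , learning rate and random data below are illustrative and are not the settings used in this paper .

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, x):
    # forward pass: returns the activations and the weighted inputs of every layer
    activations, pre_activations = [x], []
    for w in weights:
        z = w @ activations[-1]
        pre_activations.append(z)
        activations.append(sigmoid(z))
    return activations, pre_activations

def backprop(weights, x, t):
    # gradients of the quadratic loss 0.5 * ||y - t||^2 with respect to every weight matrix
    activations, _ = forward(weights, x)
    y = activations[-1]
    delta = (y - t) * y * (1.0 - y)                  # error signal at the top layer
    grads = [None] * len(weights)
    for l in reversed(range(len(weights))):
        grads[l] = np.outer(delta, activations[l])   # gradient of the weights feeding layer l+1
        if l > 0:
            s = activations[l]
            delta = (weights[l].T @ delta) * s * (1.0 - s)   # backpropagate the error downwards
    return grads

# illustrative layer widths and learning rate, not the paper's setting
sizes, eta = [100, 50, 30, 10], 0.5
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
x, t = rng.standard_normal(sizes[0]), rng.random(sizes[-1])
for w, g in zip(weights, backprop(weights, x, t)):
    w -= eta * g                                     # one plain gradient-descent update
```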
a forward pass combined with a backward propagation of the errorforms the standard backprop widely used in training deep networks given the labeled data . to improve the training efficiency ,one usually divides the entire large dataset into a set of mini - batches , each of which is used to get the average gradients across the examples within that mini - batch .one epoch corresponds to a sweep of the full dataset .the learning time is thus measured in unit of epoch . for one epoch , the weight is actually updated for times ( is the size of a mini - batch ) . in the above standard backprop ,only current gradients are used to update the weight matrix . therefore in a non - convex optimization , the backpropation gets easily stalled by the plateaus or saddle points on the error surface , and is hard to escape from these regions . to avoid the expensive second - order methods, we conjecture that the history of gradient information contains additional information about the landscape of the error surface , which can also be used to update the weight matrix , and might drive the learning dynamics towards dense regions with nice generalization properties .the dense region containing many good solutions can be accessed by maximizing a local entropy around some location in high dimensional parameter space , as observed previously in other studies of deep network training . to enable sgd to use previous gradient information ,we define a stochastic process for gradients at each learning step as follows : where denotes the gradient estimated from the average over examples within the current mini - batch for specific weight , and correspondingly contains information about the history of the evolving gradients . is a value close to but smaller than one , which is updated at the epoch level , i.e. , decays very slowly as epoch increases .therefore the current gradient has an increasing probability to be reinforced by the previous gradients , and retains its current value otherwise .as grows , previous gradients are accumulated and affect the current update of model parameters with a probability approaching one .this fluctuation caused by the stochastic reinforcement may help the learning dynamics to escape from plateaus or bad - quality regions of the error surface at earlier learning stage . at the later stage ,the learning dynamics may be attracted by the dense region with nice generalization properties ( low test error ) , because the accumulated gradient information may give a strong bias towards the good region if the dynamics approaches it .this good region is expected to be flat on the loss landscape , and contains atypical solutions to the non - convex deep neural network training .we will test this intuitive interpretation by extensive training simulations of a toy deep learning model .( [ rbp ] ) is a very simple way to re - use the previous gradient information , and thus forms the key component of the r - backprop .in addition to eq .( [ rbp ] ) , we also introduce two time scales to control the dynamics for the reinforcement probability .the first time scale is given by , and the second time scale is set by for the exponential decay of , where refers to the learning time in unit of epoch . is usually fixed to a value very close to one .we show a typical trace of the reinforcement probability and in fig .[ reprob ] , and will test effects of hyper - parameters on training dynamics in sec .note that by setting , one recovers the standard backprop . 
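since the displayed update rule is not reproduced above , the following sketch is only one plausible reading of the reinforcement strategy , with the functional forms of the reinforcement probability and of the epoch - level decay taken as assumptions .

```python
import numpy as np

class ReinforcedGradient:
    # one plausible reading of the rule described above (the displayed equation and its
    # symbols are not reproduced in the text, so the exact forms here are assumptions):
    #   - with some probability p the evolving gradient memory is added to ("reinforces")
    #     the current mini-batch gradient, otherwise the current gradient is kept unchanged;
    #   - p = 1 - gamma**t grows with the mini-batch counter t;
    #   - gamma, close to but smaller than one, decays slowly at the epoch level,
    #     and gamma = 1 recovers standard backprop (p stays at zero).
    def __init__(self, gamma0=0.999, epoch_decay=0.9999, seed=0):
        self.gamma = gamma0
        self.epoch_decay = epoch_decay
        self.t = 0
        self.prev = None
        self.rng = np.random.default_rng(seed)

    def end_of_epoch(self):
        self.gamma *= self.epoch_decay       # slow, epoch-level decay of gamma

    def step(self, grad):
        self.t += 1
        if self.prev is None:
            self.prev = np.zeros_like(grad)
        p = 1.0 - self.gamma ** self.t       # reinforcement probability
        if self.rng.random() < p:
            new_grad = grad + self.prev      # reinforce with the accumulated previous gradients
        else:
            new_grad = grad                  # retain the current mini-batch gradient
        self.prev = new_grad
        return new_grad

# usage sketch: w -= eta * rg.step(minibatch_gradient), calling rg.end_of_epoch() after each epoch
```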
in simulations , we use decaying learning rate , where and , but the lowest learning rate is allowed to be . to show the efficiency of r - backprop , we also compare its performance with that of state - of - the - art stochastic optimization algorithm , namely adaptive moment estimation ( adam ) .adam performs a running average of gradients and their second raw moments , which are used to adaptively change the learning step - size .we use heuristic parameters of adam given in ref . , except that with the lowest value set to .large as we use in r - backprop does not work in the current setting for adam . ) as a function of iterations .the inset shows exponential decay of ( ) .note that the learning time is measured in unit of epoch in the inset . ]we first compare backprop with r - backprop in performance .we use a -layer deep network architecture as --- .training examples are divided into mini - batches of size . in simulations, we use the parameters , unless otherwise specified .although the chosen parameters are not optimal to achieve the best performance , we still observe the outstanding performance of r - backprop . in fig .[ comprbp ] ( a ) , we compare standard backprop with r - backprop .we clearly see that the test performance is finally improved at -th epoch by a significant amount ( about ) . meanwhile , the training error is also significantly lower than that of backprop .a salient feature of r - backprop is that , in the intermediate stage , the reinforcement strategy guides sgd to escape from possible plateau regions of high error surrounding saddle points , and finally reach a region of very nice generalization properties .this process is indicated by the temporary peak in both training error and test error for r - backprop .remarkably , even before or after this peak , there are a few small but significant fluctuations in both training and test errors .these fluctuations are assumed to be key characteristics of r - backprop , caused by a probabilistic reinforcement of the gradient information .this procedure guides sgd to explore the huge dimensional parameter space more carefully .we conjecture that the previous gradients contain information about the non - convex structure of the parameter space , and can be re - used in a well - designed stochastic way at the early stage of learning ( see fig .[ reprob ] ) .but at the later stage , the sgd approaches the good region of better generalization capabilities , and in this case , the reinforcement strategy will accelerate the learning process . 
compared to state - of - the - art adam , r - backprop still improves the final test performance by a significant amount ( about , see fig . [ comprbp ] ( b ) ) . note that adam is able to decrease both training error and test error very quickly , but for the current setting , the decrease becomes slow after about epochs . in contrast , r - backprop keeps decreasing both errors by a more significant amount than adam , despite the presence of slightly significant fluctuations . a closer inspection of the later stage reveals that the excellent performance of r - backprop may be ascribed to the stochastic fluctuations introduced by the reinforcement probability . another key feature of fig . [ comprbp ] ( b ) is that a region in the parameter space with low training error does not generally have low test error . the training error reached by r - backprop is clearly higher than that of adam , but the network learned by r - backprop has nicer generalization properties . this observation is consistent with a recent study of maximizing local entropy in deep networks . [ fig . [ comprbp ] caption : comparison based on training examples ; note that the test performance is improved by , compared with backprop , and by , compared with adam . ] we then study the effects of the reinforcement parameters on the learning performance , as shown in fig . [ epara ] . if the exponential decay rate is large , r - backprop over - fits the data rapidly at around epochs . this is because decays rapidly from , and thus a stochastic fluctuation at the earlier stage of learning is strongly suppressed , which severely limits the exploration ability of r - backprop in the high - dimensional parameter space . in this case , r - backprop is prone to get stuck in bad regions with poor generalization performance . however , maintaining the identical small decay rate , we tune the initial value of , and find that a larger value of leads to a smaller test error ( inset of fig . [ epara ] ) . for relatively large values of , the learning performance is not radically different . we also study the effects of training data size ( ) on the learning performances of all three algorithms compared in this paper . clearly , we see from fig . [ esamp ] that the test error decreases with the training data size as expected . r - backprop outperforms the standard backprop , and even adam . finally , we evaluate the test performance of r - backprop on the mnist dataset . the mnist handwritten digit dataset contains training images and an extra images for testing . each image is one of ten handwritten digits ( to ) with pixels . therefore the input dimension is . for simplicity , we choose the network structure as --- , with mini - batch size of for sgd . to save simulation time , we show the performance only on training images . fig . [ mnist ] shows that r - backprop improves significantly over backprop , reaching a similar test performance to that of adam . r - backprop achieves a test error of , compared to for backprop , and for adam . the test error is averaged over five independent runs ( different sets of training and test examples ) . [ fig . [ epara ] caption : effects of the reinforcement parameters ( , ) on the learning performance for r - backprop based on training examples ; the inset is an enlarged view of the later stage ( the worst performance with ( , ) is omitted for comparison ) . ] [ figure caption : performance based on training examples , compared with backprop and adam . ]
in this paper , we propose a new type of effective strategy to guide sgd to avoid the plateau problem in the typical non - convex error surface of deep neural networks . this strategy takes into account previous gradient information when updating the current weight matrix during standard backprop . it introduces a stochastic fluctuation to current gradients , and this fluctuation comes from previous memory of the error surface structure . hence this fluctuation is essentially different from an independent random noise added to the gradient during the learning process . r - backprop seems to work under a similar mechanism to maximizing a local entropy around some location in high dimensional parameter space , because r - backprop is able to cross barriers on the error surface and reach a region with better generalization properties , and this region was conjectured to be typically wide , and to have possibly a higher error than the global minimum . by maximizing the local entropy , the entropy - driven sgd proposed in the recent work requires additional markov chain monte carlo sampling to evaluate the gradient of the local entropy . in contrast , our method is much less expensive in computational cost , and thus can be scalable to modern deep network training . this is because r - backprop uses only gradient information , requiring less computer memory . in the current setting , its performance is comparable to or even better than that of adam , which requires one - fold more computer memory to store the uncentered variance of the gradients . it is very interesting to evaluate the performance of r - backprop on more complicated deep network models and complex datasets . instead of using only the current gradient information at each learning step as in standard backprop , we apply the reinforcement strategy to guide the normal sgd towards good - quality regions of parameter space . to some extent , r - backprop may be able to avoid the vanishing or exploding gradient problem typical in training a very deep network . in addition , it may take effect in recurrent neural network training . although r - backprop has proved to be an effective strategy to explore the extremely huge parameter space , it remains an open question how the structure of the error surface is connected to the efficiency of r - backprop . therefore theoretical understanding of the intrinsic structure of the error surface is still an extremely challenging problem for future studies . h.h . thanks dr . alireza goudarzi for a lunch discussion which later triggered h.h . to have the idea of this work . this work was supported by the program for brain mapping by integrated neurotechnologies for disease studies ( brain / minds ) from the japan agency for medical research and development , amed , and by riken brain science institute . alex krizhevsky , ilya sutskever , and geoffrey e. hinton . imagenet classification with deep convolutional neural networks . in p. bartlett , f. c. n. pereira , c. j. c. burges , l. bottou , and k. q. weinberger , editors , _ advances in neural information processing systems 25 _ , pages 1106 - 1114 , 2012 . xavier glorot and yoshua bengio . understanding the difficulty of training deep feedforward neural networks . in yee w. teh and d. m. titterington , editors , _ proceedings of the thirteenth international conference on artificial intelligence and statistics ( aistats-10 ) _ , volume 9 , pages 249 - 256 , 2010 . carlo baldassi , christian borgs , jennifer t.
chayes , alessandro ingrosso , carlo lucibello , luca saglietti , and riccardo zecchina . unreasonable effectiveness of learning neural networks : from accessible states and robust ensembles to basic algorithmic schemes ., 113(48):e7655e7662 , 2016 .carlo baldassi , alessandro ingrosso , carlo lucibello , luca saglietti , and riccardo zecchina .subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses ., 115:128101 , 2015 .
standard error backpropagation is used in almost all modern deep network training . however , it typically suffers from proliferation of saddle points in high - dimensional parameter space . therefore , it is highly desirable to design an efficient algorithm to escape from these saddle points and reach a good parameter region of better generalization capabilities , especially based on rough insights about the landscape of the error surface . here , we propose a simple extension of the backpropagation , namely reinforced backpropagation , which simply adds previous first - order gradients in a stochastic manner with a probability that increases with learning time . extensive numerical simulations on a toy deep learning model verify its excellent performance . the reinforced backpropagation can significantly improve test performance of the deep network training , especially when the data are scarce . the performance is even better than that of state - of - the - art stochastic optimization algorithm called adam , with an extra advantage of less computer memory required .
In this paper, human travel records were collected in four major cities from taxis, subways and surveys (see details in Methods [mm:dataset]). It is demonstrated that the exponential law of collective human movements does exist in the urban areas of cities (see Methods [mm:distr]). In order to understand the exponential law of collective human mobility in urban areas, it is essential to model individual flows from one region to another within a city. Although the gravity model has been widely applied to predict flows, including human travel, cargo ship movements and telephone communications, it has flaws, such as its inability to explain why the individual flows in the two directions between a pair of locations can differ. To address these shortcomings, Simini et al. put forward the parameter-free radiation model. In this model, the expected flux from location $i$ to location $j$ is defined as
$$T_{ij} = T_i \, \frac{m_i n_j}{(m_i + s_{ij})(m_i + n_j + s_{ij})},$$
where $m_i$ and $n_j$ are the populations of locations $i$ and $j$, $T_i$ is the number of trips starting from $i$, and $s_{ij}$ is the total population of locations (except $i$ and $j$) whose distances to $i$ are less than or equal to $r_{ij}$ (the distance between $i$ and $j$). The model predicts population movements between counties or cities successfully, but it is not clear whether it applies to intra-urban movements as well. It is especially noteworthy that in urban areas it is difficult to obtain the population distribution directly, because of high mobility. Moreover, because people move frequently within cities for various purposes, the resident population is unsuitable for modelling individual flows. Compared to the resident population, the average daily population present in a zone of a city characterizes urban mobility more reasonably, because it establishes a link between human travel intensity and the function of the zone. Therefore, in this paper we regard the number of trips arriving at a zone as the population of the zone, which is approximately proportional to the actual average daily population. After calculating the populations of zones, the result of simulating Beijing with the radiation model is shown in Fig. [fig:radiation].
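For reference, the radiation-model flux above can be computed directly from three population quantities; this sketch assumes the origin trips, zone populations, and the intervening population $s_{ij}$ have already been aggregated per zone.

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected flux from zone i to zone j under the parameter-free radiation model.

    T_i   number of trips starting from zone i
    m_i   population of the origin zone i
    n_j   population of the destination zone j
    s_ij  total population within radius r_ij of i, excluding zones i and j
    """
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))
```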
from the figure, it seems that the predicted flux has a large deviation from the actual ones and the model underestimates the probabilities of trips with distances larger than 1 km .similarly , the same phenomena can also be observed in other three cities , which are not included here .the possible reason to account for the incapability of radiation model is that there are different travel habits and preferences existing in trips at different scales of space .therefore , it is necessary to consider a new model to understand intra - urban human mobility patterns .stands for the actual flows equal with predicted ones .the black points are the mean values of predicted flux in the bins .the ends of whisker represent the 9th and 91st percentile in the bins.,title="fig : " ] stands for the actual flows equal with predicted ones .the black points are the mean values of predicted flux in the bins .the ends of whisker represent the 9th and 91st percentile in the bins.,title="fig : " ] inspired by the gravity model , we assume that the probability arriving at a location has a positive correlation with the population density of the location , but has a negative correlation with the euclidean distance between the location and the originated location .hence , in our model the probability of a trip reaching the location , conditioned on starting from the location , is defined as follows where is the population density function and is a function of distance between locations , which is usually given by two frequently used forms : power law and exponential .likewise , as for regions , the probability of a trip arriving at the region , conditioned on originating from the region , is defined as where is the population of regions .then the probability of a trip from region to can be derived as where means the normalized population indicating the possibility of originating a trip from the region .assuming is the total number of trips , the expected number of trips from region to can be concluded as where .as described in the gravity model , the number of trips from region to is equal with the one from region to .however , that is not the case in our model because the values of and depend on geographic positions of and respectively , which are often not equal to each other .therefore , it is more consistent with actual situations .after given the actual trips , the parameter of function in our model can be determined by using the method of maximum likelihood estimation ( mle ) . after inspecting the two forms of function carefully , it is found that the power - law form is much better .the power - law exponents in our model for the four cities - beijing , london , chicago and los angeles - are 1.601 , 0.402 , 1.832 and 1.805 respectively . by using our model to simulate human travels in the four cities ,the relationships between actual and predicted traffic flows are shown in fig .[ fig : prediction ] . 
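A minimal sketch of the model just described, assuming the zone "populations" are the numbers of arriving trips and supporting either deterrence function; the function name and the default exponent are illustrative choices (1.601 is the value reported for Beijing in the text).

```python
import numpy as np

def predict_flows(m, D, n_trips, beta=1.601, deterrence="power"):
    """Expected trip matrix T[i, j] for the model above (illustrative sketch).

    m          zone "populations" (here: number of trips arriving at each zone)
    D          matrix of Euclidean distances between zone centroids
    n_trips    total number of observed trips
    beta       exponent (power law) or rate (exponential) of the deterrence function
    """
    m = np.asarray(m, dtype=float)
    D = np.array(D, dtype=float)
    np.fill_diagonal(D, np.inf)          # exclude self-trips cleanly
    f = D ** (-beta) if deterrence == "power" else np.exp(-beta * D)
    # P(j | i) is proportional to m_j * f(r_ij)
    p_cond = m[None, :] * f
    p_cond /= p_cond.sum(axis=1, keepdims=True)
    # P(i): probability that a trip originates in zone i
    p_origin = m / m.sum()
    return n_trips * p_origin[:, None] * p_cond
```

Because the origin probability and the conditional destination probabilities are evaluated at different zones, the resulting matrix is generally asymmetric (T[i, j] differs from T[j, i]), which is the property the text contrasts with the classical gravity model.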
from the subgraphs, it can be observed that the red lines almost lie between the 9th and the 91st percentiles in all bins except the last bin in los angeles , indicating that the model can predict the number of trips between regions accurately .moreover , the comparisons of distributions of actual and simulated trip lengths are illustrated in fig .[ fig : pred_distdistr ] .it is discovered that the simulated distributions accord with the actual ones very well in the four cities .the fitted exponential parameters for the simulated distance distributions in the four cities are , , and respectively , which are very close to the fitted values for actual trip - length distributions as shown in table [ tb : mle_exp ] . in summary, our model with the form of power law can be treated as an appropriate model to predict traffic flows in urban areas . from the model ,it is apparent that the spatial population distribution has an important impact on collective human mobility .taking beijing as an example , it is investigated that how the geographic population distribution could affect the trip - length distribution .first , considering the population distribution is uniform , individual trips could be predicted by our model with different parameters . as shown in fig .[ fig : pop_bj](a ) , contrary to the actual trip - length distribution , the simulated distance distributions accord to power laws with exponential cutoff very well and decay more slowly .the power - law exponents of the two simulated distributions are -0.716 for ( green triangles ) and -1.100 for ( red stars ) , which approach to the analytical results ( see methods [ mm : proof ] ) .second , remaining the distribution of population numbers of cells unchanged , three synthetic population distributions are generated by randomized rearranging the population numbers of cells . in simulations of human trips ,the parameter of our model is the fixed value 1.601 , which is the same as the actual one in beijing . in fig .[ fig : pop_bj](b ) , the simulated distributions are similar to each other and could be described by power - law distributions better . in summary , these demonstrate that not only the distribution of population numbers but also the layout of them could influence the trip - length distribution .thus , it is necessary to study the spatial distribution of urban population . herethe relationships between the normalized average density and the distance to urban centers are plotted in fig .[ fig : pop_density ] for four cities ( the details of calculating densities can be referred in methods [ mm : pop ] ) . from the graph , it can be observed that , for different selected centers of each city , the average urban densities have similar trends .more importantly , the densities for four cities all decay exponentially with the increase of distance to the urban centers . andit is worth noting that the declining slopes are not far from the exponents of exponential estimated from the corresponding distance distributions shown in table [ tb : mle_exp ] . 
assuming the density function is a negative exponential function that depends on the distance to the center where c is a constant .the distance distribution can be derived as where and are constants ( see the proof in methods [ mm : proof ] ) .hence when , the exponential section dominates and begins to decay exponentially .then , it is aimed to verify the analytical result through simulating human trips based on our model further .and our model is simulated on grid cells whose size is .when fixing the model parameter , as shown in fig .[ fig : sim_proof](a ) , the simulated trip - length distributions all exhibit exponential tails and the parameters of exponential distributions approach to the corresponding parameters of population density distribution . from the fig .[ fig : sim_proof](b ) , the distance distributions have similar rates of exponential decay indicating that the model parameter has little influence on the exponential tails of distributions when fixing the parameter of the density function . in conclusion, the result of proof agrees with the simulations very well . .the simulated distance distributions , corresponding to different ( 0.1 , 0.2 and 0.4 ) , decay exponentially with slope 0.085 ( blue dashed line ) , 0.180 ( green dash - dot line ) , 0.414 ( red dotted line ) respectively .( b ) different model parameters with the fixed population density distribution ( ).,title="fig : " ] .the simulated distance distributions , corresponding to different ( 0.1 , 0.2 and 0.4 ) , decay exponentially with slope 0.085 ( blue dashed line ) , 0.180 ( green dash - dot line ) , 0.414 ( red dotted line ) respectively .( b ) different model parameters with the fixed population density distribution ( ).,title="fig : " ] according to the analytical result , it could be explained why the exponential parameters of actual distance distributions are close to the ones of population density distributions in the four cities .furthermore , in urban areas , the density usually decreases significantly leading to a large exponent , thus a short range of power - law section .meanwhile , it must be noticed that the power of is often larger than -1 because of like the four cities in our datasets , which is such a slow power - law decay and obviously not a lvy walk that has been observed in collective human movements in large scale of space .therefore , it can explain the reason why the distance distribution of human trips in urban areas accords with an exponential distribution much better .it is worth mentioning that the empirical density distributions in four cities is the same as the clark s model , which is the most influential model for describing urban population density .since then , some studies have proposed other mathematical forms for population density .for example , the inverse power function is employed by smeed .though some controversies , parr suggests that the negative exponential function is more appropriate to model the density in urban areas , while the inverse power function is more appropriate to model the variation of density in the urban fringe and hinterland .therefore , the phase transition of population density function in different scales of space may be able to explain the different laws emerging in collective human mobility patterns .in the paper , it is aimed to understand the exponential law of intra - urban human mobility at the population level .the four travel datasets in urban areas of cities are analyzed , which further confirm the exponential law . 
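To make the verification step concrete, here is a minimal sketch of the kind of grid-cell simulation described above: cell populations decay exponentially with distance to the centre and expected trips are generated with the model (it reuses the predict_flows sketch given earlier; the grid size and the parameter values are illustrative, not the ones used in the paper).

```python
import numpy as np

# Square city of L x L grid cells with population density rho(r) ~ exp(-alpha * r).
L, alpha, beta = 30, 0.2, 1.601
xs, ys = np.meshgrid(np.arange(L), np.arange(L))
coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
r_centre = np.linalg.norm(coords - coords.mean(axis=0), axis=1)
m = np.exp(-alpha * r_centre)                              # cell "populations"
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

T = predict_flows(m, D, n_trips=1_000_000, beta=beta)      # sketch defined earlier

# Expected trip-length distribution, aggregated into unit-width bins;
# its tail should decay roughly as exp(-alpha * d).
hist, edges = np.histogram(D.ravel(), bins=np.arange(0.0, D.max() + 1.0),
                           weights=T.ravel())
p_d = hist / hist.sum()
```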
through considering the travel flows between regions ,it is clear that the radiation model is incapable to model collective human movements in urban areas . because of this, a new model is proposed , which can predict traffic flows between regions very well . based on our model , it is discovered that the average population density decreasing exponentially with distance to the urban center ultimately leads to the exponential law of collective human mobility patterns .moreover , the difference of population distribution in different scales of space may be able to explain the different laws ( power - law and exponential ) in collective human movements .in fact , from the exponential law , it is hard to conclude that the trip lengths at the individual level follow a power - law distribution .it must be noted that most empirical studies about human mobility patterns are at the population level , but many models are aimed to explore the origin of scaling law at the individual level . demonstrated that a scale - free distribution of the aggregated movement lengths can also be obtained from individuals with different exponential distributions of movement lengths .a new evidence in human temporal dynamics is that , although the aggregated intercall durations follow a power - law distribution , the durations at the individual level follow a power - law distribution for only a small number of individuals and a weibull distribution for the majority .in addition , the research of human mobility is inspired by animals .there are yet some controversies on whether animals exhibit lvy - like behaviour .because of these , the individual mobility patterns should be considered carefully and more comprehensive human travel records are needed for deeply empirical analysis .the dataset is about the taxis gps data generated by over 10 thousand taxis in beijing , china , during three months ended on dec .31st , 2010 . based on taxis locations and statuses of occupation ( with passengers or without passengers ) ,trajectories of passengers can be observed . after dividing the urban areas of beijing ( inside the 6th ring road ) into grid - like cells with size , a total of 11776743 trajectories between 3450 cells were extracted .the dataset contains about 5% samples of human trips by tube in london , which were captured by oyster cards during a week in november 2009 ( available online at http://www.tfl.gov.uk/businessandpartners/syndication/ ) . in the dataset ,the stations that a journey started or ended at were recorded .it was noticed that some stations were very close to each other , even less than 200 m. because of too small area of regions , it could not reflect the regular mobility patterns between regions obviously . after merging some adjacent stations , we obtained 183 voronoi cells based on stations and a total of 667584 trips between them .the dataset used here comes from the household travel tracker survey in chicago metropolitan areas conducted by chicago metropolitan agency for planning from january 2007 to february 2008 ( available online at http://www.cmap.illinois.gov/travel-tracker-survey/ ) .the survey contained various kinds of information about households and travel activities of household members . 
among them ,the trips occurred in the cook county were considered which is seen as the urban area of the city according to the population density .then we extracted a total of 43881 trips between the 1314 zones which correspond to the census tracts .the post - census regional household travel survey , sponsored by the southern california association of governments in 2001 , was aimed to investigate human travel behavior in the los angeles region of california ( available online at http://www.scag.ca.gov/travelsurvey/ ) .the region consisted of six counties . in terms of the survey data ,it was paid more attention to the movements in the los angeles county . as a result , based on the census tracts ,the county was divided into 2017 zones and a total of 46000 tracks were identified between these zones .there are three candidate models ( power law , exponential and truncated pareto ) to be compared for describing the empirical distribution of travel distances .the parameters of these models are determined by the method of maximum likelihood estimation ( mle ) .the akaike weights are calculated , which are considered as relative likelihoods being the best model .the model with the largest akaike weight should be selected as the best one .the details of methods can be found in . as shown in fig .[ fig : dist_distr ] , the distribution of trip lengths in each city is better fitted by the exponential rather than the power law or truncated paretos . moreover ,the akaike weight of exponential model ( equals about 1.0 ) is significantly larger than the ones of the other two models in each city .these all demonstrate that the exponential law of collective human movements exists in urban areas of cities extensively .furthermore , the fitted results of the exponential model in four cities are shown in table [ tb : mle_exp ] .the 95% ci represents the 95% confidence interval of the exponential parameter . and the goodness of fit ( gof ) is measured by the kolmogorov - smirnov statistic .the smaller the value is , the more similar the empirical distance distribution and the fitted exponential model are .the spatial distribution of population in a city is often characterized by average population density with the distance to the urban center . as for the dataset of beijing ,the trips are in very fine granularity and there are similar - sized regions ( cells ) with small area . after selecting the cells with high densities as urban centers ,the average densities with distance to centers are calculated easily .but , for the datasets of other three cities , there are coarse granularity of travels and irregular zones .it is not suitable to compute the average density directly . therefore , assuming that population density in each zone is uniform , the urban area is divided into grid - like cells with size .the population density of each cell is regard as the density of the zone in which the cell lies . finally , the average densities with distance can be calculated based on these divided grid cells . as shown in fig .[ fig : monocenter ] , the center is the point and the non - increasing density distribution is , where is the distance to the center and is the size of a city . by using our model , we can estimate the displacement distribution of human movements as follows :
the vast majority of travel takes place within cities . recently , new data has become available which allows for the discovery of urban mobility patterns which differ from established results about long distance travel . specifically , the latest evidence increasingly points to exponential trip length distributions , contrary to the scaling laws observed on larger scales . in this paper , in order to explore the origin of the exponential law , we propose a new model which can predict individual flows in urban areas better . based on the model , we explain the exponential law of intra - urban mobility as a result of the exponential decrease in average population density in urban areas . indeed , both empirical and analytical results indicate that the trip length and the population density share the same exponential decaying rate . understanding human movement patterns is considered as a long - term fundamental but challenging task for decades . it is an essential component in urban planning , epidemics spreading and traffic engineering . during the past few years , various mobile devices ( e.g. , cellphones and gps navigators ) that support geolocation have been widely used in our daily life . as proxies , these devices record massive amounts of individual tracks , which provide a great opportunity to the research of human mobility . in recent studies of human movements in large scale of space , including trips between counties or cities , it is found that human mobility patterns exhibit the lvy walk characteristic , which corresponds to scale invariant step - lengths and is also observed in animals . for instance , brockmann et al . discover human travel displacements can be described by a power - law distribution by investigating the dispersal of bank notes in the united states . gonzlez et al . study mobility patterns of mobile phone users in european countries and find that their travel distances are distributed according to a truncated power law . moreover , the similar scaling laws are also observed in and separately . therefore , in order to understand the origin of the scaling law , some researchers try to propose possible explanations from the viewpoint of individual movements . however , regarding to the human movements in the urban area , many studies find that the human travel behavior could not be characterized by the scaling law but the exponential . for example , trajectories of passengers by taxis are investigated independently in three cities : lisbon , beijing and shanghai . and the three studies all suggest that trip distances obey exponential distributions rather than power - law ones . bazzani et al . analyze daily round - trip lengths of private cars drivers in florence and reveal an exponential law of lengths , too . in addition , the distances of individual movement in the london subway are found to significantly deviate from the power - law distribution as well . thus , yan et al . think the exponential distribution is produced by a single means of transportation under maxwell - boltzmann statistics . but a more convincing evidence is that exponential distributions of intra - urban travel distances are demonstrated respectively in eight cities of northeast china by analyzing the mobile phone data , which is not restricted to means of transportation . moreover , with the aid of `` checkins '' of foursquare users , noulas et al . also discover that the trip - length distributions in different cities could not be approximated by power - law functions . 
they believe that urban human movements are driven by the distribution of places of interest ( pois ) in the city . however , the conjecture might be challenged by the fact that the visit probabilities of pois depend not only on their geographical locations , but also on their sizes and popularities . although more and more evidence demonstrates the exponential law in intra - urban human movements , the origin of this universal rule is still missed . considering the significant role of the human mobility pattern in reality , it is essential to fill this vital gap . first , for most citizens , the majority of their trips occur in urban areas and just traverse small distances , while only few trips with large distances take place between counties or cities . second , understanding the exponential law provides important guidance to model intra - urban movements . compared to the scaling law , the exponential law implies lower probability of long travel distance , which could not be characterized by lvy walks . thus , modeling urban mobility according to the exponential law is more accordant with the real situations and is helpful for researches relevant to epidemic propagation control , wireless protocol designing and urban planning . third , the discrepancy of trip - length distributions at different spatial scales can offer deeper insights into the consistency between individual and collective human mobility patterns . it is worth emphasizing that most empirical studies aforementioned are based on collective human trips . yet there is no strong evidence that individual movements have the similar patterns with collective movements . for example , yan et al . observe the absence of scaling law of travel distances at the individual level , though it indeed exists in aggregated trip lengths . likewise , noulas et al . discover that the power - law distribution of trip lengths is only found at the aggregated level . these recent discoveries all imply the multiformity of individual mobility patterns . because of the diversity , it is worth further discussing whether it is reasonable that applying the properties observed in collective movements to model individual mobility patterns or exploring the source of the scaling law at the individual level . in this paper , we try to uncover the origin of the exponential law in intra - urban mobility from the perspective of collective movements . a new model is presented to predict individual flows between different regions with high fidelity , which could also reproduce the actual distributions of trip lengths in cities . then from this model , we find that the traffic flux depends on the spatial population distribution heavily . finally , both the empirical simulations and the analytical proof indicate that the exponential law is caused by the distribution of population density and they share the same decaying rate , too .
the implementation of nondestructive structural health monitoring ( shm ) systems for industrial objects is a challenging problem . in general , shm is a method of integrity analysis of an object via real - time measurements of a class of parameters , which characterize the object s integrity .the main assignment of shm systems is an alert of the system operator in case of exceeding the measured parameters of a certain critical value , where this value is determined from preliminary tests .problems of a design of shm are of current interest for civil and industrial buildings , pipelines , airplanes , water transport , and in many other scopes .composite materials have unique physical and chemical properties .these materials have corrosion resistance , high heat resistance , wear resistance , rigidity and low mass .the properties make them useful in a wide spectrum of technological applications , _e.g. _ , in space technologies , aviation engineering , shipbuilding , and etc .the design of shm systems for composite materials is especially topical .there are a two main approaches to design of a measurement part of shm systems .the first method of shm involves mounting sensors directly into the material .the second one is monitoring of the construction by the installation of sensors to certain elements of the structure . in the case of composite material structures , shm systemsare mostly based on embedding the sensors into the material . in this way ,the main question is what type of sensors is the most useful for shm of composite materials .practically , fiber optic sensors form the most useful class of sensors for shm systems .there are several types of fiber - based optic sensors .the most common solutions are sensors based on measuring light intensity using changes in fiber curvature or reflected light from a mirrored surface , the fabry perot sensors , and the fiber bragg grating ( fbg ) sensors. using of terahertz vision system for nondestructive testing of composite materials is of current interest as well .using of an embedded fbg sensors for measurement of distributed stress and strain fields of the composite material has significant advantages .first of all , fbg sensors can be easily embedded into materials to provide damage detection or internal strain field mapping .the fbg sensors supposed to be located in fiber optic cable mounted in the sample .furthermore , the key feature of fbgs sensors is that the information about perturbations is encoded in wavelength .indeed , the main principle of fbg sensors work is transform of spectral or phase characteristics of the test optical signal propagating in the sample according to the perturbations of a measured physical quantity . 
due to their high noise resistance to the effects of non - informative influencing factors , fbg sensorsattract increased attention .fbg is a periodic structure of refractive index which is formed in a fiber core ( see , fig .shm systems have a measurement part , which typically consists from : fbg sensors , a source of test optical signal , and a measuring device for the signal reflected from fbg .important problem is the embedding the sensor into the material sample .useful solution of this problem is proposed in .mm with typical grating period being nm.,scaledwidth=25.0% ] on the other hand , practical application and implementation of shm systems require using of a software solution for control , fast collection , reliable storage , and effective processing of large data flow from the measurement part in real time .indeed , the data proceeding is an another kind of problem , which can be solved efficiently only using software solution .the software part is needed for the solution of the following classes of problems : processing of the measurements from fbg sensors , visualization of the measurements results , and `` user measurement part '' interaction , and control for the system . in the present work, we present developed shm system for composite materials based on the fiber bragg grating sensors .we present the software solution for developed shm system .the system has two main scopes of applications .the first is preliminary tests system for novel composite materials .the other one is application for real - time shm with an alert for an operator .there are two physical reasons for the fluctuation of the fbg sensor central wavelength : the mechanical stress and the temperature variations .the central wavelength of bragg grating as function of the strain and the temperature can be presented as follow where is the effective refractive index and is the grating period .the first term of the right - hand side of eq .( [ equation ] ) describes the effect of temperature on the wavelength where is the thermal expansion coefficient and is the thermo - optic coefficient of the fiber . in turn ,second part of the right - hand side of eq .( [ equation ] ) describes the nature of the dependence of the central wavelength from the applied strain , which can be represented in the form ,\ ] ] where is the relative strain , is strain constant for optical fiber , and are the pockels coefficients in the optical stress tensor , and is the poisson coefficient .functioning of the measurement part of shm system with fbg sensors is based on the following principles ( fig .each channel of the measurement system is a fiber optic cable .there are 16 channels , embedded in the composite material .all of these sensors are sequentially arranged bragg sensors .sensor have a unique central wavelength of the reflected signal in channel .the aim of shm systems is to measure the strain impact , but for the reliability of the data it is necessary to consider the effect of temperature . in other words ,the idea is to isolate temperature and strain effects .for this reason , strain - invariant ( called `` temperature '' ) sensors in fbg are used for measuring the temperature effect only .the measurement part ( measurement system block , fig .1 ) is intended for interrogating the sensors by applying the test input of each channel of broadband optical signal ( the bandwidth is nm ) .the part makes the measurement of wavelengths reflected from the bragg grating , _ i.e. 
_ , it works as an optical spectrum analyzer .signal reflected from fbg sensors is the input to an optical switch ( switch , fig .1 ) , which is recorded after the measuring system . the switch produces a serial connection channel to measurement system block , which performs measurements of the central wavelengths of the reflected radiation from each fbg sensor ( see fig 2 ) .these measurement system block and optical switch are connected to pc via the standard usb - interface .-4 mm start our consideration from the simplest case of a one - channel measurement part .suppose that there are sensors in the channel , where is a quantity of bragg sensors and is a quantity of temperature bragg sensors .initial data for the structural monitoring system is the set of the reflection wavelength where is the reflection wavelength of sensor .the result of the measurement in the channel is following set where is the measured reflecting wavelength of sensor .first , initial reflection wavelength of strain - invariant sensors compared with measured for identifying a temperature effect .the set of wavelengths is formed from for excluding the temperature effect .second , component of the vector compared with component of the vector for temperature sensors .main aim of this procedure is to identify sensors with non - zero difference finally , from grating period , number of sensor and the sample length it is possible to localize the position of strain effects of the sample .it is clear that for multi - channel system , _e.g. _ , for our system , there is matrix where is the sensor number and is the channel number .matrix elements , obviously , are the reflection wavelengths of the sensor in the channel. however , general procedure of data processing is the same .we describe the principles of operation established for that purpose multi - threaded software applications .we note that there is a energy depend memory part in the measurement system block and its dll library used for getting data .the gui software application is a system of tabs with different settings and for displaying data .screen of the main control page of developed software solution is presented on the fig .3 . from software point of view, every sensor is characterized via set of parameters . as it has been mentioned before , every sensor measures both temperature and mechanical strain . in this way ,first parameter of the sensor is a type .in addition , there are channel number , lower and upper limit among them . on practice ,special sensors which are sensitive only to changes in temperature are used .this allows to correct measurement results obtained from other sensors . after the addition of sensors in the channels formed by the summary table , which allows the operator to control the number and types of currently active sensors .the operator at any time is able to move the sensors for tracking in real time . via dll libraries optical switch and measurement system implemented algorithms for channel management in the optical switch and selection the optimum mode of data collection , _i.e. _ , a combination of channels , the number of active sensors in the channel sample rate sensors . additionally , the software part solves the problem of data visualization .2 presents our laboratory experimental setup in accordance with the general experimental setup ( fig .2 ) . 
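As an illustration of the wavelength model and the per-channel processing just described, the sketch below combines the standard strain-temperature response of a Bragg grating with the comparison of measured and reference wavelengths. The material coefficients are typical textbook values for silica fibre, and the detection threshold and function names are assumptions of this sketch, not parameters of the developed system.

```python
import numpy as np

def fbg_wavelength_shift(lambda_b, d_temp=0.0, strain=0.0,
                         alpha=0.55e-6, xi=8.6e-6, p_e=0.22):
    """Shift of the Bragg wavelength lambda_B = 2 * n_eff * Lambda under a
    temperature change d_temp [K] and a relative axial strain.
    alpha (thermal expansion), xi (thermo-optic) and the effective
    photo-elastic constant p_e are typical silica-fibre values (assumed)."""
    return lambda_b * ((alpha + xi) * d_temp + (1.0 - p_e) * strain)

def detect_strain_events(lambda_ref, lambda_meas, is_temp_sensor,
                         sensor_spacing, threshold=0.01):
    """Single-channel processing: estimate the common temperature shift from the
    strain-invariant sensors, flag sensors whose residual shift exceeds
    `threshold` [nm], and localize them along the sample via the sensor spacing [m]."""
    shift = np.asarray(lambda_meas, dtype=float) - np.asarray(lambda_ref, dtype=float)
    mask = np.asarray(is_temp_sensor, dtype=bool)
    temp_shift = shift[mask].mean() if mask.any() else 0.0
    residual = shift - temp_shift
    residual[mask] = 0.0
    hits = np.flatnonzero(np.abs(residual) > threshold)
    return [(int(k), k * sensor_spacing, float(residual[k])) for k in hits]
```

For the multi-channel case described in the text, the same routine would simply be applied to each row of the wavelength matrix in turn.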
fig .4 shows the results of processing measurement signals from the measurement equipment in the test laboratory strain of the composite material in various load conditions : this is strain as function of time .it should be noted that an important element in developing the software module is a data storage as an integral part of the system when used in practice , is the storage and archiving .structural monitoring systems are actively used to solve problems of the analysis of the dynamics of aging infrastructure in order to prevent their destruction .developed systems allows to realize non - destructive structural health monitoring of structures made of composite materials .software allows to perform the following kinds of problem : control the measurement part ( activation of sensors , one of their number , channel selection and polling frequency sensors ) .visualization of measurement results in real time is obtained via using a software application .the use of software and hardware optoelectronic measurement system integrated continuous non - destructive testing is promising for a wide range of engineering problems .the software part of the complex can be easily modified to the case of using another type of sensor or monitoring by the sensors to certain elements of the structure .* acknowledgments*. we thank scientific - educational center `` photonics and infrared technology '' for support and k.i .zaytsev for useful comments .akf is a dynasty foundation fellow .we are grateful to the organizers of the 23rd laser physics workshop ( sofia , july 1418 , 2014 ) for kind hospitality .[ tanaka ] t. tanaka , m. yamamoto , t. katayama , k. nakahira , k. yamane , y. shimano , and k. hirayama , http://dx.doi.org/10.1088/0964-1726/10/3/320[earthquake engineering and engineering seismology * 4 * , 75 ( 2002 ) ] .[ osa ] a.k .fedorov , v.a .lazarev , i.p .makhrov , n.o .pozhar , m.n .anufriev , and a.b ._ collection of scientific papers of the third international conference `` high performance computing '' _ ( kyiv , 2013 ) .
We present a structural health monitoring system for nondestructive testing of composite materials, based on fiber Bragg grating sensors and a specialized software solution. The developed monitoring system has potential applications both for preliminary tests of novel composite materials and for real-time structural health monitoring of industrial objects. The software solution provides control of the system, data processing, and operator alerts.
our approach begins with the mixed - membership stochastic block model ( mmsbm ) , which has been used to model networks with overlapping communities or groups . as in the original mmsbm and in related models , we assume that each node in the bipartite graph of users and items belongs to a mixture of groups .however , unlike in , we do not assume that these group memberships affect the presence or absence of an link , i.e. , the event that a given user rates a given item .instead , we take the set of links as given , and attempt to predict the ratings .we do this with an mmsbm - like model where the rating a user gives an item is drawn from a probability distribution that depends on their group memberships .let us set down some notation .we have users and items , and a bipartite graph of links , where the link indicates that item was given a rating ( observed or unobserved ) by user . for each , the rating belongs to some finite set such as . given a set of observed ratings ,our goal is to classify the users and the items , and to predict the rating of a link for which the rating is not yet known .our generative model for the ratings is as follows .there are groups of users and groups of items . for each pair of groups , there is a probability distribution over of the rating that gives , assuming that belongs entirely to group and belongs entirely to group . to model mixed group memberships ,each user has a vector , where denotes the extent to which user belongs to group .similarly , each item has a vector .these vectors are normalized , i.e. , . given and , the probability distribution of the rating is then a convex combination , = \sum_{k,\ell } \theta_{uk } \eta_{i\ell } p_{k\ell}(r ) \ , .\label{eq : model}\ ] ] abbreviating all these parameters as , the likelihood of the observed ratings is thus as we discuss below , we infer the values of the parameters that maximize this likelihood using an efficient expectation - maximization algorithm .we can then use the inferred model to predict unobserved ratings .our work is different from previous work on collaborative filtering in several ways .first , unlike matrix factorization approaches such as or their probabilistic counterparts , we do not think of the ratings as integers . as has been established in the literature, giving a movie a rating of 5 instead of 1 does not mean the user likes it five times as much .our results suggest that it is better to think of different ratings simply as different labels that appear on the links of the network .moreover , our method yields a distribution over the possible ratings directly , rather than a distribution over integers or reals that must be somehow mapped to the space of possible ratings . 
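In code, the convex-combination rule for a rating and the likelihood of the observed ratings take only a few lines; this sketch stores the matrices p_kl(r) as a single (K, L, R) array and encodes ratings as integer indices, which is an implementation convenience rather than something prescribed by the model.

```python
import numpy as np

def rating_distribution(theta_u, eta_i, p):
    """Pr[r_ui = r] = sum_{k,l} theta_uk * eta_il * p_kl(r).

    theta_u : (K,)      membership vector of user u
    eta_i   : (L,)      membership vector of item i
    p       : (K, L, R) p[k, l, r] = probability of rating r for pure groups (k, l)
    """
    return np.einsum('k,l,klr->r', theta_u, eta_i, p)

def log_likelihood(ratings, theta, eta, p):
    """Log-likelihood of the observed ratings; `ratings` is a list of (u, i, r)
    with r encoded as an integer index into the last axis of p."""
    return sum(np.log(rating_distribution(theta[u], eta[i], p)[r])
               for u, i, r in ratings)
```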
from this point of view, our model is a bipartite mmsbm with metadata ( or labels ) on the edges ; a similar model based on the stochastic block model ( sbm ) , where each user and item belongs to only one group , was given in .an alternative approach would be to consider a multi - layer representation of the data as in .second , we do not assume that the matrices have any particular structure .in particular , we do not assume homophily , where groups of individuals correspond to groups of items , and individuals prefer items that belong to their own group : that is , we do not assume that is larger on the diagonal for higher ratings .thus our model , and our algorithm , can learn arbitrary couplings between groups of individuals and groups of items , and do so independently for each possible rating .third , unlike some approaches that use inference methods similar to ours , and as stated above , our goal is not to predict the _ existence _ of links . in particular, we do not assume that individuals only see movies ( say ) that they like , and we do not treat missing links as zeroes or low ratings .to put this differently , we are not trying to complete to a full matrix of ratings , but only to predict the unobserved ratings in .thus the only terms in the likelihood of our model correspond to observed ratings .as we describe below , our model also has the advantage of being mathematically tractable .it yields an expectation - maximization algorithm for fitting the parameters which is highly efficient : each iteration takes linear time as a function of the number of users , items , and observed links . as a result , we are able to handle quite large datasets , and achieve a higher accuracy than standard methods .in most practical situations , marginalizing exactly over the group membership vectors and and the probability matrices ( similar to ref . ) is too computationally expensive . as an alternative we propose to obtain the model parameters that maximize the likelihood using an expectation - maximization ( em ) algorithm . in particular, we use a classic variational approach ( see methods ) to obtain the following equations for the model parameters that maximize the likelihood , here and denote the neighborhoods of and respectively ; and are the node degrees , i.e. , the number of observed ratings for user and item respectively ; and is the variational method s estimate of the probability that the rating is due to and belonging to groups and respectively .these equations can be solved iteratively with an em algorithm . starting with an initial estimate of , , and ,we repeat the following steps until the parameters converge : 1 .( expectation step ) use to compute for , 2 .( maximization step ) use - to compute , , and . the number of parameters and terms in the sums in eqs . -is . assuming that and are constant , this is , andhence linear in the size of the dataset ( see fig .s1 in supplementary materials ( sm ) ) . as the set of observedratings is typically very sparse because only a small fraction of all possible user - item pairs have observed ratings , our algorithm is feasible even for very large datasets .we test the performance of our algorithm by considering six datasets : the movielens 100k and 10 m datasets with 100,000 and 10,000,000 ratings respectively , yahoo ! 
songs , amazon books , and the dataset from libimseti.cz dating agency , which we split into two datasets , consisting of males rating females and vice versa .these datasets are diverse in the types of items considered , the sizes of the sets of possible ratings , and the density of observed ratings ( see table [ t - data ] ) . for each datasetwe perform a five - fold cross - validation , splitting it into five equal subsets , and using each one as a test set after training the model on the union of the other four .we compare our algorithm to three benchmark algorithms ( see methods ) : a baseline naive algorithm that assigns to each test rating the average of the observed ratings for item ; the item - item algorithm , which predicts based on the observed ratings of user for items that are the most similar to ; and `` classical '' matrix factorization .for all these benchmark algorithms we use the implementation in the lenskit package .additionally , for the smallest datasets , we also use the ( un - mixed ) stochastic block model approach of ref . ; however , that algorithm does not scale well to larger datasets .for our algorithm , we set , i.e. , we assume that there are 10 groups of users and 10 groups of items ( recall that we do not assume any correspondence between these groups ) .we considered some other choices of and as well ( see fig .s2 in the sm ) . since iterating the em equation of eqs .- can lead to different solutions depending on the initial conditions , we perform sampling of 500 independent runs with random initial conditions .we average the predicted probabilities over the 500 runs because we typically do not observe that one solution has much higher likelihood than the others ( see fig .s3 of the sm for results obtained using the maximum likelihood solution ) . as a result , for each rating a user gives an item we have a probability distribution of ratings that results from the average of the probabilities for all the sampling set .therefore , we can choose how to make predictions from the probability distribution of ratings : the most likely rating , the mean or the median .in contrast , recommender systems like mf and item - item give only the most probable rating .we measure the performance in terms of accuracy , i.e. , the fraction of ratings that are exactly predicted by each algorithm , and the mean absolute error ( mae ) . for our algorithm, we find that the best estimator for the accuracy is the most likely rating from the probability distribution of ratings , while for the mae the best estimator is the median . ]we find that in most cases our approach outperforms the item - item algorithm and matrix factorization ( fig .[ f.compare ] ) . indeed , when considering the accuracy , i.e. , the fraction of times an algorithm exactly predicts the correct rating , the mmsbm is significantly better than matrix factorization for all the datasets we tested , and better than the item - item algorithm in five out of six datasets , the only exception being the amazon books dataset . 
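For reference, the two scores reported in this section can be computed directly from the predicted rating distributions: the mode of the distribution is used for accuracy and its median for the MAE, as stated above. The array shapes and argument names are assumptions of this sketch.

```python
import numpy as np

def evaluate(pred_dists, true_ratings, rating_values):
    """Accuracy (mode of the predicted distribution) and MAE (median).

    pred_dists   : (n, R) predicted probability distribution for each test rating
    true_ratings : (n,)   observed ratings
    rating_values: the R possible rating values, in the same order as pred_dists
    """
    values = np.asarray(rating_values, dtype=float)
    mode_pred = values[np.argmax(pred_dists, axis=1)]
    cdf = np.cumsum(pred_dists, axis=1)
    median_pred = values[np.argmax(cdf >= 0.5, axis=1)]
    accuracy = np.mean(mode_pred == np.asarray(true_ratings, dtype=float))
    mae = np.mean(np.abs(median_pred - np.asarray(true_ratings, dtype=float)))
    return accuracy, mae
```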
in terms of the mean absolute error ( mae ), the mmsbm is the most accurate in four out of the six datasets ( item - item and matrix factorization produce smaller mae in the amazon books and movielens 10 m datasets ) .interestingly , our approach produces results that are almost identical to those of the un - mixed sbm for the two examples for which inference with the sbm is feasible .in particular , we achieve the same accuracy with in the mixed - membership model as with around groups in the un - mixed sbm .this suggests that many of the groups observed in are in fact mixtures of a smaller number of groups , and that the additional expressiveness of the mmsbm allows us to succeed with a lower - dimensional model .matrix factorization ( mf ) is one of the most successful and popular approaches to collaborative filtering , both in its `` classical '' and its probabilistic form .however , as we have just discussed , our mmsbm gives consistently more accurate results for the ratings , often by a large margin .here , we analyze the origin of this improvement in performance .we start by giving an interpretation of matrix factorization in terms of our mmsbm .a matrix is of rank if and only if its entries can be written as inner products of -dimensional vectors associated with its rows as columns .based on this idea , matrix factorization assumes that the expected rating that user gives item is , where and are -dimensional vectors representing the user and the item respectively .one can apply a variety of noise models or loss functions , as well as regularization terms for the model parameters , but this does not alter significantly the considerations that we present next .the limitations in expressiveness of matrix factorization become apparent when we interpret matrix factorization as a mixture model .assume that there are groups of users and that is the probability that user belongs to group .similarly , assume that there are groups of items and that is the probability that item belongs to group .finally , assume that users in group _ only _ like items in group ; in particular , users in assign a baseline rating of to items in group and a rating of to items in all other groups. finally , let and be user and item `` intensities '' that correct for the fact that some users rate on average higher than others , and that some items are generally more popular than others .then the expected ratings are given by identifying and , this becomes the matrix factorization model .thus ( nonnegative ) matrix factorization corresponds to a model where each group of users corresponds to a group of items , and users in a given group only like items in the corresponding group .we argue that these assumptions are too limiting to model user recommendations realistically .( note that our interpretation of matrix factorization as a mixture model is independent of attempts in the literature to combine matrix factorization with other mixture models . 
)our mmsbm relaxes these implausible assumptions by allowing the distribution of ratings to be given by arbitrary matrices , where the entry is the probability that a user in group gives an item in group the rating .matrix factorization is roughly equivalent to assuming that is diagonal , at least for high ratings .we believe that the improved performance of the mmsbm over matrix factorization is due to this greater expressive power .indeed , fig .[ f.pmatrices ] shows that the matrices inferred by our model are far from the purely diagonal structure implicitly underlying matrix factorization .moreover , the generality of the mmsbm allows it to account for many of the features of real ratings .for instance , the distribution of ratings is highly nonuniform : as shown in fig .[ f.pmatrices ] , is quite rare whereas is quite common .different groups of users have very different distributions of ratings : users in group rate most movies with , while those in group often give ratings .similarly , movies in group are consistently rated by most users , while movies in group are rated quite often .it is also interesting that some groups of users agree on some movies but disagree on others : for example , users in groups agree that most movies in group should be rated , but they disagree on movies in group , rating them and respectively .these observations highlight the limitation in expressiveness of matrix factorization , and explain why our approach based on mmsbm yields better predictions of the ratings .because in the mmsbm all terms have a clear and precise ( probabilistic ) interpretation , our approach can naturally deal with situations that are challenging for other algorithms .an example of this is the cold start problem , that is , a situation in which we want to predict ratings for users or items ( or both ) for which we do not have training data . in the mmsbm ,the matrices are the same for all users and items ; in this sense , new users or items pose no particular difficulty .however , for a new user we need to calculate their group membership vector ( and analogously for a new item ) . since on average users tend to have a higher probability of belonging to some groups than to others , lacking all information about a user we can assume that they are proportionally more likely to belong to the same groups . in practice, this means that to any new user we can assign a group membership vector that is the average of the vectors of the observed users , this provides a principle method to deal with the cold start problem , without the need to add additional elements to the model . of cold start cases on average ; the movielens 10 m dataset ( ) ; men rating women ( m - w ) in the libimseti dataset ( ) ; women rating men ( w - m ) in the libimseti dataset ( ) ; and amazon books ( ) .we did not encounter any cold start cases in the cross - validation experiments with yahoo !songs ; this is to be expected since yahoo ! songs requires that users and songs have at least 20 ratings .the left column displays the accuracy for each dataset , and the right column the mean absolute error .the bars show the average of a five - fold cross - validation and the error bars show the standard error of the mean .[ f.coldstart ] ] in fig .[ f.coldstart ] we show that , also in cold start situations , our mmsbm outperforms the alternatives in most cases . 
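The cold-start prescription described above amounts to a single line of code: a user (or item) with no observed ratings is assigned the average of the inferred membership vectors of the observed users (or items).

```python
import numpy as np

def cold_start_membership(memberships):
    """Membership vector for a new user, given the rows theta_u of all observed
    users; the same rule applies to a new item using the eta vectors."""
    return np.asarray(memberships, dtype=float).mean(axis=0)
```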
in terms of accuracy , mmsbm is always more accurate than mf ( although in one case the difference is not significant ) , and more accurate than just assigning the most common rating to an item in all cases but one . in terms ofmean absolute error , our approach is more accurate than mf in four out of five cases ( in one , not significantly ) , and more accurate than using the most common rating in four out of five cases .finally , the expressiveness of the mmsbm enables us to investigate the social and psychological processes that determine user behaviors .to illustrate this idea , we analyze the user profiles in the movielens 100k dataset , which lists the age and gender of each user .specifically , we compare the user profiles of pairs of users by computing the cosine similarity . , we compute the cosine similarity of their user profiles .panel a shows the average similarity for pairs of females ( f - f ) , pairs of males ( m - m ) and mixed gender pairs ( f - m ) .the boxes show the mean ( black line ) and one standard error of the mean ; the bars show two standard errors of the mean .panel b shows average user similarities among users in the same age group , as a function of age .note that there are no female users of age greater than .the data suggests that male users are slightly more similar to each other than female users are , and that for all gender pairs similarity decreases with age ( f - f : spearman s ; p - value ; m - m : spearman s , p - value ; spearman s , p - value ) . ] figure [ f.agesex ] shows that when when we divide users according to gender , pairs of male users have more similar profiles than pairs of female users or male - female pairs ( see fig .[ f.agesex]a ) .interestingly , when we combine gender and age to define user groups , we find that gender profile similarities are not independent of the age groups ( see fig .[ f.agesex]b ) .in fact , we observe the general tendency that young users within a gender group seem to have larger profile similarities than older users .interestingly , this tendency is more apparent for female users who are the group with larger similarity for ages 10 - 20 and the one with lower similarity for ages 40 - 50 .our results show that the mmsbm we propose , and its associated expectation - maximization algorithm , is a robust , well - performing , and scalable solution to predict user - item ratings in different contexts .additionally , the interpretability of its parameters enables the analysis of the underlying social behavior of users .for example , we found that the similarity of users behavior is correlated with their gender and their age .these findings could conceivably lead to extensions of the model that take such behavioral considerations into account , for example by adding metadata to users ( e.g. age and gender ) and items ( e.g. 
genre ) .in fact , stochastic block models with node metadata have recently been proposed and may be a promising way to extend our approach .another advantage of the interpretability of our model and its parameters is that it can be readily applied to ( and performs well in ) situations that are challenging to other approaches , such as a cold start where no prior information is available about a new user or item .finally , the mmsbm outperforms matrix factorization in all the cases we consider , often by a large amount .as we have discussed , this is due to the fact that mmsbm is a more expressive generalization of the model underlying matrix factorization ; matrix factorization corresponds roughly to the special case of mmsbm where the matrices are diagonal , and where we assume the rating probabilities for different are strongly correlated ( corresponding to treating as a number rather than a symbol ) .matrix factorization is a widely used tool with many applications beyond recommender systems ; given our findings and the scalable expectation - maximization algorithm , it may make sense to use mmsbms in those other applications as well .in the mmsbm , each user has a vector describing how much she belongs to group , and each item has a vector describing how much it belongs to group .we treat these as probabilities , and normalize them as similarly , the matrices are normalized to give probability distributions of ratings over , we maximize the likelihood as a function of using an expectation maximization ( em ) algorithm .we start with a standard variational trick that changes the log of a sum into a sum of logs , writing here is the estimated probability that a given ranking is due to and belonging to groups and respectively , and the lower bound in the third line is jensen s inequality .this lower bound holds with equality when giving us the update equation for the expectation step . for the maximization step ,we derive update equations for the paremeters by taken derivatives of the log - likelihood . including lagrange multipliers for the normalization constraints , we obtain where is the degree of the user .similarly , where is the degree of item .this completes the derivation of and . finally , including a lagrange multiplier for , we have completing the derivation of .[ cols="<,^,>,>,>,^",options="header " , ] [ t - data ] we perform experiments on six different datasets : the movielens 100k and 10 m datasets ( ) , yahoo ! songs ( , ) , amazon books ( ) , and the libimseti.cz dating agency ( ) . we split the libimseti.cz dataset into two datasets : women rating men ( w - m ) and men rating women ( m - w ) .we neglected the links of women rating women and men rating men ; unfortunately these links constituted only 1% of the dataset . in table[ t - data ] , we show the characteristics of each dataset in terms of the scale of ratings , the total number of users , the total number of items , the number of ratings and the average percentage of cold start cases . 
* naive model * as a baseline for comparison, we consider a naive model. its prediction for a rating is simply the average of the item's observed ratings.

* item-item * the item-item algorithm uses the cosine similarity between items, based on the vectors of ratings they have received, adjusted to remove user biases towards higher or lower ratings. the cosine similarity of items and is then . the predicted rating is the similarity-weighted average over the closest neighbors of the item that the user has rated. we use the default, optimized implementation of the algorithm in lenskit with .

* matrix factorization * one of the most widely used recommendation algorithms is matrix factorization (mf). like the block model, the intuition behind matrix factorization is that there should be some latent features that determine how a user rates an item. however, it uses linear algebra to reduce the dimensionality of the problem. specifically, it assumes that the matrix of ratings (with rows and columns) is of low rank, in which case it can be written as a product of two smaller matrices. if we denote the rows of the first matrix as and the columns of the second as , then individual ratings are inner products. we then assume that some noise and/or bias has been applied to produce the observed ratings. for example, some users rate items higher than others, and some items are systematically highly rated. in order to take this into consideration, the unobserved ratings are estimated using , where and are the biases of users and items respectively and is the average observed rating. for the purpose of making recommendations, it is convenient to pose the decomposition problem as an optimization one; in particular, minimizing the error and applying a regularization term gives an objective with regularization parameter . as funk originally proposed, one can solve this problem numerically using stochastic gradient descent. we use the lenskit implementation of the algorithm, with and a learning rate of , as suggested in ref. .

* stochastic block model * the stochastic block model (sbm) assumes that the probability that two nodes form a link between them, such as a relationship between actors in a social network, depends on which groups they belong to. analogously, the sbm recommender algorithm assumes that the probability of a rating of a user for an item depends on the groups to which they belong; unlike this paper, it assumes that each user or item belongs to a single group rather than a mixture. it uses a bayesian approach that deals rigorously with the uncertainty associated with the models that could potentially account for the observed ratings. mathematically, the problem is to estimate the probability that the unobserved rating of an item by a user takes a given value, given the observable ratings. this is an integral over all possible block models, where one factor is the probability of the rating if the ratings were actually generated using a given model, and the other is the probability of that model given the observations (assuming for simplicity that all models are equally likely a priori). this integral is over the continuous and discrete parameters of the block model.
in particular, for each rating and each pair of groups we integrate over the continuous parameters .
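for comparison, here is a minimal stochastic-gradient sketch of the biased matrix-factorization baseline described above (funk-style updates); the learning rate, regularization value and latent dimension used here are illustrative defaults of ours, not the lenskit settings quoted in the text:

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=10, lr=0.002, lam=0.05, epochs=30, seed=0):
    """ratings: list of (u, i, r).  Returns the global mean, biases and latent factors
    of the model  r_hat = mu + b_u + b_i + p_u . q_i  fitted by regularized SGD."""
    rng = np.random.default_rng(seed)
    mu = np.mean([r for _, _, r in ratings])         # global average rating
    bu = np.zeros(n_users)                            # user biases
    bi = np.zeros(n_items)                            # item biases
    P = 0.1 * rng.standard_normal((n_users, k))       # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))       # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])   # prediction error
            bu[u] += lr * (e - lam * bu[u])
            bi[i] += lr * (e - lam * bi[i])
            # update both factor vectors from the same (old) values
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - lam * P[u]),
                          Q[i] + lr * (e * P[u] - lam * Q[i]))
    return mu, bu, bi, P, Q
```

predictions for unseen pairs are then `mu + bu[u] + bi[i] + P[u] @ Q[i]`, clipped to the rating scale if desired.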
with ever - increasing amounts of online information available , modeling and predicting individual preferences for books or articles , for example is becoming more and more important . good predictions enable us to improve advice to users , and obtain a better understanding of the socio - psychological processes that determine those preferences . we have developed a collaborative filtering model , with an associated scalable algorithm , that makes accurate predictions of individuals preferences . our approach is based on the explicit assumption that there are groups of individuals and of items , and that the preferences of an individual for an item are determined only by their group memberships . importantly , we allow each individual and each item to belong simultaneously to mixtures of different groups and , unlike many popular approaches , such as matrix factorization , we do not assume implicitly or explicitly that individuals in each group prefer items in a single group of items . the resulting overlapping groups and the predicted preferences can be inferred with a expectation - maximization algorithm whose running time scales linearly ( per iteration ) with the number of observed ratings . our approach enables us to predict individual preferences in large datasets , and is considerably more accurate than the current algorithms for such large datasets . the goal of recommender systems is to predict what movies we are going to like , what books we are going to purchase , or even who we might be interested in dating . the rapidly growing amount of data on item reviews , ratings , and purchases from a growing number of online platforms holds the promise to facilitate the development of finer and more informed models for recommendation . at the same time , however , it poses the challenge of developing algorithms that can handle such large amounts of data both accurately and efficiently . a plausible expectation when developing recommendation algorithms is that similar users relate to similar objects in a similar manner , i.e. , they purchase similar items and give the same item similar ratings . this means that we can use the rating history of a set of users to make recommendations , even without knowing anything about the characteristics of users or items ; this is the basic underlying assumption of collaborative filtering , one of the simplest and most common approaches in recommender systems . however , most research in recommender systems has not focused on precisely formalizing these general assumptions into plausible and rigorous models , but rather on the development of scalable algorithms , often at the price of implicitly using models that are overly simplistic or unrealistic . for example , matrix factorization and latent feature approaches assume that users and items live in some abstract low - dimensional space , but whether such a space is expressive enough to accommodate for the rich variety of user behaviors is rarely discussed . as a result , such state - of - the - art scalable approaches have significantly lower accuracies than inference approaches based on models of user preferences that are socially more realistic . on the other hand , these more realistic approaches do not scale well with dataset size , which makes them unpractical for large datasets . here , we develop an approach to predict user ratings that makes explicit hypotheses about rating behavior . 
in particular , our approach is based on the assumption that there are groups of users and of items , and that the rating a given user assigns to a given item is determined probabilistically by their group memberships . importantly , we do not assign users and items to a specific group ; rather , we allow each user and each item to belong simultaneously to mixtures of different groups . all of these elements are combined in a model with a precise probabilistic interpretation , which allows for rigorous inference algorithms . happily , the inference problem for our model can be solved very efficiently : specifically , we propose an expectation - maximization algorithm whose running time , per iteration , scales linearly with the number of observed ratings , and which appears to converge rapidly in practice . we demonstrate that our model is more realistic than those implicit in other approaches ( particularly matrix factorization ) and that , as a consequence , our approach consistently outperforms state - of - the - art collaborative filtering approaches , often by a large margin . moreover , because our model has a clear interpretation , it can deal naturally with some situations that are challenging for other approaches ( for example , the cold start problem ) and can help to build theories about user behavior . we argue that our approach may also be suitable for other areas where matrix factorization is increasingly used such as image reconstruction , textual data mining , cluster analysis or pattern discovery .
this part should serve as a collection of academic test examples for calculating the minimum time function for several , mainly linear control problems which were previously discussed in the literature .the numerical approximation of the minimum time function is based on the calculation of reachable sets with set - valued quadrature methods and is described in full details in the first part .many links to other numerical approaches for approximating reachable sets are also given there .in several examples , we compare the error of the minimum time function studying the influence of its regularity and of the smoothness of the support functions of corresponding set - valued integrands .we verify the obtained error estimates which involve time and space discretization .we will not repeat the importance , applications and study of the minimum time function in this second part and refer also for the notations to the first part .we first consider several linear examples with various target and control sets and study different levels of regularity of the corresponding minimum time function .the control sets are either one- or two - dimensional polytopes ( a segment or a square ) or balls and are varied to study different regularity allowing high or low order of convergence for the underlying set - valued quadrature method . in all linear examples ,we apply a set - valued combination method of order 1 and 2 ( the set - valued riemann sum combined with euler s method resp .the set - valued trapezoidal rule with heun s method ) . for the nonlinear example in subsection [ subsec_nonlin ], we would like to approximate the time - reversed dynamics of directly by euler s and heun s method .this example demonstrates that this approach is not restricted to the class of linear control systems .although first numerical experiences are gathered , its theoretical justification has to be gained by a forthcoming paper . in subsection [ subsec_non_exp_prop ]one example demonstrates the need of the strict expanding property of ( union of ) reachable sets for characterizing boundary points of the reachable set via time - minimal points ( compare ( * ? ? ?* propositions 2.18 and 2.19 ) ) .the two - dimensional system in example [ ex : counter_ex_1 ] is not normal and has only a one - dimensional reachable set .the section ends with a collection of examples which either are more challenging for numerical calculations or partially violate ( * ? ? ?* assumptions 2.13 ( iv ) and ( iv) in proposition 2.19 ) .finally , a discussion of our approach and possible improvements can be found in section [ sec : concl ] .the following examples in the next subsections are devoted to illustrating the performance of the error behavior of our proposed approach .the space discretization follows the presented approach in ( * ? ? ?* subsec .3.2 ) and uses supporting points in directions u \subset { { \mathbb r}}^2 ] with step size . in the linear ,two - dimensional , time - invariant examples [ ex:1][ex:3b ] we can check ( * ? ? 
?* assumption 2.13 (iv)), i.e. that the (union of) reachable sets is _ strictly expanding _ on the compact interval . we consider either the small ball or the origin as target set. this is a simple time-invariant example with the zero matrix as system matrix. its fundamental solution matrix is the identity matrix, therefore any method from (i)-(iii) gives the exact solution due to the symmetry of the problem. for instance, the set-valued euler scheme with step size yields the exact reachable sets; therefore the error is only due to the space discretization and does not depend on the step size (see table [ tab:1 ]). the error would be the same for finer step sizes in time or if a higher-order method were applied. note that the error for the origin as target set (no space discretization error) is of the magnitude of the rounding errors of floating point numbers. we choose and for the computations. the set-valued riemann sum combined with euler's method is used. it is easy to check that the minimum time function is lipschitz continuous, since one of the equivalent petrov conditions in (* ? ? ? iv, theorem 1.12) is satisfied with or . the support function of the set-valued integrand is constant with respect to the time, so it is trivially arbitrarily continuously differentiable with respect to time with bounded derivatives uniformly for all directions.

in fig. [ fig:1_ball ] the minimum time functions are plotted for example [ ex:1 ] for two different control sets (left and right panels). we now study well-known dynamics such as the double integrator and the harmonic oscillator, in which the control set is one-dimensional. the classical rocket car example with hölder-continuous minimum time function was already computed by the hamilton-jacobi-bellman approach in (* ? ? ? * test 1) and , where numerical calculations are carried out by enlarging the target (the origin) by a small ball.

[ ex:2 ] a) the following dynamics is the _ double integrator _, see e.g. . we consider either the small ball or the origin as target set. then the minimum time function is continuous for the first choice of the target (see ), and the support function for the time-reversed dynamics,
\[
\delta^*\big(l,\Phi(t,\tau)\bar B(\tau)[-1,1]\big)
 = \delta^*\Bigg(l,\begin{bmatrix} 1 & -(t-\tau) \\ 0 & 1 \end{bmatrix}
   \begin{bmatrix} 0 \\ -1 \end{bmatrix}[-1,1]\Bigg)
 = \big|(t-\tau,-1)\cdot l\big| ,
\]
is only absolutely continuous with respect to for some directions with . hence, we can expect that the convergence order for the set-valued quadrature method is at most . we fix as maximal computed value for the minimum time function and . in table [ tab:2 ] the error estimates for two set-valued combination methods are compared (order 1 versus order 2). since the minimum time function is only continuous we expect as overall convergence order resp. . a least squares approximation of the function for the error term reveals , for the euler scheme combined with the set-valued riemann sum resp .
, ( if is fixed , then ) for heun s method combined with set - valued trapezoidal rule .hence , the approximated error term is close to the expected one by ( * ? ? ?* theorem 3.7 and remark 3.8 ) .very similar results are obtained with the runge - kutta methods of order 1 and 2 in table [ tab:22 ] in which the set - valued euler method is slightly better than the combination method of order 1 in table [ tab:2 ] , and the set - valued heun s method coincides with the combination method of order 2 , since both methods use the same approximations of the given dymanics . here we have chosen to double the number of directions each time the step size is halfened which is suitable for a first order method . for a second order methodwe should have multiplied by 4 instead . from this pointit is not surprising that there is no improvement of the error in the fifth row for step size ..error estimates for ex .[ ex:2 ] a ) for combination methods of order 1 and 2 [ cols="<,^,^,^",options="header " , ] + the minimum time function for this example is shown in fig .[ exam_5 ] ., width=432,height=288 ] the next example violates the continuity of the minimum time function ( the dynamics is not normal ) .nevertheless , the proposed ( * ? ? ?* algorithm 3.4 ) is able to provide a good approximation of the discontinuous minimum time function .consider the dynamics [ ex : counter_ex_1 ] \dot{x}_2 \end{bmatrix}=\begin{bmatrix } 0 & -1 \\[0.3em ] 2 & 3 \end{bmatrix}\begin{bmatrix } x_1 \\[0.3em ] x_2 \end{bmatrix}+ u_1 \begin{bmatrix } 1 \\[0.3em ] -1 \end{bmatrix}\ ] ] with ] .the kalman rank condition yields = 1 < 2 \end{aligned}\ ] ] so that the normality of the system is not fulfilled .the fundamental system ( for the time - reversed system ) is the same as in example [ ex:3a ] so that )=e^{\tau - t}\vert l_1-l_2 \vert = e^{\tau - t } \delta ^*(l , v ) , \\\intertext{with the line segment . since } & \int_0^t \delta^*(l , \phi(t,\tau)\bar b(\tau)[-1,1 ] ) d\tau = e^{\tau - t } \bigg|_{\tau=0}^t \cdot \delta ^*(l , v ) \\ = & ( 1 - e^{-t } ) \cdot \delta ^*(l , v ) = \delta ^*(l , ( 1 - e^{-t } ) v ) , \\\mathcal{r}(t ) & = \int_0^t \phi(t,\tau)\bar b(\tau)[-1,1 ] d\tau = ( 1 - e^{-t } ) v. \end{aligned}\ ] ] hence , the reachable set is an increasing line segment ( and always part of the same line in , i.e. it is one - dimensional so that the interior is empty ) .clearly , both inclusions i.e. ( see ( * ? ? ? * ( 2.12 ) ) ) , hold , but not the strictly expanding property of on ] . has negative eigenvalues -1 and -2 .hence , the reachable sets converge towards a final , bounded , convex set , see fig .[ fig : exam_n1 ] ( right ) .we experience the same numerical problems as in example [ ex : n1 ] .[ ex : n4 ] let the dynamics be given by in case a ) the reachable sets for a given end time are always balls around the origin ( see fig . [fig : exam_n4 ] ( left ) ) , if the target set is chosen as the origin . in caseb ) the point is considered as target set .[ fig : exam_n4 ] ( right ) shows that the union of reachable sets is no longer convex ., title="fig : " ] , title="fig : " ] [ ex : n5 ] let us reconsider the dynamics of example [ ex:2 ] , i.e. in the first case , let ] is chosen .the convex reachable set is not only enlarging , but also moving which results in the nonconvexity of .moreover , both ( * ? ? 
?* ( 2.12 ) in remark 2.20 ) and are not fulfilled in this example as depicted in fig .[ fig : exam_n5 ] ( right ) ., title="fig : " ] , title="fig : " ]although the underlying set - valued method approximating reachable sets in linear control problems is very efficient , the numerical implementation is a first realization only and can still be considerably improved .especially , step 3 in ( * ? ? ?* algorithm 3.4 ) can be computed more efficiently as in our test implementation .furthermore , higher order methods like the set - valued simpson s rule combined with the runge - kutta(4 ) method are an interesting option in examples where the underlying reachable sets can be computed with higher order of convergence than 2 , especially if the minimum time function is lipschitz . but even if it is merely hlder - continuous with , the higher order in the set - valued quadrature method can balance the missing regularity of the minimum time function and improves the error estimate .we are currently working on extending this first approach for linear control problems without the monotonicity assumption on reachable sets and for nonlinear control problems .the authors want to express their thanks to giovanni colombo and lars grne who supported us with helpful suggestions and motivating questions .r. baier .selection strategies for set - valued runge - kutta methods . in z.li , l. g. vulkov , and j. wasniewski , editors , _ numerical analysis and its applications , third international conference , naa 2004 , rousse , bulgaria , june 29 - july 3 , 2004 , revised selected papers _ ,volume 3401 of _ lecture notes in comput ._ , pages 149157 , berlin heidelberg , 2005 .springer .r. baier and f. lempio . approximating reachable sets by extrapolation methods . in p.j. laurent , a. le mhaute , and l. l. schumaker , editors , _ curves and surfaces in geometric design .papers from the second international conference on curves and surfaces , held in chamonix - mont - blanc , france , july 1016 , 1993 _ , pages 918 , wellesley , 1994 . a k peters .r. baier and f. lempio .computing aumann s integral . in a.b. kurzhanski and v. m. veliov , editors , _ modeling techniques for uncertain systems , proceedings of a conference held in sopron , hungary , july 610 , 1992 _ , volume 18 of _ progress in systems and control theory _ , pages 7192 , basel , 1994 .birkhuser .m. bardi , m. falcone , and p. soravia .numerical methods for pursuit - evasion games via viscosity solutions . in _ stochastic and differential games_ , volume 4 of _ ann .internat .games _ , pages 105175 .birkhuser boston , boston , ma , 1999 .m. falcone .numerical solution of dynamic programming equations .appendix a. in m. bardi and i. capuzzo - dolcetta , editors , _ optimal control and viscosity solutions of hamilton - jacobi - bellman equations _ , systems & control : foundations & applications , pages 471504 .birkhuser boston inc . ,boston , ma , 1997 .
in the first part of this paper we introduced an algorithm that uses reachable set approximation to approximate the minimum time function of linear control problems . to illustrate the error estimates and to demonstrate differences to other numerical approaches we provide a collection of numerical examples which either allow higher order of convergence with respect to time discretization or where the continuity of the minimum time function can not be sufficiently granted , i.e. we study cases in which the minimum time function is hlder continuous or even discontinuous
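as a purely illustrative companion to the approach summarized above, the sketch below builds an outer polyhedral approximation of the reachable sets of a linear system from support-function values evaluated with a riemann sum, and reads off an approximate minimum time as the first grid time at which a query point is covered; the system matrices, the scalar control in [-1,1], the direction grid and the tolerance are assumptions made for this example only, not the setting of any particular test problem above:

```python
import numpy as np
from scipy.linalg import expm

# assumed (time-reversed) linear dynamics x' = A x + B u with scalar control u in [-1, 1]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def support_reachable(directions, T, n_steps=50, x0=np.zeros(2)):
    """Support-function values delta*(l, R(T)) for the rows l of `directions`:
    delta*(l, R(T)) = l . e^{AT} x0 + int_0^T | l . e^{A(T-t)} B | dt,
    with the integral approximated by a Riemann sum (first-order quadrature)."""
    h = T / n_steps
    vals = directions @ expm(A * T) @ x0
    for j in range(n_steps):
        M = expm(A * (T - j * h))
        vals = vals + h * np.abs(directions @ M @ B).ravel()
    return vals

def min_time(x, directions, t_grid, n_steps=50):
    """First grid time at which x lies inside the outer approximation of R(t)."""
    for T in t_grid:
        if np.all(directions @ x <= support_reachable(directions, T, n_steps) + 1e-12):
            return T
    return np.inf

dirs = np.array([[np.cos(a), np.sin(a)]
                 for a in np.linspace(0, 2 * np.pi, 32, endpoint=False)])
print(min_time(np.array([0.2, 0.1]), dirs, np.linspace(0.05, 3.0, 30)))
```

refining the time grid, the number of quadrature steps and the direction grid tightens the approximation, in line with the combined time/space error estimates discussed above.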
to understand the nature and origins of mass opinion formation is the outstanding challenge of democratic societies . in a last few years an enormous development of such social networks as livejournal , facebook , twitter andvkontakte , with up to hundreds of millions of users , demonstrated the growing influence of these networks on social and political life .the small - world scale - free structure of the social networks ( see e.g. ) , combined with their rapid communication facilities , leads to a very fast information propagation over networks of electors , consumers , citizens making them very active on instantaneous social events .this puts forward a request for new theoretical models which would allow to understand the opinion formation process in the modern society of xxi century .the important steps in the analysis of opinion formation have been done with the development of various voter models described in a great detail in , ,,, , ,, .this research field became known as sociophysics ,, . in this workwe introduce several new aspects which take into account the generic features of the social networks . at first, we analyze the opinion formation on real directed networks taken from the academic web link database of british universities networks , livejournal database and twitter dataset .this allows us to incorporate the correct scale - free network structure instead of unrealistic regular lattice networks , often considered in voter models . at second , we assume that the opinion at a given node is formed by the opinions of its linked neighbors weighted with the pagerank probability of these network nodes . we think that this step represents the reality of social networks : all of network nodes are characterized by the pagerank vector which gives a probability to find a random surfer on a given node as described in .this vector gives a steady - state probability distribution on the network which provides a natural ranking of node importance , or elector or society member importance . in a certain sensethe top nodes of pagerank correspond to a political elite of the social network which opinion influences the opinions of other members of the society .thus the proposed pagerank opinion formation ( prof ) model takes into account the situation in which an opinion of an influential friend from high ranks of the society counts more than an opinion of a friend from lower society level .we argue that the pagerank probability is the most natural form of ranking of society members .indeed , the efficiency of pagerank rating is demonstrated for various types of scale - free networks including the world wide web ( www ) , _ physical review _citation network , scientific journal rating , ranking of tennis players , wikipedia articles , the world trade network and others . due to the above argument we consider that the prof model captures the reality of social networks and below we present the analysis of its interesting properties .the paper is composed as follows : the prof model is described in sec .ii , the numerical results on its properties are presented in sec.iii for british university networks . in sec .iv we combine the prof model with the sznajd model and study the properties of the prof - sznajd model . in sec.vwe analyze the models discussed in previous sections on an example of large social network of the livejournal . 
the results for the twitter dataset are presented in sec.vi .the discussion of the results is presented in sec.vii .the prof model is defined in the following way . in agreement with the standard pagerank algorithm determine the pagerank probability for each node and arrange all nodes in a monotonic decreasing order of the probability . in this wayeach node has a probability and the pagerank index with the maximal probability is at ( ) .we use the usual damping factor value to compute the pagerank vector of the google matrix of the network ( see e.g. ,, ) .in addition to that a network node is characterized by an ising spin variable which can take values or coded also by red or blue color respectively .the sign of a node is determined by its direct neighbors which have the pagerank probabilities . for that we compute the sum over all directly linked neighbors of node : where and denote the pagerank probability of a node pointing to node ( incoming link ) and a node to which node points to ( outgoing link ) respectively . here, the two parameters and are used to tune the importance of incoming and outgoing links with the imposed relation ( ) .the values and correspond to red and blue nodes respectively .the value of spin takes the value or respectively for or . in a certain sensewe can say that a large value of parameter corresponds to a conformist society where an elector takes an opinion of other electors to which he points to ( nodes with many incoming links are in average at the top positions of pagerank ) . on the opposite side a large value of corresponds to a tenacious society where an elector takes mainly an opinion of those electors who point to him .+ 0.2 cm as a function of number of iterations .the red and black curves ( top and bottom curves at respectively ) show evolution for two different realizations of random distribution of color with the same initial fraction at .the green curve ( middle curve at ) shows dependence for the initial state with all red nodes with top pagerank indexes ( highest values , ) .the evolution is done at and temperature ._ left panel : _ cambridge network with . _ right panel : _ oxford network with .[ fig1],title="fig:",scaledwidth=23.0% ] 0.2 cm as a function of number of iterations .the red and black curves ( top and bottom curves at respectively ) show evolution for two different realizations of random distribution of color with the same initial fraction at .the green curve ( middle curve at ) shows dependence for the initial state with all red nodes with top pagerank indexes ( highest values , ) .the evolution is done at and temperature ._ left panel : _ cambridge network with . _right panel : _ oxford network with .[ fig1],title="fig:",scaledwidth=23.0% ] -0.2 cm the condition ( [ eq1 ] ) on spin inversion can be written via the effective ising hamiltonian of the whole system of interacting spins : where the spin - spin interaction determines the local magnetic field on a given node which gives the local spin energy . according to ( [ eq2 ] ) , ( [ eq3 ] )the interaction between a selected spin and its neighbors is given by the pagerank probability : .thus from a physical view point the whole system can be viewed as a disordered ferromagnet . 
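the flip condition of eq. ([ eq1 ]) and the pagerank weighting can be illustrated with a small python sketch; the adjacency structure, the damping factor value and the synchronous sweep below are simplifications of ours (the simulations described in the text update nodes one at a time):

```python
import numpy as np

def pagerank(adj, alpha=0.85, n_iter=200):
    """adj[j] = list of nodes that j points to; returns the PageRank vector P."""
    n = len(adj)
    P = np.ones(n) / n
    for _ in range(n_iter):
        new = np.full(n, (1 - alpha) / n)
        for j, targets in enumerate(adj):
            if targets:
                new[np.array(targets)] += alpha * P[j] / len(targets)
            else:
                new += alpha * P[j] / n      # dangling node: spread uniformly
        P = new
    return P

def prof_sweep(adj, P, sigma, a=0.5):
    """One sweep of the PROF rule:
    sigma_i <- sign( a * sum_{j->i} P_j sigma_j + b * sum_{i->j} P_j sigma_j ), b = 1 - a."""
    b = 1.0 - a
    n = len(adj)
    incoming = [[] for _ in range(n)]
    for j, targets in enumerate(adj):
        for i in targets:
            incoming[i].append(j)
    new_sigma = sigma.copy()
    for i in range(n):
        s = a * sum(P[j] * sigma[j] for j in incoming[i]) \
            + b * sum(P[j] * sigma[j] for j in adj[i])
        if s != 0:
            new_sigma[i] = 1 if s > 0 else -1
    return new_sigma

# usage (hypothetical): sigma = np.where(np.random.rand(len(adj)) < 0.3, 1, -1)
```

iterating `prof_sweep` until no spin changes corresponds to the zero-temperature relaxation discussed below.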
in this waythe condition ( [ eq1 ] ) corresponds to a local energy minimization done at zero temperature .we note that such an analogy with spin systems is well known for opinion formation models on regular lattices ,, .however , it should be noted that generally we have asymmetric couplings that is unusual for physical problems ( see discussion in ) . in view of this analogyit is possible to introduce a finite temperature and then to make a probabilistic metropolis type condition for the spin inversion determined by a thermal probability , where is the energy difference between on - site energies with spin up and down . during the relaxation processeach spin is tested on inversion condition that requires steps and then we do iterations of such steps .we discuss the results of the relaxation process at zero and finite temperatures in next section .here we present results for prof model considered on the networks of cambridge and oxford universities in year 2006 , taken from .the properties of pagerank distribution for these networks have been analyzed in .the total number of nodes and links are : , ( cambridge ) ; , ( oxford ) . both networks are characterized by an algebraic decay of pagerank probability and approximately usual exponent value , additional results on the scale - free properties of these networks are given in .we discuss usually the fraction of red nodes since by definition all other nodes are blue . :left / right column is for cambridge / oxford network .the initial fraction of red colors is ( top panel ) and nodes have red color for bottom panels with and for cambridge and oxford network respectively .nodes are ordered by the pagerank index and color plot shows only .[ fig2],title="fig:",scaledwidth=23.0% ] : left / right column is for cambridge / oxford network .the initial fraction of red colors is ( top panel ) and nodes have red color for bottom panels with and for cambridge and oxford network respectively .nodes are ordered by the pagerank index and color plot shows only .[ fig2],title="fig:",scaledwidth=23.0% ] + : left / right column is for cambridge / oxford network .the initial fraction of red colors is ( top panel ) and nodes have red color for bottom panels with and for cambridge and oxford network respectively .nodes are ordered by the pagerank index and color plot shows only .[ fig2],title="fig:",scaledwidth=23.0% ] : left / right column is for cambridge / oxford network . 
the initial fraction of red colors is ( top panel ) and nodes have red color for bottom panels with and for cambridge and oxford network respectively .nodes are ordered by the pagerank index and color plot shows only .[ fig2],title="fig:",scaledwidth=23.0% ] -0.2 cm the typical examples of time evolution of fraction of red nodes with the number of time iterations are shown in fig .we see the presence of bistability in the opinion formation : two random states with the same initial fraction of red nodes evolve to two different final fractions of red nodes .the process gives an impression of convergence to a fixed state approximately after iterations .a special check shows that all node colors become fixed after this time .the convergence time to a fixed state is similar to those found for opinion formation on regular lattices where .the corresponding time evolution of colors is shown in fig .[ fig2 ] for first 10% of nodes ordered by the pagerank index .the results of fig .[ fig1 ] show that for a random initial distribution of colors we may have different final states with variation compared to the initial .however , if we consider that nodes with the top index values ( from to ) have the same opinion ( e.g. red nodes ) then we find that even a small fraction of the total number of nodes ( e.g. of about 0.5% or 1% of ) can impose its opinion for a significant fraction of nodes of about .this shows that in the frame of prof model the society elite , corresponding to top nodes , can significantly influence the opinion of the whole society under the condition that the elite members have the fixed opinion between themselves .we also considered the case when the red nodes are placed on top nodes of cheirank index .this ranking is characterized by the cheirank probability for a random surfer moving in the inverted direction of links as described in . in average is proportional to the number of outgoing links .however , in this case the top nodes with a small values are not able to impose their opinion and the final fraction becomes blue .we attribute this to the fact that the opinion condition ( [ eq1 ] ) is determined by the pagerank probability and that the correlations between cheirank and pagerank are not very strong ( see discussion in ) . to find a final red fraction , shown in , in dependence on an initial red fraction , shown in axis ; data are shown inside the unit square .the values of are defined as a relative number of realizations found inside each of cells which cover the whole unit square . 
here realizations of randomly distributed colors are used to obtained values ; for each realization the time evolution is followed up the convergence time with up to iterations ; here ._ left column : _ cambridge network ; _ right column : _ oxford network ; here from top to bottom .the probability is proportional to color changing from zero ( blue / black ) to unity ( red / gray ) .[ fig3],title="fig:",scaledwidth=23.0% ] -0.2 cm to analyze how the final fraction of red nodes depends on its initial fraction we study the time evolution for a large number of initial random realizations of colors following it up to the convergent time for each realization .we find that the final red nodes are homogeneously distributed in .thus there is no specific preference for top society levels for an initial random distribution .the probability distribution of final fractions is shown in fig .[ fig3 ] as a function of initial fraction at three values of parameter .these results show two main features of the model : a small fraction of red opinion is completely suppressed if and its larger fraction dominates completely for ; there is a bistability phase for the initial opinion range .of course , there is a symmetry in respect to exchange of red and blue colors . for small value have with while for large value we have , .+ 0.7 cm on the tenacious parameter ( or conformist parameter ) for initial red nodes in values of pagerank index ( ) ; black and red(gray ) curves show data for cambridge and oxford networks ; here .[ fig4],scaledwidth=44.0% ] -0.2 cm ( middle panels ) at but with uniform condition for spin flip being independent of pagerank probability ( top panels : in eq .( [ eq1 ] ) ) and pagerank probability replaced by in eq.([eq1 ] ) ( bottom panels ) ; left and right panels correspond to cambridge and oxford networks ; here and realizations are used .[ fig5],title="fig:",scaledwidth=23.0% ] ( middle panels ) at but with uniform condition for spin flip being independent of pagerank probability ( top panels : in eq .( [ eq1 ] ) ) and pagerank probability replaced by in eq.([eq1 ] ) ( bottom panels ) ; left and right panels correspond to cambridge and oxford networks ; here and realizations are used .[ fig5],title="fig:",scaledwidth=23.0% ] + ( middle panels ) at but with uniform condition for spin flip being independent of pagerank probability ( top panels : in eq .( [ eq1 ] ) ) and pagerank probability replaced by in eq.([eq1 ] ) ( bottom panels ) ; left and right panels correspond to cambridge and oxford networks ; here and realizations are used .[ fig5],title="fig:",scaledwidth=23.0% ] ( middle panels ) at but with uniform condition for spin flip being independent of pagerank probability ( top panels : in eq .( [ eq1 ] ) ) and pagerank probability replaced by in eq.([eq1 ] ) ( bottom panels ) ; left and right panels correspond to cambridge and oxford networks ; here and realizations are used .[ fig5],title="fig:",scaledwidth=23.0% ] -0.2 cm our interpretation of these results is the following .for small values of the opinion of a given society member is determined mainly by the pagerank of neighbors _ to whom he points to _( outgoing links ) .the pagerank probability of nodes , on which many nodes point to , is usually high since is proportional to the number of ingoing links .thus , at a society is composed of members who form their opinion listening an elite opinion .in such a society its elite with one color opinion can impose this opinion to a large fraction of the society .this is 
illustrated on fig .[ fig4 ] which shows a dependence of final fraction of red nodes on parameter for a small initial fraction of red nodes in the top values of pagerank index ( ) .we see that corresponds to a conformist society which follows in its great majority the opinion of its elite . for fraction drops significantly showing that this corresponds to a regime of tenacious society .it is somewhat surprising that the tenacious society ( ) has well defined and relatively large fixed opinion phase with a relatively small region of bistability phase .this is in a contrast to the conformist society at when the opinion is strongly influenced by the society elite .we attribute this to the fact that in fig .[ fig3 ] we start with a randomly distributed opinion , due to that the opinion of elite has two fractions of two colors that create a bistable situation since two fractions of society follows opinion of this divided elite that makes the situation bistable on a larger interval of compared to the case of tenacious society at . to stress the important role of pagerank in the dependence of on presented in fig .[ fig3 ] we show in fig .[ fig5 ] the same analysis at but for the case when in eq.([eq1 ] ) for the spin flip we take all ( equal weight for all nodes ) . the data of fig .[ fig5 ] clearly demonstrate that in this case the bistability of opinion disappears .thus the prof model is qualitatively different from the case when only the links without their pagerank weight are counted for the spin flip condition .we also test the sensitivity in respect to pagerank probability replacing by in eq.([eq1 ] ) as it is shown in fig .[ fig5 ] ( bottom panels ) .we see that compared to the case we start to have some signs of bistability but still they remain rather weak compared to the case of fig .[ fig3 ] .( middle panel ) at but at finite temperature during the relaxation process with ( top panels ) and ( bottom panels ) ; the number of random initial realizations is , the relaxation is done during iterations . left and right columns correspond to cambridge and oxford networks .[ fig6],title="fig:",scaledwidth=23.0% ] ( middle panel ) at but at finite temperature during the relaxation process with ( top panels ) and ( bottom panels ) ; the number of random initial realizations is , the relaxation is done during iterations . left and right columns correspond to cambridge and oxford networks .[ fig6],title="fig:",scaledwidth=23.0% ] + ( middle panel ) at but at finite temperature during the relaxation process with ( top panels ) and ( bottom panels ) ; the number of random initial realizations is , the relaxation is done during iterations . left and right columns correspond to cambridge and oxford networks .[ fig6],title="fig:",scaledwidth=23.0% ] ( middle panel ) at but at finite temperature during the relaxation process with ( top panels ) and ( bottom panels ) ; the number of random initial realizations is , the relaxation is done during iterations . left and right columns correspond to cambridge and oxford networks .[ fig6],title="fig:",scaledwidth=23.0% ] -0.2 cm in fact the spin flip condition ( [ eq1 ] ) can be viewed as a relaxation process in a disordered ferromagnet ( since all in ( [ eq2 ] ) , ( [ eq3 ] ) ) at zero temperature .such type of analysis of voter model relaxation process on regular lattices is analyzed in .from this view point it is natural to consider the effect of finite temperature on this relaxation . 
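a minimal sketch of such a finite-temperature flip attempt (metropolis-style acceptance on the local energy defined by eqs. ([ eq2 ]), ([ eq3 ]); the function signature and the way the local field is passed in are our own choices):

```python
import numpy as np

def metropolis_flip(sigma_i, local_field, T, rng=np.random.default_rng()):
    """Attempt to flip one spin.  local_field = a*sum_in P_j*s_j + b*sum_out P_j*s_j,
    so the local energy is E = -sigma_i * local_field and a flip changes it by
    dE = 2 * sigma_i * local_field."""
    dE = 2.0 * sigma_i * local_field
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return -sigma_i        # flip accepted
    return sigma_i             # flip rejected
```

in the limit T -> 0 only energy-lowering flips are accepted, which recovers the sign rule of eq. ([ eq1 ]).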
at finite the flip conditionis determined by the thermal metropolis probability as described in previous section .we follow this thermodynamic relaxation process at finite temperature up to iterations and in this way obtain the probability distribution of final fraction of red nodes obtained from initial fraction of red nodes randomly distributed over the network at .the results obtained at finite temperatures are shown at fig .they show that a finite temperature allows to have a finite fraction of red nodes when for their small initial fraction all final were equal to zero .also the bistability splitting is reduced and it disappears at larger values of .thus finite introduce a certain smoothing in distribution . over local energies obtained from the relaxation process during time iterations at temperatures ( black curve ) and ( red / gray curve ) ;average is done over random initial realizations .the insets show the distributions on a large scale including all local energies . left andright panels show cambridge and oxford networks .[ fig7],title="fig:",scaledwidth=23.0% ] 0.2 cm over local energies obtained from the relaxation process during time iterations at temperatures ( black curve ) and ( red / gray curve ) ; average is done over random initial realizations .the insets show the distributions on a large scale including all local energies . left andright panels show cambridge and oxford networks .[ fig7],title="fig:",scaledwidth=23.0% ] -0.2 cm however , the relaxation process at finite temperatures does not lead to the thermal boltzmann distribution . indeed , in fig .[ fig7 ] we show the probability distribution as a function of local energies defined in ( [ eq2 ] ) , ( [ eq3 ] ) .the distribution is obtained from the relaxation process with many initial random spin realizations .even if the temperature is comparable with typical values of local energies we still obtain a rather peaked distribution at being very different from the boltzmann distribution . ,coded by color from ( black ) to ( blue / dark gray ) , as a function of initial fraction of red nodes and the total initial energy ; each of random realizations is shown by color point ; data are shown after time iterations at .the energy is the modulus of total energy with all spin up ; here .left and right panels show data for cambridge ( ) and oxford ( ) networks ; bars show color attribution to final probability .[ fig8],title="fig:",scaledwidth=23.0% ] , coded by color from ( black ) to ( blue / dark gray ) , as a function of initial fraction of red nodes and the total initial energy ; each of random realizations is shown by color point ; data are shown after time iterations at .the energy is the modulus of total energy with all spin up ; here .left and right panels show data for cambridge ( ) and oxford ( ) networks ; bars show color attribution to final probability .[ fig8],title="fig:",scaledwidth=23.0% ] -0.2 cm we argue that a physical reason of significantly non - boltzmann distribution is related to the local nature of spin flip condition which does not allow to produce a good thermalization on a scale of the whole system .indeed , there are various energetic branches and probably nonlocal thermalization flips of group of spins are required for a better thermalization . 
however , the voting is a local process that involves only direct neighbors that seems to be not sufficient for emergence of the global thermal distribution .the presence of a few energy branches is well visible from the data of fig .[ fig8 ] obtained at .this figure shows the diagram of final fraction of red nodes in dependence on their initial fraction and the total initial energy of the whole system corresponding to a chosen initial random configuration of spins .most probably , these different branches prevent efficient thermalization of the system with only local spin flip procedure .in addition to the above points the asymmetric form of couplings plays an important role generating more complicated picture compared to the usual image of thermal relaxation ( see e.g. ) .we also note that the thermalization is absent in voter models on regular lattices .the sznajd model nicely incorporates the well - known trade union principle `` united we stand , divided we fall '' into the field of voter modeling and opinion formation on regular networks .the review of various aspects of this model is given in .here we generalize the sznajd model to include in it the features of prof model and consider it on social networks with their scale - free structure .this gives us the prof - sznajd model which is constructed in the following way .for a given network we determine the pagerank probability and pagerank index for all nodes .after that we introduce the definition of _ group _ of nodes .it is defined by the following rule applied at each time step : * _ i ) _ pick by random a node in the network and consider the polarization of the highest pagerank nodes pointing to it ; * _ ii ) _ if the node and all other nodes have the same color ( same spin polarization ) , then these nodes form a group whose effective pagerank value is the sum of all the member values ; if it is not the case then we leave the nodes unchanged and perform the next time step ; * _ iii ) _ consider all the nodes _ pointing to any member of the group _ ( this corresponds to the model _ option 1 _ ) or consider _ all the nodes pointing to any member of the group and all the nodes pointed by any member of the group _ ( this corresponds to the model _ option 2 _ ) ; then check all these nodes directly linked to the group : if an individual node pagerank value is less than then this node joins the group by taking the same color ( polarization ) as the group nodes ; if it is not the case then a node is left unchanged ; the pagerank values of added nodes are then added to the group pagerank and the group size is increased .the above time step is repeated many times during time , counting the number of steps , by choosing a random node on each next step .this procedure effectively corresponds to the zero temperature case in the prof model .+ 0.2 cm in the prof - sznajd model with the initial fraction of red nodes at one random realization .the curves show data for three values of group size ( blue / black ) ; ( green / light gray ) ; ( red / gray ) .full / dashed curves are for cambridge / oxford networks ; left panel is for option 1 ; right panel is for option 2 .[ fig9],title="fig:",scaledwidth=23.0% ] 0.2 cm in the prof - sznajd model with the initial fraction of red nodes at one random realization .the curves show data for three values of group size ( blue / black ) ; ( green / light gray ) ; ( red / gray ) .full / dashed curves are for cambridge / oxford networks ; left panel is for option 1 ; right panel is for option 2 .[ 
fig9 ]

[ caption of fig. [ fig10 ] : probability to find a final red fraction (y axis) in dependence on an initial red fraction (x axis); data are shown inside the unit square. the values are defined as the relative number of realizations found inside each of the cells which cover the whole unit square. realizations of randomly distributed colors are used; for each realization the time evolution is followed up to the convergence time with up to steps. left column: cambridge network; right column: oxford network; from top to bottom. the probability is proportional to color, changing from zero (blue/black) to unity (red/gray). ]

a typical example of the time evolution of the fraction of red nodes in the prof-sznajd model is shown in fig. [ fig9 ]. it shows that the system converges to a steady state after a time scale that is comparable with the convergence times for the prof models studied in previous sections. we see that there are still some fluctuations in the steady-state regime, which are visibly smaller for the option 2 case. we attribute this to the larger number of direct links in this case. the number of group nodes gives some variation of , but these variations remain on a relatively small scale of a few percent. here, we should point to an important difference between the prof and prof-sznajd models: for a given initial color realization, in the first case we have convergence to a fixed state after some convergence time, while in the second case we have convergence to a steady state which continues to fluctuate in time, keeping the color distribution only on average. the dependence of the final fraction of red nodes on its initial value is shown by the density plot of probability in fig. [ fig10 ] (option 1 of the prof-sznajd model). the probability is obtained from many initial random realizations in a similar way to the case of fig. [ fig3 ]. we see that there is a significant difference compared to the prof model (fig. [ fig3 ]): now even at small values of we find small but finite values of , while in the prof model the red color disappears at . this feature is related to the essence of the sznajd model: here even small groups can resist against a totalitarian opinion. other features of fig. [ fig10 ] are similar to those found for the prof model: we again observe bistability of opinion formation. the number of nodes which form the group does not affect the distribution significantly; we have smaller fluctuations at larger values, but the model works in a stable way already at . the results for the option 2 of the prof-sznajd model are shown in fig. [ fig11 ]. in this case the opinions with a small initial fraction of red nodes are suppressed in a significantly stronger way compared to option 1. we attribute this to the fact that large groups can suppress small groups more strongly, since the outgoing direct links are taken into account in this option.
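to make the group rule above concrete, here is a schematic sketch of one prof-sznajd time step for option 1; the data structures and the tie handling are our own reading of steps i)-iii), not the authors' code:

```python
import numpy as np

def sznajd_step(rng, in_links, P, sigma, n_group):
    """One PROF-Sznajd step, option 1.
    in_links[i]: nodes pointing to i;  P: PageRank vector;  sigma: array of +/-1 colors."""
    i = rng.integers(len(sigma))
    # the n_group highest-PageRank nodes pointing to i
    top = sorted(in_links[i], key=lambda j: P[j], reverse=True)[:n_group]
    if len(top) < n_group or any(sigma[j] != sigma[i] for j in top):
        return sigma                              # no unanimous group is formed
    group = set(top) | {i}
    group_rank = sum(P[j] for j in group)         # effective PageRank of the group
    # option 1: nodes pointing to any group member with PageRank below the group value join
    for g in list(group):
        for j in in_links[g]:
            if j not in group and P[j] < group_rank:
                sigma[j] = sigma[i]
                group.add(j)
                group_rank += P[j]                # added nodes increase the group PageRank
    return sigma
```

option 2 would additionally scan the nodes pointed to by the group members with the same joining criterion.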
[fig11]: as fig. [fig10], but for the prof-sznajd model, option 2.

the significant difference between the two options of the prof-sznajd model is well seen from the data of fig. [fig12]. here the top pagerank nodes are initially taken red (compare with the prof model in fig. [fig4]). for option 1 the society elite succeeds in imposing its opinion on a significant fraction of nodes, which is increased by a factor of 5-10. visibly, this increase is less significant than in the prof model. however, for option 2 of the prof-sznajd model there is practically no increase of the fraction of red nodes. thus, in option 2 the society members are very independent and the influence of the elite on their opinion is very weak.

[fig12]: fraction of red nodes in the prof-sznajd model with the initial red nodes taken as the top pagerank nodes, for three values of the number of initially red top nodes (blue/black, green/light gray, red/gray curves). full/dashed curves are for the cambridge/oxford networks; the left panel is for option 1, the right panel for option 2. the color order of the curves is red, green, blue from top to bottom at maximal time on both panels.

even if one can expect that the properties of university networks are similar to those of real social networks, it is important to analyze the previous prof models in the frame of a real social network. for that we use the livejournal network collected, described and presented in . from this database we obtain a directed network whose links are mainly directed (only about 30% of the links are symmetric). the google matrix of the network is constructed in the usual way, and its pagerank vector is determined by the iteration process at the chosen damping factor. for the time evolution of the fraction of red nodes we use time iterations defined as in the previous sections.

[fig13]: _left panel:_ pagerank probability of the livejournal network as a function of the pagerank index (full curve); the fitted algebraic dependence is shown by the dashed line. _right panel:_ time evolution of the opinion, given by the fraction of red nodes, as a function of the number of iterations (cf. fig. [fig1]); a few random initial realizations are shown.
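the pagerank vector mentioned above is obtained by iterating the google matrix with a damping factor. a minimal power-iteration sketch for a directed network stored as an adjacency list is given below; it is illustrative only, assumes the common convention that dangling-node weight is redistributed uniformly, and uses a damping factor of 0.85, which is not necessarily the value used in this work:

```python
import numpy as np

def pagerank(out_links, n_nodes, damping=0.85, tol=1e-10, max_iter=10_000):
    """Power iteration for the PageRank vector of a directed graph.
    out_links[i] is the list of nodes that node i points to."""
    p = np.full(n_nodes, 1.0 / n_nodes)
    for _ in range(max_iter):
        new_p = np.zeros(n_nodes)
        dangling = 0.0
        for i, targets in enumerate(out_links):
            if targets:
                share = p[i] / len(targets)
                for j in targets:
                    new_p[j] += share
            else:
                dangling += p[i]                 # dangling node: spread uniformly
        new_p = damping * (new_p + dangling / n_nodes) + (1.0 - damping) / n_nodes
        delta = np.abs(new_p - p).sum()
        p = new_p
        if delta < tol:
            break
    return p / p.sum()
```

for networks of the sizes quoted in the text one would use a sparse-matrix formulation in practice; the explicit loop is kept only to make the treatment of dangling nodes visible.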
the pagerank probability decay is shown in fig. [fig13]; it is well described by an algebraic law. the convergence of the fraction of red nodes takes place on approximately the same convergence time scale as before, even though the size of the network is larger by almost a factor of 20.

[fig14]: _left panel:_ dependence of the final fraction of red nodes on the tenacious (or conformist) parameter in the prof model, for initial red nodes taken at the top values of the pagerank index (cf. fig. [fig4]); the blue, green and red curves (from bottom to top) correspond to different elite sizes. _right panel:_ same data as in fig. [fig3], with the same parameters, but for the livejournal network.

in a way similar to the university networks, we find that the homogeneous opinion of a society elite, represented by a small fraction of nodes, influences a large fraction of the whole society, especially when the conformist parameter is not very large (see fig. [fig14] in comparison with fig. [fig4]). the influence of an elite of 1% red nodes is larger in the case of the livejournal network. it is possible that this is related to the 30% larger number of links, but other structural network parameters may also play a role here. in spite of certain similarities with the data for the university networks discussed before, we find that the opinion diagram for the livejournal network (see fig. [fig14], right panel) is very different from those obtained for the university networks (see fig. [fig3]): the bistability has practically disappeared. we think that this difference originates from the significantly slower decay of pagerank in the case of livejournal. to check this assumption, we compare the probability distribution of the final opinion at a fixed initial opinion, using the prof model with the usual linear weight in eq.([eq1]) and with a quadratic weight (see fig. [fig15]). for the linear weight we find that only very small final values occur for this initial opinion, while for the quadratic weight we obtain a rather broad distribution of values over the main range, with a few large values. thus the final opinion is rather sensitive to the weight used in eq.([eq1]). however, in contrast to the university networks (see fig. [fig3], fig. [fig5]), where we have narrow one-peak or two-peak distributions, for the livejournal network with the quadratic weight we find a rather broad distribution of final opinions.
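since eq.([eq1]) itself is not reproduced in this excerpt, the following sketch should be read only as a schematic of a pagerank-weighted local vote of the kind described in the text: each elector adopts the color whose linked neighbors carry the larger total pagerank weight, with the weight taken either linear or quadratic in the pagerank probability. all names are hypothetical placeholders, and the actual prof rule (including the role of the conformist/tenacious parameter and the distinction between ingoing and outgoing links) may differ:

```python
import numpy as np

def prof_step(colors, neighbors, prank, weight="linear", rng=None):
    """One sweep of a schematic pagerank-weighted opinion update.
    colors: boolean array (True = red); neighbors[i]: nodes linked to i;
    prank: pagerank probabilities of all nodes."""
    rng = np.random.default_rng() if rng is None else rng
    w = prank if weight == "linear" else prank**2      # linear vs quadratic weight
    new_colors = colors.copy()
    for i in rng.permutation(len(colors)):             # random update order
        nbrs = neighbors[i]
        if len(nbrs) == 0:
            continue
        red_w = w[nbrs][colors[nbrs]].sum()
        blue_w = w[nbrs][~colors[nbrs]].sum()
        if red_w != blue_w:                            # ties leave the color unchanged
            new_colors[i] = red_w > blue_w
    return new_colors
```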
in the spirit of the renormalization map description considered in (see figs. 1, 2 there), it is possible to assume that one or two peaks correspond to one or two fixed-point attractors of the map. we conjecture that a broad distribution, as in fig. [fig15] (right panel), can correspond to a regime with a strange chaotic attractor appearing in the renormalization map dynamics. in principle, such chaotic renormalization dynamics is known to appear in coupled spin lattices when three-spin couplings are present (see and refs. therein). it is possible that the presence of a probability weight given by a certain power of the pagerank may lead to chaotic dynamics, which would generate a broad distribution of final opinions.

[fig15]: probability distribution of the final opinion for a fixed initial opinion in the prof model on the livejournal network. _left panel:_ usual linear weight in eq.([eq1]). _right panel:_ quadratic weight in eq.([eq1]). histograms are obtained from many initial random realizations; the normalization is fixed by the condition that the sum over all histogram bins is equal to unity.

we also made tests of the prof-sznajd model (option 1) on the livejournal database. however, in this case we found only small final values (similar to those in fig. [fig15], left panel) both for the linear and for the quadratic weight in eq.([eq1]). it is possible that the sznajd groups are less sensitive to the probability weight. we also analyzed opinion formation on the twitter dataset taken from . this network is rather large, and for this reason we present only the main features of the prof model for this directed network.

[fig16]: _left panel:_ pagerank probability of the twitter network as a function of the pagerank index (full curve); the fitted algebraic dependence is shown by the dashed line. _right panel:_ dependence of the final fraction of red nodes on the tenacious (or conformist) parameter in the prof model, for initial red nodes taken at the top values of the pagerank index (cf. fig. [fig4], fig. [fig14]); the blue line, the red curve with circles and the top green line correspond to different elite sizes.
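the algebraic decay of pagerank with its index quoted for fig. [fig13] and fig. [fig16] is an ordinary least-squares fit of a power law in log-log coordinates. a minimal sketch is shown below; the fit range and variable names are illustrative and not those used in the paper:

```python
import numpy as np

def fit_pagerank_decay(prank, k_min=10, k_max=10_000):
    """Fit P(K) ~ A / K**beta over a chosen range of the PageRank index K.
    Assumes strictly positive probabilities."""
    p_sorted = np.sort(prank)[::-1]                 # probabilities in decreasing order
    k = np.arange(1, len(p_sorted) + 1)
    sel = (k >= k_min) & (k <= min(k_max, len(p_sorted)))
    slope, intercept = np.polyfit(np.log(k[sel]), np.log(p_sorted[sel]), 1)
    return -slope, np.exp(intercept)                # decay exponent beta and amplitude A

# example: beta, amp = fit_pagerank_decay(pagerank_vector, k_min=10, k_max=10**5)
```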
the dependence of pagerank on its index for the twitter network is shown in fig. [fig16] (left panel). over the main range of indices we find a decay exponent similar to that of the livejournal network (see fig. [fig13]), even if there is a faster drop at larger index values. we note that this value is rather different from the value usually found for the zipf law and for the www. it is possible that this is related to the significantly larger average number of links per node, which is larger by a factor of 3.5 for the twitter network compared to the university networks analyzed in the previous sections. the effect of a homogeneous elite opinion, with the top pagerank nodes all taken red, is shown in fig. [fig16] (right panel). we see that on the twitter network a small elite fraction with a fixed opinion can impose it on practically the whole community for all values of the conformist parameter. we find that over almost the whole parameter range the final values are very close to unity, while beyond a threshold a much smaller value is found, as seen in fig. [fig16], right panel; thus the transition is very sharp. we attribute such a strong influence of the elite opinion to the very connected structure of the twitter network, with a significantly larger average number of links per node compared to the university and livejournal networks. for a fixed fraction of initial opinion, we find that the probability distribution of the final opinion is located in the range of small values both for the linear and for the quadratic weight used in eq.([eq1]) (we do not show these data). for the linear weight the situation is rather similar to the case of livejournal (see fig. [fig15]), but for the quadratic weight we find a significant difference between the two networks (see fig. [fig15]). the reason for such a significant difference in the quadratic-weight case requires a more detailed comparison of network properties. the large size of the twitter network makes numerical simulations of the prof-sznajd model rather heavy, and for this reason we did not study this model for this network.

in this work we proposed the pagerank model of opinion formation on social networks and analyzed its properties on the example of four different networks. for the two university networks we find rather similar properties of opinion formation. they are characterized by the important feature that the society elite with a fixed opinion can impose it on a fraction of the society members which is much larger than the initial elite fraction. however, when the initial opinions of the society members, including the elite, are distributed between two options, we find a significant range of opinion fractions within a bistability regime. this range depends on the conformist parameter, which characterizes the local aspects of opinion formation of linked society members. the generalization of the sznajd model to scale-free social networks gives interesting examples of opinion formation in which small, finite-size groups can keep their own opinion, different from the main opinion of the majority.
in this way the proposed prof-sznajd model shows that totalitarian opinions can be escaped by small sub-communities. we find that the properties of opinion formation are rather similar for the two university networks of cambridge and oxford. however, the results obtained for the livejournal and twitter networks show that the range of bistability practically disappears for these networks. our data indicate that this is related to the slower algebraic decay of pagerank in these cases compared to the university networks. however, the deeper reasons for such a difference require a more detailed analysis. indeed, the livejournal and twitter networks demonstrate rather different behavior for the quadratically weighted function of opinion formation. the studies performed for regular networks show the existence of stable or bistable fixed points for opinion formation models that have certain similarities with the opinion formation properties found in our studies. at the same time, the results obtained in show that three-body spin coupling can generate chaotic renormalization dynamics. some of our results (fig. [fig15], right panel) give indications of the possible existence of such a chaotic phase on social networks. the enormous development of social networks in the last few years definitely shows that the analysis of opinion formation on such networks requires further investigation. this research can also find various other applications. one of them can be the neuronal network of the brain, which is itself a directed scale-free network. the application of network science to brain networks is now under rapid development (see e.g. ), and the google matrix methods can find useful applications in this field.

this work is supported in part by the ec fet open project `` new tools and algorithms for directed network analysis '' (nadine 288956). we thank a. benczur and s. vigna for providing us friendly access to the livejournal database and the twitter dataset.

zaller, _the nature and origins of mass opinion_, cambridge university press, cambridge uk (1999).
wikipedia, _livejournal_, march 9, 2012, `http://en.wikipedia.org/wiki/livejournal`.
wikipedia, _facebook_, march 9, 2012, `http://en.wikipedia.org/wiki/facebook`.
wikipedia, _twitter_, march 9, 2012, `http://en.wikipedia.org/wiki/twitter`.
wikipedia, _vk (social network)_, march 9, 2012, `http://en.wikipedia.org/wiki/vk_(social_network)`.
s.n. dorogovtsev and j.f.f. mendes, _evolution of networks_, oxford univ. press (2003).
g. caldarelli, _scale-free networks_, oxford univ. press (2007).
s. galam, j. math. psychology *30*, 426 (1986).
liggett, _stochastic interacting systems: contact, voter and exclusion processes_, springer, berlin (1999).
s. galam, europhys. lett. *70*, 705 (2005).
watts and p.s. dodds, j. consumer research *34(4)*, 441 (2007).
s. galam, int. j. mod. phys. c *19*, 409 (2008).
c. castellano, s. fortunato, and v. loreto, rev. mod. phys. *81*, 591 (2009).
p. l. krapivsky, s. redner and e. ben-naim, _a kinetic view of statistical physics_, cambridge university press, cambridge uk (2010).
b. schmittmann and a. mukhopadhyay, phys. rev. e *82*, 066104 (2010).
academic web link database project, `http://cybermetrics.wlv.ac.uk/database/`.
m. kurucz, a.a. benczur, a. pereszlenyi, _large-scale principal component analysis on livejournal friends network_, workshop on social network mining and analysis, held in conjunction with the 13th acm sigkdd international conference on knowledge discovery and data mining (kdd 2008), las vegas nv, august 24-27 (2008); `http://dms.sztaki.hu/en/letoltes/livejournal-data`.
h. kwak, c. lee, h. park and s. moon, _what is twitter, a social network or a news media?_, www2010, p. 591, acm, new york, n.y. (2010); the web data are downloaded from the web site maintained by s. vigna, http://law.dsi.unimi.it/webdata/twitter-2010/.
s. brin and l. page, computer networks and isdn systems *30*, 107 (1998).
a.m. langville and c.d. meyer, _google's pagerank and beyond: the science of search engine rankings_, princeton university press, princeton (2006).
s. redner, phys. today *58(6)*, 49 (2005).
f. radicchi, s. fortunato, b. markines, and a. vespignani, phys. rev. e *80*, 056103 (2009).
west, t.c. bergstrom, and c.t. bergstrom, coll. *71*, 236 (2010); `http://www.eigenfactor.org/`.
f. radicchi, plos one *6*, e17249 (2011).
zhirov, o.v. zhirov and d.l. shepelyansky, eur. phys. j. b *77*, 523 (2010).
l. ermann and d.l. shepelyansky, acta phys. polonica a *120(6a)*, a158 (2011).
k. sznajd-weron and j. sznajd, int. j. mod. phys. c *11*, 1157 (2000).
frahm, b. georgeot and d.l. shepelyansky, j. phys. a: math. theor. *44*, 465101 (2011).
l. ermann, a.d. chepelianskii and d.l. shepelyansky, _towards two-dimensional search engines_, arxiv:1106.6215 [cs.ir] (2011); `http://www.quantware.ups-tlse.fr/qwlib/dvvadi/`.
s. galam and b. walliser, physica a *389*, 481 (2010).
n. metropolis, a.w. rosenbluth, m.n. rosenbluth, a.h. teller, and e. teller, j. chem. phys. *21*, 1087 (1953).
v. sood and s. redner, phys. rev. lett. *94*, 178701 (2005).
ananikian and s.k. dallakian, physica d *107*, 75 (1997).
zipf, _human behavior and the principle of least effort_, addison-wesley, boston (1949).
eguiluz, d.r. chialvo, g.a. cecchi, m. baliki, and a.v. apkarian, phys. rev. lett. *94*, 018102 (2005).
x.-n. zuo, r. ehmke, m. mennes, d. imperati, f.x. castellanos, o. sporns and m.p. milham, cereb. cortex, doi:10.1093/cercor/bhr269 (2011).
shepelyansky and o.v. zhirov, phys. lett. a *374*, 3206 (2010).
we propose the pagerank model of opinion formation and investigate its rich properties on the real directed networks of the universities of cambridge and oxford, of livejournal and of twitter. in this model the opinion formation of linked electors is weighted with their pagerank probability. we find that the society elite, corresponding to the top pagerank nodes, can impose its opinion on a significant fraction of the society. however, for a homogeneous distribution of two opinions there exists a bistability range of opinions, which depends on a conformist parameter characterizing the opinion formation. we find that the livejournal and twitter networks have a stronger tendency toward totalitarian opinion formation. we also analyze the sznajd model generalized to scale-free networks with the weighted pagerank vote of electors.
in recent years , casimir forces arising from quantum vacuum fluctuations of the electromagnetic field have become the focus of intense theoretical and experimental effort .this effect has been verified via many experiments , most commonly in simple , one - dimensional geometries involving parallel plates or approximations thereof , with some exceptions .a particular topic of interest is the geometry and material dependence of the force , a subject that has only recently begun to be addressed in experiments and by promising new theoretical methods .for example , recent works have shown that it is possible to find unusual effects arising from many - body interactions or from systems exhibiting strongly coupled material and geometric dispersion .these numerical studies have been mainly focused in two - dimensional or simple three - dimensional constant - cross - section geometries for which numerical calculations are tractable . in this manuscript , we present a simple and general method to compute casimir forces in arbitrary geometries and for arbitrary materials that is based on a finite - difference time - domain ( fdtd ) scheme in which maxwell s equations are evolved in time .a time - domain approach offers a number of advantages over previous methods .first , and foremost , it enables researchers to exploit powerful free and commercial fdtd software with no modification .the generality of many available fdtd solvers provides yet another means to explore the material and geometry dependence of the force , including calculations involving anisotropic dielectrics and/or three - dimensional problems .second , this formulation also offers a fundamentally different viewpoint on casimir phenomena , and thus new opportunities for the theoretical and numerical understanding of the force in complex geometries .our time - domain method is based on a standard formulation in which the casimir force is expressed as a contour integral of the frequency - domain stress tensor . like most other methods for casimir calculations ,the stress tensor method typically involves evaluation at imaginary frequencies , which we show to be unsuitable for fdtd .we overcome this difficulty by exploiting a recently - developed exact equivalence between the system for which we wish to compute the casimir force and a transformed problem in which all material properties are modified to include dissipation . to illustrate this approach , we consider a simple choice of contour , corresponding to a conductive medium , that leads to a simple and efficient time - domain implementation . 
finally , using a free , widely - available fdtd code , we compute the force between two vacuum - separated perfectly - metallic plates , a geometry that is amenable to analytical calculations and which we use to analyze various important features of our method .an illustration of the power and flexibility of this method will be provided in a subsequent article , currently in preparation , in which we will demonstrate computations of the force in a number of non - trivial ( dispersive , three - dimensional ) geometries as well as further refinements to the method .in what follows , we derive a numerical method to compute the casimir force on a body using the fdtd method .the basic steps involved in computing the force are : * map the problem exactly onto a new problem with dissipation given by a frequency - independent conductivity .* measure the electric and magnetic fields in response to current pulses placed separately at each point along a surface enclosing the body of interest . *integrate these fields in space over the enclosing surface and then integrate this result , multiplied by a known function , over time , via eq .( [ eq : time - force ] ) .the result of this process is the exact casimir force ( in the limit of sufficient computational resolution ) , expressed via eq .( [ eq : time - force ] ) and requiring only the time - evolution of eqs .( [ eq : fdtd1][eq : fdtd2 ] ) . in this section ,we describe the mathematical development of our time - domain computational method , starting from a standard formulation in which the casimir force is expressed as a contour integral of the frequency - domain stress tensor .we consider the frequency domain for derivation purposes only , since the final technique outlined above resides entirely in the time domain . in this framework ,computing the casimir force involves the repeated evaluation of the photon green s function over a surface surrounding the object of interest .our goal is then to compute via the fdtd method .the straightforward way to achieve this involves computing the fourier transform of the electric field in response to a short pulse .however , in most methods a crucial step for evaluating the resulting frequency integral is the passage to imaginary frequencies , corresponding to imaginary time .we show that , in the fdtd this , gives rise to exponentially growing solutions and is therefore unsuitable . instead, we describe an alternative formulation of the problem that exploits a recently proposed equivalence in which contour deformations in the complex frequency - domain correspond to introducing an effective dispersive , dissipative medium at a real `` frequency '' . from this perspective, it becomes simple to modify the fdtd maxwell s equations for the purpose of obtaining well - behaved stress tensor frequency integrands .we illustrate our approach by considering a contour corresponding to a medium with frequency - independent conductivity .this contour has the advantage of being easily implemented in the fdtd , and in fact is already incorporated in most fdtd solvers .finally , we show that it is possible to abandon the frequency domain entirely in favor of evaluating the force integral directly in the time domain , which offers several conceptual and numerical advantages .the casimir force on a body can be expressed as an integral over any closed surface ( enclosing the body ) of the mean electromagnetic stress tensor . 
here denotes spatial position and frequency .in particular , the force in the direction is given by : the stress tensor is expressed in terms of correlation functions of the the field operators and : \\+ \varepsilon({\mathbf{r}},\omega ) \big [ \left\langle e_{i}({\mathbf{r}})\,e_{j}({\mathbf{r}})\right\rangle_\omega-\frac{1}{2}\delta_{ij}\sum_k \left\langle e_k({\mathbf{r}})\,e_k({\mathbf{r}})\right\rangle_\omega \big ] \ , , \label{eq : st}\end{gathered}\ ] ] where both the electric and magnetic field correlation functions can be written as derivatives of a vector potential operator : we explicitly place a superscript on the vector potential in order to refer to our choice of gauge [ eqs .( [ eq : ea][eq : ba ] ) ] , in which is obtained as a time - derivative of .the fluctuation - dissipation theorem relates the correlation function of to the photon green s function : where is the vector potential in response to an electric dipole current along the direction : \textbf{g}^e_j(\omega;\textbf{r},\textbf{r}^\prime ) = \delta(\textbf{r}-\textbf{r}^\prime)\hat{\textbf{e}}_j , \label{eq : geom}\ ] ] given , one can use eqs .( [ eq : ea][eq : ba ] ) in conjunction with eq .( [ eq : fdt ] ) to express the field correlation functions at points and in terms of the photon green s function : in order to find the force via eq .( [ eq : force ] ) , we must first compute at every on the surface of integration , and for every .equation ( [ eq : geom ] ) can be solved numerically in a number of ways , such as by a finite - difference discretization : this involves discretizing space and solving the resulting matrix eigenvalue equation using standard numerical linear algebra techniques .we note that finite spatial discretization automatically regularizes the singularity in at , making finite everywhere . the present form of eq .( [ eq : geom ] ) is of limited computational utility because it gives rise to an oscillatory integrand with non - negligible contributions at all frequencies , making numerical integration difficult .however , the integral over can be re - expressed as the imaginary part of a contour integral of an analytic function by commuting the integration with the operator in eqs .( [ eq : eeg][eq : hhg ] ) .physical causality implies that there can be no poles in the integrand in the upper complex plane .the integral , considered as a complex contour integral , is then invariant if the contour of integration is deformed above the real frequency axis and into the first quadrant of the complex frequency plane , via some mapping .this allows us to add a positive imaginary component to the frequency , which causes the force integrand to decay rapidly with increasing .in particular , upon deformation , eq . ( [ eq : geom ] ) is mapped to : \textbf{g}^e_j(\xi;\textbf{r},\textbf{r}^\prime ) = \delta(\textbf{r}-\textbf{r}^\prime)\hat{\textbf{e}}_j , \label{eq : cgeom}\ ] ] and eqs .( [ eq : eeg][eq : hhg ] ) are mapped to : equation ( [ eq : force ] ) becomes : [ note that a finite spatial grid ( as used in the present approach ) requires no further regularization of the integrand , and the finite value of all quantities means there is no difficulty in commuting the operator with the integration .] we can choose from a general class of contours , provided that they satisfy and remain above the real axis .the standard contour is a wick rotation , which is known to yield a force integrand that is smooth and exponentially decaying in . 
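as a concrete illustration of the finite-difference route to eq. ([eq:geom]) mentioned above, one can solve a one-dimensional scalar analogue, $[-d^2/dx^2-\omega^2\varepsilon(x)]\,g(x;x')=\delta(x-x')$, on a grid by a single sparse linear solve. this toy problem is not the full vector equation of the paper; the complex frequency in the example is only meant to mimic the effect of the contour deformation, and all parameter values are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def green_1d(eps, dx, omega, j_src):
    """Discrete 1D scalar Green's function: [-d2/dx2 - omega**2 * eps] g = delta_j / dx,
    with Dirichlet (perfect-metal) boundaries. omega may be complex."""
    n = len(eps)
    main = 2.0 / dx**2 - omega**2 * eps
    off = -np.ones(n - 1) / dx**2
    a = sp.diags([off, main, off], [-1, 0, 1], format="csc", dtype=complex)
    rhs = np.zeros(n, dtype=complex)
    rhs[j_src] = 1.0 / dx                      # discrete delta function
    return spla.spsolve(a, rhs)

# example: g = green_1d(np.ones(200), dx=0.05, omega=1.0 + 0.5j, j_src=100)
```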
in general ,the most suitable contour will depend on the numerical method being employed .a wick rotation guarantees a strictly positive - definite and real - symmetric green s function , making eq .( [ eq : geom ] ) solvable by the most efficient numerical techniques ( e.g. the conjugate - gradient method ) .one can also solve eq .( [ eq : geom ] ) for arbitrary , but this will generally involve the use of direct solvers or more complicated iterative techniques . however , the class of contours amenable to an efficient time - domain solution is more restricted .for instance , a wick rotation turns out to be unstable in the time domain because it implies the presence of gain .it is possible to solve eq .( [ eq : geom ] ) in the time domain by evolving maxwell s equations in response to a delta - function current impulse in the direction of . can then be directly computed from the fourier transform of the resulting field . however , obtaining a smooth and decaying force integrand requires expressing the mapping in the time - domain equations of motion . a simple way to seethe effect of this mapping is to notice that eq .( [ eq : cgeom ] ) can be viewed as the green s function at real `` frequency '' and complex dielectric : where for simplicity we have taken and to be frequency - independent .we assume this to be the case for the remainder of the manuscript . at this point, it is important to emphasize that the original physical system at a frequency is the one in which casimir forces and fluctuations appear ; the dissipative system at a frequency is merely an artificial technique introduced to compute the green s function .integrating along a frequency contour is therefore equivalent to making the medium dispersive in the form of eq .( [ eq : dispersion ] ) . consequently , the time domain equations of motion under this mapping correspond to evolution of the fields in an effective dispersive medium given by . to be suitable for fdtd ,this medium should have three properties : it must respect causality , it can not support gain ( which leads to exponential blowup in time - domain ) , and it should be easy to implement .a wick rotation is very easy to implement in the time - domain : it corresponds to setting .however , a negative epsilon represents gain ( the refractive index is , where one of the signs corresponds to an exponentially growing solution ) .we are therefore forced to consider a more general , frequency - dependent . 
implementing arbitrary dispersion in fdtdgenerally requires the introduction of auxiliary fields or higher order time - derivative terms into maxwell s equations , and can in general become computationally expensive .the precise implementation will depend strongly on the choice of contour .however , almost any dispersion will suit our needs , as long as it is causal and dissipative ( excluding gain ) .a simple choice is an corresponding to a medium with frequency - independent conductivity : this has three main advantages : first , it is implemented in many fdtd solvers currently in use ; second , it is numerically stable ; and third , it can be efficiently implemented without an auxiliary differential equation .in this case , the equations of motion in the time domain are given by : writing the conductivity term as is slightly nonstandard , but is convenient here for numerical reasons .in conjunction with eqs .( [ eq : ea][eq : ba ] ) , and a fourier transform in , this yields a photon green s function given by : \textbf{g}_j(\xi;\textbf{r},\textbf{r}^\prime ) = \delta(\textbf{r}-\textbf{r}^\prime)\hat{\textbf{e}}_j , \label{eq : sigma - eom}\ ] ] this corresponds to picking a frequency contour of the form : note that , in the time domain , the frequency of the fields is , and not , i.e. their time dependence is .the only role of the conductivity here is to introduce an imaginary component to eq .( [ eq : sigma - eom ] ) in correspondence with a complex - frequency mapping .it also explicitly appears in the final expression for the force , eq .( [ eq : newforce ] ) , as a multiplicative ( jacobian ) factor .the standard fdtd method involves a discretized form of eqs .( [ eq : fdtd1][eq : fdtd2 ] ) , from which one obtains and , not . however , in the frequency domain , the photon green s function , being the solution to eq .( [ eq : geom ] ) , solves exactly the same equations as those satisfied by the electric field , except for a simple multiplicative factor in eq .( [ eq : ea ] ) .specifically , is given in terms of by : where denotes the field in the direction due to a dipole current source placed at with time - dependence , e.g. .in principle , we can now compute the electric- and magnetic - field correlation functions by using eqs .( [ eq : eeg2][eq : hhg2 ] ) , with given by eq .( [ eq : contour ] ) , and by setting in eq .( [ eq : hhg2 ] ) .since we assume a discrete spatial grid , no singularities arise for , and in fact any -independent contribution is canceled upon integration over .this is straightforward for eq .( [ eq : eeg ] ) , since the -field correlation function only involves a simple multiplication by . 
however , the -field correlation function , eq .( [ eq : hhg ] ) , involves derivatives in space .although it is possible to compute these derivatives numerically as finite differences , it is conceptually much simpler to pick a different vector potential , analogous to eqs .( [ eq : ea][eq : ba ] ) , in which is the time - derivative of a vector potential .as discussed in the appendix , this choice of vector potential implies a frequency - independent magnetic conductivity , and a magnetic , instead of electric , current .the resulting time - domain equations of motion are : in this gauge , the new photon green s function and the field in response to the current source are related by : where the magnetic - field correlation function : is now defined as a frequency multiple of rather than by a spatial derivative of .this approach to computing the magnetic correlation function has the advantage of treating the electric and magnetic fields on the same footing , and also allows us to examine only the field response at the location of the current source .the removal of spatial derivatives also greatly simplifies the incorporation of discretization into our equations ( see appendix for further discussion ) .the use of magnetic currents and conductivities , while unphysical , are easily implemented numerically . alternatively , one could simply interchange and , and , and run the simulation entirely as in eqs .( [ eq : fdtd1][eq : fdtd2 ] ) .the full force integral is then expressed in the symmetric form : where represent the surface - integrated field responses in the frequency domain , with . for notational simplicity ,we have also defined : here , the path of integration has been extended to the entire real -axis with the use of the unit - step function for later convenience .the product of the fields with naturally decomposes the problem into two parts : computation of the surface integral of the field correlations , and of the function .the contain all the structural information , and are straightforward to compute as the output of any available fdtd solver with no modification to the code . this outputis then combined with , which is easily computed analytically , and integrated in eq .( [ eq : force_fields ] ) to obtain the casimir force . as discussed in sec .[ sec : discretization ] , the effect of spatial and temporal discretization enters explicitly only as a slight modification to in eq .( [ eq : force_fields ] ) , leaving the basic conclusions unchanged . it is straightforward to evaluate eq .( [ eq : force_fields ] ) in the frequency domain via a dipole current , which yields a constant - amplitude current . using the frequency - independent conductivity contour eq .( [ eq : contour ] ) , corresponding to eqs .( [ eq : fdtd1][eq : fdtd2 ] ) , we find the following explicit form for : one important feature of eq .( [ eq : gw - delta ] ) is that becomes singular in the limit as . assuming that and are continuous at ( in general they will not be zero ), this singularity is integrable .however , it is cumbersome to integrate in the frequency domain , as it requires careful consideration of the time window for calculation of the field fourier transforms to ensure accurate integration over the singularity . as a simple alternative, we use the convolution theorem to re - express the frequency ( ) integral of the product of and arising in eq .( [ eq : force_fields ] ) as an integral over time of their fourier transforms and . 
technically, the fourier transform of does not exist because for large .however , the integral is regularized below using the time discretization , just as the green s function above was regularized by the spatial discretization .( as a convenient abuse of notation , arguments will always denote functions in the frequency domain , and arguments their fourier transforms in the time domain . )taking advantage of the causality conditions ( for ) yields the following expression for the force expressed purely in the time domain : the advantage of evaluating the force integral in the time domain is that , due to the finite conductivity and lack of sources for , will rapidly decay in time .as will be shown in the next section , also decays with time .hence , although dissipation was originally introduced to permit a natural high - frequency cutoff to our computations , it also allows for a natural time cutoff .we pick such that , for times , knowledge of the fields will not change the force result in eq .( [ eq : time - force ] ) beyond a predetermined error threshold .this approach is very general as it requires no precise knowledge of how the fields decay with time .given , the desired function is a fourier transform .however , the discretization of time in fdtd implies that the frequency domain becomes periodic and that are actually fourier series coefficients , given by : where is the discretized form of eq .( [ eq : gw ] ) and is given in the appendix by eq .( [ eq : gwd ] ) .these fourier series coefficients are computed by a sequence of numeric integrals that can be evaluated in a variety of ways .it is important to evaluate them accurately in order to resolve the effect of the singularity .for example , one could use a clenshaw - curtis scheme developed specifically for fourier integrals , or simply a trapezoidal rule with a large number of points that can be evaluated relatively quickly by an fft ( e.g. for this particular , points is sufficient ) . since it is possible to employ strictly - real current sources in fdtd , giving rise to real , and since we are only interested in analyzing the influence of on eq .( [ eq : time - force ] ) , it suffices to look at .furthermore , will exhibit rapid oscillations at the nyquist frequency due to the delta - function current , and therefore it is more convenient to look at its absolute value .figure [ fig : gt ] , below , plots the envelope of as a function of , where again , is the fourier transform of eq .( [ eq : gw ] ) .as anticipated in the previous section , decays in time .interestingly , it exhibits a transition from decay at to decay for large .the slower decay at long times for larger arises from a transition in the behavior of eq .( [ eq : gw - delta ] ) from the singularity at . for various values of , illustrating the transition from to power - law decay as increases .because there are strong oscillations in at the nyquist frequency for intermediate , for clarity we plot the positive and negative terms in as separate components.,scaledwidth=46.0%,scaledwidth=33.0% ]in this section we discuss the practical implementation of the time - domain algorithm ( using a freely - available time domain solver that required no modification ) .we analyze its properties applied to the simplest parallel - plate geometry [ fig .[ fig:1d - plates ] ] , which illustrate the essential features in the simplest possible context . 
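the fourier-series coefficients described above can be evaluated, as mentioned, by a trapezoidal rule over one period of the discrete frequency, conveniently arranged as an fft. the closed form of g is not reproduced in this excerpt, so the sketch below takes it as a user-supplied callable, and the paper's precise normalization, unit-step factor and jacobian prefactors are omitted; the number of quadrature points is illustrative:

```python
import numpy as np

def g_coefficients(g_of_xi, dt, n_steps, n_quad=2**20):
    """Fourier-series coefficients g(t_n) over one period of the discrete
    frequency, T = 2*pi/dt, computed by an FFT (equivalent to a trapezoidal
    rule on n_quad equispaced frequency samples)."""
    xi = 2.0 * np.pi * np.arange(n_quad) / (n_quad * dt)   # one period [0, 2*pi/dt)
    samples = g_of_xi(xi)
    # fft gives sum_k samples[k] * exp(-2j*pi*k*n/n_quad) = sum_k g(xi_k) * exp(-1j*xi_k*t_n)
    coeffs = np.fft.fft(samples) / n_quad
    return coeffs[:n_steps]                                  # g(t_n) for t_n = n*dt, n >= 0
```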
in particular , we analyze important computational properties such as the convergence rate and the impact of different conductivity choices .part ii of this manuscript , in preparation , demonstrates the method for more complicated two- and three - dimensional geometries .the dissipation due to positive implies that the fields , and hence , will decay exponentially with time .below , we use a simple one - dimensional example to understand the consequences of this dissipation for both the one - dimensional parallel plates and the two - dimensional piston configuration .the simplicity of the parallel - plate configuration allows us to examine much of the behavior of the time - domain response analytically .( the understanding gained from the one - dimensional geometry can be applied to higher dimensions . )furthermore , we confirm that the error in the casimir force due to truncating the simulation at finite time decreases exponentially ( rather than as , as it would for no dissipation ) . to gain a general understanding of the behavior of the system in the time domain , we first examine a simple configuration of perfectly metallic parallel plates in one dimension .the plates are separated by a distance ( in units of an arbitrary distance ) in the dimension , as shown by the inset of fig .[ fig:1d - plates ] .the figure plots the field response , in arbitrary units , to a current source for increasing values of , with the conductivity set at . for a set of one - dimensional parallel plates as the separation is varied .the inset shows the physical setup.,scaledwidth=48.0% ] figure [ fig:1d - plates ] shows the general trend of the field response as a function of separation . for short times, all fields follow the same power - law envelope , and later rapidly transition to exponential decay .also plotted for reference is a curve , demonstrating that the envelope is in fact a power law .we can understand the power law envelope by considering the vacuum green s function in the case ( analogous conclusions hold for ) . in the case , one can easily solve for the vacuum green s function in one dimension for real frequency : we then analytically continue this expression to the complex frequency domain via eq .( [ eq : contour ] ) and compute the fourier transform . setting in the final expression , one finds that , to leading order , .this explains the behavior of the envelope in fig .[ fig:1d - plates ] and the short - time behavior of the green s functions : it is the field response of vacuum . intuitively , the envelope decays only as a power in because it receives contributions from a continuum of modes , all of which are individually decaying exponentially ( this is similar to the case of the decay of correlations in a thermodynamic system near a critical point ) . for a finite cavity ,the mode spectrum is discrete the poles in the green s function of the non - dissipative physical system are pushed below the real frequency axis in this dissipative , unphysical system , but they remain discretely spaced . at short times , the field response of a finite cavity will mirror that of an infinite cavity because the fields have not yet propagated to the cavity walls and back . as increases , the cavity response will transition to a discrete sum of exponentially decaying modes . 
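the one-dimensional numerical experiment described in this section is easy to reproduce in a few lines. the sketch below is a bare-bones yee update in units c = 1 with the standard lossy-dielectric (frequency-independent conductivity) update and perfect-metal walls; it is not the production code of the paper (which writes the conductivity term slightly differently), and the grid parameters are illustrative, but it reproduces the qualitative behavior discussed here: a power-law envelope at early times followed by exponential decay set by the lowest cavity mode:

```python
import numpy as np

def field_response_1d(n_cells=400, src=200, sigma=1.0, dx=0.01, dt=0.005, n_steps=20_000):
    """E-field at the source point for a 1D cavity with PEC walls,
    frequency-independent conductivity sigma, and an impulsive excitation at t=0."""
    ez = np.zeros(n_cells)          # E on integer points; ez[0] = ez[-1] = 0 (PEC walls)
    hy = np.zeros(n_cells - 1)      # H on half-integer points
    ca = (1.0 - 0.5 * sigma * dt) / (1.0 + 0.5 * sigma * dt)
    cb = (dt / dx) / (1.0 + 0.5 * sigma * dt)
    ez[src] = 1.0                   # impulsive excitation (delta-function current)
    record = np.empty(n_steps)
    for n in range(n_steps):
        hy += (dt / dx) * (ez[1:] - ez[:-1])                    # update H
        ez[1:-1] = ca * ez[1:-1] + cb * (hy[1:] - hy[:-1])      # lossy E update
        record[n] = ez[src]
    return record
```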
from eq .( [ eq : contour ] ) , higher - frequency modes have a greater imaginary - frequency component , so at sufficiently long times the response will decay exponentially , the decay being determined by the lowest - frequency cavity mode . the higher the frequency of that mode , the faster the dissipation .this prediction is confirmed in fig .[ fig:1d - plates ] : as decreases , the source `` sees '' the walls sooner . from the standpoint of computational efficiency , this method then works best when objects are in close proximity to one another ( although not so close that spatial resolution becomes an issue ) , a situation of experimental interest .we now examine the force on the parallel plates . from the above discussions of the field decay and the decay of , we expect the time integral in eq .( [ eq : time - force ] ) to eventually converge exponentially as a function of time . in the interest of quantifying this convergence ,we define the time dependent `` partial force '' as : letting denote the limit of , which is the actual casimir force , we define the relative error in the -th component of the force as : we plot in fig .[ fig : force-1d ] for the one - dimensional parallel - plate structure with different values of .the inset plots for the same configuration .as expected , the asymptotic value of is independent of , and converges exponentially to zero . ) for one - dimensional parallel plates as a function of time .( inset ) : relative error as a function of on a semi log scale.,scaledwidth=45.0% ] for near zero , the force is highly oscillatory . in one dimension this gives the most rapid convergence with time , but it is problematic in higher dimensions .this is because , in higher - dimensional systems , consists of many points , each contributing a response term as in fig . [fig : force-1d ] .if is small , every one of these terms will be highly oscillatory , and the correct force eq .( [ eq : partial - force ] ) will only be obtained through delicate cancellations at all points on .small is thus very sensitive to numerical error .increasing smooths out the response functions , as higher frequency modes are damped out .however , somewhat counterintuitively , it also has the effect of slowing down the exponential convergence .one can understand the asymptotic behavior of the force by considering the equations of motion eq .( [ eq : sigma - eom ] ) as a function of and . when the response function exhibits few if any oscillations we are in the regime where . in this limit ,the approximate equations of motion are : \textbf{g}_j(\xi;\textbf{r},\textbf{r}^\prime ) = \delta(\textbf{r}-\textbf{r}^\prime)\hat{\textbf{e}}_j \label{eq : large - sigma}\ ] ] in the limit of eq .( [ eq : large - sigma ] ) , the eigenfrequency of a given spatial mode scales proportional to .the lowest - frequency mode therefore has a time - dependence , for some constant .since the decay of the force at long times is determined by this mode , we expect the decay time to scale inversely with in the limit of very high .this is suggested in fig .[ fig : force-1d ] and confirmed by further numerical experiments .additionally , from eq . 
( [ eq : large - sigma ] ) we see that in the case of a homogeneous one - dimensional cavity , the solutions have a quadratic dispersion , for spatial dependence , and so the lowest cavity frequency scales as the inverse square of the cavity size .this means that the rate of exponential convergence of fig .[ fig:1d - plates ] should vary as in the limit of very large .this scaling is approximately apparent from fig .[ fig:1d - plates ] , and further experiments for much larger confirm the scaling .we thus see that in this limit , the effect of increasing by some factor is analogous to increasing the wall spacing of the cavity by the square root of that factor .the present analysis shows that there are two undesirable extremes .when is small , rapid oscillations in will lead to large numerical errors in more than one dimension .when is large , the resulting frequency shift will cause the cavity mode to decay more slowly , resulting in a longer run time .the optimal lies somewhere in between these two extremes and will generally depend on the system being studied . for the systems considered in this paper , with a typical scale , appears to be a good value for efficient and stable time - domain computation .an algorithm to compute casimir forces in fdtd has several practical advantages .fdtd algorithms that solve maxwell s equations with frequency - independent conductivity , and even more complicated dispersions , are plentiful and well - studied .they are stable , convergent , and easily parallelized .although the current formulation of our method requires the evaluation of along a surface , requiring a separate calculation of the fields for each dipole source in , all of these sources can be simulated in parallel , with no communication between different simulations until the very end of the computation .in addition , many fdtd solvers will allow the computational cell for each source to be parallelized , providing a powerful method capable of performing large computations .the calculations of this paper employed non - dispersive materials in the original ( ) system . however , the theoretical analysis applies equally well to materials of arbitrary dispersion .any materials that can be implemented in an fdtd solver ( e.g. a sum of lorentzian dielectric resonances ) can also be included , and existing algorithms have demonstrated the ability to model real materials .existing fdtd implementations also handle anisotropy in and , multiple types of boundary conditions , and other complications . in principle , the computational scaling of this fdtd method is comparable to finite - difference frequency - domain ( fdfd ) methods . in both cases ,each solver step ( either a time step for fdtd or an iterative - solver step for fdfd ) requires work for grid points .the number of time steps required by an fdtd method is proportional to the diameter of the computational cell , or in dimensions . with an ideal multigrid solver , fdfd can in principle be solved by solver steps , but a simpler solver like conjugate gradient requires a number of steps proportional to the diameter as well . 
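the "partial force" and relative error defined earlier in this section are simple cumulative sums of the time-domain integrand. a short sketch follows; the array name is a placeholder for the g(t)-weighted, surface-integrated field response at each time step:

```python
import numpy as np

def partial_force_and_error(integrand, dt):
    """F(T): cumulative time integral of the force integrand up to each step;
    the relative error compares F(T) with its final (converged) value."""
    f_partial = np.cumsum(integrand) * dt
    f_final = f_partial[-1]
    rel_error = np.abs(f_partial - f_final) / np.abs(f_final)
    return f_partial, rel_error
```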
in both cases , the number of points to be solver on the surface is .hence , the overall complexity of the simplest implementations ( not multigrid ) is .we believe that future boundary - element methods will achieve better efficiency , but such methods require considerable effort to implement and their implementation is specific to the homogeneous - medium green s function , which depends on the boundary conditions , dimensionality and types of materials considered .part ii of this manuscript , in preparation , will illustrate the method in various non - trivial two- and three - dimensional geometries , including dispersive dielectrics .in addition , we introduce an optimization of our method ( based on a rapidly converging series expansion of the fields ) that greatly speeds up the spatial integral of the stress tensor .we also compute forces in three - dimensional geometries with cylindrical symmetry , which allows us to take advantage of the cylindrical coordinates support in existing fdtd software and employ a two - dimensional computational cell .we would like to thank peter bermel and ardavan farjadpour for useful discussions .this work was supported by the army research office through the isn under contract no .w911nf-07-d-0004 , the mit ferry fund , and by us doe grant no .de - fg02 - 97er25308 ( arw ) .fdtd algorithms approximate both time and space by a discrete uniform mesh .bearing aside the standard analysis of stability and convergence , this discretization will slightly modify the analysis in the preceding sections .in particular , the use of a finite temporal grid ( resolution ) implies that all continuous time derivatives are now replaced by a finite - difference relation , which is most commonly taken to be a center difference : where is an arbitrary function of time .the effect of temporal discretization is therefore to replace the linear operator with .the representation of this operator is simple to compute in the frequency domain .letting act on a fourier component of of yields : where the effect of discretization on the system is thus to replace by in the derivatives , which correspond to numerical dispersion arising from the ultraviolet ( nyquist ) frequency cutoff .note that is still the frequency parameter governing the time dependence of the fourier components of and in the limit of infinite resolution ( ) . because fdtd is convergent [ ,most of the analysis can be done ( as in this paper ) in the limit .however , care must be taken in computing because the fourier transform of , eq .( [ eq : gw ] ) , does not exist as .we must compute it in the finite regime .in particular , the finite resolution requires , via eq .( [ eq : wd ] ) , that we replace in eq .( [ eq : gw ] ) by : note that the jacobian factor involves and , not and , although of course the latter converges to the former for .the basic principle is that one must be careful to use the discrete analogues to continuous solutions in cases where there is a divergence or regularization needed .this is the case for , but not for the jacobian .similarly , if one wished to subtract the vacuum green s function from the green s function , one needs to subtract the vacuum green s function as computed in the discretized vacuum .such a subtraction is unnecessary if the stress tensor is integrated over a closed surface ( vacuum contributions are constants that integrate to zero ) , but is useful in cases like the parallel plates considered here . 
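the replacement of the continuous frequency by a discrete counterpart described in this appendix can be made explicit for the usual staggered (half-step) center difference; the relation below is a generic property of the leapfrog discretization rather than a formula quoted verbatim from the paper:

\[
\hat{D}f(t)\equiv\frac{f(t+\Delta t/2)-f(t-\Delta t/2)}{\Delta t},\qquad
\hat{D}\,e^{-i\omega t}=-i\,\frac{2}{\Delta t}\sin\!\Big(\frac{\omega\Delta t}{2}\Big)\,e^{-i\omega t},
\]

so the continuous factor $-i\omega$ is replaced by $-i\omega_d$ with $\omega_d=(2/\Delta t)\sin(\omega\Delta t/2)\to\omega$ as $\Delta t\to 0$, which is one concrete realization of the numerical-dispersion statement made above.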
by subtracting the ( discretized ) vacuum green s function , one can evaluate the stress tensor only for a single point between the plates , rather than for a `` closed surface '' with another point on the other side of the plates . as was noted before eq .( [ eq : gndt ] ) , the nyquist frequency regularizes the frequency integrations , similar to other ultraviolet regularization schemes employed in casimir force calculations . because the total frequency integrand in eq .( [ eq : force ] ) goes to zero for large ( due to cancellations occurring in the spatial integration and also due to the dissipation introduced in our approach ) , the precise nature of this regularization is irrelevant as long as is sufficiently small ( i.e. , at high enough resolution ) .one way to compute the magnetic correlation function is by taking spatial derivatives of the electric green s function by eq .( [ eq : hhg ] ) , but this is inefficient because a numerical derivative involves evaluating the electric green s function at multiple points . instead , we compute the magnetic green s function directly , finding the magnetic field in response to a magnetic current . this formulation , however , necessitates a change in the choice of vector potentials eqs .( [ eq : ea][eq : ba ] ) as well as a switch from an electric to magnetic conductivity , for reasons explained in this section . equations ( [ eq : ea][eq : ba ] ) express the magnetic field as the curl of the vector potential , enforcing the constraint that is divergence - free ( no magnetic charge ) . however , this is no longer true when there is a magnetic current , as can be seen by taking the divergence of both sides of faraday s law with a magnetic current , , since for a point - dipole current . instead ,since there need not be any free electric charge in the absence of an electric current source , one can switch to a new vector potential such that the desired correlation function is then given , analogous to eq .( [ eq : eeg ] ) , by where the photon magnetic green s function solves [ similar to eq .( [ eq : geom ] ) ] \textbf{g}^h_j(\omega;\textbf{r},\textbf{r}^\prime ) = \delta(\textbf{r}-\textbf{r}^\prime)\hat{\textbf{e}}_j .\label{eq : hgeom}\ ] ] now , all that remains is to map eq .( [ eq : hgeom ] ) onto an equivalent real - frequency ( ) system that can be evaluated in the time domain , similar to sec .[ sec : time - domain ] , for given by eq .( [ eq : contour ] ) .there are at least two ways to accomplish this .one possibility , which we have adopted in this paper , is to define an effective magnetic permeability , corresponding to a _magnetic _ conductivity , similar to eq .( [ eq : dispersion ] ) . combined with eq .( [ eq : contour ] ) , this directly yields a magnetic conductivity as in eq .( [ eq : fdtd1d ] ) .a second possibility is to divide both sides of eq .( [ eq : hgeom ] ) by , and absorb the factor into via eq .( [ eq : dispersion ] ) .that is , one can compute the magnetic correlation function via the magnetic field in response to a magnetic current with an _ electric _ conductivity .however , the magnetic current in this case has a frequency response that is divided by , which is simply a rescaling of in eq .( [ eq : gdual ] ) .there is no particular computational advantage to this alternative , but for an experimental realization , an electric conductivity is considerably more attractive .[ note that rescaling by will yield a new in eq .( [ eq : gw ] ) , corresponding to a new that exhibits slower decay . 
] in this section , we extend the time - domain formalism presented above to cases where the dielectric permittivity of the medium of interest is dispersive . to begin with , note that in this case the dissipative , complex dielectric of eq .( [ eq : eps_c ] ) is given by : where denotes the permittivity of the geometry of interest evaluated over the complex contour .this complex dielectric manifests itself as a convolution in the time - domain equations of motion , i.e. in general , .the standard way to implement this in fdtd is to employ an auxiliary equation of motion for the polarization . for the particular contour chosen in this paper [ eq .( [ eq : contour ] ) ] , the conductivity term already includes the prefactor and therefore one need only add the dispersion due to .the only other modification to the method comes from the dependence of in eq .( [ eq : gamma1 ] ) on .we remind the reader that our definition of was motivated by our desire to interpret eq .( [ eq : force_fields ] ) as the fourier transform of the convolution of two quantities , and thus to express the casimir force directly in terms of the electric and magentic fields and , respectively . a straightforward generalization of eq .( [ eq : gamma1 ] ) to dispersive media entails setting .however , in this case , the fourier transform of eq .( [ eq : gamma1 ] ) would be given by a convolution of and in the time domain , making it impossible to obtain _ directly _ in terms of .this is not a problem however , because the stress tensor _ must _ be evaluated over a surface that lies entirely within a uniform medium ( otherwise , would cross a boundary and interpreting the result as a force on particular objects inside would be problematic ) .the dielectric appearing in eq .( [ eq : gamma1 ] ) is then at most a function of , i.e. , which implies that we can simply absorb this factor into , modifying the numerical integral of eq .( [ eq : gndt ] ) .furthermore , the most common case considered in casimir - force calculations is one in which the stress tensor is evaluated in vacuum , i.e. , and thus dispersion does not modify at all .
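to make the role of the temporal discretization above concrete , the short python sketch below evaluates the discrete frequency produced by a centered first difference of step and checks that it approaches the continuous frequency as the resolution increases . the specific mapping used here is the textbook result for a centered difference and is stated as an assumption , since the exact expressions referenced above ( eqs . [ eq : wd ] and [ eq : gw ] ) are not reproduced in this text .

import numpy as np

def omega_discrete(omega, dt):
    """discrete frequency produced by a centered first difference of step dt.

    acting with (f(t + dt/2) - f(t - dt/2)) / dt on exp(-i*omega*t) multiplies it
    by -i * (2/dt) * sin(omega*dt/2), so the effective frequency is
    (2/dt) * sin(omega*dt/2), which tends to omega as dt -> 0.
    """
    return (2.0 / dt) * np.sin(omega * dt / 2.0)

# convergence toward the continuum limit for a fixed test frequency
omega = 2.0 * np.pi
for dt in (0.5, 0.1, 0.01, 0.001):
    wd = omega_discrete(omega, dt)
    print(f"dt={dt:7.3f}  omega_d={wd:10.6f}  rel. error={abs(wd - omega) / omega:.2e}")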
we introduce a method to compute casimir forces in arbitrary geometries and for arbitrary materials based on the finite - difference time - domain ( fdtd ) scheme . the method involves the time - evolution of electric and magnetic fields in response to a set of current sources , in a modified medium with frequency - independent conductivity . the advantage of this approach is that it allows one to exploit existing fdtd software , without modification , to compute casimir forces . in this manuscript , part i , we focus on the derivation , implementation choices , and essential properties of the time - domain algorithm , both considered analytically and illustrated in the simplest parallel - plate geometry . part ii presents results for more complex two- and three - dimensional geometries .
we apply the full - scale bayesian inference code borg ( bayesian origin reconstruction from galaxies , ) to the galaxies of the ` sample dr72 ` of the new york university value added catalogue ( nyu - vagc ) , based on the final data release ( dr7 ) of the sloan digital sky survey ( sdss ) .the physical model for gravitational dynamics is second - order lagrangian perturbation theory ( 2lpt ) , linking initial density fields ( ) to the presently observed large - scale structure , in the linear and mildly non - linear regime .the algorithm explores numerically the posterior distribution by sampling the joint distribution of all parameters involved , via efficient hamiltonian markov chain monte carlo ( hmc ) dynamics . each sample ( fig .[ fig : slices ] , upper panel ) is a `` possible version of the truth '' in the form of a full physical realization of dark matter particles , tracing both the density and the velocity fields .the variation between samples ( fig .[ fig : slices ] , lower panel ) quantifies joint and correlated uncertainties ( survey geometry , selection effects , biases , noise ) inherent to any cosmological observation .we generate a set of data - constrained realizations of the present large - scale structure ( fig .[ fig : slices_filter ] ) : some samples of inferred primordial conditions are evolved with 2lpt to , then with a fully non - linear cosmological simulation ( using gadget-2 , ) from to .a dynamic , non - linear physical model naturally introduces some correlations between the constrained and unconstrained parts , which yields reliable extrapolations for certain aspects of the model that have not yet been constrained by the data ( e.g. near the survey boundaries or at high redshift ) .we apply vide ( the void identification and examination toolkit , ) to the constrained parts of these realizations .the void finder is a modified version of zobov that uses voronoi tessellations of the tracer particles to estimate the density field and a watershed algorithm to group voronoi cells into voids .we find physical cosmic voids in the field traced by the dark matter particles , probing a level deeper in the mass distribution hierarchy than galaxies , and greatly alleviating the bias problem for cosmological interpretation of final results . due to the high density of tracers , we find about an order of magnitude more voids at all scales than the voids directly traced by the sdss galaxies ( fig . [fig : voids ] , left panel ) , which sample the underlying mass distribution only sparsely .our inference framework therefore yields a drastic reduction of statistical uncertainty in voids catalogs . for usual voids statistics such as radial density profiles of stacked voids ( observed in simulations to be of universal character ,e.g. ) , the results we obtain are consistent with -body simulations prepared with the same setup ( fig .[ fig : voids ] , right panel ) .99 j. jasche , f. leclercq , b. d. wandelt 2014 , http://arxiv.org/abs/1409.6308[arxiv:1409.6308 ] [ astro-ph.co ] .j. jasche , b. d. wandelt , _ mnras _ * 432 * , 894 ( 2013 ) , http://arxiv.org/abs/1203.3639[arxiv:1203.3639 ] [ astro-ph.co ] .f. leclercq , j. jasche , p. m. sutter , n. hamaus , b. wandelt , http://arxiv.org/abs/1410.0355[arxiv:1410.0355 ] [ astro-ph.co ] .v. springel , _ mnras _ * 364 * , 1105 ( 2005 ) , http://arxiv.org/abs/astro-ph/0505010[arxiv:astro-ph/0505010 ] p. m. sutter , g. lavaux , n. hamaus , a. pisani , b. d. 
wandelt , _ et al _ , http://arxiv.org/abs/1403.5499[arxiv:1403.5499 ] [ astro - ph ] . m. c. neyrinck , _ mnras _ * 386 * , 2101 ( 2008 ) , http://arxiv.org/abs/0712.3049[arxiv:0712.3049 ] [ astro - ph ] .m. sutter , g. lavaux , b. d. wandelt , n. hamaus , d. h. weinberg , m. s. warren , _ mnras _ * 442 * , 462 ( 2014 ) , http://arxiv.org/abs/1309.5087[arxiv:1309.5087 ] [ astro-ph.co ] . n. hamaus , p. m. sutter , b. d. wandelt , _ prl _ * 112 * , 251302 , http://arxiv.org/abs/1403.5499[arxiv:1403.5499 ] [ astro-ph.co ] .i thank nico hamaus , jens jasche , paul sutter and benjamin wandelt for fruitful collaborations and useful discussions .i acknowledge funding from an amx grant ( cole polytechnique paristech ) and benjamin wandelt s senior excellence chair by the agence nationale de la recherche ( anr-10-cexc-004 - 01 ) . this work made in the ilp labex ( anr-10-labx-63 ) was supported by french state funds managed by the anr within the investissements davenir programme ( anr-11-idex-0004 - 02 ) .
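as a concrete illustration of the stacked radial density profiles mentioned above , the python sketch below bins tracer particles around a set of void centres in shells of the rescaled radius and averages over the stack . it is a minimal sketch with hypothetical inputs ( plain arrays of void centres , void radii and particle positions ) ; survey masks , periodic boundaries and the vide / zobov machinery itself are deliberately ignored .

import numpy as np

def stacked_void_profile(void_centers, void_radii, particle_pos, mean_density,
                         n_bins=20, r_max=3.0):
    """average density profile of a stack of voids, in units of the mean density.

    void_centers : (n_voids, 3) array, void_radii : (n_voids,) array,
    particle_pos : (n_part, 3) array.  all inputs are hypothetical test data.
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)           # bins in r / r_v
    counts = np.zeros(n_bins)
    volumes = np.zeros(n_bins)
    for center, r_v in zip(void_centers, void_radii):
        r = np.linalg.norm(particle_pos - center, axis=1) / r_v
        hist, _ = np.histogram(r, bins=edges)
        counts += hist
        # physical shell volumes for this void (rescaled bins times r_v)
        volumes += 4.0 / 3.0 * np.pi * ((edges[1:] * r_v)**3 - (edges[:-1] * r_v)**3)
    profile = counts / volumes / mean_density              # rho(r) / rho_mean
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, profile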
we apply the borg algorithm to the sloan digital sky survey data release 7 main sample galaxies . the method results in the physical inference of the initial density field at a scale factor , evolving gravitationally to the observed density field at a scale factor , and provides an accurate quantification of corresponding uncertainties . building upon these results , we generate a set of constrained realizations of the present large - scale dark matter distribution . as a physical illustration , we apply a void identification algorithm to them . in this fashion , we access voids defined by the inferred dark matter field , not by galaxies , greatly alleviating the issues due to the sparsity and bias of tracers . in addition , the use of full - scale physical density fields yields a drastic reduction of statistical uncertainty in void catalogs . these new catalogs are enhanced data sets for cross - correlation with other cosmological probes .
the burgeoning volume of multiwavelength galaxy observations reveals an interactive complexity of patterns that couple merging , environmental interaction , enhanced epochs of star - formation , and gas accretion histories ( e.g. * ? ? ?each mass component dark matter , stellar and compact objects , atomic and molecular gas is sensitive to different although overlapping ranges of length and time scales .key to piecing together these complex interactions is an understanding of the mutual dynamical evolution of each component . for gas in particular , winds , accretion flows and other collisionally - induced structures often produce phase interfaces at shocks .not only do the density enhancements at these shocks result in radiative cooling and instabilities , these enhancements provide gravitational _ handles _ for momentum transfer and torques . more generally , galaxies are particularly sensitive to the conditions in _ transitional _ regimes .these transitions occur in density ( e.g. between galaxy components ) , gas temperature and gravitational acceleration .for example , on the largest scales , the standard model predicts that gravitational potential of a galaxy is dominated by dark matter .the baryons in the halo are rarefied with a temperature characteristic of the escape velocity from the halo , approximately degrees k. moving inwards , there is a transition between the baryon - dominated stellar and gaseous galaxy and the collisionless dark halo . in this region, the gravitational dynamics of both components may conspire to produce structure .the increased gas density shortens the atomic and molecular cooling time , producing a dense cold gas layer .this implies that there must be an interface of coexistence between the discrete rarefied and dense phases .finally , each of these gas phases and the gravitationally dominant collisionless components has a characteristic length and time scale that produces a range of accelerations .these transitions regions provide tests of the standard and alternative galaxy formation hypotheses .in particular , numerical simulations based on the standard scenario have made a variety of predictions on small galactic scales with both successes and failures .however , first - principle physical simulations are often infeasible at the scales where many of the failures occur , so at this point , the failures may either intrinsic or methodological .alternately , some propose that the root cause of the failures may be the nature of gravity itself and that _ modified newtonian dynamics _ ( mond ) produces a better prediction of observed rotations curves than standard .since the outer galaxy will be dominated by dark matter in the standard scenario or by non - newtonian forces in mond , the response in the outer galaxy to the dynamics may provide a sensitive test for these hypotheses .in addition , the temporal differences between the collisionless and collisional responses of media within each galaxy component provides clues to the history of a galaxy s evolution that depend on the underlying cosmogony .for example , the gas responds over several hundred million years to a disturbance in the stellar and dark - matter response to an external perturbation taking place over gigayears .the observation of both responses _ together _ can provide key diagnostics to the evolutionary history and possibly the underlying physics . unfortunately , these transition regions are difficult to simulate reliably . 
therefore , the most productive use of the combined dsmc n - body hybrid code will be the investigation of interface dynamics and energetics on small and intermediate scales .an experimental version of our code uses the full chianti atomic database for collisional cross sections and recombination ( bound - bound and bound - free ) and standard plasma transitions ( free - free ) for the ionized regime .the current version does not treat the electrons as a separate kinetic species , but rather assumes that they follow the ions .this restriction will be relaxed in a future version of this code and this will allow a fully self - consistent treatment of electron conduction and allow the dynamical influence of fixed magnetic fields to be investigated .dsmc could be incorporated into a full - fledged pic plasma code ( e.g. * ? ? ?* ) but this is well beyond the scope of our current implementation . our main purpose for this and the companion paper ( * ? ? ?* hereafter paper 2 ) is a demonstration of the value of the kinetic approach for astrophysical flows by applying it to some classic scenarios . to facilitate this comparison , we use the standard , simplified local thermodynamic equilibrium ( lte ) scheme ( e.g. * ? ? ?* ) rather than the full - fledged self - consistent cross - section based approach. we will begin , in the next section , with a very brief overview of the different regimes for gas dynamics from rarefied to dense .this will motivate the need for an understanding of gas in the transitional regime .section [ sec : method ] describes the numerical approach in two parts : [ sec : dsmc ] introduces the dsmc algorithm and a hybrid variant that exploits the near - equilibrium solution when the mean - free path is very short , and [ sec : implement ] briefly reviews the n - body code , describes the implementation of the dsmc algorithm in parallel using mpi and presents diagnostics necessary for parameter tuning .we present two code tests in [ sec : tests ] : the standard one - dimensional shock tube ( [ sec : shock ] ) and the recovery of the kelvin - helmholtz instability ( [ sec : kh ] ) .we conclude in [ sec : summary ] with a discussion and summary .gas dynamics in galaxies is most often simulated through numerical solutions of the navier - stokes equations : where is the flow velocity , is the fluid density , is the pressure , is the stress tensor , and represents body forces ( per unit volume ) acting on the fluid .in essence , this equation is an application of newton s second law to a continuum and is most often derived from this point of view .however , to truly begin with newton s laws requires the application of kinetic theory with the boltzmann equation as a starting point ( e.g. see * ? ? ?the navier - stokes equations ( eq .[ eq : nse ] ) are rife with shock discontinuities in general and are notoriously difficult to solve .physically , the width of the shock interfaces are of order the mean - free path , and therefore are formally inconsistent with the continuum approximation . on the other hand ,these shock discontinuities are responsible for driving important astrophysical phenomena on many scales ( thermal heating , chemistry and radiation , turbulence , to name a few ) and must be treated carefully . 
because of this , much of the difficult work in computational fluid dynamics concerns the approximations at interfaces . for example , grid and finite - element schemes use shock - capturing methods to stabilize the solution in the presence of discontinuities . these methods often introduce numerical dissipation to achieve stability and add some _ width _ to the discontinuity to prevent numerically - induced oscillations . the sph method requires the introduction of artificial dissipation terms that enable the conservation of energy and momentum at the otherwise unresolved shock discontinuity . therefore , the effective width of the shock will most often not correspond to the intrinsic width , which is of order the mean - free path . although artificial viscosity allows the discontinuity to be resolved , the algorithm may also introduce unphysical dynamics . for example , it has been shown that a smooth ordered layer of particles can appear near discontinuities , inhibiting kelvin - helmholtz and rayleigh - taylor instabilities near density gradients . a number of fixes have recently been introduced to help address this problem . in the case of discontinuous galerkin methods , one may use order reduction or limiting to prevent spurious oscillations near discontinuities . spectral methods , which project the fluid equations onto basis functions ( e.g. spherical harmonics , chebyshev polynomials ) , yield high - accuracy solutions , but one must be _ even more _ careful at interfaces . the current state of the art in spectral methods is to use high - accuracy shock - fitting algorithms . in summary , when the interactions in the shock interface are critical to the resulting energetics and observational diagnostics , artificial viscosity and shock - capturing techniques which rely on approximating the discontinuous nature of the shock will miss important physics . a more general formulation of gas dynamics follows from the collisional boltzmann equation , which describes the change to the phase - space density , , induced by the collisions between particles , . since has a six - dimensional domain and the fields in the navier - stokes equation have a three - dimensional domain , equation ( [ eq : be ] ) appears harder and certainly more time consuming to solve than equation ( [ eq : nse ] ) . however , the high level of algorithmic complexity in computational fluid dynamics ( cfd ) follows from the mathematical requirements that arise from the maintenance of the continuum limit . on the other hand , the left - hand side of equation ( [ eq : be ] ) is the collisionless part of the boltzmann equation that may be readily solved using n - body techniques , while the right - hand side describes the particle collisions . unlike the equations of hydrodynamics , the collisional boltzmann equation has _ no problem _ with transition regimes or discontinuities , since the information about the collisions is carried ballistically rather than by constitutive relationships . said another way , the continuum limit will often fail by definition at transition regimes , while the kinetic approach represents the physical nature of the transition and can not fail . the price for this generalization is performance : the kinetic approach requires smaller timescales and length scales . solutions of the collisional boltzmann equation are often an order of magnitude slower or more than hydrodynamic solutions for the same problem . [ caption of figure [ fig : regimes ] : the line is shown in red ; the shaded blue region is the transition region .
for , the flow is strongly in the kinetic regime , where collisions and excitation may be important overall but rare ; for , the continuum limit , e.g. the navier - stokes equation , is appropriate . a number of classic astronomical interface regimes are shown in labeled boxes . ] given the difficulties in solving equation ( [ eq : nse ] ) and its inappropriateness for true transitional regimes , it is worth exploring alternatives . direct solution of the collisional boltzmann equation ( [ eq : be ] ) using a kinetic simulation method is attractive for a number of reasons . the simplest kinetic method , molecular dynamics , solves the full n - body system including the collisions directly , often at great expense . on the other hand , it is straightforward to incorporate any number of species , interactions and specific physical couplings that would be difficult in cfd . several alternatives to pure molecular dynamics exist ; each achieves computational efficiency by approximating some aspect of equation ( [ eq : be ] ) . for example , lattice boltzmann methods ( lbm ) discretize and solve equation ( [ eq : be ] ) on a grid and naturally yield the navier - stokes equation in the small mean - free - path limit . this is a form of mesoscopic solution designed to produce the correct solution on scales larger than the particle scale but smaller than the continuum macroscopic scale . the final example of a mesoscopic solution is a monte carlo solution of the boltzmann equation . this approach exploits the classical indeterminacy of the collisional trajectories on the mean - free - path scale . the effects of the collisions are incorporated with a monte - carlo procedure that reproduces the per - particle cross sections that affect the flow at intermediate length scales . this method of solution may be one of the few practical approaches available to understand multiscale phenomena such as turbulence . some gas flows are not fluids and _ must _ be treated by a kinetic - theory approach . the nature of the flow is described in kinetic theory by the ratio of the mean free path to the characteristic scale , called the _ knudsen _ number , or : in astrophysics , typical characteristic scales include density , temperature , and gravitational - field scale lengths . for , the solutions of the continuum fluid equations deviate from the exact solutions . it should not be surprising that many of the most important astrophysical regimes are near this boundary , as shown in figure [ fig : regimes ] . knowledge of the physical state within the transition regime is necessary for predicting the physical processes that may dominate the observed emission or cooling even if is small elsewhere . in other regimes , the continuum limit is not appropriate even without a transition region or discontinuity . the direct simulation monte carlo method ( dsmc , see * ? ? ?* ) has been widely used in engineering applications for gas flows beyond the continuum limit . dsmc is a numerical method , originally conceived for modeling rarefied gas particles with mean free paths of the same order as or greater than the characteristic physical length scale ( i.e. ) . ( a simple illustrative estimate of this ratio is given below . )
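the sketch below ( python ) gives the simple illustrative estimate promised above : it evaluates the knudsen number for a hard - sphere gas from a number density , an effective particle diameter and a characteristic flow scale , using the standard hard - sphere mean free path . the numerical regime boundaries used for the labels are conventional rules of thumb , not values taken from this paper , and the example parameters are invented for illustration .

import math

def knudsen_number(n_density, diameter, length_scale):
    """kn = lambda / l for a hard-sphere gas.

    n_density    : number density [m^-3]
    diameter     : effective hard-sphere diameter [m]
    length_scale : characteristic flow scale l [m]
    """
    mean_free_path = 1.0 / (math.sqrt(2.0) * math.pi * diameter**2 * n_density)
    return mean_free_path / length_scale

def regime(kn):
    # conventional (approximate) regime boundaries
    if kn < 0.01:
        return "continuum (navier-stokes valid)"
    elif kn < 10.0:
        return "transitional (kinetic treatment needed)"
    return "free-molecular / collisionless"

# example: neutral gas with n ~ 1 cm^-3, d ~ 1 angstrom, l ~ 1 pc (made-up numbers)
kn = knudsen_number(n_density=1.0e6, diameter=1.0e-10, length_scale=3.086e16)
print(kn, regime(kn))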
in such rarefied flows , the navier - stokes equations can be inaccurate .but recently , the dsmc method has been extended to model near continuum flows ( ) , making it appropriate for astrophysical problems with large dynamic ranges and multiple phases .in addition , dsmc is fully shock capturing in the sense that the discontinuity implied by the shock is naturally determined by dsmc .it reproduces standard shock tests ( see [ sec : shock ] ) and instabilities such as kelvin - helmholtz ( see [ sec : kh ] ) .because the method works for arbitrary values of , dsmc may be used in astrophysical problems that have short and long mean free paths that might occur in the interaction between cold gas and hot rarefied gas in cluster environments ( e.g. see paper 2 for an application to ram - pressure stripping ) .furthermore , dsmc is always stable ( no courant - friedrichs - lewy condition ) . as will be described in [ sec : tuning ] , there _ are _ conditions on the step size and collision parameters for optimal performance , but poor parameter choices lead to some inaccuracy but not failure through instability .the main problem with dsmc approach is computational speed : for a point of comparison , dsmc is more efficient per particle than a smoothed - particle hydrodynamics ( sph ) code , but it requires at least an order of magnitude more particles to achieve a similar resolution .the performance in our implementation is bottlenecked by the tree structure supporting the parallel domain decomposition and thereby marks this area for further technical development .nonetheless , dsmc will never compete in speed and accuracy with hydrodynamic methods in the limit .rather , the goal of the dsmc method is an investigation of the effect of microphysics and gas dynamics in the transition regimes themselves ; these regimes can not be studied within the hydrodynamic paradigm .dsmc incorporates the internal degrees of freedom of the atoms and molecules in a gas through a variety of approximations that redistribute the kinetic energy , momentum and internal energy of two collision partners . 
for a simple monoatomic gas ,the only relevant energies are those of translation , electronic excitation and radiative emission by the particles .this system is described by a phase - space distribution function , , that represents the expected number density of molecules of species in a small volume about the point at which have a velocity in the range to , and internal energy a given instant .the limiting kinetic equation for dsmc is the nonlinear collisional boltzmann equation .dsmc splits the simulated solution of the collisional boltzmann equation into two sequentially - applied parts .first , the particles are first advanced on collisionless trajectories using the standard n - body method .second , the flow field is divided into cells , and collisions are between the simulation particles realized consistent with the local collision rate .dsmc represents the atoms and molecules of a gas by much smaller number of simulation particles .the number of collisions reproduces the rate in the physical system by increasing the cross section for the simulation particles by the ratio of the simulation mass to the true mean atomic and molecular mass .this generic feature is exploited in a variety of ways to improve dsmc performance .for example , if the collision rate is sufficiently large , the velocity distribution in a cell will approach the equilibrium maxwell - boltzmann distribution and this may be used to limit the number of computed collisions explicitly .if the collision rate is low , however , there will be little redistribution of energy and momentum and the distribution will remain close to the collisionless solution which may be far from thermodynamic equilibrium .the details of the algorithm will be outlined in [ sec : algorithm ] below .dsmc is presently the most widely used numerical algorithm in kinetic theory . in dsmc , particle pairs are randomly chosen to collide according to the probability distribution implied by the interparticle potential .this probability is proportional to the particles relative speed and effective geometric cross section .the post - collision velocities are determined by randomly selecting the collision angles and redistributing the energy into atomic and molecular internal degrees of freedom .therefore , unlike molecular dynamics , dsmc particles are chosen to collide even if their actual trajectories do not overlap .this is not an inconsistency but required by the probabilistic nature of the solution .the application of the _ classic _ dsmc algorithm is restricted to dilute gases .recently , the consistent boltzmann algorithm ( cba ) was introduced as a simple variant of dsmc for dense gases .the cba collision algorithm follows the dsmc algorithm with two additions .first , the unit vector parallel to the line connecting the centers at impact is computed from the pre- and post - collision velocities of the colliding pair .each particle is displaced along this direction corresponding to the mean separation they would have experienced had they collided as hard spheres .secondly , the collision rate must be increased over the dilute rate to account for the volume displaced by the hard spheres representing the atoms and molecules themselves .these algorithmic changes are unlikely to play a major role in astrophysical flows , but are trivial to include . with these two simple additions, cba yields the hard sphere equation of state at all densities . 
it has been shown that the high - density transport properties are not exact but remain comparable to those obtained with molecular dynamics simulations and the enskog approximation . although cba can be generalized to any equation of state , here we will only consider the hard - sphere gas whose particle diameter is a constant fraction of the bohr diameter . the two sequential steps that simulate equation ( [ eq : be ] ) are a ballistic or collisionless computation based on the left - hand side and a collisional computation based on the right - hand side . practically , one advances the particles according to the collisionless boltzmann equation , using standard n - body techniques . one then solves for the collisions using the collision operator ( [ eq : collop ] ) , which may be written as \left(\frac{\partial f}{\partial t}\right)_{\rm coll } = \int\!\!\int \sigma(\omega)\ , |\mathbf{v}-\mathbf{v}_\ast| \left [ f(\mathbf{v}^\prime ) f(\mathbf{v}_\ast^\prime ) - f(\mathbf{v } ) f(\mathbf{v}_\ast ) \right ] \,d\omega\,d\mathbf{v}_\ast , where denotes the collision cross section , the primed and describe all the possible post - collisional velocities of two particles colliding with respective pre - collisional velocities and , and denotes the interaction angles . in dsmc , the collision operator is solved by a monte carlo realization of equation ( [ eq : collop ] ) as follows : 1 . move all of the particles collisionlessly according to the mean field ( eq . [ eq : advect ] ) using the n - body solver . 2 . partition particles into cells whose linear scale is of order the mean free path . 3 . compute the collision frequency in each cell for all particle species and interactions of interest . this discretizes the spatial dependence of the phase - space density to provide . 4 . select random collision partners within each cell ; this samples and in equation ( [ eq : collop ] ) . we assume that the probability that a pair collides depends only on their relative velocity ; that is , all particles in the cell are valid collision partners . the post - collision velocities ( 6 quantities ) are constrained by the conservation of momentum ( 3 constraints ) and energy ( 1 constraint ) ; the post - collision direction in the center - of - mass frame is specified by two randomly chosen variates . 5 . repeat these steps beginning with step 1 . in this way , the net change in the phase - space density implied by equations ( [ eq : be ] ) and ( [ eq : collop ] ) is computed by evolving velocity distributions consistent with the interaction potential implicit in the cross section . this , in turn , leads to mass , momentum , and energy flux through the collision cells . see and for additional practical details and theoretical underpinning of this approach . dsmc is most efficient and accurate when the interaction cell size is of order the mean free path and a free particle crosses the cell in a time step ( where is the mean one - dimensional particle velocity and is the time step ) ; let us call this the _ flight length _ . ( a schematic sketch of one such collision - cell update is given after this paragraph . )
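the following python sketch is a schematic of the collision part of the algorithm above ( steps 3 - 4 ) for a single cell of equal - mass hard spheres , and is our illustration rather than the exp - dsmc implementation : candidate pairs are drawn at random within the cell , accepted with probability proportional to their relative speed ( an acceptance - rejection variant of pair selection ) , and scattered isotropically in the centre - of - mass frame so that momentum and energy are conserved exactly . the candidate - pair count is written in a simplified no - time - counter form and should be read as an assumption , not as the paper s exact prescription .

import numpy as np

def collide_cell(vel, diameter, weight, cell_volume, dt, rng):
    """one dsmc collision step for the particles of a single cell.

    vel         : (n, 3) particle velocities (updated in place)
    diameter    : hard-sphere diameter
    weight      : number of real molecules represented by one simulation particle
    cell_volume : cell volume
    dt          : time step
    rng         : numpy random generator
    """
    n = len(vel)
    if n < 2:
        return 0
    sigma = np.pi * diameter**2                        # hard-sphere cross section
    # crude upper bound on the pairwise relative speed for acceptance-rejection
    vr_max = 2.0 * np.max(np.linalg.norm(vel - vel.mean(axis=0), axis=1)) + 1e-30
    # simplified no-time-counter estimate of candidate pairs (assumption)
    n_cand = int(0.5 * n * (n - 1) * weight * sigma * vr_max * dt / cell_volume)
    n_coll = 0
    for _ in range(n_cand):
        i, j = rng.choice(n, size=2, replace=False)
        vr = vel[i] - vel[j]
        vr_mag = np.linalg.norm(vr)
        if rng.random() * vr_max > vr_mag:
            continue                                    # rejected candidate
        # accepted: isotropic post-collision direction in the c.o.m. frame
        cos_t = 2.0 * rng.random() - 1.0
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * rng.random()
        vr_new = vr_mag * np.array([sin_t * np.cos(phi),
                                    sin_t * np.sin(phi),
                                    cos_t])
        v_cm = 0.5 * (vel[i] + vel[j])                  # equal masses assumed
        vel[i] = v_cm + 0.5 * vr_new                    # momentum and energy
        vel[j] = v_cm - 0.5 * vr_new                    # are conserved exactly
        n_coll += 1
    return n_coll

in a production code the maximum relative speed would be tracked per cell and the fractional remainder of the candidate count carried between steps ; both refinements are omitted here for brevity .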
the cell - size and time - step considerations above motivate defining two scale - free quantities : the mean - free - path ratio , , is the mean free path in units of the cell size ; the flight - length ratio , , is the length of flight in one time step in units of the cell size . the cell size is chosen to satisfy two additional constraints : 1 ) should be smaller than any flow scale of interest ; and 2 ) the cell should enclose approximately 10 particles . clearly , if the cell size is so small that the probability of occupation is small , no interactions are possible . conversely , if the occupation number is large , the expense may increase without improving the accuracy of the result . this parametrization leads to the following limiting scenarios and tuning prescriptions : * . some particles will pass through multiple cells in one time step , possibly leading to artificial viscosity ( see [ sec : mfp_eff ] ) . the remedy is to decrease the step size , . * particles can not reach their interaction partners in one time step . this leads to excess correlation . the remedy is to increase the step size , . in our dsmc implementation , the appropriate time step is selected automatically for each gas particle based on these criteria , as long as the time - step criteria demanded by the poisson solver ( eqs . [ eq : eps1][eq : eps3 ] ) are not violated . * the mean free path is very short compared to any scale of interest . the system is approaching the continuum limit with many collisions per particle in a time step . one remedy is to use a collision - limiting scheme . here , we use the equilibrium particle simulation method ( epsm ) described in [ sec : epsm ] . when using epsm , the relevant velocity in equation ( [ eq : epsv ] ) is the mean flow velocity , not the mean ballistic velocity . of course , if this condition obtains everywhere in the computational volume , all cells will use epsm and the resulting calculation will be in the cfd regime . * the mean free path is very large compared to any scale of interest . the partial remedy is to increase and possibly as long as remains smaller than any scale of interest . otherwise , the system is approaching the collisionless limit and all is well . these regimes are summarized schematically in figure [ fig : tuning ] . [ caption of figure [ fig : tuning ] , which shows these regimes in the plane : the light green box shows the desirable range for the two parameters for efficiency . the dark green box shows the ideal range . if is too large , particles may cover multiple cells for ( red shaded region ) ; the step size should be reduced . if it is too small , the particles may interact with particles that can not be reached by free flight in one time step . for ( blue shaded region ) , the limiting epsm algorithm is used . ] dsmc is computationally expensive , although much cheaper than molecular dynamics . the expense can be mitigated by only using dsmc where it is needed ! a number of recent contributions to the literature describe hybrid navier - stokes dsmc solvers using finite - element methods and amr techniques ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ?
?because of the unusual geometries and large dynamic range in astrophysical flows , the implementation of the kinetic theory is greatly simplified by using a particle method throughout these simulations rather than a hybrid dsmc navier - stokes code .a pure particle code easily accommodates the arbitrary geometry of interfaces , simplifies the transition between regimes without the numerical artifacts owing to the statistical noise incurred by moving between the particle simulation and continuum representation .however , in the high density regions that are collision dominated and make dsmc infeasible , the system will approach thermodynamic equilibrium .based on this , suggested that the number of collisions per particle be limited to only the number necessary to achieve equilibrium .a further simplification eliminates individual collision computations altogether when the density and collision rates are high .if one can predict that the number of collisions will be sufficient to achieve equilibrium without simulating the collisions , the limiting thermodynamic state for a collision - dominated cell may be computed a priori , and one may achieve a significant computational advantage . this high - collision limit of dsmcwas proposed by who called it the equilibrium particle simulation method ( epsm ) .this approach has been more recently explored by .in essence , epsm selects new velocity components from the equilibrium thermodynamic distribution within each cell while simultaneously conserving the total energy and momentum .an estimate of the temperature , may be derived from the total energy of the finite sample of particles in a cell using the standard relation for the mean energy of single degree of freedom .although pullin s application was a single species in one dimension , the approach is straightforwardly generalized to any number of species in three dimensions .we will combine the dsmc and epsm collision computation approximations using a heuristic such as the mean number of collisions between particles to switch between the two .more sophisticated collision limiting approaches with fewer limitations are under investigation ( see * ? ? ?* ) and will be explored in the future .owing to the random realization of evolved states in both dsmc and epsm , the epsm solution will differ from the dsmc solution and the exact molecular dynamics solution , even after many collisions .these differences lead to statistical scatter about the exact solution .presumably , the scatter will be larger for dsmc than for epsm , since initially identical initial states will contain slightly different total amounts of energy and momentum .it might also be possible to improve the use of both algorithms together by conserving or matching incoming and outgoing fluxes at cell boundaries between the two regimes .epsm is expected to be more efficient than dsmc close to thermodynamic equilibrium .however in dsmc , the time step is chosen to be of order the collision time .obviously , this condition must be relaxed in epsm , otherwise the mean number of collisions per particle would be and the distribution could be far from equilibrium . in both schemes , large cell sizes and step sizes larger than the local collision time may be used in regions where the flow gradients are small .[ sec : mfp_eff ] for additional complications . 
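the epsm limit can be written compactly as a velocity - resampling step . the python sketch below is a minimal illustration of the shift - and - scale variant described later in the implementation section : provisional velocities are drawn from a gaussian , then shifted and rescaled so that the cell s total momentum and kinetic energy are reproduced exactly . a single species of equal - mass particles is assumed .

import numpy as np

def epsm_relax(vel, rng):
    """replace the velocities of one cell by an equilibrium (maxwellian) sample
    that conserves the cell's total momentum and kinetic energy exactly.

    vel : (n, 3) velocities of equal-mass particles (a new array is returned).
    """
    n = len(vel)
    v_bulk = vel.mean(axis=0)                      # conserved bulk velocity
    e_thermal = np.sum((vel - v_bulk)**2)          # 2 x (thermal kinetic energy) / m
    new = rng.standard_normal(size=(n, 3))         # provisional maxwellian sample
    new -= new.mean(axis=0)                        # remove sampling drift
    new *= np.sqrt(e_thermal / np.sum(new**2))     # rescale to the conserved thermal energy
    return new + v_bulk                            # restore the bulk motion

# usage on a toy cell of 64 particles
rng = np.random.default_rng(1)
v = rng.normal(size=(64, 3)) + np.array([10.0, 0.0, 0.0])
v_new = epsm_relax(v, rng)
print(np.allclose(v.mean(axis=0), v_new.mean(axis=0)),
      np.isclose(np.sum(v**2), np.sum(v_new**2)))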
to summarize , epsm is an infinite - collision - rate limit of dsmc with a finite sample of simulator particles . if the time step exceeds the time of flight across a typical cell , or equivalently , then particles with momentum and energy representative of the equilibrium distribution of one particular cell can be _ non - physically _ transported to a distant cell with possibly different equilibrium conditions . close to equilibrium , this distance is approximately , where is the particle mass . on the other hand , the distance covered by particles in should be at least as large as the mean free path to prevent inducing non - physical correlations . if the criteria described in [ sec : tuning ] are close to ideal , the probability that particles will cross the cell in any one time step is small in the high - collision - rate limit . however , the ideal tuning parameters are not always computationally feasible . to understand the implications of these limits , let us consider the transport of particles between two adjacent cells . consider a euclidean set of coordinate axes for a collision cell , with chosen perpendicular to the cell face common to the adjacent cells . let this face be located at . let the mass density of the two cells to the left and right of the boundary be denoted by and , respectively . for notational simplicity , let us factor the phase - space distribution function into a density and a velocity distribution function : ; this is consistent with our discretization of the phase space into collision cells . the velocity distribution function describing the state in the adjacent cells will be different in general : . that is , particles that cross the cell boundary with ( ) have the parent distribution function ( ) . this discontinuity gives rise to an effective viscosity . demonstrating this explicitly is an easy exercise using the standard techniques from kinetic theory ( e.g. * ? ? ?* ) . let denote some velocity moment of , such as the momentum . the net flux of the across the interface , , can be split into two contributions by the direction of the particles crossing the cell face , , as follows : in general , the gradient in velocity parallel to the cell wall will not vanish across the interface : . setting and taking and to be maxwell - boltzmann distributions with the temperatures and in the adjacent cells , the momentum fluxes parallel to the cell wall are \begin{aligned } { { \cal f}}^{<0}_{mv_y } & = & \frac12\left [ 1 + { \mathop{\rm erf}\nolimits}\left(\langle v_x^{<0}\rangle/\sqrt{2k_bt^{<0}/m}\right)\right ] \rho^{<0 } \langle v_x^{<0}\rangle\langle v_y^{<0}\rangle \\ & & + \sqrt{\frac{k_bt^{<0}}{2\pi m } } e^{-\langle v_x^{<0}\rangle^2/(2k_bt^{<0}/m ) } \rho^{<0 } \langle v_y^{<0}\rangle \\ { { \cal f}}^{>0}_{mv_y } & = & \frac12\left[1 - { \mathop{\rm erf}\nolimits}\left(\langle v_x^{>0}\rangle/\sqrt{2k_bt^{>0}/m}\right)\right ] \rho^{>0 } \langle v_x^{>0}\rangle\langle v_y^{>0}\rangle \\ & & - \sqrt{\frac{k_bt^{>0}}{2\pi m } } e^{-\langle v_x^{>0}\rangle^2/(2k_bt^{>0}/m ) } \rho^{>0 } \langle v_y^{>0}\rangle \end{aligned } where and analogously for . assume that there are no additional gradients for simplicity ; this implies that and . in addition , assume that the mean velocity is small compared to the thermal velocity , .
then , the net momentum flux across the cell boundary becomes now , define the distance that the mean particle transports its momentum as .the value of will depend both the details of dsmc code and the physics of the interactions and gives us an approximation to the gradient : combining these relations , we may express the net momentum flux as this is a shear stress induced at the cell boundary by the discretization .physically , viscosity arises from the shear stress at an interface that opposes an applied force .the classic example is the laminar flow of a viscous fluid in the space between two relatively moving parallel plates , known as couette flow .the force applied to the plates causes the fluid between the plates to shear with a velocity gradient in the direction of relative motion .in other words , the shear stress between layers is proportional to the velocity gradient in the direction perpendicular to the layers : where is the proportionality factor called the _viscosity_. comparing equation ( [ eq : stress ] ) that describes our discretization shear stress to the usual relation between mean free path and viscosity for planar couette flow ( eq . [ eq : couette ] ) , we may estimate the effective viscosity for the simulation in the near equilibrium limit to be the mean free path for a hard sphere with viscosity is .this implies that the effective mean free path due to the transport across the cell boundary in the simulation is then .the epsm algorithm mixes the transported momentum into the entire cell of size .interestingly , this suggests that and the effective mean free path in dsmc simulation becomes where is the local mean free path .this is equivalent to the dsmc requirement that the cell size should be smaller than the collisional mean free path for physical accuracy .however , when scale of all macroscopic observables is large compared to the mean free path , the less expensive epsm solution should closely approximate the dsmc solution .in other words , dsmc and epsm should give similar results when the number of collisions per particle per time step in dsmc is large , which justifies the use of cell sizes much larger than the mean free path and artificially limiting the number of collisions per particle in this limit .similarly , the same arguments leading to equation ( [ eq : stress ] ) suggest that if dsmc grid cells are too large ( ) , non - local particles will be selected for collisions resulting in the numerical diffusion of gradients on the cell - size scale .conversely , if dsmc cells are too small ( ) , but still contain the same number of particle per cell , the number of simulated particles becomes far more than necessary , resulting in an accurate but highly inefficient simulation . in some flow problems of astrophysical interest , it is not possible to use a sufficient number of particles to populate cells at the mean - free - path scale . in these cases, we resort to using an under - resolved collision - limited dsmc solution with large cells .as we have seen , the overall effect of such an approximation is to misrepresent the transport coefficients giving rise to an effective viscosity .this should not lead to significant misestimations as long as the gradients of the flow are small with length scales greater than the cell size .we have implemented the dsmc kinetic algorithm in our n - body expansion code , exp . owing to its long effective relaxation times ,the expansion or `` self - consistent field '' algorithm ( scf , e.g. 
hernquist & ostriker 1992 ) is ideal for representing the large - scale global response of a galaxy to a long - term disturbance .weinberg ( 1999 ) extended this algorithm from fixed analytic bases to adaptive bases using techniques from statistical density estimation to derive an optimal smoothing algorithm for scf that in essence selects the minimum statistically significant length scale .this is combined with the empirically determined orthogonal functions that best represent the particle distribution and separate any correlated global patterns .the rapid convergence of the expansion series using this matched basis minimizes fluctuations in the gravitational force by reducing the number of degrees of freedom in the representation and by increasing the signal - to - noise ratio for those that do contribute .in addition , exp contains a particle - in - cell poisson solver as well as a parallelized direct - summation solver .the former may be ideal for investigating the importance of small self - gravitating spatial inhomogeneities that are a small part larger astrophysical flow .the exp code is modular with an object - oriented architecture and easily incorporates dsmc .the standard scf algorithm is `` embarrassingly '' parallel and the implementation here uses the message passing interface communications package ( mpi , e.g. ) , making the code easily portable to a variety of parallel systems . practically speaking , parallel scf with the new algorithmmakes tractable disk and halo simulations on modest - sized pc - based clusters with up to 100 million particles .exp uses a multiple time step algorithm as follows .we begin by partitioning phase space ways such that each partition contains particles that require a time step where is the largest time step .the time step with corresponds to the smallest single time - step in the simulation . since the total cost of a time step is proportional to the number of force evaluations , this algorithm improves the run time by the following factor where .if all particles were the deepest level , , we have .on the other hand , if most of the particles are in the level , we have . for an nfw dark - matter profile with particles as an example, we find that and , an enormous speed up ! forces in the scf algorithm depend on the expansion coefficients and the leap frog algorithm requires interpolation of these coefficients to maintain second - order error accuracy per step . the contribution to each expansion coefficient for particles in time - step partition separately accumulated and linearly interpolated for levels as needed .higher - order interpolation would have higher - order truncation error than the ordinary differential equation solver and would be wasteful .exp uses the minimum of three separate time step criteria for choosing the appropriate time - step partition for each particle : 1 . a local _ force _ time scale : this sets the time step to be a fraction of the estimated change of local velocity by the gravitational force ; that is , this time step is based on the rate of velocity change implied by the gravitational field .2 . a local _ work _ time scale : this sets the time step to be a fraction of the time required to change the particle s potential energy .a local _ escape _ time scale : this time step will be small for a fast moving particle near a local feature in the gravitational field or for a particle that samples a large range of gravitational field strengths over its orbit . 
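a minimal python sketch of the multilevel time - step bookkeeping described above follows ; the per - particle required time step ( the minimum of the force , work and escape criteria ) is taken as a given input array , and both the level assignment and the cost ratio relative to stepping every particle at the finest level are our illustration of the scheme rather than code from exp .

import numpy as np

def assign_levels(dt_required, dt_max, n_levels):
    """smallest level l (coarsest allowed step) with dt_max / 2**l <= dt_required."""
    levels = np.ceil(np.log2(dt_max / dt_required)).astype(int)
    return np.clip(levels, 0, n_levels - 1)

def speedup(levels, n_levels):
    """cost ratio between single-stepping at the finest level and multistepping."""
    counts = np.bincount(levels, minlength=n_levels)
    cost_single = len(levels) * 2**(n_levels - 1)          # everyone at the finest step
    cost_multi = np.sum(counts * 2**np.arange(n_levels))   # level l advances 2**l times
    return cost_single / cost_multi

# toy example: required time steps spanning a factor of ~100 (made-up distribution)
rng = np.random.default_rng(0)
dt_req = 10.0**rng.uniform(-2, 0, size=100000)
lev = assign_levels(dt_req, dt_max=1.0, n_levels=8)
print(speedup(lev, n_levels=8))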
we calibrate the leading coefficients for accuracy using a test simulation for each new set of initial conditions .typically yields better than 0.3% accuracy over a dynamical time .our implementation uses an octree for partitioning space to make dsmc collisions cells .each node in the octree is a rectangular prism most often chosen to be a cube .each node is dividing into eight child nodes of equal volume formed by a single bisection in each dimension .octrees are the three - dimensional analog of quadtrees . unlike kd - trees ,the octree partitioning is on volume alone , so that the aspect ratio of all nodes are self similar .this is desirable for collisions cells that assume that any particle in a cell whose linear dimension is approximately a mean free path is a good collision partner for another particle from that same cell .most contemporary supercomputers are networked clusters with multiple multi - core processors per node .each of the cpu cores share the memory and communication resources of the parent node , and each node is interconnected to the others through a fast low - latency network . to simplify the parallel domain decomposition and exploit the typical cluster topology, we use a two - level nested tree as follows .the first level is a coarse octree constructed with approximately an order of magnitude more cells than nodes .the computational work for all particles belonging to leaves of the coarse tree is accumulated and used to balance load among the nodes when the domain is decomposed .the second level constructs a tree in each leaf of the coarse first - level tree .spatially contiguous leaves are preferentially assigned to the same node .the goal of the second - level tree are collections of interaction cells that contain approximately simulation particles .aggregated cells or super cells used for statistical diagnostics contain particles , typically .to minimize the intranode communication overhead , we run one multithreaded process per node .multithreaded tasks include determining expansion coefficients , force evaluation , and dsmc collision computations .the full domain decomposition is computationally expensive and would horribly dominate the run time if performed at the smallest time step : in equation ( [ eq : ts ] ) .rather , the tree is extended as necessary by updating the particle - node association and constructing new tree nodes as necessary for time intervals short compared to the evolution time scale but long compared .the interim procedure updates the octree consistently but without exchanging particles between processors . at some preselected intermediate time step assigned by choosing with , the domain decomposition is reconstructed anew , exchanging particles between processors as necessary . in other words , at time steps with , both the coarse- and fine - grained octrees are recomputed from scratch . at time steps with ,the fine - grained octrees are updated only .it is therefore possible to have the _ same _ cell populated by _ different _particles on separate nodes for a short period of time during the simulation .this will introduce some errors since the densities in the duplicated cells will be underestimated .however , the value of is chosen so that remains smaller than any characteristic dynamical time except for the collision time , and therefore , i do not anticipate that this procedure will lead to significant quantitative or qualitative errors .if there is any doubt , the value of may be increased to as check . 
along the same lines , dsmc and dsmc - epsmexplicitly conserve energy and momentum at the collision scale , including those describing atomic and molecular internal degrees of freedom. therefore , even if small - scale features are mildly compromised by duplication errors , these are unlikely to propagate .putting together the requirements on collision cells scales and time steps described in [ sec : tuning ] and [ sec : mfp_eff ] , we may summarise the interaction between particle number , resolution , cell size and time step as follows : 1 . the time step must be sufficiently small that the dsmc particles do not move through more than one cell of length in one time step , .that is the flight - length ratio should be of order unity .the time - step criterion becomes .the fundamental resolution scale of the fluid is determined by the mean - free path , .this suggests that collision - cell size scale , , should be of order .increasing beyond may lead to viscosity as described in [ sec : mfp_eff ] .large separations between collision partners when is large also reduces spatial accuracy . proposed defining virtual _ subcells _ within collision cells to maintain finer spatial control by choosing neighboring collision partners .this will be implemented and explored in a future version of our code .each simulation particle represents many atoms and molecules and the variance per cell will scale as inversely with the number of particles per cell .however , the computational effort scales as so large values of lead to infeasible simulations .empirically , the overall accuracy scales as , and many tests suggest that is a good compromise between these competing demands . in summary ,the physical properties of the simulation determine the number of particles required for an accurate dsmc simulation .too few particles degrade the resolution , since collision cells must have such that .values may lead to artificial viscosity if the characteristic scales of flow gradients are not resolved .however , if the mean - free path is smaller than any scale of interest and the energetics _ in _ shocks are not critical , the may be increased artificially without sacrificing reality .some knowledge of the consequences will help motivate whether computational feasibility justifies this trade off physically .for example , the flows on very large scales are likely to be correct owing to accurate energy and momentum conservation in dsmc independent of as long as the small scale features are not critical to determining features on large scales . unlike traditional cfd or n - body simulations ,a dsmc simulation does not converge with increasing particle number ; rather , increasing the particle number increases the resolution .a `` converged '' solution is obtained by averaging the mean quantities from multiple simulations .the monte carlo realization of the collision integral depends on the interaction cross section from equation ( [ eq : collop ] ) and defines the probability of internal excitation , ionization , recombination , radiation or scattering .since the spontaneous emission times are most often very small compared to the collision times for astrophysical flows , therefore we do not need track the excitation state of the atoms and molecules between collisions .thus , collisional cooling is a straightforward natural by - product of dsmc .however , there are some intrinsic difficulties in doing this accurately that require some additional specialized methods . 
for example, the existence of trace ionized components ( e.g. the elements c , n , o ) that have a significant effect on cooling but carry negligible momentum can not be simulated naively if there will be fewer than one trace - species particle per interaction cell .we elect therefore to change the ratio of the true particle number to simulation particle number for trace species .then , we populate the simulation with sufficient numbers of the trace species to yield accurate statistical representation of the trace interactions with weights to account for their true mass fraction .a further issue is the treatment of free electrons . in principle, it is not difficult to implement free electrons as a separate species .however as a consequence , the large velocities of the electrons relative to the ions in equipartition require very small time steps and lead to infeasibly large computational expense .we will approach this problem in two incremental steps . in the first, we will assume that the electrons remain in the vicinity of their parent ions , both preventing charge separation and eliminating the need for time steps on order of the mean flight time of the simulated electrons .in the second , we will include the free electrons as a separate species to feel the electrostatic field induced by charge separation .computational requirements will restrict this application to very small simulation regions and will be used , primarily , to explore mesoscale dynamics in various astrophysical regimes . alternatively , one may account for the energetic affects of the trace species by using their reaction rates computed from the thermodynamic properties approximated from the dsmc fields .in particular , the electronic level populations due to collisional and radiative excitation may be described by rate equations using the electron density , temperature and background radiation field .even if the flow is far from equilibrium , as long as the processes that establish the electronic level populations occur much more quickly than the time for any significant flow pattern to change , an equilibrium distribution will be a fair approximation in many situations .this approach is called the _ quasi steady state _ ( qss ) approximation .the electron densities , temperatures and ion densities may then be used to evaluate the qss rates .the qss rates can be obtained from individual particle cross sections by assuming that the electron velocity distributions are maxwell - boltzmann .these same set of assumptions yields the standard cooling curves used in cosmological and ism continuum gas simulations ( e.g. * ? ? ?heating by cosmic rays , photoelectric heating , etc . may be computed similarly using the qss method . in this and paper 2, we will employ such an approximate qss solution based on lte . then to treat the electrons, we estimate the ionization fraction based on the local thermal state and assume that the electrons follow the ions in space .the implementation here computes the effective temperature from the super cell of particles ( see [ sec : parallel ] ) and estimates collision rates based on a total effective hard - sphere geometric cross section . 
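the per - cell estimates mentioned above ( an effective temperature from the particle kinetic energies and a hard - sphere collision rate derived from it ) can be sketched in a few lines of python . the expressions are the standard single - species kinetic - theory ones , and the example parameters are invented for illustration ; this is not code from the exp - dsmc module .

import numpy as np

K_B = 1.380649e-23   # boltzmann constant [j/k]

def cell_temperature(vel, mass):
    """effective temperature from the peculiar kinetic energy of a cell.

    uses (3/2) n k_b t = sum over the n particles of (1/2) m |v - <v>|^2.
    """
    pec = vel - vel.mean(axis=0)
    return mass * np.sum(pec**2) / (3.0 * K_B * len(vel))

def hard_sphere_collision_rate(n_density, diameter, temperature, mass):
    """mean collision rate per particle, nu = sqrt(2) * n * sigma * <|v|>,
    with <|v|> = sqrt(8 k_b t / (pi m)) for a maxwellian gas."""
    sigma = np.pi * diameter**2
    v_mean = np.sqrt(8.0 * K_B * temperature / (np.pi * mass))
    return np.sqrt(2.0) * n_density * sigma * v_mean

# toy cell: 100 hydrogen-like particles at roughly 1e4 k
rng = np.random.default_rng(2)
m_h = 1.67e-27
v = rng.normal(scale=np.sqrt(K_B * 1.0e4 / m_h), size=(100, 3))
t_est = cell_temperature(v, m_h)
print(t_est, hard_sphere_collision_rate(1.0e6, 1.0e-10, t_est, m_h))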
for very small mean free paths , our algorithm transitions to the navier - stokes equations using the equilibrium particle simulation through the use of epsm ( [ sec : epsm ] ) . if the mean number of collisions per body in a cell exceeds some preselected value , the epsm update is performed in one of two ways . the first uses the original algorithm . this algorithm constructively computes random variates while simultaneously enforcing the constraints of momentum and energy conservation , much the same way as in the proof of the distribution of the sample variance ( e.g. * ? ? ? * ) . alternatively , it is straightforward to realize normal variates with any convenient mean and variance , followed by a shift and scale operation to recover the conserved total momentum and energy . both algorithms were implemented for completeness and yielded equivalent results , but the latter is faster in tests and is the default . the use of epsm may be toggled by a run - time parameter . in addition , if epsm is not used , one may elect to limit the total number of collisions per cell to a maximum value as suggested by , while maintaining the correct heating and cooling rates consistent with the predicted number of collisions . the simulation particles directly represent the physical properties of the gas . they are not _ tracers _ of an underlying field ; rather , all momentum , kinetic energy , internal energy and chemical fluxes must be computed directly from the particle distribution . therefore , the density , temperature and other traditional field quantities may only be _ estimated _ as an ensemble average . owing to the typically small number of particles , , in an interaction cell , these estimates are computed using the super cells with particles ( see [ sec : parallel ] ) . as a consequence , the estimates used for producing a field representation of the gas physical state have lower spatial resolution than the simulation , but they are useful diagnostics nonetheless . in contrast , graphical representations from hydrodynamic simulations represent the full computed resolution of the field quantities . our current implementation computes ensemble temperature , density , knudsen number , and cell - size to flight - length ratio . these quantities are carried by each gas simulation particle and are saved in phase - space output files . the exp - dsmc module keeps copious diagnostics on the physical parameters necessary to verify the validity of the dsmc approximation ; for example , it monitors the time of flight across the cell , the mean - free path , and the energy radiation rates per cell to ensure that the conditions required for an accurate dsmc simulation are maintained ( see [ sec : tuning ] ) . the necessary time step required for the particles in each cell is fed back into the time - step selection algorithm to adaptively change the time steps for particles as described in [ sec : nbody ] . the riemann shock tube is a special case of a riemann problem in eulerian hydrodynamics and is defined by an initial state with two fluids of different density and pressure at rest , divided by a planar interface . the rankine - hugoniot conditions allow one to compute the flow properties across the shock .
using this ,an exact solution may be obtained analytically for an adiabatic gas from the one - dimensional euler equations written in conservation form .see for details .because an exact solution is straightforwardly computed , the riemann shock tube has become a standard benchmark for computational fluid dynamics problems .the shock tube simulation is a strong test for hydrodynamics solvers which must explicitly and stably compute the shock - front and contact discontinuities .failure may lead to post - shock oscillation in the solution . here , we perform four tests for two different sets of initial conditions .the first set of initial conditions is the original sod example with and the second set is a _ strong _ shock case with suggested by f. x. timmes . for each initial condition, we tested a pure dsmc and the hybrid epsm / dsmc with indistinguishable results .figure [ fig : st1 ] compares the results of the pure dsmc simulation and the exact riemann solution for both cases .the mean - free path is for the sod initial conditions and for the strong - shock initial conditions , determined by the particle numbers and computational efficiency as described in [ sec : accuracy ] .we expect that the shock and contact discontinuities will be smeared over several mean - free paths owing to the internal kinetics of the gas - particle interactions and some positional variance in the interfaces .the reproduction of the density regimes by the dsmc and the dsmc - epsm hybrid methods without issues of stability or oscillation is expected owing to the local nature of the momentum and energy transport in a kinetic simulation .the traditional kelvin - helmholtz ( kh ) instability is defined for the flow of two incompressible inviscid fluids with an infinite plane - parallel interface .each fluid has different bulk velocities and parallel to the interface between the two fluids with densities and .we assume that .for ease of discussion , we observe the fluid from the frame of reference moving at ; this implies that fluid 1 moves to the right with velocity and fluid 2 moves to the left with velocity . for this initial condition ,the vorticity is only non - zero at the interface between the two fluids . with this initial condition ,an external sinusoidal perturbation causes a growing instability as follows . consider a sinusoidal perturbation to the interface .the pressure increases in the concave regions and decreases in convex regions of the interface ( i.e. the bernoulli principle ) which allows the perturbation to grow .the peak of the vortex sheet is carried forward by fluid 1 and the trough is carried backward by fluid 2 .this causes the initially sinusoidal corrugation interface to stretch , tighten and eventually roll up with the same sense as the vorticity at the original interface .in nature , such interfaces abound and the kh instability is thought to be critical for understanding a wide variety of astrophysical phenomena .for example , the non - linear development of the kh instability leads to turbulent mixing in a `` free shear layer '' .these same interfaces are critical in understanding the development of jets in radio galaxies and quasars .figure [ fig : kh ] shows the non - linear development of a kh instability for a fluid box with periodic boundary conditions in the and directions and reflecting boundary conditions in the direction .the simulation has particles and unit dimensions and temperatures k and k. 
the density ratio of the shearing fluids is 2 , with pressure equilibrium at the boundary , . the relative fluid velocity has mach number 1/2 . the disturbance is seeded everywhere in the box with a transverse velocity amplitude of 1/4 the shear velocity and a spatial frequency of 1/3 , according to the kelvin - helmholtz dispersion relation . the instability develops and evolves as expected , consistent with the spatial frequency and velocity of the linear mode . the figure illustrates that the interface is well maintained throughout the evolution . the sound wave that results from the linear mode seeded in the initial conditions can be clearly seen throughout the box . of course , the particle numbers place a limitation on the maximum spatial frequency that may be resolved , owing to the dsmc requirements on cell size ( see [ sec : accuracy ] ) . nearly all gas dynamical simulations on galaxy scales or larger are numerical solutions of the navier - stokes equations ( also known as cfd ) . computational expediency has motivated the use of cfd even when the mean free path of particles becomes appreciable compared to the scales of interest , such as in the early phases of galaxy cluster formation or in the coronal neutral gas interface at large galactocentric radii . in the transition between these regimes , and in the rarefied regime where the mean free paths are of order or larger than the scales of interest , the standard fluid approximation breaks down and one must solve the boltzmann equation with collisions using the kinetic theory of gases in order to understand the true nature of the flow . moreover , astrophysical flows are rife with multiphase interfaces and shocks . shock interfaces are discontinuities in the fluid limit , and there are many accurate and elaborate schemes for their numerical computation in the cfd pantheon . however , in many cases of interest , the dynamics of the particles in this interface regime is critical to understanding the overall energetics through cooling and heating , and the observational signatures in the form of line and continuum strength predictions are unlikely to be well described by the lte approximation . such calculations also require a kinetic theory approach . this paper describes an implementation of a monte carlo solution to the collisional boltzmann equation known as _ direct simulation monte carlo _ ( dsmc ) . algorithmically , it splits the full boltzmann equation into a purely collisionless left - hand side and a purely collisional right - hand side and solves the two parts sequentially . the solution of the former is provided by a standard n - body procedure . the solution of the latter uses a space partition to define interaction domains for the simulated gas particles , which are used to evaluate the boltzmann collision integral ( see [ sec : algorithm ] ) . the implementation described here uses a doubly nested octree for decomposing the spatial domain ; the first - level coarse - grained tree is used for load balancing and the second - level fine - grained tree is used by each process to construct interaction domains ( see [ sec : parallel ] ) . this approach is much faster than the brute - force approach of molecular dynamics but still much slower than cfd . although dsmc will work even in the limit of dense gases , we would like to use the full kinetic approach only when needed , owing to its computational expense . a number of such approaches have been proposed in the literature , e.g.
present adaptive mesh refinement schemes that use the navier - stokes equations or dsmc depending on the regime . the implementation in this paper also takes a multiscale adaptive approach based on the local density of particles : when the mean free path becomes small compared to the density scale , we use a particle - only approach which solves the navier - stokes equations in the limit of large particle number ( see [ sec : epsm ] ) . furthermore , although a bit noisier and possibly slower than the cfd particle hybrid approaches , a particle - only scheme is straightforwardly parallelized and mated with a traditional n - body code . although we have thrown away the large - scale averaging implicit in the continuum hydrodynamic equations by adopting a particle - only approach , we have gained a method that is explicitly stable ( i.e. no courant - friedrichs - lewy condition ) and in which shock boundaries are naturally resolved without the need for artificial viscosity or shock - capturing techniques . we have seen in [ sec : tests ] that this code correctly reproduces the standard shock - tube tests and develops kelvin - helmholtz instabilities that follow the analytic dispersion relation , both in the continuum limit . this approach does require an order of magnitude ( at least ) more fluid particles in the continuum limit , although it has the advantage of consistently transitioning to dilute gases , correctly resolving shocks , and resolving phase boundaries . the tests in paper 2 use the classic fixed - composition cooling curves in the lte limit , both to facilitate comparison with published results and for ease of implementation . however , the traditional dsmc implementation is based on individual particle cross sections . it is natural and straightforward to include multiple distinct species , interactions , and excitations within the dsmc framework and to self - consistently compute the radiation spectrum from the gas in the optically thin limit . when the heating or cooling of the gas overall depends on specific elemental or molecular lines from species with low fractional number density ( e.g. singly or multiply ionized oxygen , carbon and nitrogen with k ) , these species may be included as _ tracer _ subspecies by solving rate equations in the qss limit or by using weighting schemes with or without particle production . as described in [ sec : motivation ] and [ sec : coolheat ] , we are currently testing a dsmc implementation that includes any species whose atomic data are included in the chianti atomic database , including the standard plasma cross sections . continuing to generalize the microphysics , it should be possible to consistently include additional plasma physics by adding the simultaneous solution of the electrostatic poisson equation . of course , the time - dependent solution of maxwell s equations is a very stiff , challenging problem , but intermediate charge - flow problems are tractable using dsmc . paper 2 applies the hybrid dsmc n - body code from this paper to study the effect of an icm wind on a galaxy s ism , commonly known as _ ram pressure _ . as illustrated by paper 2 , dsmc may help us understand the multiphase medium on small scales by enabling accurate treatment of interfaces in the ism .
finally , on much larger scales , dsmc can be used to simulate the dominant processes in intra - cluster gas dynamics , such as the formation and interaction of bubbles , conduction at interfaces , etc .this material is based upon work supported by the national science foundation under grant no . ast-0907951 .
we describe a hybrid direct simulation monte carlo ( dsmc ) code for simultaneously solving the _ collisional _ boltzmann equation for gas and the _ collisionless _ boltzmann equation for stars and dark matter for problems important to galaxy evolution . this project is motivated by the need to understand the controlling dynamics at interfaces between gases of widely differing densities and temperatures , i.e. multiphase media . while more expensive than hydrodynamics , the kinetic approach does not suffer from discontinuities and it applies when the continuum limit does not , such as in the collapse of galaxy clusters and at the interface between coronal halo gas and a thin neutral gas layer . finally , the momentum flux is carried , self - consistently , by particles , and this approach explicitly resolves and thereby ` captures ' shocks . the dsmc method splits the solution into two pieces : 1 ) the evolution of the phase - space flow _ without _ collisions ; and 2 ) the evolution governed by the collision term alone _ without _ phase - space flow . this splitting approach makes dsmc an ideal match to existing particle - based n - body codes . if the mean free path becomes very small compared to any scale of interest , the method abandons simulated particle collisions and simply adopts the relaxed solution in each interaction cell consistent with the overall energy and momentum fluxes . this is functionally equivalent to solving the navier - stokes equations on a mesh . our implementation is tested using the sod shock tube problem and the non - linear development of a kelvin - helmholtz unstable shear layer . [ firstpage ] hydrodynamics atomic processes methods : numerical galaxies : ism ism : structure , evolution
an automated next - to - leading order generator for standard model processes is highly desirable . with recent developments of generalized unitarity and parametric integration methods a generator seems to be within reach .a first crucial step is the development of stable and fast algorithms for evaluating one - loop amplitudes through generalized unitarity cuts .a c++ code is especially of interest because of the ease with which it can be integrated in leading - order generators such as c .the leading - order code will then be used to compute the cut graphs .eventually , such a retrofitted generator will be able to generate all necessary amplitudes for a next - to - leading order monte carlo program for any standard model process of interest to the collider experiments .the full -gluon one - loop amplitude can be constructed from the leading colour - ordered amplitudes , which can be calculated by }_n(\{p_i,\kappa_i\})=a^{\rm cc}_n+r_n ] , and denotes the number of cuts .the master integrals are defined as and the inverse propagators are functions of the loop momentum : with .the rational part is represented by }\frac{d^{(4)}_{i_1i_2i_3i_4}}{6}+ \sum_{[i_1|i_3]}\frac{c^{(7)}_{i_1i_2i_3}}{2}- \sum_{[i_1|i_2]}\left(\frac{(q_{i_1}-q_{i_2})^2}{6}\right)b^{(9)}_{i_1i_2}\,,\ ] ] cf . . for each possible cut configuration , the box ( ) , triangle ( ) and bubble ( ) coefficients appearing above are found as solutions to the parametric form of the unintegrated ordered one - loop amplitude , }\frac{\bar e^{(d_s)}_{i_1i_2i_3i_4i_5}(\ell ) } { d_{i_1}d_{i_2}d_{i_3}d_{i_4}d_{i_5}}+ \sum_{[i_1|i_4]}\frac{\bar d^{(d_s)}_{i_1i_2i_3i_4}(\ell ) } { d_{i_1}d_{i_2}d_{i_3}d_{i_4}}+ \sum_{[i_1|i_3]}\frac{\bar c^{(d_s)}_{i_1i_2i_3}(\ell ) } { d_{i_1}d_{i_2}d_{i_3}}+ \sum_{[i_1|i_2]}\frac{\bar b^{(d_s)}_{i_1i_2}(\ell)}{d_{i_1}d_{i_2}}%+ % \sum_{[i_1|i_1]}\frac{\bar a^{(d_s)}_{i_1}(\ell)}{d_{i_1 } } \,.\ ] ] this decomposition of the integrand has been generalized to higher ( integer ) dimensionality of the internal particles , i.e. spin - polarization states and loop momenta respectively have dimension and ( as required by dimensional regularization ) .the dependence of the integrand can be eliminated by taking into account that the numerator only linearly depends on the spin - space dimension : .moreover , only up to -point terms ( i.e. ) need to be included in the parametrization , since the loop momentum effectively has components only : ^{1/2}) ] amplitudes in double precision in the four - dimensional helicity scheme .this allows for crosschecks with the results obtained in refs . .the construction of the orthonormal sets of the basis vectors and the polarization vectors follows the method outlined in .in addition , the vector generation ( ) has been set up such that basis vectors obtained for large- cuts can be re - used for suitable lower- cuts .the basic strategy , which has been implemented to find the coefficients , is as follows : by using the freedom in choosing loop momenta , solutions and , therefore , algebraic equations , such as eq .( [ eq : dcff ] ) , can be generated to solve for coefficients .first the dependence on is eliminated by computing : = ( d_s-3)\,\mbox{res}_{i_1\cdots i_m}[{\cal a}^{(d_s)}_n(\ell)]- ( d_s-4)\,\mbox{res}_{i_1\cdots i_m}[{\cal a}^{(d_s+1)}_n(\ell)]$ ] . then higher - point terms are subtracted yielding numerator factors etc .that are independent of , i.e. eqs .( [ eq : ebar ] ) and ( [ eq : dbar ] ) work without the label . 
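the remark above that the numerator depends only linearly on the spin - space dimension has a very simple numerical consequence : two evaluations of a residue at distinct integer dimensionalities determine the two coefficients of that linear dependence , after which the residue can be evaluated at whatever ( possibly continued ) value the regularization scheme requires . the sketch below shows only that reconstruction step ; the particular dimension values and function names are illustrative assumptions and not the code's conventions .

    def split_linear_in_ds(res_d1, res_d2, d1=5, d2=6):
        # residue assumed linear in the spin dimension: res(ds) = a + b * ds
        b = (res_d2 - res_d1) / (d2 - d1)
        a = res_d1 - b * d1
        return a, b

    def residue_at(ds, a, b):
        # evaluate the reconstructed residue at any (possibly non-integer) ds
        return a + b * ds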
for the coefficients of the cut - constructible part , one can dispense with the determination of the dependence of the residues and set . this , in addition , leads to smaller subsystems of equations , which can be solved separately ; e.g. eq . ( [ eq : dcff ] ) simplifies to . the tree - level amplitudes needed to obtain the residues in eq . ( [ eq : residues ] ) are calculated with berends giele recursion relations , adjusted to work for gluons in higher dimensions . for efficiency , currents which involve external gluons only are stored for re - use in evaluating other residues . a number of consistency checks were carried out to verify the correctness of the implementation . the gauge invariance of the results and their independence of different choices for loop momenta and dimensionalities and were tested . the coefficients themselves , the pole structure of the amplitudes and , finally , the amplitudes themselves have been compared to analytic results for various and different momentum and polarization configurations of the gluons . agreement within the limits of double - precision calculations has been found with the numbers produced by rocket for the fixed phase - space points given in . in the following , studies are presented that have been conducted to examine the accuracy and time dependence of the numerical calculation . * accuracy of the results . * [ figure [ fig : accs+range ] . caption : one - loop amplitudes for gluons ; see also text and right panel of figure [ fig : speed ] . bottom row : center , double - logarithmic distributions of gram determinants involving sets of external gluons and , right , coefficients for the gluon setups . ]
[ figure [ fig : corrs ] . caption : coefficient ( center ) and single - pole accuracy ( all in double precision ) for the gluon setups of above . ] the quality of the numerical solutions can be estimated by analyzing the logarithmic relative deviations , which are defined as \varepsilon_{\rm dp,sp} \;=\; \log_{10} \frac{|a^{[1](\rm dp,sp)}_{n,\rm c++} - a^{[1](\rm dp,sp)}_{n,\rm anly}|}{|a^{[1](\rm dp,sp)}_{n,\rm anly}|} \,, \quad \varepsilon_{\rm fp} \;=\; \log_{10} \frac{2\,|a^{[1](\rm fp)}_{n,\rm c++}[1] - a^{[1](\rm fp)}_{n,\rm c++}[2]|}{|a^{[1](\rm fp)}_{n,\rm c++}[1]| + |a^{[1](\rm fp)}_{n,\rm c++}[2]|} \,, where the analytically known pole structures of the one - loop amplitudes are taken as reference for double ( dp ) and single ( sp ) poles , while for finite parts ( fp ) , two independent solutions are compared with each other . figure [ fig : accs+range ] shows the distributions together with the number of generated phase - space points for various . the top row of numbers in the plots displays the means of the distributions . all results have been obtained for the same cuts on external gluons as reported in ; for the effect of tighter cuts , see figure [ fig : speed ] ( right ) .
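the deviation measures defined above are simple to evaluate ; for completeness , a generic helper of the kind one might use to tabulate them is sketched below ( it is not part of the code under discussion ) .

    import numpy as np

    def log_rel_dev(a_num, a_ref):
        # dp/sp measure: deviation of a numerical value from an analytic reference
        return np.log10(abs(a_num - a_ref) / abs(a_ref))

    def log_rel_dev_pair(a_1, a_2):
        # fp measure: deviation between two independent evaluations of the finite part
        return np.log10(2.0 * abs(a_1 - a_2) / (abs(a_1) + abs(a_2)))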
for , the double - precision evaluation of the coefficients clearly is not sufficient to yield reliable finite - part results . the loss of precision as increases is correlated with the more frequent appearance of small denominators and large numbers characteristic of the calculation . the two rightmost bottom panels of figure [ fig : accs+range ] present two examples by depicting the range of magnitude taken by gram determinants , used to evaluate the vectors of external gluons , and coefficients , which can be of for . double precision will then be insufficient to make cancellations , as they may occur e.g. in eq . ( [ eq : dbar ] ) , manifest . the scatter plots of figure [ fig : corrs ] visualize that ( partial ) correlations exist between the relative accuracy of the finite part and the smallest gram determinant of external - gluon sets , the largest coefficient and the single - pole accuracy . in all cases , the areas of scatters shift with increasing towards worse accuracy and more extreme values of gram determinants and -point coefficients . the calculation may still involve other small denominators , such as the leftover in the subtraction terms of e.g. eq . ( [ eq : dbar ] ) . this leads to instabilities ( even for small coefficients ) , and uncorrelated areas in the plots are then populated . given these correlations , it can be seen that one way of achieving higher accuracy is to compute the coefficients in quadruple precision . this has been pointed out in ref . . * efficiency of the calculation . * as estimated in ref . ( p. 31 ) , the algorithm is expected to have polynomial complexity , see also . the computing time to calculate an ordered -gluon one - loop amplitude should scale as , with the leading term dominating the behaviour for large . the results for with are shown in figure [ fig : speed ] together with the exponents . [ figure [ fig : speed ] . caption : dependence of the computing time ; -exponents ( center ) , see text . times refer to using a 2.20 ghz intel core2 duo processor . tighter gluon cuts were used : , , , denoted as in . the last plot shows the accuracy improvement . ] the plots demonstrate that the c++ algorithm as implemented displays the predicted scaling . it has been shown that the generalized unitarity method of refs . can be implemented in a stable and fast c++ program . the one - loop -gluon amplitude has been used as a testing ground . the next step is to integrate the c++ code in an existing leading - order generator . the resulting upgraded generator will be able to generate both virtual and bremsstrahlung contributions for arbitrarily complex standard model processes . the final step towards a full next - to - leading order monte carlo is to add the necessary phase - space integrations . as was shown in ref . , the virtual matrix elements calculated with the generalized unitarity method can be used in next - to - leading order monte carlo programs . ellis , w.t . giele and z. kunszt , jhep * 0803 * , 003 ( 2008 ) [ arxiv:0708.2398 [ hep - ph ] ] . giele , z. kunszt and k.
melnikov , jhep * 0804 * , 049 ( 2008 ) [ arxiv:0801.2237 [ hep - ph ] ] . based on a presentation given by j. winter : + ` http://ilcagenda.linearcollider.org/contributiondisplay.py?contribid=78&sessionid=18&confid=2628 ` z. bern _ et al . _[ nlo multileg working group ] , arxiv:0803.0494 [ hep - ph ] .z. bern , l.j .dixon , d.c .dunbar and d.a .kosower , nucl .b * 425 * , 217 ( 1994 ) [ arxiv : hep - ph/9403226 ] ; z. bern , l.j .dixon and d.a .kosower , nucl . phys .b * 513 * , 3 ( 1998 ) [ arxiv : hep - ph/9708239 ] ; r. britto , f. cachazo and b. feng , phys .d * 71 * , 025012 ( 2005 ) [ arxiv : hep - th/0410179 ] ; r. britto , f. cachazo and b. feng , nucl .b * 725 * , 275 ( 2005 ) [ arxiv : hep - th/0412103 ] .g. ossola , c.g .papadopoulos and r. pittau , nucl .b * 763 * , 147 ( 2007 ) [ arxiv : hep - ph/0609007 ] .giele and g. zanderighi , jhep * 0806 * , 038 ( 2008 ) [ arxiv:0805.2152 [ hep - ph ] ]. g. ossola , c.g . papadopoulos and r. pittau ,jhep * 0803 * , 042 ( 2008 ) [ arxiv:0711.3596 [ hep - ph ] ] ; + c.f .et al . _ ,d * 78 * , 036003 ( 2008 ) [ arxiv:0803.4180 [ hep - ph ] ] .t. gleisberg and s. hche , jhep * 0812 * , 039 ( 2008 ) [ arxiv:0808.3674 [ hep - ph ] ] .z. bern and d.a .kosower , nucl .b * 362 * , 389 ( 1991 ) .ellis and g. zanderighi , jhep * 0802 * , 002 ( 2008 ) [ arxiv:0712.1851 [ hep - ph ] ] .a. lazopoulos , arxiv:0812.2998 [ hep - ph ] .berends and w.t .giele , nucl .b * 306 * , 759 ( 1988 ) .ellis , k. melnikov and g. zanderighi , arxiv:0901.4101 [ hep - ph ] .
this note reports on an independent implementation of calculating one - loop amplitudes semi - numerically using generalized unitarity techniques . the algorithm implemented in form of a c++ code closely follows the method by ellis , giele , kunszt and melnikov . for the case of gluons , the algorithm is briefly reviewed . double - precision results are presented documenting the accuracy and efficiency of this computation .
decoherence and noisy dynamics in open quantum systems are major hurdles to the realization of working quantum computers and practical quantum communication devices . information encoded in a controlled quantum system will leak into its surrounding environment when there is coupling between the two systems , resulting in shorter qubit lifetimes and lower gate fidelities . it is necessary to characterize these quantum dynamics in order to better understand the sources of system - environment coupling . characterization of the dynamical process can then be used to mitigate sources of noise and improve qubit coherence times . given the density matrix for a -dimensional system , the completely - positive quantum process may be described in terms of its kraus operators or the hermitian process matrix defined with respect to the operator basis . experimental characterization of the process matrix provides a concrete representation of that can be used to study and refine system behavior . in standard quantum process tomography ( sqpt ) , measurements characterizing the state are used to reconstruct the process matrix by inverting eq . ( [ eq : pm ] ) over a complete set of input states . ancilla - assisted process tomography ( aapt ) performs a similar inversion using fewer input states by exploiting correlations between the principal system ( * p * ) and an ancilla system ( * a * ) isolated from the non - trivial quantum process . such a composite process can be written as , where the subscripts indicate which processes occur on each subsystem . in contrast , direct characterization of quantum dynamics ( dcqd ) avoids inverting eq . ( [ eq : pm ] ) by measuring the process elements directly . dcqd techniques have recently been applied to characterize trapped - ion and hyper - entangled photon dynamics . as in aapt , the principal and ancilla subsystems are initially entangled in a probe state before being subjected to non - trivial and trivial quantum processes , respectively . interestingly , dcqd probe states can be described as the codewords of a quantum error correction ( qec ) code . in this framework , a quantum process maps the joint - system probe state either within or outside the codespace . processes mapping the probe state outside the original codespace are detected and characterized by their error syndrome , i.e. , the measured eigenvalues of each qec code generator . syndrome frequencies derived from an ensemble of stabilizer measurements are sufficient to directly characterize the underlying process matrix . the dcqd framework shows that the mathematical tools developed for qec can be leveraged for process characterization . in particular , code design plays an important role in probing the process matrix . recent extensions to dcqd involve generalized characterization codes which also encode logical quantum information . this offers the ability to characterize processes occurring during an arbitrary quantum computation . complementary works , from a qec perspective , have shown that syndrome data generated by error correction protocols can be used for noise parameter estimation ; recent experiments involving stabilizer qec circuits over 9 and 4 qubits are prime candidates for these types of characterization methods .
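as a minimal single - qubit illustration of the process - matrix language used above ( and not of the two - qubit probe construction discussed in this paper ) , the snippet below applies a channel written in the pauli operator basis to a density matrix and converts a list of kraus operators into the corresponding chi matrix ; the normalisation conventions are the usual ones , and the amplitude - damping example at the end is only there to exercise the helpers .

    import numpy as np

    I = np.eye(2); X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
    F = [I, X, Y, Z]                          # operator basis for one qubit

    def apply_chi(chi, rho):
        # E(rho) = sum_mn chi_mn F_m rho F_n^dagger
        out = np.zeros((2, 2), dtype=complex)
        for m in range(4):
            for n in range(4):
                out += chi[m, n] * F[m] @ rho @ F[n].conj().T
        return out

    def kraus_to_chi(kraus_ops):
        # expand each kraus operator in the pauli basis, k = sum_m a_m F_m,
        # so that chi_mn = sum_k a^k_m (a^k_n)^*
        chi = np.zeros((4, 4), dtype=complex)
        for k in kraus_ops:
            a = np.array([np.trace(f.conj().T @ k) / 2.0 for f in F])
            chi += np.outer(a, a.conj())
        return chi

    # example: amplitude damping with decay probability g
    g = 0.3
    k0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - g)]])
    k1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
    rho_excited = np.array([[0.0, 0.0], [0.0, 1.0]])
    print(np.round(apply_chi(kraus_to_chi([k0, k1]), rho_excited).real, 3))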
despite advances within the dcqd paradigm ,a significant and persistent drawback in all existing schemes is the requirement that the ancilla system be perfectly noiseless .this assumption is necessary for correctly interpreting the measured syndromes in the context of eq .( [ eq : pm ] ) .noisy ancilla lead to spurious data that corrupts the process tomography and adds errors to the process matrix .however , noise is certainly present in any realistic experiment and it is important to ask how qec - based process characterization can be extended to include noisy ancilla .we address the use of noisy ancilla for process characterization by introducing a new class of quantum process codes that remove the requirement of noise - free ancilla .our approach is based on concatenated encoding of the ancilla system using a second quantum error detection code .we show that by monitoring syndrome values of the composite code measurements of the principal system that have been corrupted by ancilla noise can be filtered out . by removing measurements attributed to noisy ancilla, we generate a higher fidelity construction of the process matrix than possible with direct characterization alone .we also examine the question of efficiency , which we define as a tradeoff between the syndromes collected and the accuracy of the process characterization .the remainder of the paper is organized as follows : in sec .[ sec : cdcqd ] we review notation for stabilizer qec codes and outline the conventional dcqd procedure for constructing the process matrix before introducing a concatenated six - qubit code used to characterized the dynamics of a two - qubit principal system in the presence of full system noise . this includes a discussion of how ancilla error detection is used to filter tomographic information prior to characterization . in sec .[ sec : monte - carlo ] we present a numerical case study of an amplitude damping channel on various codes with and without depolarizing noise affecting the ancilla subsystem .we discuss results of our simulation , the degree to which the code faithfully characterizes dynamics on the principal system , and the probability that high weight errors , which can pass through our concatenated error filter thus corrupting the tomographic data , occur in sec .[ sec : analysis ] .our conclusions and discussion appear in sec .[ sec : conclusion ] .an ] code to characterize * p*. however , as discussed in the last section , this code leads to the mistaken interpretation that processes acting on * a * ( qubits 3,4 ) characterize * p * ( qubits 1,2 ) under the interchange as is obvious from the symmetry of the generators . we can remove the syndrome degeneracy by encoding each physical qubit in * a * with a second error detection code .this form of code concatenation enables the detection of processes that occur on only the ancilla .concatenation of the ancilla is also compatible with dcqd , as the first level stabilizers are still used for direct process characterization .a schematic of this process is shown in fig .[ fig : schematic ] .the additional resources required for encoding the ancilla can be managed by adjusting the error detection properties of the second code .we encode the two ancilla qubits from the original characterization code with a ] code .the encoding ] .this group of located processes commutes with the first two generators in eq .( [ eq : s_1 ] ) , so the corresponding syndromes are of the form . 
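checking which multi - qubit pauli operators commute with a given set of stabilizer generators , the bookkeeping behind statements such as `` this group of located processes commutes with the first two generators '' , is conveniently done in the binary symplectic representation . the generators used in the sketch below are placeholders chosen only to show the mechanics ; they are not the generators of eq . ( [ eq : s_1 ] ) .

    import numpy as np

    def pauli_to_symplectic(p):
        # p: string over {i, x, y, z}; returns binary x- and z-parts
        x = np.array([c in 'xy' for c in p], dtype=int)
        z = np.array([c in 'zy' for c in p], dtype=int)
        return x, z

    def commutes(p, q):
        # two pauli strings commute iff their symplectic product vanishes mod 2
        px, pz = pauli_to_symplectic(p)
        qx, qz = pauli_to_symplectic(q)
        return (np.dot(px, qz) + np.dot(pz, qx)) % 2 == 0

    def syndrome(error, generators):
        # one bit per generator: 0 if the error commutes with it, 1 otherwise
        return tuple(0 if commutes(error, g) else 1 for g in generators)

    # placeholder generators on four qubits (not those of eq. ( [ eq : s_1 ] ))
    print(syndrome('ixii', ['xxxx', 'zzzz']))   # -> (0, 1)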
table [ tab : located_errors ] enumerates the syndromes associated with all located errors .syndromes that begin with 00 " indicate * a * is error free and that the corresponding measurement is accurate for characterizing as described in eqs .[ eq : syn_prob_diag],[eq : syn_prob_coherence],[eq : syn_prob_coherence_2 ] .the code is degenerate for processes .this result is expected since there are syndromes and operator elements in the set .the remaining 192 processes ( } ] . those processes that have an odd weight support on * a * anti - commute with either one or both of the first two generators of .consequently , syndromes that begin with indicate that noise was detected on the ancilla .because these syndromes are degenerate we can not know exactly which process corrupted the ancilla .therefore , this data is _ filtered _ out from characterizing the principal system .simulated ad channel process matrices constructed from ensembles of syndrome measurements according to the dcqd procedure , namely eqs .[ eq : syn_prob_diag],[eq : syn_prob_coherence],[eq : syn_prob_coherence_2 ] .probabilities are determined from monte - carlo events using the numerical parameters , were used for the amplitude damping and depolarizing channels respectively .the real and imaginary parts for constructed with a noiseless ancilla is given in panels ( a , b ) and its difference from the theoretical value appears in panel ( c ) .a noisy ancilla reduces the accuracy of the dcqd procedure as seen in panels ( d - f ) for which a standard ] code .the constructed matrix is characterized by a high degree of fidelity , }(\rho))=.9884 ] .this procedure is repeated for all six syndromes with the eigenvalues for each generator defining a single measured syndrome generator .the probabilities for each clean syndrome , i.e. , 00 " syndromes , with respect to all clean results is used to determine the elements by eqs .( [ eq : syn_prob_diag ] ) , ( [ eq : syn_prob_coherence ] ) , and ( [ eq : syn_prob_coherence_2 ] ) . following this procedurewe perform a monte - carlo simulation of the following three scenarios : ( i ) the ad channel acting on qubit 1 with a noiseless ancilla * a * ( ) , ( ii ) ad on qubit 1 with a noisy * a * implemented with detection being done by a non - concatenated ] code given in eq .[ eq : s_1 ] to determine .the process matrix constructed in scenario ( i ) , i.e. for a noiseless ancilla system , is shown in fig .[ fig : chi ] panels ( a , b ) and is compared to the theoretical result in panel ( c ) .finite sampling causes a small discrepancy between the theoretical and the simulated result as seen in panel ( c ) .next , we simulate case ( ii ) involving the four qubit non - concatenated dcqd code in which every possible syndrome is used to determine the .the absence of a filtering process means that each error occurring on the ancilla system corrupts the syndrome probabilities which , in turn , determine . the simulated matrix is presented in fig . [ fig : chi ] panels ( d , e ) and its distance from the clean is given in panel ( f ) . finally , for case ( iii ), we construct using the ancilla concatenated code ( eq.[eq : s_1 ] ) and present the results in panels ( g - i ) . 
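the filtering step described above is simple post - processing of the recorded syndromes : samples whose ancilla - check bits are not `` 00 '' are discarded , and the relative frequencies of the surviving syndromes are what enter the reconstruction formulas . the sketch below shows that bookkeeping for a stream of syndrome tuples ; placing the ancilla - check bits first in the tuple is a schematic choice , not the paper's ordering .

    from collections import Counter

    def filtered_frequencies(syndromes, n_ancilla_bits=2):
        # syndromes: list of bit tuples, ancilla-check bits assumed first
        syndromes = list(syndromes)
        kept = [s[n_ancilla_bits:] for s in syndromes
                if all(b == 0 for b in s[:n_ancilla_bits])]
        counts = Counter(kept)
        total = len(kept)
        freqs = {s: c / total for s, c in counts.items()} if total else {}
        return freqs, len(syndromes) - total

    # three retained samples and one discarded because an ancilla check fired
    data = [(0, 0, 1, 0, 0, 1), (0, 0, 0, 0, 0, 0),
            (1, 0, 0, 1, 0, 0), (0, 0, 1, 0, 0, 1)]
    print(filtered_frequencies(data))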
inspecting panels ( c ) , ( f ) , and ( i ) , it is obvious that the matrix constructed with the concatenated code is more accurate than the standard non - concatenated code . to quantify this difference we calculate the fidelity , defined as , where \mathcal{e}^{[[4,0,2]]}(\rho) = \sum_{mn} \chi_{mn}^{[[4,0,2]]} f_m \rho f_n^\dagger ; the corresponding value is .9165 . for the concatenated code , failure cannot be due to any weight - one errors on * a * , since they , and their product with all located errors ( ) , are within the detectable error set . the filter does however fail in the presence of some weight - two errors which commute with the first two generators in eq . [ eq : s_1 ] . in the dp channel the probability for a weight error to occur is . notably , the probability for the weight-0 `` error '' to occur is the probability that the identity occurs on each qubit , which appears as the dashed blue line labeled in fig . [ fig : failure ] . ancilla errors outside the correctable error set occur with probability . however , not all of the errors with weight will lead to faulty characterization data , since most of them will still lead to syndromes beginning with one of and therefore do not corrupt the constructed . in these cases we discard the data point because it is ( correctly ) assumed that some error has occurred on * a * . to confirm our estimates , we numerically calculate the failure rate with monte - carlo simulations of the composite depolarizing channel . with a single exception , the failure probability is by definition the number of syndromes beginning with divided by the total number of randomly generated errors . the exception comes from the ambiguity of whether the syndrome should count towards the error rate , as may be generated by the identity mapping or by any element in the normalizer group , i.e. the group of errors commuting with all stabilizer elements . however , we know that the identity operator ( ) occurs with probability , as illustrated by the blue dashed curve in fig . [ fig : failure ] . we determine the rate for erroneous identity - like syndromes to be , the difference between and the numerical rate at which we measure the identity syndrome ( blue circles in fig . [ fig : failure ] ) . in fig . [ fig : failure ] the green circles represent the total failure rate obtained by adding this identity - probability difference to the probability with which all other located syndromes occur . enumerating the number of weight - 2 , 3 , and 4 errors which commute with the first two generators of , and the probability with which they occur , we find the probability of failure to be , where is the probability for an error of weight 2 , 3 , or 4 to occur . this function of is plotted as the dashed green line in fig . [ fig : failure ] and exactly matches our numerical data . the leading term in goes as , in contrast to non - concatenated dcqd schemes whose failure rate goes as , the probability for weight - one errors . this explains the sharp contrast in the constructed process matrices in the second and third rows of fig . [ fig : chi ] . we have introduced a dcqd code that directly characterizes the quantum dynamics of a principal system with assistance from a noisy ancilla system .
within the stabilizer framework , we show that ancilla noise can be distinguished from processes acting on the principal system by using the syndrome value as a filter for non - trivial ancilla processes . for the example of dcqd with a ] code , our numerical simulations found that the process matrix constructed using the six - qubit code shows a marked improvement in fidelity over the non - concatenated approaches . our motivation for encoding the ancilla qubits has been to filter out those measurements that correspond to unwanted data . from this perspective , ancilla encoding represents a form of filtering the dynamics to isolate non - trivial processes acting only on the principal system . we have argued that filtering increases the signal - to - noise ratio for process characterization , as measured by the gain in fidelity of the constructed matrix . of course , the gain for process characterization depends strongly on the details of the ancilla filter . for example , the 6-qubit code introduced here detects only weight - one ancilla errors and their product with located principal - system errors . when higher - weight errors are common , the benefit of this ancilla encoding diminishes , and larger - distance codes are needed to filter higher - weight processes . for example , a distance-4 code that detects all weight-2 ancilla errors will have a filter failure rate that scales as with the ancilla error rate . we could also have used a non - degenerate ] code to encode the ancilla , where each detectable error would have a unique error syndrome . in this case , each syndrome would be used without a filtering procedure . in general , one can improve the signal - to - noise ratio at the expense of additional ancilla qubits and larger codes . additionally , we have taken throughout this work , but we could have used a code satisfying a generalized hamming bound . for example , a non - concatenated code performing error correction on two qubits with another encoded is provided in ref . . equations ( [ eq : syn_prob_diag])-([eq : syn_prob_coherence_2 ] ) are easily generalized using the higher - dimensional projectors , resulting in a code which detects ancilla errors while encoding some non - trivial quantum information . recent progress in realizing stabilizer qec circuits with 9 and 4 qubits on different lattice configurations suggests that the implementation of these ideas should be experimentally feasible in the near future . in particular , it is worth noting that the characterization processes described here and in earlier dcqd work do not require active , feed - forward error correction for purposes of implementation . consequently , the use of qec - based dcqd appears to be a natural waypoint toward the demonstration of error - corrected computation . e. d. and t. s. h. acknowledge support from the intelligence community postdoctoral research fellowship program . this manuscript has been authored by ut - battelle , llc , under contract no . de - ac0500or22725 with the u.s . department of energy . the united states government retains and the publisher , by accepting the article for publication , acknowledges that the united states government retains a non - exclusive , paid - up , irrevocable , world - wide license to publish or reproduce the published form of this manuscript , or allow others to do so , for united states government purposes . the department of energy will provide public access to these results of federally sponsored research in accordance with the doe public access plan . obrien et al . ,
quantum process tomography of a controlled - not gate " , phys .* 93 * , 080502 , ( 2004 ) .r. c. bialczak et .al . , quantum process tomography of a universal entangling gate implemented with josephson phase qubits " , nature physics * 6 * , 409 ( 2010 ) .j. b. altepeter , et .al . , ancilla - assisted quantum process tomography " , phys . rev . lett . * 90 * , 193601 ( 2003 ) .m. mohseni and d. a. lidar , direct characterization of quantum dynamics , " phys .* 97 * 170510 ( 2006 ) .j. combes , c. ferrie , c. cesare , m. tiersch , g. j. milburn , h. j. briegel , and c. m. caves , `` in - situ characterization of quantum devices with error correction '' , arxiv preprint arxiv:1405.5656 ( 2014 ) .
we present methods for the direct characterization of quantum dynamics ( dcqd ) in which both the principal and ancilla systems undergo noisy processes . using a concatenated error detection code , we discriminate between located and unlocated errors on the principal system in what amounts to filtering of ancilla noise . the example of composite noise involving amplitude damping and depolarizing channels is used to demonstrate the method , while we find the rate of noise filtering is more generally dependent on code distance . our results indicate the accuracy of quantum process characterization can be greatly improved while remaining within reach of current experimental capabilities .
it is generally expected that the geometry of compact sources should resemble flat spacetime at large enough distances .this is true not only qualitatively , but through very precise falloff conditions that are built into the formal definition of asymptotic flatness .within this definition , the deviations from flat spacetime are well described ( in the sense of the leading order behaviour of an expansion in powers of `` '' ) by perturbations of the schwarzschild spacetime .such perturbations can in turn be studied through the gauge invariant regge - wheeler and zerilli ( rwz ) formalisms .these allow one to derive , after a spherical harmonic decomposition ( that is , for each `` '' ) , two master evolution equations for the truly gauge invariant , linearized physical degrees of freedom .due to the multipole decomposition , these equations involve only one spatial coordinate ( the radial one ) . the fact that they are one - dimensional implies that these master equations can be solved for very large computational domains with very modest computational resources .on the other hand , three - dimensional cauchy codes are very demanding on their resource requirements . even though mesh refinement can help in this respect, there is a limit to how much one can coarsen the grid in the asymptotic region ; this limit is set by the resolution required to reasonably represent wave propagation in the radiative zone .the use of a grid structure adapted to the physical geometry ( possibly through multiple patches ) can also help , but one still ends up imposing artificial ( even if constraint - preserving ) boundary conditions at the outer boundary .for example , one in general misses information about the geometry outside the domain .two approaches that at the same time provide wave extraction , physically motivated boundary conditions , and extend the computational domain to the radiative regime are cauchy characteristic and cauchy perturbative matching ( cpm ) ; this paper is concerned with the latter .the idea is to match at each timestep a fully non - linear cauchy code to an outer one solving , say , the rwz equations , one might , in principle , try to solve for perturbations of kerr spacetime ( as opposed to schwarzschild ) . ] .this paper is the first one in a series where we plan to revisit cpm in the light of some recent technical developments which we describe below that should help in its implementation . before discussing these developments , we point out and summarize some features present in the original implementation of cpm which we hope to improve on : 1 .the non - linear cauchy equations were solved on a cartesian , cubic grid . 
on the other hand , the rwz equations use a radial coordinate for the spatial dimension . mixing cartesian coordinates with spherical ones leads to the need for interpolation back and forth between both grids . especially when using high order methods , this type of interpolation might not only be complicated but also subtle : depending on how it is done it might introduce noise and sometimes it might even be a source of numerical instabilities . 2 . when injecting data from the perturbative module to the cauchy code and vice versa , boundary conditions were given to all modes , irrespective of their propagation speed and without taking into account the existence of constraint - violating boundary modes . one would intuitively expect a cleaner matching if boundary conditions are given according to the characteristic ( propagation ) speeds of the different modes , and an even cleaner one if constraint preservation is automatically built in during the matching . 3 . low order numerical schemes , which result in slow convergence , were used . in recent years there has been progress on several related fronts that should in principle help in the implementation of cpm . we describe these new results next : 1 . the first improvement is the ability to implement smooth ( in particular , spherical ) boundaries in 3d cauchy evolutions . one important advantage of this is the fact that the matching can be performed to either a perturbative or a characteristic outer module without the need for interpolation between spherical and cartesian grids . in that way a possible source of noise can be eliminated . it is now understood how to match different domains using schemes of arbitrarily high order while at the same time ensuring numerical stability . one way of doing so is through the use of multiple patches ( much in the same way multiple charts are used in differential geometry ) , penalty terms and difference operators satisfying summation by parts ( more about this below ) . this is the approach we shall explore here in the context of cpm . 2 . the second improvement is the construction of constraint - preserving boundary conditions ( cpbc ) . several efforts have by now reported numerically stable ( in the sense of convergent ) implementations of such boundary conditions for the fully three - dimensional non - linear einstein s equations . furthermore , there have been reports in the context of cauchy characteristic matching that significant improvements are obtained when this type of boundary condition is used in the matching . with this in mind , we will test their use in cpm . 3 . lastly , new , accurate and efficient high order difference operators satisfying sbp and associated dissipative operators have been constructed recently . as mentioned above , in conjunction with certain penalty interface treatments such operators guarantee numerical stability when `` glueing '' together different computational grids . we will test these operators in the context of cpm . we have incorporated these techniques , i.e. , high - order summation - by - parts finite differencing and dissipation operators , multiple coordinate patches with penalty inter - patch constraint - preserving boundary conditions and cauchy perturbative matching , into a spherically symmetric numerical code evolving the einstein christoffel form of the field equations , minimally coupled to a klein - gordon field .
using this tool , we can test the performance of the numerical methods in a non - trivial , but easily reproducible and computationally inexpensive , setting , and gain experience for three - dimensional applications . the evolutions presented here model black holes with excision in isolation , under dynamical slicings , and black holes accreting scalar field pulses , which are used as a scalar analogue of gravitational radiation . the plan of this paper is as follows . in section [ sec : equations ] we introduce the continuum system and the numerical techniques we have used . results are presented in section [ sec : results ] , where a black hole is evolved successively from simple settings , i.e. , single - patch , isolated , killing - field adapted gauges , to more involved ones including cauchy perturbative matching and scalar pulse accretion . finally , in section [ sec : conclusions ] , we draw conclusions and give an outlook to future work . in this paper we use the einstein christoffel ( ec ) system in spherical symmetry . we follow the notation of ref . ; in particular , the densitized lapse is denoted by , and is introduced for convenience . here , is the determinant of the 3-metric and the lapse function , while the 4-metric is written as . the vacuum part of the evolution equations in spherical symmetry for this formulation constitutes a symmetric hyperbolic system of six first order differential equations . the vacuum variables are the two metric and extrinsic curvature components , where the extrinsic curvature is written as , plus two auxiliary variables needed to make einstein s equations a first order system . these extra variables are defined as . in addition , a massless klein - gordon field is minimally coupled to the geometry . the scalar field equation is converted into a first order system by introduction of the variables . throughout this paper the prime and dot represent partial derivatives with respect to and , respectively . constraint - preserving boundary conditions are imposed by analyzing the characteristic modes of the main and constraint evolution systems , as discussed in . these modes and their associated characteristic speeds are summarized in table [ ta : modes ] . for illustration purposes , we also show the direction of propagation of each mode in the schwarzschild spacetime in painlev - gullstrand coordinates . [ table [ ta : modes ] . caption : characteristic modes for the einstein - christoffel system in spherical symmetry , and their direction of propagation for a schwarzschild spacetime in painlev - gullstrand coordinates with respect to the vector field . in this gauge , all modes are outflow at the inner boundary , if it is located at , while boundary conditions have to be applied to the incoming modes and at the outer boundary , assuming it is located at . ] from table [ ta : modes ] we notice that for the schwarzschild spacetime there are four ingoing and two outgoing gravitational modes at the outer boundary , and we therefore expect the same count to hold for perturbations thereof . boundary conditions for the incoming modes and are fixed by the cpbc procedure . thus , the only free incoming modes are , which represents a gauge mode , and , which represents a physical one ( see for more details ) . boundary conditions do not need to be specified at the inner boundary if it is located inside the event horizon , because all modes are outflow then .
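the statement that all modes are outflow at an excision boundary placed inside the horizon can be checked with a one - line computation in painlev - gullstrand coordinates , where the radial coordinate light speeds are dr/dt = +1 - sqrt(2m/r) and dr/dt = -1 - sqrt(2m/r) in units with g = c = 1 ; both are negative for r < 2m , so every characteristic points towards decreasing r there . the snippet below evaluates the two speeds at an arbitrary trial excision radius ; it illustrates only the mode - direction argument , not the full set of ec characteristic fields or the radius actually used in the code .

    import numpy as np

    def radial_light_speeds(r, m=1.0):
        # painleve-gullstrand slicing of schwarzschild: lapse 1, shift sqrt(2 m / r)
        beta = np.sqrt(2.0 * m / r)
        return 1.0 - beta, -1.0 - beta   # outgoing and ingoing coordinate speeds

    r_excision = 1.3                     # a trial radius inside the horizon at r = 2 m
    c_out, c_in = radial_light_speeds(r_excision)
    print(c_out < 0.0 and c_in < 0.0)    # True: both light cones tilt inwards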
since there is no radiative degree of freedom in spherically symmetric spacetimes , we use the massless klein - gordon field as a scalar analogue of gravitational waves . to emulate the setup of three - dimensional cauchy perturbative matching as closely as possible ,the scalar wave is evolved on a fixed schwarzschild background in a `` perturbative '' patch defined for , while the fully non - linear einstein s equations are evolved in the `` cauchy '' patch , defined for ] discretized using grid points and a grid spacing if holds for all grid functions . here the scalar product , is defined in terms of its coefficients by in this paper we use the new , efficient , and accurate high order sbp difference operators and associated dissipation operators constructed in ref .thus , as mentioned , this paper also serves as an extra test of those new operators .sbp operators are standard centered finite difference operators in the interior of the domain , but the stencils are modified to yield lower order operators in a region close to the boundaries ( at the boundary itself the stencil is completely one sided ) .there are several types of sbp operators depending on the properties of the norm .the simplest are the diagonal norm operators .they have the advantage that sbp is guaranteed to hold in several dimensions by simply applying the 1d operator along each direction and that numerical stability can be guaranteed by discrete energy estimates in a wide range of cases .the main disadvantage is that the order of the operator at and close to the boundary is only half the interior order .we denote the sbp operators by the interior and boundary order and consider here the diagonal operators , , and . the second type is the restricted full norm operators , where the norm is diagonal at the boundary but has a non - diagonal block in the interior .the advantage of these operators is that the order at and close to the boundary is only one order lower than in the interior , while the disadvantage is that schemes based on these operators may be unstable without the use of dissipation .the restricted full operators we use here are and .if the computational domain is split into several sub - domains ( `` patches '' ) , the discrete representation requires a stable technique to communicate the solution at inter - patch boundaries .we make use of a penalty method , which adds a damping term to the right hand side of the evolution equation at the boundary point in a way which retains linear stability .the method has a free parameter , called in ref . , which determines how much the difference between characteristic fields on either side of the inter - patch boundary is penalized .different values of result in different amount of energy dissipation at the inter - patch boundary and can in principle be chosen so that no energy is dissipated ( this is marginally stable ) .usually the value of is chosen such that some dissipation of energy occurs . with constant values of amount of dissipation decreases with resolution . for the purposes of this paper , a one - dimensional code which supports constraint - preserving boundaries , multiple grid patches , and the use of theaforementioned high order sbp derivative and dissipation operators has been developed .in addition , the code is able to reproduce the ( single grid and without cpm matching ) second - order methods of ref . 
for comparison .we use the methods of lines , and the time integration is performed by a 4th order runge - kutta method .the grid patches that we consider here are not intersecting , but touching .this implies , that each grid function is double valued at the patch interface coordinate since the sbp derivative operators are one sided at the boundaries . to ensure consistency without compromising ( linear ) stability , we make use of a penalty method as described above .constraint - preserving boundary conditions require the calculation of derivatives of certain grid functions at the outer boundary , which we also obtain by using the sbp derivative operators . in a black holesetting , the computational domain next to the excision boundary tends to quickly amplify high frequency noise , which can not be represented accurately on the discrete grid .this is especially true for high order accurate derivative operators .thus , high order simulations of black holes need a certain amount of numerical dissipation to be stable .this dissipation is here provided by the sbp dissipation operators constructed in ref. .the free parameters of these operators , namely the coefficient of the dissipation and the extent of the transition region ( for non - diagonal operators ) , are found by numerical experiment .the numerical experiments presented in this section are set up to systematically test the performance of the new techniques in several situations of increasing difficulty .we start with a series of tests evolving a schwarzschild black hole in painlev - gullstrand coordinates with either a single patch or two patches matched via the penalty method , and compare the performance of all sbp operators with the second order finite - differencing method presented in .next , to test more dynamical situations , a gauge or scalar field signal is injected in a constraint - preserving manner through the outer boundary and accreted onto the black hole . a robust stability testis then performed with noise on the incoming gauge mode , and , with cauchy perturbative matching , on the scalar field mode .finally , a series of high - precision tests involving all techniques are presented , in which a black hole accretes a scalar field injected through the outer boundary of the perturbative patch .these simulations also include a test of the long - term stability and accuracy after accretion and ring - down . in our first series of tests ,a schwarzschild black hole is evolved with high - order accurate sbp operators , constraint - preserving boundary conditions and excision .perturbative matching is not used in these tests . to fix the coordinate system ,we make use of the horizon - penetrating painlev - gullstrand coordinates , and we fix the coordinate functions and of the previous section to their exact values . for all tests ,the inner boundary is located well inside the event horizon ( more precisely , it is located at ) , which implies that all modes are outflow .therefore , no boundary conditions may be applied at the excision boundary .the exact boundary location is not crucial as long as it is inside the apparent horizon , but this choice facilitates comparison with .also , in dynamical situations the apparent horizon location may move significantly on the coordinate grid , and to ensure outflow conditions at the inner boundary some penetration into the black hole is of advantage . to match the setup of , we set the outer boundary to . 
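the summation-by-parts and penalty ingredients described above can be illustrated on a model problem . the sketch below ( python with numpy ; function names are ours ) builds the simplest diagonal-norm sbp first-derivative operator , which is second order in the interior and first order at the two boundary points , and uses it to advect a pulse across two touching patches glued by a penalty ( sat ) term , integrating with a classical fourth-order runge-kutta method of lines . the stability bound quoted for the penalty strength applies to this scalar model problem only , and the operator shown is far simpler than the high-order operators of the cited references .

```python
import numpy as np

def sbp_d1_21(n, h):
    """simplest diagonal-norm sbp first-derivative operator (2nd order in
    the interior, 1st order at the boundary points): D = H^{-1} Q with
    Q + Q^T = diag(-1, 0, ..., 0, 1), so that
    u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0 holds exactly."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    return np.linalg.solve(H, Q), H

# two touching patches [0,1] and [1,2] for the advection equation
# u_t + c u_x = 0 (c > 0), glued with a penalty (sat) term on the
# characteristic field entering the right patch.
c, n = 1.0, 51
h = 1.0 / (n - 1)
D, H = sbp_d1_21(n, h)
xL = np.linspace(0.0, 1.0, n)
uL = np.exp(-40.0 * (xL - 0.5) ** 2)      # pulse starts in the left patch
uR = np.zeros(n)
tau = c        # penalty strength; tau >= c/2 gives an energy estimate here

def rhs(uL, uR):
    duL = -c * (D @ uL)
    duR = -c * (D @ uR)
    duR[0] += tau / H[0, 0] * (uL[-1] - uR[0])  # interface penalty (inflow side)
    duL[0] += tau / H[0, 0] * (0.0 - uL[0])     # physical inflow boundary: no signal
    return duL, duR

dt = 0.4 * h / c
for _ in range(int(round(1.0 / dt))):           # classical rk4, method of lines
    k1 = rhs(uL, uR)
    k2 = rhs(uL + 0.5 * dt * k1[0], uR + 0.5 * dt * k1[1])
    k3 = rhs(uL + 0.5 * dt * k2[0], uR + 0.5 * dt * k2[1])
    k4 = rhs(uL + dt * k3[0], uR + dt * k3[1])
    uL = uL + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    uR = uR + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
print(uR.max())   # the peak of the pulse, which by now sits in the right patch
```

with a constant penalty strength some energy is dissipated at the interface , mirroring the behaviour of the penalty parameter described above .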
to ensure well-posedness of the continuum problem , boundary conditions should be applied to the incoming modes , , , , and . however , three of these modes , namely , , and , can be fixed by the use of constraint-preserving boundary conditions , as discussed in section [ sec : equations ] , which leaves the freely specifiable gauge mode and the scalar field mode . since in these initial tests we are only interested in obtaining a stationary black hole solution , the initial scalar field is set to zero , and the ( scalar field ) characteristic mode is penalized to zero as well . the incoming gauge mode is penalized to the exact solution . an error function can be defined by use of the misner-sharp mass function , where . then , if the black hole mass is denoted by , . since the same error measure and continuum system are used in , we can compare the different discrete approaches directly . [ figure captions ( fragments ) : ( i ) ( upper panel ) and ( lower panel ) ; the result from the method presented in ref . is denoted by `` second order '' , while the new results are marked by the sbp derivative and dissipation operators used ; the high-order operators display superior performance already at the lowest resolution ; the computational domain . ( ii ) and , using the sbp operator ; the large scalar field amplitude leads to a significant increase in the black hole mass . ( iii ) norm of the hamiltonian constraint over time for the accretion of a strong scalar field pulse onto a schwarzschild black hole , with resolutions ( upper and lower panels , respectively ) ; the graph denoted by `` second order '' is obtained with the method presented in , and the others are obtained using the corresponding sbp operators . ( iv ) as ( iii ) , but evolved for 10,000 m to demonstrate the long-term behaviour after accretion of the pulse . ] for resolutions and , the time evolution of the apparent horizon is shown in figure [ fig : ah_mass ] . the scalar pulse leads to a significant increase in the black hole mass by a factor of after the pulse is inside the black hole . larger amplitudes are not obtainable with the simple gauge prescription used here , but a horizon-freezing gauge condition could improve on this result . as a replacement for the misner-sharp error measure , we plot the norm of the hamiltonian constraint over time in figure [ fig : scalar_pulse_res_20 ] . it is apparent that the high-order operators are again stable and more accurate than the second-order operator . the graphs indicate a growth of the constraint near , but a long-term evolution with , shown in figure [ fig : scalar_pulse_res_20_longterm ] , demonstrates that the system settles down to stability after the accretion . the term _ robust stability test _ typically refers to the discrete stability of a numerical system in response to random perturbations .
in this case , we will use the same system as in section [ sec : bh_pg_two_patches ] , but impose random noise on the incoming gauge mode with a certain amplitude . to test the discrete stability of the evolution system , we chose a large range of amplitudes from to . random perturbations of the latter amplitude are significant for a non-linear system . [ figure captions ( fragments ) : ( i ) m is a numerical artefact , which converges away with resolution ; the inset shows that the evolution obtained with the operator is not unstable , but only significantly less accurate . ( ii ) to ( iv ) three further figures with the caption `` , but for a resolution of '' , at three different resolutions . ] the advantages of using high-order methods are made evident in figures [ fig : cpm_gw_accretion_ah_mass_res_10 ] , [ fig : cpm_gw_accretion_ah_mass_res_20 ] , [ fig : cpm_gw_accretion_ah_mass_res_40 ] , and [ fig : cpm_gw_accretion_ah_mass_res_80 ] . in these plots , the performance of the sbp operator , which is sixth order in the interior and fifth order at the boundaries , is compared to that of the operator , which is fourth order in the interior and third order at the boundaries , for different choices of resolution . although both operators show convergence , for a mass increase of about , the operator is unable to reproduce the correct behaviour with reasonable grid resolutions . we consider this specifically important for three-dimensional simulations , where the necessary resources scale with if denotes the number of grid points in each direction . thus , for all simulations requiring a certain amount of precision , high-order operators are an essential requirement . [ figure caption ( fragment ) : is used with a resolution of ; plotted are the apparent horizon mass and the hamiltonian constraint over time ; the apparent horizon mass indicates that the discrete evolution introduces a relative error of about after . ] the long-term evolution of a schwarzschild black hole accreting a wave packet over a cauchy perturbative matching interface and settling down to equilibrium is shown in figure [ fig : cpm_gw_accretion_1e6 ] . the black hole is evolved for with the lowest resolution and the sbp operator . while an evolution of this length might appear to be of only technical interest , we note that modelling phenomena like hypernovae and collapsars in general relativity will require the stable evolution of a black hole for at least several seconds , which is the lower end of timescales associated with the collapsar model of gamma-ray burst engines . for a stellar mass black hole , , that is . to obtain long-term evolutions of compact astrophysical systems in three spatial dimensions , advanced numerical techniques are preferable in that they may improve stability and accuracy of the associated discrete model system .
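as a brief aside on the resolution comparisons quoted above : when error norms are available at resolutions differing by a factor of two , the observed convergence order follows from a standard richardson-type estimate , sketched below ; the numbers in the example are placeholders , not results of this work .

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """richardson-type estimate of the observed convergence order from
    error norms at two resolutions differing by the given factor."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# hypothetical error norms at resolutions h, h/2 and h/4 (placeholders only)
errors = [3.2e-4, 2.1e-5, 1.4e-6]
for coarse, fine in zip(errors, errors[1:]):
    print(f"observed order: {observed_order(coarse, fine):.2f}")
```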
while high accuracy enables efficient use of the available computational resources , well-posedness of the continuum model and numerical stability are requirements which cannot be met by increasing computational power . a number of techniques have been suggested to address these issues : multiple coordinate patches , typically adapted to approximate symmetries of certain solution domains , combined with high-order operators are expected to increase the accuracy of any model of a stellar system . perturbative matching provides an efficient way to accurately model the propagation of gravitational waves to a distant observer , and to yield physical boundary conditions on incoming modes of the cauchy evolution . constraint-preserving boundary conditions isolate the incoming modes on the constraint hypersurface , and , finally , for evolving black holes , an excision boundary is desirable to concentrate on the behaviour of the external spacetime . only recently have the consideration of the well-posedness of the differential system and the application of theorems on discrete stability of the numerical system provided hints as to how to address the outstanding issues . in this paper , we have applied all these techniques to a model system : a spherically symmetric black hole coupled to a massless klein-gordon field . we find that the use of a first-order hyperbolic formulation of einstein s field equations , combined with high-order derivative and dissipation operators with the summation-by-parts property , penalized inter-patch boundary conditions and constraint-preserving outer boundary conditions , leads to a stable and accurate discrete model . specifically , isolated schwarzschild black holes in coordinates adapted to the killing fields , and in coordinates on which a gauge wave is imposed , and schwarzschild black holes accreting scalar wave pulses were taken as typical model systems involving excision . the results show that the introduction of several coordinate patches and of a cauchy perturbative matching interface does not introduce significant artefacts or instabilities . rather , the high-order methods allow the accurate long-term evolution of accreting black holes with excision and cauchy perturbative matching at reasonable resolutions . as an example , we have presented the evolution of such a system with the high-order sbp operator , which , at a resolution of , introduced an error of only after an evolution time of . most systems of interest in general relativistic astrophysics will necessarily require the use of three-dimensional codes . results from a one-dimensional study are useful in that they ( i ) allow one to gain experience in a clean but non-trivial physical system , ( ii ) can be easily reproduced without the need to implement three-dimensional codes with multiple coordinate patches , and ( iii ) allow one to isolate sources of difficulty in the three-dimensional setting more easily . with the promising results from this study , we will , as a next step , apply these techniques to a three-dimensional , general relativistic setting . this research was supported in part by nsf under grants phy050576 and int0204937 and by nasa under grant nasa-nag5-1430 to louisiana state university , and employed the resources of the center for computation and technology at louisiana state university , which is supported by funding from the louisiana legislature's information technology initiative .
during the last few years progress has been made on several fronts making it possible to revisit cauchy perturbative matching ( cpm ) in numerical relativity in a more robust and accurate way . this paper is the first in a series where we plan to analyze cpm in the light of these new results . one of the new developments is an understanding of how to impose constraint - preserving boundary conditions ( cpbc ) ; though most of the related research has been driven by outer boundaries , one can use them for matching interface boundaries as well . another front is related to numerically stable evolutions using multiple patches , which in the context of cpm allows the matching to be performed on a spherical surface , thus avoiding interpolations between cartesian and spherical grids . one way of achieving stability for such schemes of arbitrary high order is through the use of penalty techniques and discrete derivatives satisfying summation by parts ( sbp ) . recently , new , very efficient and high order accurate derivatives satisfying sbp and associated dissipation operators have been constructed . here we start by testing all these techniques applied to cpm in a setting that is simple enough to study all the ingredients in great detail : einstein s equations in spherical symmetry , describing a black hole coupled to a massless scalar field . we show that with the techniques described above , the errors introduced by cauchy perturbative matching are very small , and that very long term and accurate cpm evolutions can be achieved . our tests include the accretion and ring - down phase of a schwarzschild black hole with cpm , where we find that the discrete evolution introduces , with a low spatial resolution of , an error of after an evolution time of . for a black hole of solar mass , this corresponds to approximately , and is therefore at the lower end of timescales discussed e.g. in the collapsar model of gamma - ray burst engines .
during the last decade , answer - set programming ( asp ) has become a well - acknowledged paradigm for declarative problem solving .although there exist efficient solvers ( see , e.g. , for an overview ) and a considerable body of literature concerning the theoretical foundations of asp , comparably little effort has been spent on methods to support the development of asp programs .especially novice programmers , tempted by the intuitive semantics and expressive power of asp , may get disappointed and discouraged soon when some observed program behaviour diverges from his or her expectations . unlike for other programming languages like java or c++, there is currently little support for _ debugging _ a program in asp , i.e. , methods to _ explain _ and _ localise _ unexpected observations .this is a clear shortcoming of asp and work in this direction has already started .most of the current debugging approaches for asp rely on declarative strategies , focusing on _ conceptual errors _ of programs , i.e. , mismatches between the intended meaning and the actual meaning of a program .in fact , an elegant realisation of declarative debugging is to use asp itself to debug programs in asp .this has been put forth , e.g. , in the approaches of and .while the former uses a `` tagging '' method to decompose a program and applying various debugging queries , the latter is based on a meta - programming technique , i.e. , using a program over a meta - language to manipulate a program over an object language ( in this case , both the meta - language and the object language are instances of asp ) .such techniques have the obvious benefits of allowing ( i ) to use reliable state - of - the - art asp solvers as back - end reasoning engines and ( ii ) to stay within the same paradigm for both the programming and debugging process .indeed , both approaches are realised by the system ` spock ` .however , like most other asp debugging proposals , ` spock ` can deal only with propositional programs which is clearly a limiting factor as far as practical applications are concerned . 
in this paper, we present a debugging method for non - ground programs following the methodology of the meta - programming approach of for propositional programs .that is to say , we deal with the problem of finding reasons why some interpretation is _ not _ an answer set of a given program .this is addressed by referring to a model - theoretic characterisation of answer sets due to : an interpretation is not an answer set of a program iff ( i ) some rule in is not classically satisfied by or ( ii ) contains some loop of that is unfounded by with respect to .intuitively , item ( ii ) states that some atoms in are not justified by in the sense that no rules in can derive them or that some atoms are in only because they are derived by a set of rules in a circular way like the _ ouroboros _ , the ancient symbol of a dragon biting its own tail that represents cyclicality and eternity .this characterisation seems to be quite natural and intuitive for _ explaining _ why some interpretation is not an answer set .furthermore , a particular benefit is that it can ease the subsequent _ localisation _ of errors since the witnesses why an interpretation is not an answer set , like rules which are not satisfied , unfounded atoms , or cyclic rules responsible for unfounded loops , can be located in the program or the interpretation .although , at first glance , one may be inclined to directly apply the original approach of to programs with variables by simply grounding them in a preprocessing step , one problem in such an endeavour is that then it is not immediate clear how to relate explanations for the propositional program to the non - ground program .the more severe problem , however , is that the grounding step requires exponential space and time with respect to the size of the problem instance which yields a mismatch of the overall complexity as checking whether an interpretation is an answer set of some ( non - ground ) program is complete for , and thus the complementary problem why some interpretation is not an answer set is complete for method to decide this problem accounts for this complexity bound and avoids exponential space requirements .indeed , we devise a _ uniform _ encoding of our basic debugging problem in terms of a _fixed _ disjunctive logic program and an efficient reification of a problem instance as a set of facts , where is the program to be debugged and is the interpretation under consideration .explanations why is not an answer set of are then obtained by the answer sets of .we stress that the definition of is non - trivial : while the meta - program in the approach of for debugging propositional disjunctive programs could be achieved in terms of a normal non - ground program , _ by uniformly encoding a property , we reach the very limits of disjunctive asp _ and have to rely on advanced saturation techniques that inherently require disjunctions in rule heads . currently , our approach handles disjunctive logic programs with constraints , integer arithmetic , comparison predicates , and strong negation , thus covering a practically relevant program class .further language constructs , in particular aggregates and weak constraints , are left for future work .we deal with _ disjunctive logic programs _ which are finite sets of rules of form where , `` '' denotes _default negation _ , and all are literals over a function - free first - order language . a literal is an atom possibly preceded by the _ strong negation _symbol . 
in the sequel, we assume that will be implicitly defined by the considered programs . for a rule as above, we define the _ head _ of as , the _ positive body _ as , and the _ negative body _ as .if , is a _fact _ ; if contains no disjunction , is _ normal _ ; and if and , is a _constraint_. for facts , we will omit the symbol . a literal , rule , or program is _ ground _ if it contains no variables .furthermore , a program is normal if all rules in it are normal . finally , we allow arithmetic and comparison predicate symbols , , , , , , , and in programs , but these may appear only positively in rule bodies .let be a set of constants .substitution over _ is a function assigning each variable an element of .we denote by the result of applying to an expression .the _ grounding _ of a program relative to its herbrand universe , denoted by , is defined as usual . an _ interpretation _ ( over some language ) is a finite and consistent set of ground literals ( over ) that does not contain any arithmetic or comparison predicates .recall that consistency means that , for any atom .the satisfaction relation , , between and a ground atom , a literal , a rule , a set of literals , or a program is defined in the usual manner .note that the presence of arithmetic and comparison operators implies that the domain of our language will normally include natural numbers as well as a linear ordering , , for evaluating the comparison relations ( which coincides with the usual ordering in case of constants which are natural numbers ) . for any ground program and any interpretation , the _ reduct _ , , of with respect to is defined as .an interpretation is an _ answer set _ of a program iff is a minimal model of .we will base our subsequent elaboration on an alternative characterisation of answer sets following , described next . given a program ,the _ positive dependency graph _ is a directed graph , where ( i ) equals the herbrand base of the considered language and ( ii ) iff and , for some rule .a non - empty set of ground literals is a _ _ loop _ _ of a program iff , for each pair , there is a path of length greater than or equal to 0 from to in the positive dependency graph of such that each literal in is in .let be a program and and interpretations .then , is _ externally supported by with respect to _ iff there is a rule such that ( i ) and , ( ii ) , ( iii ) , and ( iv ) .intuitively , items ( i)(iii ) express that is supported by with respect to , in the sense that the grounding of contains some rule whose body is satisfied by ( item ( i ) ) and which is able to derive some literal in ( item ( ii ) ) , while all head atoms of not contained in are false under . 
moreover , item ( iv ) ensures that this support is external as it is without reference to the set itself .answer sets are now characterised thus : [ prop : lee05 ] let be a program and an interpretation .then , is an answer set of iff ( i ) and ( ii ) every loop of that is contained in is externally supported by with respect to .we actually make mainly use of the complementary relation of external support : following , we call _ unfounded by with respect to _ iff is not externally supported by with respect to .as discussed in the introduction , we view an error as a mismatch between the intended answer sets and the observed actual answer sets of some program .more specifically , our basic debugging question is why a given interpretation is not answer set of some program , and thus we deal with finding explanations for not being an answer set of .proposition [ prop : lee05 ] allows us to distinguish between two kinds of such explanations : ( i ) instantiations of rules in that are not satisfied by and ( ii ) loops of in that are unfounded by with respect to .although our basic debugging question allows for different , multi - faceted , answers , we see two major benefits of referring to this kind of categorisation : first , in view of proposition [ prop : lee05 ] , these kinds of explanations are always sufficient to explain why is not an answer set of , and second , this method provides _ concrete witnesses _ , e.g. , unsatisfied rules or unfounded atoms , that can help to localise the reason for an error in a program or an interpretation in a rather intuitive way .before we introduce the details of our approach , we discuss its virtues compared to a method for debugging non - ground programs which can be obtained using the previous meta - programming technique for propositional programs due to . explaining why some interpretation is not an answer set of some program based on the characterisation of has been dealt with in previous work for debugging propositional disjunctive logic programs . in principle, we could use this method for debugging non - ground programs as well by employing a preparatory grounding step .however , such an undertaking comes at a higher computational cost compared to our approach which respects the inherent complexity of the underlying tasks .we lay down our arguments in what follows . to begin with ,let us recall that defined a fixed normal non - ground program and a mapping from disjunctive propositional programs and interpretations to sets of facts .given a disjunctive program without variables and some interpretation , explanations why is not an answer set of can then be extracted from the answer sets of .such a problem encoding is _ uniform _ in the sense that does not depend on the problem instance determined by and . to find reasons why some interpretation is not an answer set of a non - ground program , the above approach can be used by computing the answer sets of . 
however , in general , the size of is exponential in the size of , and the computation of the answer sets of a ground program requires exponential time with respect to the size of the program , unless the polynomial hierarchy collapses .hence , this outlined approach to compute explanations using a grounding step requires , all in all , _ exponential space _ and _ double - exponential time _with respect to the size of .but this is a mismatch to the inherent complexity of the overall task , as the following result shows : [ prop : complexity ] given a program and an interpretation , deciding whether is not an answer set of is -complete .this property is a consequence of the well - known fact that the complementary problem , i.e. , checking whether some given interpretation is an answer set of some program , is -complete .hence , checking whether an interpretation is not an answer set of some program can be computed in _polynomial space_. our approach takes this complexity property into account .we exploit the expressive power of disjunctive non - ground asp by providing a uniform encoding that avoids both exponential space and double - exponential time requirements : given a program and an interpretation , we define an encoding , where is a fixed disjunctive non - ground program , and is an efficient encoding of and by means of facts . explanations why is not an answer set of are determined by the answer sets of . since is fixed , the grounding of is bounded by a polynomial in the size of and .thus , our approach requires only polynomial space and single - exponential time with respect to and .note that disjunctions can presumably not be avoided in due to the -hardness of deciding whether an interpretation is not an answer set of some program .one may ask , however , whether could be normal in case is normal .we have to answer in the negative : answer - set checking for normal programs is complete for , even if no negation is used or negation is only used in a stratified way .( we recall that is the class of problems that can be decided by a conjunction of an and an independent property . )hence , can not be normal unless .however , one could use two independent normal meta - programs to encode our desired task .a further benefit of debugging a program directly at the non - ground level is that we can immediately relate explanations for errors to first - order expressions in the considered program , e.g. , to rules or literals with variables instead of their ground instantiations . in what follows ,we give details of and and describe their main properties . for realising the encoding for program and interpretation , we rely on a reification of and a reification of .the former is , in turn , constructed from reifications of each individual rule .we introduce the mappings , , and in the following .to begin with , we need unique names for certain language elements . 
by an_ extended predicate symbol _( eps ) we understand a predicate symbol , possibly preceded by the symbol for strong negation .let be an injective _ labelling function _ from the set of program rules , literals , epss , and variables to a set of labels from the symbols in our language .note that we do not need labels for constant symbols since they will serve as unique names for themselves .a single program rule is reified by means of facts according to the following definition .let be a rule .then , the first fact states that label denotes a rule .the next three sets of facts associate labels of the literals in the head , the positive body , and the negative body to the respective parts of .then , each label of some literal in is associated with a label for its eps .the following two sets of facts encode the positions of variables and constants in the literals of the rule .finally , the last set of facts states which variables occur in the rule .a program is encoded as follows : let be a program .then , the first union of facts stem from the reification of the single rules in the program .the remaining facts represent the herbrand universe of the program and associate the epss occurring in the program with their arities .the translation from an interpretation to a set of facts is formalised by the next definition .let be an interpretation .then , the first two sets of facts associate the literals in with their respective labels and epss .the last set of facts reifies the internal structure of the literals occurring in .let be a program and an interpretation .furthermore , let be the the maximum of and the arities of all predicate symbols in .then , .the literals are necessary to add sufficiently many natural numbers to the herbrand universe of to carry out correctly all computations in the subsequent program encodings .note that the size of is always linear in the size of and . ' '' '' { { \gamma_{{\text{\it{unsat}}}}}^{{\text{\it{check}}}}}= \ { & { { \text{\it{unsatisfied } } } } \leftarrow { { \text{\it{satbody } } } } , { \mathrm{not}}\ { { \text{\it{sathead } } } } , \\ & { { \text{\it{satbody } } } } \leftarrow { \mathrm{not}}\ { { \text{\it{unsatposbody } } } } , { \mathrm{not}}\ { { \text{\it{unsatnegbody}}}},\\ & { { \text{\it{sathead } } } } \leftarrow { { \text{\it{guessrule}}}}(r ) , { { \text{\it{head}}}}(r , a ) , { { \text{\it{true}}}}(a),\\ & { { \text{\it{unsatposbody } } } } \leftarrow { { \text{\it{guessrule}}}}(r),{{\text{\it{posbody}}}}(r , a ) , { { \text{\it{false}}}}(a),\\ & \begin{array}{@{}r } { { \text{\it{unsatnegbody } } } } \leftarrow { { \text{\it{guessrule}}}}(r),{{\text{\it{negbody}}}}(r , a ) , { { \text{\it{true}}}}(a ) \}\mbox{. } \end{array } \end{array}\ ] ] + ' '' '' we proceed with the definition of the central meta - program . the complete program consists of more than 160 rules . for space reasons , we only present the relevant parts and omit modules containing simple auxiliary definitions .the full encodings can be found at _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ www.kr.tuwien.ac.at/research/projects/mmdasp/encoding.tar.gz . 
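to make the reification step concrete , the following sketch ( in python ) reifies a single non-ground rule into facts . the predicate names rule , head , posbody , negbody , pred , var and dom follow the meta-program excerpts above , while the facts encoding argument positions ( varocc , constocc ) are only a plausible stand-in , since their exact form is abbreviated in the text , and the literal labels are generated automatically .

```python
# a rule like  p(X) :- r(X,Y), not s(Y).  is passed in with atoms given
# as (predicate, argument-tuple); arguments starting with an upper-case
# letter are variables, everything else is a constant
def reify_rule(label, head, pos, neg):
    facts, lit = [f"rule({label})."], 0
    def emit(part, atom):
        nonlocal lit
        lit += 1
        a = f"{label}_l{lit}"                    # fresh label for this literal
        predicate, args = atom
        facts.append(f"{part}({label},{a}).")    # literal belongs to this rule part
        facts.append(f"pred({a},{predicate}).")  # its extended predicate symbol
        for i, t in enumerate(args, 1):
            if t[:1].isupper():                  # variable occurrence
                facts.append(f"varocc({a},{i},{t.lower()}).")
                facts.append(f"var({label},{t.lower()}).")
            else:                                # constant occurrence
                facts.append(f"constocc({a},{i},{t}).")
                facts.append(f"dom({t}).")
    for atom in head:
        emit("head", atom)
    for atom in pos:
        emit("posbody", atom)
    for atom in neg:
        emit("negbody", atom)
    return facts

print("\n".join(reify_rule("r1",
                           head=[("p", ("X",))],
                           pos=[("r", ("X", "Y"))],
                           neg=[("s", ("Y",))])))
```

the mappings for a whole program and for the interpretation then add facts for the herbrand universe , the predicate arities , and the literals of the interpretation , as described above .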
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the meta - program consists of the following modules : ( i ) , related to unsatisfied rules , ( ii ) , related to loops , ( iii ) , for testing unfoundedness of loops , and ( iv ) , integrating parts ( i)(iii ) for performing the overall test of whether a given interpretation is not an answer set of a given program .we first introduce the program module to identify unsatisfied rules .[ def : unsat ] by we understand the program , where and are given in figure [ fig : unsat ] , and defines the auxiliary predicates and . intuitively , for a program and an interpretation , guesses a rule , represented by predicate , and a substitution , represented by , and defines that holds if .module ( omitted for space reasons ) defines the auxiliary predicates and such that holds if , for some literal , and holds if .module has the following central property : [ thm : unsat ] let be a program and an interpretation .then , iff some answer set of contains .more specifically , for each rule with , for some substitution over the herbrand universe of , has an answer set such that ( i ) and ( ii ) iff .we next define module for identifying loops of a program .[ def : loop ] by we understand the program , where and are given in figure [ fig : loop ] , and defines the auxiliary predicates and . ' '' '' { { \gamma_{{\text{\it{loop}}}}}^{{\text{\it{check}}}}}= \ { & \begin{array}[t]{@{}l@{}l } { { \text{\it{inruleset}}}}(n , r ) \vee { { \text{\it{outruleset}}}}(n , r ) \leftarrow\ & 1 \leq n , n \leq s , { { \text{\it{loopsz}}}}(s ) , { { \text{\it{rule}}}}(r ) , \\ & { { \text{\it{natnumber}}}}(n ) , \end{array}\\ & { { \text{\it{somerule}}}}(n ) \leftarrow { { \text{\it{inruleset}}}}(n , r ) , \\ & \leftarrow { \mathrm{not}}\ { { \text{\it{somerule}}}}(n ) , 1 \leq n , n \leq s , { { \text{\it{loopsz}}}}(s ) , { { \text{\it{rule}}}}(r ) , { { \text{\it{natnumber}}}}(n ) , \\ & \leftarrow { { \text{\it{inruleset}}}}(n , r_{1 } ) , { { \text{\it{inruleset}}}}(n , r_{2 } ) , r_{1 } \neq r_{2 } , \\ & \leftarrow { { \text{\it{inruleset}}}}(n_{1},r_{1 } ) , { { \text{\it{inruleset}}}}(n_{2},r_{2 } ) , n_{1 } \leq n_{2 } , r_{1 } > r_{2 } , \\ & \begin{array}[t]{@{}l@{}l } { { \text{\it{loopsubst}}}}(n , x , c ) \vee { { \text{\it{nloopsubst}}}}(n , x , c ) \leftarrow\ & { { \text{\it{var}}}}(r , x),{{\text{\it{dom}}}}(c),\\ & { { \text{\it{inruleset}}}}(n , r),\end{array}\\ & { { \text{\it{loopassigned}}}}(n , x ) \leftarrow { { \text{\it{loopsubst}}}}(n , x , c ) , \\ & \leftarrow { \mathrm{not}}\ { { \text{\it{loopassigned}}}}(n , x ) , { { \text{\it{inruleset}}}}(n , r ) , { { \text{\it{var}}}}(r , x ) , \\ & \leftarrow { { \text{\it{loopsubst}}}}(n , x , c_{1 } ) , { { \text{\it{loopsubst}}}}(n , x , c_{2 } ) , c_{1 } \neq c_{2 } , \\&{{\text{\it{isloop } } } } \leftarrow { \mathrm{not}}\ { { \text{\it{unreachablepair } } } } , inloop(x ) , \\ & { { \text{\it{unreachablepair } } } } \leftarrow { { \text{\it{inloop}}}}(x ) , { { \text{\it{inloop}}}}(y ) , { \mathrm{not}}\ { { \text{\it{path}}}}(x , y ) , \\ & { { \text{\it{path}}}}(x , x ) \leftarrow { { \text{\it{inloop}}}}(x ) , \\ & \begin{array}[t]{@{}l@{}l } { { \text{\it{path}}}}(x , y ) \leftarrow\ & { { \text{\it{inloop}}}}(x ) , { { \text{\it{inloop}}}}(y ) , { { \text{\it{pred}}}}(x , t_{1 } ) , { { \text{\it{pred}}}}(y , t_{2 } ) , { { \text{\it{loopsz}}}}(s ) , \\ & 1 \leq n , n \leq s , { { \text{\it{head}}}}(r , 
h),{{\text{\it{inruleset}}}}(n , r ) , { { \text{\it{posbody}}}}(r , b ) , \\&{{\text{\it{pred}}}}(h , t_{1 } ) , { { \text{\it{pred}}}}(b , t_{2 } ) , { \mathrm{not}}\ { { \text{\it{differseq}}}}(n , x , h ) , \\ & { \mathrm{not}}\ { { \text{\it{differseq}}}}(n , y , b ) , \end{array}\\ & { { \text{\it{path}}}}(x , z ) \leftarrow { { \text{\it{inloop}}}}(x ) , { { \text{\it{inloop}}}}(z ) , { { \text{\it{path}}}}(x , y ) , { { \text{\it{path}}}}(y , z ) \}\mbox{. } \end{array}\ ] ] ' '' '' intuitively , for a program and an interpretation , guesses a non - empty subset of , represented by , as a candidate for a loop , and defines that holds if is a loop of . more specifically , this check is realised as follows .assume contains literals . 1 .guess a set of pairs , where is a rule from and is a substitution over the herbrand universe of .2 . check , for each , whether there is a path in the positive dependency graph of the ground program consisting of rules such that starts with and ends with , and all literals in are in .a path is represented by the binary predicate .module ( again omitted for space reasons ) defines that ( i ) holds if and ( ii ) holds if , where , are literals and is the substitution stemming from a pair in that is associated with an index by .[ thm : loop ] for any program and any interpretation , is a loop of iff , for some answer set of , and .we proceed with module for checking whether some set of ground literals is unfounded by with respect to an interpretation .we later combine this check with to identify unfounded loops , i.e. , we will integrate a loop guess with a check , thus reaching the very limits of disjunctive asp by uniformly encoding a property . ' '' '' { { \gamma_{{\text{\it{unfd}}}}}^{{\text{\it{check}}}}}= \ { & { { \text{\it{unfounded } } } } \leftarrow { { \text{\it{unsupp}}}}(r ) , { { \text{\it{lastr}}}}(r),\\ & { { \text{\it{unsupp}}}}(r ) \leftarrow { { \text{\it{firstr}}}}(r ) , { { \text{\it{unsupprule}}}}(r),\\ & { { \text{\it{unsupp}}}}(r_{2 } ) \leftarrow { { \text{\it{succr}}}}(r_{1},r_{2 } ) , { { \text{\it{unsupp}}}}(r_{1 } ) , { { \text{\it{unsupprule}}}}(r_{2}),\\ & { { \text{\it{saturate } } } } \leftarrow { { \text{\it{unfounded}}}},\\ & { { \text{\it{suppsubst}}}}(x , c ) \leftarrow { { \text{\it{variable}}}}(x),{{\text{\it{dom}}}}(c ) , { { \text{\it{saturate}}}},\\ & { { \text{\it{nsuppsubst}}}}(x , c ) \leftarrow { { \text{\it{variable}}}}(x),{{\text{\it{dom}}}}(c ) , { { \text{\it{saturate}}}}\ \}\cup\\ \{&{{\text{\it{unsupprule}}}}(r ) \leftarrow c_{i}(r ) \mid i \in \{1,\ldots , 5\}\}\mbox{. } \end{array}\ ] ] ' '' '' [ def : unfounded ] by we understand the program , where and are given in figure [ fig : support ] , and defines the auxiliary predicates , , , , , , and .the intuition behind this definition is as follows .consider a program , some set of ground literals , encoded via , and an interpretation .module non - deterministically guesses a binary relation between the variables and the constant symbols in . 
in case this relation is not a function , establishes .module , in turn , encodes whether , for each substitution and each rule , some of the conditions from the definition of being externally supported by is violated .in fact , is derived if some of these conditions is violated .moreover , holds if holds , and saturates the relation defined by predicate if holds .module ( omitted for space reasons ) defines and , which express the immediate successor relation , based on , for the constant symbols and rules in , respectively , as well as the predicates , , , and , which mark the first and the last elements in the order defined by and , respectively .moreover , the module defines predicates , expressing failure of one of the conditions for being externally supported by with respect to . the rough idea behind the encoded saturation technique is to search , via , for counterexample substitutions that witness that the set of ground literals is _ not _ unfounded .for such a substitution , neither nor can become true which implies that no answer set can contain due to the saturation of and the minimality of answer sets .[ thm : support ] consider a program , an interpretation , and a set of ground literals . then, is unfounded by with respect to iff the unique answer set of contains the literal . given the above defined program modules , we arrive at the uniform encoding of the overall program .let , , and be the programs from definitions [ def : unsat ] , [ def : loop ] , and [ def : unfounded ] , respectively .then , , where module encodes that each answer set of witnesses either or that some loop of is unfounded by with respect to .we finally obtain our main result , which follows essentially from the semantics of module and theorems [ thm : unsat ] , [ thm : loop ] , and [ thm : support ] .[ th : main ] given a program and an interpretation , satisfies the following properties : 1 . has no answer set iff is an answer set of . is not an answer set of iff , for each answer set of , .3 . iff , for some answer set of .moreover , for each rule with , for some substitution over the herbrand universe of , there is some answer set of such that ( a ) and ( b ) iff .a loop is unfounded by with respect to iff some answer set of contains both and , and .in this section , we first describe a simple scenario with different debugging tasks and show how the meta - program defined in the previous section can be used to solve them .afterwards , we discuss some pragmatic aspects relevant for realising a prospective user - friendly debugging system based on our approach .we assume that students have to encode the assignments of papers to members of a program committee ( pc ) based on some bidding information in terms of asp .we consider three cases , each of them illustrates a different kind of debugging problem . in the first case ,an answer set is expected but the program is inconsistent . in the second case ,multiple answer sets are expected but the program yields only one answer set . in the third case, it is expected that a program is inconsistent , but it actually yields some answer set .we illustrate that , in all cases , our approach gives valuable hints how to debug the program in an iterative way .assume that means that is a member of the pc , means that is a paper , and means that pc member bids on paper with value , where is a natural number ranging from to expressing a degree of preference for that paper . 
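for small ground programs , the characterisation that the meta-program encodes can also be checked directly by brute force , which can serve as a sanity check when experimenting with the encoding . the sketch below ( in python ; function names are ours ) follows the definitions of rule satisfaction , loops , and external support given earlier ; it ignores strong negation , arithmetic , and variables , and enumerates loops exhaustively , so it is only meant for toy instances .

```python
from itertools import combinations

# a ground rule is a triple (head, pos, neg) of frozensets of atom names
def satisfies(I, rule):
    head, pos, neg = rule
    body_holds = pos <= I and not (neg & I)
    return (not body_holds) or bool(head & I)

def externally_supported(L, I, prog):
    # some rule has a body that holds in I, derives an atom of L, has no
    # other head atom already true in I, and does not rely on L itself
    # in its positive body
    for head, pos, neg in prog:
        if (head & L) and pos <= I and not (neg & I) \
           and not ((head - L) & I) and not (pos & L):
            return True
    return False

def loops_inside(I, prog):
    # brute-force loop enumeration: every non-empty subset of I whose
    # members are pairwise connected (paths of length >= 0) inside the
    # subset, in the positive dependency graph of the program
    edges = {(h, b) for head, pos, _ in prog for h in head for b in pos}
    def reaches(a, b, allowed):
        seen, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for (u, v) in edges:
                if u == x and v in allowed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return b in seen
    for r in range(1, len(I) + 1):
        for subset in combinations(sorted(I), r):
            S = frozenset(subset)
            if all(reaches(a, b, S) for a in S for b in S):
                yield S

def is_answer_set(I, prog):
    return all(satisfies(I, r) for r in prog) and \
           all(externally_supported(L, I, prog) for L in loops_inside(I, prog))

# example: p :- q.  q :- p.   the set {p, q} is a classical model but an
# unfounded loop, so it is not an answer set; the empty set is one.
prog = [(frozenset({"p"}), frozenset({"q"}), frozenset()),
        (frozenset({"q"}), frozenset({"p"}), frozenset())]
print(is_answer_set(frozenset({"p", "q"}), prog))   # False
print(is_answer_set(frozenset(), prog))             # True
```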
to start with , lucy wants to express that the default bid for a paper is .that is , if a pc member does not bid on a paper , then it is assumed that the pc member bids on that paper per default .first attempt looks as follows : lucy s intention is that is true if pc member bids on paper , and is true if there is no evidence that pc member has bid on that paper . indeed , the unique answer set of is the answer set is indeed as expected : we have that each pc member bids on some paper in and the last rule is inactive .lucy s next step is to delete the fact from us denote the resulting program by .lucy expects that the answer set of contains .however , it turns out that yields no answer set at all ! to find out what went wrong , lucy defines her expected answer set as and inspects the answer sets of .it turns out that one answer set contains the facts and , where is the label for the rule hence , is not satisfied by : is in and thus satisfies the body of , but the head of is not satisfied since does not contain . now that lucy sees that s answer set has to contain , she defines as plus the fact answer sets of reveal that is not an answer set of because the singleton loop is contained in but it is unfounded by with respect to .the reason is clear : the only rule that could support is however , is blocked since contains .lucy concludes that , to make work as expected , must not be contained in the answer set . to achieve this , lucy changes , the only rule with predicate in the head , into the resulting program works as expected and contains in its answer set . the next student who is faced with a mystery is linus .he tried to formalise that each paper is non - deterministically assigned to at least one member of the pc .his program looks as follows : linus expects that the disjunctive rule realises the non - deterministic guess , and then the constraint prunes away all answer set candidates where a paper is not assigned to some pc member .now , poor linus is desperate since the non - deterministic guess seems not to work correctly ; the only answer set of is although linus expected one answer set for each possible assignment . in particular , linus expected to be an answer set as well .hence , linus inspects the answer sets of and learns that the constraint in is not satisfied by .in particular , it is the substitution that maps the variable to and to that is responsible for the unsatisfied constraint , which can be seen from the atoms in each answer set that contains . having this information, linus observes that the constraint in its current form is unsatisfied if some paper is not assigned to _ each _ pc member .however , he intended it to be unsatisfied only when a paper is assigned to _ no _ pc member .hence , he replaces the constraint by the two rules the resulting program yields the nine expected answer sets .meanwhile , peppermint patty encounters a strange problem .her task was to write a program that expresses the following issue : if a pc member bids 0 on some paper , then this means that there is a conflict of interest with respect to and .in any case , there is a conflict of interest if ( co-)authored .a pc member can only be assigned to some paper if there is no conflict of interest with respect to that pc member and that paper .this is peppermint patty s solution : the facts in should model a scenario where a pc member authored a paper and is assigned to that paper . 
according to the specification from above , this should not be allowed .since patty is convinced that her encoding is correct , she expects that has no answer sets .but has the unique answer set peppermint patty finds puzzling is that does not contain any atoms signalling a conflict of interest .hence , she decides to analyse why is not an answer set of . if was correct , then the only reason why is not an answer set of would be that the ( only ) constraint in is unsatisfied .as expected , some answer sets of contain and , where is the label of the constraint in .however , some answer sets contain the atom as well a surprising observation .patty learns , by inspecting the atoms in the respective answer set , that contains the loop which is unfounded by with respect to : seems to be justified only by the literal and vice versa .this should not be the case since contains the rule that should support because all the facts , , and should be contained in .now , the error is obvious : does not contain the fact but order of the arguments was wrong .after peppermint patty fixed that bug , her program is correct . for a debugging system of practical value, certain pragmatic aspects have to be taken into account which we briefly sketch in what follows .to start with , our encodings can be seen as a `` golden design''tailored towards clarity and readability which leaves room for optimisations .related to this issue , solver features like limiting the number of computed answer sets or query answering are needed to avoid unnecessary computation and to limit the amount of information presented to the user .our debugging approach requires information about the intended semantics in form of the interpretation representing a desired answer set .typically , answer sets of programs encoding real - world problems tend to be large which makes it quite cumbersome to manually create interpretations from scratch .it is therefore vital to have convenient means for obtaining an intended answer set in the first place .for this purpose , we envisage a tool - box for managing interpretations that allows for their manipulation and storage . in sucha setting , answer sets of previous versions of the debugged program could be a valuable source of interpretations which are then tailored towards an intended answer set of the current version .in addition to manual adaptations , partial evaluation of the program could significantly accelerate the creation of interpretations .we plan to further investigate these issues and aim at incorporating our debugging technique , along with an interpretation management system as outlined , in an integrated development environment ( ide ) . 
here ,an important issue is to achieve a suitable user interface for highlighting the identified unsatisfied rules and unfounded loops in the source code and for visualising the involved variable substitutions .besides the debugging approach by , as already discussed earlier , other related approaches on debugging include the work of on _ justifications _ for non - ground answer - set programs that can be seen as a complementary approach to ours .their goal is to explain the truth values of literals with respect to a given actual answer set of a program .explanations are provided in terms of _ justifications _ which are labelled graphs whose nodes are truth assignments of possibly default - negated ground atoms .the edges represent positive and negative support relations between these truth assignments such that every path ends in an assignment which is either assumed or known to hold .the authors have also introduced justifications for partial answer sets that emerge during the solving process ( online justifications ) , being represented by three - valued interpretations . the question why atoms are contained or are not contained in an answerset has also been raised by who provide algorithms for recursively computing explanations in terms of satisfied supporting rules .note that these problems can in principle also be handled by our approach , as illustrated in section [ sec : peanuts ] . indeed , consider some program with answer set and suppose we want to know why a certain set of literals is contained in .using our approach , explanations why is not an answer set of will reveal rules which are unsatisfied under but which support literals in under .likewise , we can answer the question why expected atoms are missing in an answer set. aims at finding explanations why some propositional program has no answer sets .his approach is based on finding minimal sets of constraints such that their removal yields consistency .hereby , it is assumed that a program does not involve circular dependencies between literals through an odd number of negations which might also cause inconsistency .finding reasons for program inconsistency can be handled by our approach when an intended answer set is known , as illustrated by program in section [ sec : peanuts ] .otherwise , an interpretation can be chosen from the answer sets resulting from temporarily removing all constraints from the considered program ( providing this yields consistency ) .rewrite a program using some additional control atoms , called _ tags _ , that allow , e.g. , for switching individual rules on or off and for analysing the resulting answer sets .debugging requests in this approach can be posed by adding further rules that can employ tags as well .one such extension allows also for detecting atoms in unfounded loops .however , as opposed to our current approach , the individual loops themselves are not identified .developed a declarative debugging approach for datalog using a classification of error explanations similar to the one by and our current work .their approach is tailored towards query answering and , in contrast to our approach , the language is restricted to stratified datalog .however , provide an implementation that is based on computing a graph that reflects the execution of a query .show how a calculus can be used for debugging first - order theories with inductive definitions in the context of model expansion problems , i.e. 
, problems of finding models of a given theory that expand some given interpretation .the idea is to trace the proof of inconsistency of such an unsatisfiable model expansion problem .the authors provide a system that allows for interactively exploring the proof tree . besides the mentioned approaches which rely on the semantical behaviour of programs , use a translation from logic - program rules to natural language in order to detect program errors more easily .this seems to be a potentially useful feature for an ide as well , especially for novice and non - expert asp programmers .our approach for declaratively debugging non - ground answer - set programs aims at providing intuitive explanations why a given interpretation fails to be an answer set of the program in development . to answer this question , we localise , on the one hand , unsatisfied rules and , on the other hand , loops of the program that are unfounded with respect to the given interpretation . as underlying technique, we use a sophisticated meta - programming method that reflects the complexity of the considered debugging question which resides on the second level of the polynomial hierarchy .typical errors in asp may have quite different reasons and many of them could be avoided rather easily in the first place , e.g. , by a compulsory declaration of predicates , forbidding uneven loops through negation , introducing type checks , or defining program interfaces .we plan to realise these kinds of simple prophylactic techniques for our future ide for asp that will incorporate our current debugging approach . in this context ,courses on logic programming at our institute shall provide a permanent testbed for our techniques .moreover , as part of an ongoing research project on methods and methodologies for developing answer - set programs , we want to put research efforts into methodologies that avoid or minimise debugging needs right from the start . as a next direct step regarding our efforts towards debugging ,we plan to extend our approach to language features like aggregates , function symbols , and optimisation techniques such as minimise - statements or weak constraints .\2005 . debugging logic programs under the answer - set semantics . in _ proceedings of the 3rd workshop on answer set programming : advances in theory and implementation ( asp05 ) , bath , uk , july 27 - 29 , 2005_. ceur workshop proceedings , vol . 142 .ceur-ws.org , aachen , germany . ,gebser , m. , phrer , j. , schaub , t. , tompits , h. , and woltran , s. 2007 .debugging asp programs by means of asp . in _ proceedings of the 9th international conference on logic programming and nonmonotonic reasoning ( lpnmr07 ) , tempe , az ,usa , may 15 - 17 , 2007 _ , c. baral , g. brewka , and j. s. schlipf , eds .lecture notes in computer science , vol . 4483 .springer , berlin - heidelberg , germany , 3143 . , garca - ruiz , y. , and senz - prez , f. 2008 . a theoretical framework for the declarative debugging of datalog programs . in _ revised selected papers of the 3rd international workshop on semantics in data and knowledge bases ( sdkb08 ) , nantes , france , march 29 , 2008 _ , k .-schewe and b. thalheim , eds .lecture notes in computer science , vol . 4925 .springer , berlin - heidelberg , germany , 143159 . ,lin , f. , wang , y. , and zhang , m. 2006 .first - order loop formulas for normal logic programs . 
in _ proceedings of the 10th international conference on principles of knowledge representation and reasoning ( kr06 ) , lake district , uk , june 2 - 5 , 2006 _ , p.doherty , j. mylopoulos , and c. a. welty , eds .aaai press , menlo park , ca , usa , 298307 . , vennekens , j. , bond , s. , gebser , m. , and truszczynski , m. 2009 . the second answer set programming competition . in _ proceedings of the 10th international conference on logic programming and nonmonotonic reasoning ( lpnmr09 ) , potsdam , germany ,september 14 - 18 , 2009 _ , e. erdem , f. lin , and t. schaub , eds .lecture notes in computer science , vol . 5753 .springer , berlin - heidelberg , germany , 637654 . ,faber , w. , fink , m. , pfeifer , g. , and woltran , s. 2004 .complexity of model checking and bounded predicate arities for non - ground answer set programming . in_ proceedings of the 9th international conference on principles of knowledge representation and reasoning ( kr04 ) , whistler , canada , june 2 - 5 , 2004 _ , d. dubois , c. a. welty , and m .- a .williams , eds .aaai press , menlo park , ca , usa , 377387 . , phrer , j. , schaub , t. , and tompits , h. 2008 . a meta - programming technique for debugging answer - set programs . in _ proceedings of the 23rd aaai conference on artificial intelligence ( aaai08 ) , chicago , il , usa ,july 13 - 17 , 2008 _, d. fox and c. p. gomes , eds .aaai press , menlo park , ca , usa , 448453 . , phrer , j. , schaub , t. , tompits , h. , and woltran , s. 2009 .spock : a debugging support tool for logic programs under the answer - set semantics . in_ revised selected papers of the 17th international conference on applications of declarative programming and knowledge management and the 21st workshop on logic programming ( inap07/wlp07 ) , wrzburg , germany , october 4 - 6 , 2007 _, d. seipel , m. hanus , and a. wolf , eds .lecture notes in computer science , vol . 5437 .springer , berlin - heidelberg , germany , 247252 .a model - theoretic counterpart of loop formulas . in _ proceedings of the 19th international joint conference on artificial intelligence ( ijcai05 ) , edinburgh , scotland , uk , july 30-august 5 , 2005 _ , l. p. kaelbling and a. saffiotti , eds . professional book center ,denver , co , usa , 503508 . \2008 . on loop formulas with variables . in _ proceedings of the 11th international conference on principles of knowledge representation and reasoning ( kr08 ) , sydney , australia ,september 16 - 19 , 2008 _ , g. brewka and j. lang , eds .aaai press , menlo park , ca , usa , 444453 . , moseley , e. , and truszczynski , m. 2007 . towards debugging of answer - set programs in the language pspb . in _ proceedings of the 2007 international conference on artificial intelligence ( icai07 ) , volume ii , las vegas , nv , usa , june 25 - 28 , 2007 _ , h. r. arabnia , m. q. yang , and j. y. yang , eds .csrea press , bogart , ga , usa , 635640 . , phrer , j. , and tompits , h. 2010 .methods and methodologies for developing answer - set programs project description . in _technical communications of the 26th international conference on logic programming ( iclp10 ) _ , m. hermenegildo and t. schaub , eds .leibniz international proceedings in informatics ( lipics ) , volschloss dagstuhl leibniz - zentrum fr informatik , dagstuhl ,germany .debugging inconsistent answer set programs . in _ proceedings of the 11th international workshop on non - monotonic reasoning ( nmr06 ) ,lake district , uk , may 30-june 1 , 2006 _ , j. dix and a. 
hunter , eds . institut für informatik , technische universität clausthal , technical report , clausthal , germany , 77 - 83 . , vlaeminck , h. , and denecker , m. 2009 . debugging for model expansion . in _ proceedings of the 25th international conference on logic programming ( iclp09 ) , pasadena , ca , usa , july 14 - 17 , 2009 _ , p. m. hill and d. s. warren , eds . lecture notes in computer science , vol . springer , berlin - heidelberg , germany , 296 - 311 .
an important issue towards a broader acceptance of answer - set programming ( asp ) is the deployment of tools which support the programmer during the coding phase . in particular , methods for _ debugging _ an answer - set program are recognised as a crucial step in this regard . initial work on debugging in asp mainly focused on propositional programs , yet practical debuggers need to handle programs with variables as well . in this paper , we discuss a debugging technique that is directly geared towards non - ground programs . following previous work , we address the central debugging question why some interpretation is not an answer set . the explanations provided by our method are computed by means of a meta - programming technique , using a uniform encoding of a debugging request in terms of asp itself . our method also permits programs containing comparison predicates and integer arithmetics , thus covering a relevant language class commonly supported by all state - of - the - art asp solvers . [ firstpage ] answer - set programming , program analysis , debugging
websites get hacked , whenever they are subject to a vulnerability that is known to the attacker , whenever they can be discovered efficiently , and , whenever the attacker has efficient means of hacking at his disposal .this combination of _ knowledge _ , _ opportunity _ , and _tools _ is quite crucial in shaping the way a group of sites receives unwanted attention by hackers .unfortunately , as an observer we are not privy to either one of these three properties .exploits are first discovered by highly skilled hackers who will use them for their own purposes for an extended period of time , as long as there is an ample supply of hackable sites that can be discovered efficiently .once the _ opportunity _ for such hacks diminishes due to exhausted supply , the appropriate vulnerabilities are often published . the available tools increase and they are added to the repertoire of popular rootkits , at the ready disposal of script kiddies who will attempt to attack the remaining sites . the increased availability of _ tools _ often offsets the reduced _ opportunity _ to yield a secondary wave of infections . in a nutshell ,the above leads to the following statistical assumptions on how vulnerability of sites and the infectious behavior occurs .firstly , sites are only practically vulnerable once a vulnerability is discovered .second , as time passes , the propensity of an attack might increase or not , but changes in attack behavior are discrete rather than gradual .we propose a novel hazard regression model that provides a clear description of the probability a site getting hacked conditioned on its time - varying features , therefore allowing prediction tasks such as finding websites at risk , or inferential tasks such as attributing attacks to certain features as well as identifying change points of the activations of certain features to be conducted with statistical rigor .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + the primary strategy for identifying web - based malware has been to detect an active infection based on features such as small iframes .this approach has been pursued by both academia ( e.g. ) and industry ( e.g. ) . propose a data driven ( linear classification ) approach to identify software packages that were being targeted by attackers to predict the security outcome of websites .hazard regression aims to estimate the chances of survival of a particular event with covariates , as a function of time , such as to better understand the effects of . instead of directly modeling the cumulative survival distribution ,people are interested in the instantaneous rate of dying of any at any given time , i.e. hazard rate .the density of dying at time is given by this leads to a differential equation for the survival probability with solution in our case , death amounts to a site being infected and is the rate at which such an infection occurs .an extremely useful fact of hazard regression is that it is additive .that is , if there are two causes with rates and respectively , it allows us to add the rates and this leads to and .the reason why this is desirable in our case follows from the fact that we may now model as the sum of attacks in a generalized linear form .most hazard regression approaches are based on the cox s propotional hazard model , including parametric models , and nonparametric models with baseline hazard rate unspecified .the proportional assumption may not hold because of the time - varying effect of covariates . 
as a result , time - dependent effect models that allow as functions over time for each feature are proposed . typically , people developed time functions based on fractional polynomials or spline functions . due to the huge parameter space , techniques like reduced rank methods and structured penalized methods are studied . however , those works either search for global smoothing functions or need to pre - specify knots , and typically they work on tens of features . our work , inspired by hacking campaigns , aims to identify discrete attack behaviors . we show the optimal solution is a 0th order spline with knots adaptively chosen over continuous time . the blacklist may not always immediately discover whether a site has been taken over . the probability that this happens in some time interval follows from the survival function above ; given the interval of likely infection for site , we have the following likelihood for the observed data : it remains to specify the hazard function . we do not wish to make strong parametric assumptions , but since is high - dimensional , estimating completely non - parametrically is intractable . we thus make an additive assumption and expand the hazard function into an inner product . this is still an extremely rich class of functions , as can be different over time and is allowed to be any univariate nonnegative function over continuous time . furthermore , based on our intuition , is not a smoothly changing function , but can jump suddenly in response to certain events ; it may not have a small or even bounded lipschitz constant . we therefore constrain the complexity of the function class via total variation ( tv ) . then we can learn the model by solving the variational penalized maximum likelihood problem below , where is the indicator of censoring type for observation , i.e. , interval - censored or right - censored , is the associated censoring time , and is the evaluation of function at time . note that the monotone constraint is optional and can be removed to form a `` non - monotone '' model . the only issue is that this is an infinite - dimensional function optimization problem and could be very hard to solve . the following theorem provides a finite set of simple basis functions that can always represent at least one of the solutions . [ thm : repre ] assume no observations are uncensored , and that the feature vector for each user is piecewise constant over time with a finite number of change points . let be the step function at . then there exists an optimal solution of the above problem such that , for each , for some set that collects all censoring boundaries and places where feature changes , and coefficient vector . the direct consequence of theorem [ thm : repre ] is that we can now represent the piecewise constant functions by vectors and solve the problem as a tractable finite - dimensional fused lasso problem ( with optional isotonic constraints ) , where we abuse notation to denote as evaluations of function at the sorted time points in , and is the discrete difference operator .
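to make the above reduction concrete , the following is a minimal sketch , not the authors ' implementation , of the finite - dimensional problem implied by the representation result : each per - feature hazard function is encoded by its values on the pieces between knots , the cumulative hazard of a site is linear in those values , and the total - variation penalty becomes an l1 norm of first differences ( a fused lasso penalty ) . the knot set , features , censoring pattern , and penalty weight below are synthetic and chosen only for illustration , and a generic convex solver ( cvxpy ) stands in for a dedicated algorithm .

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
knots = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])      # censoring boundaries / feature change points
widths = np.diff(np.append(knots, 3.0))               # length of each constant piece
p = 2                                                  # number of (binary) features
F = cp.Variable((p, len(knots)), nonneg=True)          # f_j evaluated on each piece

X = rng.integers(0, 2, size=(40, p)).astype(float)     # static features per site (synthetic)
X[:, 0] = 1.0                                          # keep one always-on feature so no site is all-zero

def cum_hazard(x, t):
    """Cumulative hazard: sum_j x_ij * sum_k f_j(k) * |piece k intersected with [0, t]|."""
    overlap = np.clip(t - knots, 0.0, widths)
    return x @ (F @ overlap)

loglik = 0
for i, x in enumerate(X):
    if i % 2 == 0:                                     # interval censored: hacked inside (1.0, 2.0]
        a, b = cum_hazard(x, 1.0), cum_hazard(x, 2.0)
        loglik += -a + cp.log(1 - cp.exp(-(b - a)))    # log( S(l) - S(r) ), concave in (a, b)
    else:                                              # right censored: still clean at t = 2.5
        loglik += -cum_hazard(x, 2.5)                  # log S(c)

tv_penalty = cp.sum(cp.abs(cp.diff(F, axis=1)))        # total variation of each f_j (fused lasso)
prob = cp.Problem(cp.Maximize(loglik - 5.0 * tv_penalty))
prob.solve()                                           # needs a solver with exponential-cone support
print(np.round(F.value, 3))
```

the interval - censored terms use log ( s(l ) - s(r ) ) , which is concave in the cumulative hazards , so the whole penalized problem remains convex ; the isotonic ( monotone ) variant would additionally constrain the first differences in each row of the coefficient matrix to be nonnegative .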
we evaluate the out - of - sample predictive power measured by log - likelihood and conduct case studies of the learned latent hazard rate on real data via domain expert 's knowledge . the data used for evaluation was sourced from the work of soska et al . and comprised a collection of interval - censored sites from blacklists and right - censored sites randomly sampled from .com domains . all the sites were drawn from the wayback machine when archives were available at appropriate dates . a total of _ 49,347 _ blacklisted sites were collected between 2011 and 2013 , including a blacklist of predominantly phishing and search redirection attacks . the .com zone files during the same period were randomly sampled and served as right - censored sites , with a total of _ 336,671 _ . we automatically extracted raw tags and attributes from webpages , which served as features ( a total of _ 159,000 _ features ) . these tags and attributes could be like < br > , or < meta > wordpress 2.9.2</meta>. our corpus of features corresponds to a very large set of distinct and diverse software packages or content management systems . the baseline method is the classic cox proportional model ( cox ) , which has been extensively used for hazard regression and survival analysis . it is parametrized based on the features with constant coefficients over time . we use `` '' to denote the model penalized with . an experimental comparison between our models and cox on the aforementioned dataset is reported in figure [ fig : real - convergence ] ( * left * and * right * ) . the cox model clearly underfits the data quite a bit . our `` monotone '' model , which allows only monotone hazard rates , underfits the data a little but still significantly outperforms cox . as expected , the `` non - monotone '' model without any constraint overfits the data severely . the `` + non - monotone '' model , which is well regularized , performs the best . this result clearly shows that the latent hazard rate recovered by our model is much better than that of the cox model . to achieve such accuracy , our model uses only around 3 times the parameter size of the cox model . in this section , we manually inspect the model 's ability to automatically discover known security events . figure [ fig : real - case ] ( * left * ) demonstrates some of the differences between the monotone and non - monotone models by following the hazard assigned to features that correspond to wordpress 3.5.1 . in early 2013 , our dataset recorded a few malicious instances of wordpress 3.5.1 sites ( among some benign ones ) . these initial samples appeared to be part of a small scale test or proof of concept by the adversary to demonstrate their ability to exploit the platform . both models detect these security events and respond by assigning a non - zero hazard . following the small scale test was a lack of activity for a few weeks , during which the non - monotone model relaxes its hazard rate back down to zero , just before an attack campaign on a much larger scale is launched . this example illustrates that once a vulnerability for a software package is known , that package is always at risk , even if it is not actively being exploited .
on the other hand ,the non - monotone model captures the notion that adversaries tend to work in batches or attack campaigns .previous work has shown that it is economically efficient for adversaries to compromise similar sites in large batches , and after a few attack campaigns , most vulnerable websites tend to be ignored .this phenomena is shown in figure [ fig : real - case ] ( * left * ) where wordpress 3.2.1 was attacked in late 2011 and then subsequently ignored with the exception of a few small attacks that were likely orthogonal to the underlying software and any observable content features .it can be observed from figure [ fig : real - case ] ( * right * ) that a number of distinct wordpress distributions experienced a change - point in the summer of 2011 .this phenomina was present in several of the most popular versions of wordpress in the dataset including versions 2.8.5 , 2.9.2 and 3.2.1 .this type of correlation between the hazard of features corresponding to different versions of a software package is expected .this correlation often occurs when adversaries exploit vulnerabilities which are present in multiple versions of a package , or plugins and third party add - ons that share compatibility across the different packages .in this paper , we propose a novel survival analysis - based approach to model the latent process of websites getting hacked over time . the proposed model attempts to solve a variational total variation penalized optimization problem , andwe show that the optimal function can be linearly represented by a set of step functions with the jump points known to as ahead of time .the results suggest that by correctly recovering the latent hazard rate , our model significantly outperforms the classic cox model .compared with known time - dependent hazard regression models , our models work on several orders of larger feature space .most importantly , identifying the changes of each feature s susceptibility over time can help people understand the latent hacking campaigns and leverage these insights to take appropriate actions . in future, further works can be made to study the relations among potential correlated features by investigating the structures ( e.g. low rank ) of the coefficient matrix , or via deeper transformation .on the other hand , the same model ( variants ) can be used in many other settings to study consumer spending behaviors , marriage , animal habits and so on .borgolte , kevin , kruegel , christopher , and vigna , giovanni .delta : automatic identification of unknown web - based infection campaigns . in _acm sigsac conference on computer & communications security _ , pp . 109120 .acm , 2013 .sauerbrei , willi , royston , patrick , and look , maxime . a new proposal for multivariable modelling of time - varying effects in survival data based on fractional polynomial time - transformation . _ biometrical journal _ , 490 (3):0 453473 , 2007 .
in this paper we describe an algorithm for predicting the websites at risk in long - range hacking activity , while jointly inferring the provenance and evolution of vulnerabilities on websites over continuous time . specifically , we use hazard regression with a time - varying additive hazard function parameterized in a generalized linear form . the activation coefficients on each feature are continuous - time functions constrained with a total variation penalty inspired by hacking campaigns . we show that the optimal solution is a 0th order spline with a finite number of adaptively chosen knots , and can be computed efficiently . experiments on real data show that our method significantly outperforms classic methods while providing meaningful interpretability .
an important problem in physics concerns the study of sound .music consists of a complex fourier superposition of sinusoidal waveforms .a person with very good hearing can hear continuous single frequency ( `` monochromatic '' ) musical tones in the range 20 hz to 20 khz .audio cd players can reproduce high fidelity music using a 44 khz sampling rate for two channels of 16 bit audio signals , corresponding to a maximum audible frequency of 22 khz , according to the nyquist sampling theorem . in practice ,band pass or other filters limit the range of frequencies to the audible spectrum referred to above .systematic studies of the amazing complexity of music have focused primarily on using fft- or dft - based spectral techniques that detect power densities in frequency intervals .for example , -type noise in music has received considerable attention .another approach to musical complexity involves studies of the entropy and of the fractal dimension of pitch variations in music .such systematic analyses have shown that music has interesting scaling properties and long - range correlations . however , quantifying the differences between qualitatively different categories of music still remains a challenge . here , we adapt recently developed methods of statistical physics that have found successful application in studying financial time series , dna sequences and heart rate dynamics .specifically , we develop a new adaptation of _ detrended fluctuation analysis _ ( dfa ) to study nonstationary fluctuations in the local variance of time series rather than in the original time series by calculating a function that quantifies correlations on time scale .this method can detect deviations from uniform power law scaling embedded in scale invariant local variance fluctuation patterns .we apply this new method to study correlations in highly nonstationary local variance ( i.e. , loudness ) fluctuations occurring in audio time series .we then study the relationship of such objectively measurable correlations to known subjective , qualitative musical aspects that characterize selected genres of music .we show that the correlation properties of popular music , high art music , and dance music differ quantitatively from each other .the loudness of music perceived by the human auditory system grows as a monotonically increasing function of the average intensity .one typically measures the intensity of sound signals in db ( deci - bells or `` decibels '' ) .hence , one conventionally also measures loudness in db , even though the subjectively perceived loudness scales as a non - linear function of the intensity .the subjective perception of loudness varies according to frequency and depends also on ear sensitivity , which in turn can depend on age , sex , medication , etc ( see , e.g. , refs . ) . for all practical purposes ,however , the objective measurement of sound intensity provides a good means to quantify loudness . in the remainder of this article, we use the term `` loudness '' to refer to the instantaneous value of the running or `` moving '' average of the intensity . an important fact that deserves a detailed explanation concerns how the human ear can not perceive any variation in loudness ( i.e. 
, amplitude modulation ) that occurs at frequencies .humans hear frequencies in the audible range hz khz and therefore do not perceive amplitude modulation or instantaneous intensity fluctuations in this frequency range as variations in loudness , but rather as having constant loudness .we briefly explain this point as follows .we can consider the the human auditory system , in a limiting approximation , as a time - to - frequency transducer that operates in the `` audible '' range of 20 hz 20 khz . any monochromatic signal in this frequency range will lead to the perception of an audible tone of that same frequency or `` pitch .'' a linear combination of such signals can give a number of impressions to the human ear , depending on the exact fourier decomposition of the signal .specifically , a combination of monochromatic signals may sound as having a nontrivial `` timbre , '' and if the signal frequencies have special arithmetic relationships , then they may sound as a `` harmony '' .beats and heterodyning , for two or more closely spaced frequencies , can also arise .most importantly , a linear superposition of monochromatic signals can sound either as having constant loudness , or else as having varying loudness .we discuss this last point in some detail : if a monochromatic carrier signal of frequency becomes amplitude modulated by a modulating signal of frequency , then the fourier decomposition of the modulated signal will include monochromatic sidebands of sum and difference frequencies , but no power at frequency .moreover , amplitude modulation with 20 hz results in sidebands close to the carrier frequency , whereas 20 hz leads to significant changes in the perceived sound timbre , due to the distant sidebands indeed , if 20 hz , the sidebands fall far enough away from the carrier to enable the ear to pick up the sidebands as having distinct frequencies , thereby leading to the perception of a changed timbre . only if 20 hz do the sidebands fall sufficiently close to the carrier to fool the auditory system into perceiving a monochromatic signal of varying loudness .specifically , humans hear 8 hz as a `` tremolo '' ( i.e. , a periodic oscillation in the intensity of the carrier tone ) , whereas for 8 hz 20 hz we perceive a transition from the tremolo effect to the timbre effect ( see refs . for more information ) .the reader should not confuse tremolos with vibratos , which arise from frequency modulation rather than amplitude modulation .we now devise methods suitable for studying the scaling properties of the intensity of music signals over a range of times scales .we begin with selected pieces of music taken from cds and digitize them using 8 bit sampling at khz .since each piece lasts several minutes , therefore , this `` low '' 11 khz bit sampling rate suffices for obtaining excellent statistics .similarly , since we aim not to listen to music , but to study correlations in intensity , 8 bit sound adequately satisfies basic signal - to - noise requirements ( better than ) .we choose 4 min stretches of music , and to each piece of music assign a time series where and represents the sample index ( fig .[ intro](a ) ) .we generate another series defined as the standard deviation of every non - overlapping 110 samples of .the variance ^ 2 $ ] thus represents the average intensity of the sound ( loudness ) over intervals of s ( fig . [ intro](b ) ) . 
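as a concrete illustration of this construction , the sketch below ( a hypothetical stand - in , not the processing chain actually used for the cd excerpts ) builds the loudness series as the standard deviation of every non - overlapping block of 110 samples of an 11 khz signal , i.e. the running intensity on a roughly 10 ms scale ; the amplitude - modulated tone is synthetic .

```python
import numpy as np

# Build a loudness series b(k): the standard deviation of every non-overlapping
# block of 110 samples of a signal sampled at ~11 kHz (about 10 ms per block).
# The synthetic amplitude-modulated tone below only stands in for the 4-minute
# CD excerpts analysed in the text.

fs = 11025                                            # sampling rate in Hz
t = np.arange(4 * 60 * fs) / fs                       # 4 minutes of samples
carrier = np.sin(2 * np.pi * 440.0 * t)               # audible 440 Hz tone
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)    # 2 Hz loudness variation (a tremolo)
s = envelope * carrier

block = 110                                           # ~10 ms at 11 kHz
n_blocks = len(s) // block
b = s[: n_blocks * block].reshape(n_blocks, block).std(axis=1)

# b**2 is the average intensity ("loudness") on ~10 ms intervals
print(b.shape, b[:5])
```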
concerning the choice of the windowing time interval, we have found the exact value of the time interval to have little or no importance ; we have verified that our central results do not depend on the exact value chosen , since we aim to study fluctuations in the intensity of the signal .we have found , e.g. , that using a time interval five times larger , s , equivalent to the minimum audible tone frequency of 20 hz , leads to no significant changes to our main results . in this context , we note that the measurement of the loudness of music has some similarities to the measurement of volatility in financial markets , since in both cases the variance measurement effectively involves a moving window of fixed but arbitrary size .we define the power spectrum of the signal as the modulus squared of the discrete fourier transform of : where represents the frequency measured in hz . at the lowest frequencies ,the spectrum appears distorted by artifacts of the fast fourier transform ( fft ) method .specifically , at small frequencies approaching where represents the fft window size , a spurious contribution arises from the treatment of the data as periodic with period .the last few decades have seen extensive studies of the audio power spectra , considered nowadays well understood ( fig .[ intro](c ) ) .the spectral power in the range hz khz arises due to audible sounds , while lower frequency contributions emerge due to the structure of the music on sub - audible scales larger than 20 s ( see fig .[ intro](c ) ) .since we primarily aim to study loudness fluctuations at these larger time scales , we find it more convenient to study the power spectrum of the series rather than of the series this spectrum allows us to study correlations related to loudness at these higher time scales .however , behaves as a highly nonstationary variable and the power spectrum of nonstationary signals may not converge in a well behaved manner .therefore , conclusions drawn from such spectra may lead to questions about their validity . in order to circumvent these limitations ,we use dfa . like the power spectrum, dfa can measure two - point correlations in time series , however unlike power spectra , dfa also works with nonstationary signals .the dfa method has been systematically compared with other algorithms for measuring fractal correlations in ref . , and refs . contain comprehensive studies of dfa .we use the variant of the dfa method described in ref .we define the net displacement of the sequence by , which can be thought of graphically as a one - dimensional random walk .we divide the sequence into a number of overlapping subsequences of length each shifted with respect to the previous subsequence by a single sample . for each subsequence , we apply linear regression to calculate an interpolated `` detrended '' walk then we define the `` dfa fluctuation '' by where , and the angular brackets denote averaging over all points .we use a moving window to obtain better statistics .we define the dfa exponent by where gives the real time scale measured in seconds .uncorrelated data give rise to as expected from the central limit theorem , while correlated data give rise to specifically , a value corresponds to uncorrelated white noise , corresponds to -type noise with complex nontrivial correlations , and corresponds to trivially correlated brown noise ( integrated white noise ) . 
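the dfa recipe just described can be summarised in a few lines ; the sketch below integrates the mean - removed series , detrends windows of length n by a local linear fit , and reads off the exponent as the slope of log f(n ) versus log n . for brevity the windows here are non - overlapping , whereas the text shifts them by a single sample , and white noise is used as input , for which the exponent should come out near 0.5 .

```python
import numpy as np

def dfa(b, window_sizes):
    """First-order detrended fluctuation analysis with non-overlapping windows."""
    y = np.cumsum(b - b.mean())                       # integrated ("random walk") profile
    F = []
    for n in window_sizes:
        n_win = len(y) // n
        segs = y[: n_win * n].reshape(n_win, n)
        x = np.arange(n)
        resid = []
        for seg in segs:
            coef = np.polyfit(x, seg, 1)              # local linear trend
            resid.append(seg - np.polyval(coef, x))   # detrended walk
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.asarray(F)

# the exponent is the slope of log F(n) versus log n on the scales of interest
rng = np.random.default_rng(1)
b = rng.standard_normal(100_000)                      # white noise -> exponent near 0.5
sizes = np.unique(np.logspace(1, 3.5, 20).astype(int))
F = dfa(b, sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print(f"alpha = {alpha:.2f}")
```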
discuss in further detail the relationship between dfa and the power spectrum .a constant value of indicates stable scaling , while departures indicate loss of uniform power law scaling .we obtain the best statistics by studying time scales that range from s to s , hence we focus on these scales .we have recorded 10 tracks from each of 9 genres : music from the western european classical tradition ( wect ) , north indian hindustani music , javanese gamelan music , brazilian popular music , rock and roll , techno - dance music , new age music , jazz , and modern `` electronic '' forr dance music ( with roots in traditional forr , from northeast brazil ) .we have chosen these genres of music somewhat arbitrarily , noting that our main interest lies not in the music itself but rather in developing quantitative methods of analyzing music that can in principle be applied in future studies systematically to compare and contrast diverse audio signals originating in music .[ spectrum](a ) shows the the power spectrum of the series as noted previously , does not have stationarity and therefore the meaning of such spectra may appear ambiguous .nevertheless , we can observe clear differences in the spectra of each genre of music .[ spectrum](b , c ) show the dfa functions and , respectively .each genre of music has a different `` signature . '' in jazz , javanese music , new age music , hindustani music and brazilian pop , decreases with wect music appears characterized by extremely high in the region of interest from s to s , with lower values for rock and roll .techno - dance and forr music have characteristic patterns marked by `` dips '' near 0.8 s. these characteristics also appear in fig . [ styles ] , which shows for each data set separately .we also compute the average dfa exponent in the region of interest s s for each genre of music ( fig .[ compare ] ) .we emphasize that these values of measure the scaling exponents in the variance hence , loudness fluctuations of the music signals .any conclusions derived from the results presented here must carefully consider this point .javanese gamelan and new age , and to a lesser extent hindustani and wect , have the values closest to , corresponding to the most complex , nontrivial correlations ( -type behavior ) .we note that wect music has the highest value of indicating that loudness fluctuations have the strongest correlations in this genre .hence , from the point of view of loudness level changes , wect music appears the most correlated , and modern electronic forr music the least correlated .none of the results reported here have a direct bearing on harmony , melody or other aspects of music .our results apply only to loudness fluctuations , which can reflect aspects of the rhythm of the music .another observation concerns how the extremely predictable periodic rhythmic structure of techno - dance music and forr shows up as minima in near 0.8 s ( figs .[ spectrum](c ) , [ styles ] ) .this finding suggests that the periodic `` beat '' of the music , considered abstractly as a superposition of periodic trends and the acoustic signal , leads to significant deviations from uniform power law scaling at that time scale .the above results seem to suggest that the qualitative differences between genres well known to music lovers may in fact be quantifiable .for example , wect music , hindustani music and gamelan music , which have the highest average ( suggesting almost perfect scaling behavior ) , usually belong to the general category of high art 
music . on the other end , electronic forr and techno - dance music , where periodic tends dominate , have the lowest average and arguably belong to the category of dance or danceable music .the lower observed in these genres is due to a a bump and horizontal shoulder in the dfa fluctuation fluctation that emerges at time scales corresponding to the pronounced periodic beats ( see figs . [ spectrum](c ) , [ styles ] ) .such genres might have evolved primarily for dancing , rather than for listening .we can speculate from this point of view that jazz , rock and roll , and brazilian popular music may occupy an intermediary position between high art music and dance music : complex enough to listen to , but periodic and rhythmic enough to dance to .finally , we discuss the relevance of these findings to the possible effects of music on the nervous system .studies of heart rate dynamics using the dfa method have shown that healthy individuals have values relatively close to , corresponding to correlations , while subjects with heart disease have higher values ( typically ) that indicate a significant shift towards less complex behavior in heart rate fluctuations , since corresponds to trivially correlated brown noise ( e.g. , see ) . hence , listening to certain kinds of music may conceivably bestow benefits to the health of the listener . the hypothesis that music with confers health benefits still requires systematic testing .for example , the so - called `` mozart effect '' refers to the conjecture that listening to certain types of music may correlate with higher test scores and more generally to intelligence .if ever such findings become substantiated , then a new approach to the study of music ( and perhaps other forms of art ) might become a necessity .we note , however , that the mozart effect has not been legitimately established as a real phenomenon . nevertheless , the results reported here and more importantly , the approach used in obtaining the results point towards the possibility of objectively analyzing subjectively experienced forms of art. such an approach may find relevance in the academic study of music , and of art in general . in summary, we have developed a method to study loudness fluctuations in audio signals taken from music .results obtained using this method show consistent differences between different genres of music . specifically , dance music and high art music appear at the lower and upper endpoints respectively in the range of observed values of , with rock and roll , jazz , and other genres appearing in the middle of the range .we thank ary l. goldberger , yongki lee , m. g. e. da luz , c .- k peng , e. p. raposo , luciano r. da silva and itamar vidal for helpful discussions .we thank cnpq and fapeal for financial support . c. k. peng , s. v. buldyrev , m. simons , h. e. stanley and a. l. goldberger , phys .e 49 ( 1994 ) 1695 .k. hu , plamen ch .ivanov , zhi chen , pedro carpena and h. e. stanley , phys .e 64 ( 2001 ) 011114 .peng , shlomo havlin , h. e. stanley and a. l. goldberger , chaos 5 ( 1995 ) 82 .ivanov , l. a. n. amaral , a. l. goldberger , s. havlin , m. g. rosenblum , h. e. stanley and z. struzik , chaos 11 ( 2001 ) 641 .f. h. rauscher , g. l. shaw and k. n. ky , nature 365 ( 1993 ) 611 .
an important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena . here , we develop a new method to quantify the scaling properties of the local variance of nonstationary time series . we apply this technique to analyze audio signals obtained from selected genres of music . we find quantitative differences in the correlation properties of high art music , popular music , and dance music . we discuss the relevance of these objective findings in relation to the subjective experience of music . , , , and time series analysis , variance fluctuations , music 05.45.t , 05.40.f , 43.75
linear wave propagation in homogeneous porous media saturated with a viscothermal fluid such as ambient air , has been the subject of extensive research .traditionally , for the case when the wavelength is large compared to the pore size , it has been described on the basis of the so - called two - scale asymptotic homogenization theory . for materials with skeleton sufficiently heavy and/or rigid to be motionless , this leads to an effective medium theory in which the medium permittivities are two linear operators , one representing an effective density and the other an effective compressibility , .these operators , while nonlocal in time as a result of delayed responses due to the losses ( temporal dispersion ) , are given as local in space by the homogenization process at the dominant order .in other words , no spatial dispersion arises , so that the response of the material at a given macroscopic point ( physically , an elementary coarse - graining volume ) is determined by the history of the pertinent field variables at the given point but not the neighboring points . as such, this description is a special case not directly applicable to all geometries .it does not describe the situations where structures in the form of helmholtz resonators are presentin this case , if the microgeometry splits in parts with pore sizes sufficiently different to imply different rescalings of the microscopic governing equations in the different parts , a solution can still be written using the principle of asymptotic homogenization . independently , a general nonlocal theory of propagation along a symmetry axis in macroscopically homogeneous materials having arbitrary microgeometry , has been recently proposed by one of the authors .finally , different high - frequencies extensions of the idea of two - scale homogenization have also been introduced recently .all of these extensions lie outside the scope of the present paper , as defined next . ] .nevertheless , most of the materials used in noise control for sound absorption , do not present in their inner structure very different pores sizes . in this paperwe explicitly assume the absence of very different pores sizes , automatically ensuring at long wavelengths the validity of the usual two - scale asymptotic homogenization .it gives us explicit recipes to compute from microgeometry the effective density and compressibility , complex functions of frequency but not of wavenumber , in the above framework . in practice ,the computation is not possible in full detail ; it is however not required to be done to arrive at a relatively precise description . in absence of a complete knowledge of the microgeometry , a widely used semi - phenomenological model which depends on a small set of independently measurable geometrical parameters of the structure ,is given by the well - known formula of johnson joh for the density , and likewise , the champoux - allard or the lafarge formula for the compressibility .these approximate expressions ( denoted ) of the two constitutive functions , essentially are the result of : \1 ) an exact description of the high - frequency limits of the two functions , density and compressibility , in terms of the concepts of ideal - fluid tortuosity and characteristic lengths joh and , the viscous and thermal relaxation processes being ` frozen ' in this limitin the frozen limit the viscous and thermal relaxation processes have no time to develop ; in the relaxed limit they have enough time to fully develop . 
] , \2 ) an exact description of the low - frequency limits , in terms of the concepts of d.c .viscous and thermal permeabilities , and , the viscous and thermal relaxation processes being ` relaxed ' in this limit , and finally , \3 ) an assumption that these frozen and relaxed limits 1 ) and 2 ) are connected in the simplest reasonable manner , by means of the simplest ` relaxation ' functions of frequency having their singularities and zeros lying on the imaginary half - axis in the complex plane ( see appendix a , and or appendix c ) . that the singularities are on a half imaginary axis is a mathematical expression of the fact that we actually restrict to the class of materials allowing to apply the direct two - scale homogenization process. the pure causality condition would only require the singularities to be in a half plane .our stronger assumption may be rephrased physically by saying that the fluid velocity pattern at the pore scale is divergence - free for the purpose of the determination of the density , and the pressure pattern is uniform for the purpose of the determination of the compressibility ; or in essence , that the operators are local operators in space , at long wavelengths , in the geometries we consider .the remaining assumption that the functions are the simplest reasonable ` relaxation ' ones , consists in making the additional but adjacent assumption that the considered geometries manifest a relatively narrow not bimodal , for example distribution of pore sizes .then the distribution of poles on the imaginary axis is simple and the whole pattern of response functions on the real axis is strongly determined by the low- and high - frequency behaviors . in this manner , a simple , resp .viscous and thermal , relaxation - transition description of the density and compressibility functions is obtained , that may be thought to be well - verified in a wide class of materials as long as the wavelengths are large compared to the typical dimensions of the coarse - graining averaging volumes . at this point, we may mention a similar relaxation - transition approach , developed by wilson . in wilson sapproach , the emphasis is not made on the low - frequency and high - frequency limits , but directly on the transition .the quality of the description obtainable with wilson s models is comparable to that of model , but the parameters become purely adjustable parameters not precisely defined in terms of the microgeometry , and not clearly obtainable by non - acoustical means .subsequently , the low - frequency relaxed limit 2 ) was made more precise by pride and lafarge ( the next viscous and nontrivial thermal terms now being exactly described thanks to the introduction of the additional notions of d.c .viscous and thermal tortuosities and ) , resulting in a slightly improved ( denoted ) description of the relaxation transitions of the two functions .notice that some confusions were present in the original works and ( not readily available ) and other subsequent ones ( e.g. ) , so that the presentation we give later on , of the description , may be worthwhile . in ref . , fellah concentrated on the time - domain expression of the high - frequencies asymptotics implied by this modela short , non exhaustive list of works concerned with the time - domain description is : - . 
] .expanding , in the high - frequency limit , the density and compressibility in powers of the inverse stokes number ( defined as the ratio between boundary layer thickness and characteristic pore size ) , and retaining the ( exact ) zero and first order terms and the ( model - dependent ) second order terms , they derived an asymptotic time - domain pressure wave equation . using fractional - derivative and laplace - transform calculus , they were able to solve this equation in elegant manner through the calculation of a corresponding asymptotic green s function of the unbounded medium .recall that a green s function or impulse response , as a function of time , extends and flattens when observed at fixed locations more and more remote from the spatial point where it originates .now , the results of the calculations made in seem to indicate that the terms of second order yield a significant effect on the green s function for resistive porous materials , and , in addition , that the effect is mainly an effect on the amplitude without noticeable distortion of the time wave pattern ( see figs . 2 and 3 of paper ) . in the present paper ,our purpose is threefold .first , we wish taking advantage of recent clarifications regarding nonlocal ( spatial dispersion ) effects to propose a lucid review of the above local ( frequency dispersion ) theory , stressing the physical hypotheses behind it . by recalling the construction principle of the models and ,we make clear that these models lead to inaccurate descriptions of the second order high - frequency terms .this is highlighted on the simple example of cylindrical circular tubes .this is the matter of section [ be ] .next , we give a very simple analytic derivation of the attenuation - without - distortion finding of ref . summarized above .indeed , generalizing to arbitrary geometry a work done by polack for cylindrical circular tubes , we show the following exact property ( for 1d propagation along one axis ) : and are the two first asymptotic green s functions computed by retaining in the wavenumber the terms up to the first and second order on the inverse stokes number respectively .this is done in section [ sw ] , using a scaled - form of the green s function .the first is only determined by , and , and exactly predicted by the models and .the second demonstrates the mentioned attenuation - without - distortion effect through the exponential .the decay length , however , depends in part of the next frozen parameters , two viscous and thermal purely geometrical characteristic surfaces and involved in the above - mentioned terms : ^{-1 } , \label{decay l}\]]where the fluid constants are , the fluid kinematic viscosity , the adiabatic speed of sound , the prandtl number , and the ratio of heat coefficients .now , because the models and ( especially ) give inaccurate predictions for the terms , this decay length will not be accurately captured by the models .indeed , on the simple example of cylindrical circular tubes it can be checked that is completely misrepresented by model , which gives a negative estimate for it , and still largely underestimated by model , which produces only about of its correct value , due to overestimation of surfaces and . both models and do not describe the correct high - frequency attenuation - without - distortion effect .technical details are given in appendix to lighten the main text . 
finally , our last objective is to show that , in spite of its faulty description of the decay length ( [ decay l ] ) , the model , nevertheless furnishes a relatively precise description of the complete exact green s function , especially when compared to other formulas .the asymptotic analytic green s function provides a reasonable description of the complete green s function with the same number of parameters as , but it uses parameters and which are unknown in general .the merits and drawbacks of the different descriptions are illustrated on the example of circular pores ) computing the exact green s function through fft and the known zwikker and kosten full frequency formulas ; ) computing likewise the model green s functions and through fft ; ) computing the exact asymptotic green s function through ( [ o2]-[decay l ] ) with the known exact values of the involved parameters ( namely , , ) ; and ) computing the ` model asymptotic ' green s functions through ( [ o2]-[decay l ] ) with the model values of the involved parameters ( , , and for and for ) .the results of fft computation are given in section [ fftc ] for the full - frequency models , while the results for asymptotic expansions are given in appendix .we start by recalling the form of the macroscopic equivalent - fluid equations in the frequency domain ( see ): by definition , and are the macroscopic velocity and pressure obtained by coarse - graining ( averaging ) the microscopic fluid velocity and pressure fields , is the time derivative , and are the saturating - fluid density and adiabatic bulk modulus , and and are the dynamic tortuosity and the dynamic compressibility . notice that for simplicity , isotropy or 1d propagation along one principal axis is assumed , so that is a scalar .( [ 11 ] ) make apparent the frequencies but not the wavenumbers associated to the time- and space - variable fields .this is consistent with the hypothesis that the effects of spatial dispersion are negligible .a necessary but not sufficient condition for this , is that the wavelengths are large .the complementary condition which will ensure that spatial locality is verified , is that no very different pore sizes are present , so that in particular , the presence of structures in the form of helmholtz resonators is excluded .indeed , with resonators , the fluid exchanged to and fro , is always associated by mass conservation with a corresponding spatial inhomogeneity in the wavefield .the resonance can not occur without concomitant spatial inhomogeneity in the macroscopic fields .as such , it is an effect of spatial dispersion , see p. 360, no matter how large the wavelengths actually are . now ,if we restrict sufficiently the possible geometries , resonances can not occur and the long - wavelength propagation becomes practically devoid of spatial dispersion is given by the traditional homogenization .the application of the asymptotic two - scale homogenization then yields two different microscopic boundary value formal problems to be solved for determining the two constitutive functions and . 
the first action - response problem specifies the ( velocity ) response of a viscous incompressible fluid subject to an applied time harmonic , spatially uniform bulk force source term : in the pore space ( with a periodic or stationary random field , in periodic or stationary random geometries ) , and satisfying on the pore surface ( no slip condition ) , in this problem , is a dimensionless unit vector in the direction of the applied bulk force , and is the scaled response velocity field ( dimension of ) .it determines the dynamic permeability and dynamic tortuosity introduced in the landmark paper by johnson , by the relations : is the coarse - graining averaging operation in the pore space , and is the porosity ( specific connected pore volume ) . the second action - response problem specifies the ( excess temperature ) response of a thermal fluid subject to an applied time harmonic , spatially uniform pressure source term : in the pore space , and satisfying on the pore surface ( no temperature - jump condition ) is assumed sufficiently important to ensure that the specific fluid - solid mass ratio is small ; then the solid is inert thermally and remains at ambient temperature . ] ) , , is a dimensionless unit constant representing the applied pressure , and is the scaled response excess temperature field ( dimension of ) .it determines the functions and , thermal counterparts of functions and introduced by lafarge lafargeth , lafarge , and then , the effective compressibility , by the following relationships : is the ratio of the specific heats . the above action - response problems and response - factor identifications ( d4 ) and ( [ d4 ] ) are written in blind manner by applying the traditional two - scale asymptotic homogenization .as we have insisted , it makes the important assumption that it is possible to neglect spatial dispersion .this is manifested in the first problem by the force source term which is set to a spatial constant ( simultaneously , the velocity field is represented by a divergence - free field ) , and in the second problem by the pressure source term which is also set to a spatial constant ( gradient - free)in fact , what is really meant here is that in this second problem , locally , the pressure field may be viewed as having a uniform gradient ; but the pressure linear variation around the mean may be omitted in a coarse graining volume , as it has no effect on the mean temperature .as soon as spatial dispersion is introduced , it no longer makes sense to represent the source terms by spatial constants .the spatial inhomogeneity of the source terms must be considered . ] .indeed , we could have written these problems and identifications directly without using the homogenization process , on the sole basis of assuming that spatial dispersion effects are absent .now , as a result of the divergence - free and gradient - free nature of velocity and pressure in the given action - response problems , it can be shown that the functions and have purely imaginary singularities , , and thus are smooth functions on the real axis this is the point 3 ) mentioned in introduction .this crucial point in the construction of the models will be considered at more length in section . 
in absence of a complete information on the microgeometry ,the two above - mentioned microscopic formal boundary - value problems can not entirely be worked out and their exact detailed solutions ( from which and can in principle be extracted by coarse - graining , see eqs.([d4 ] ) and ( [ d4 ] ) ) are missing . nevertheless , in the limit of high - frequencies and low - frequencies , some general characteristics of the solutions and corresponding functions and may be obtained .that may be sketched as follows . in the limit of high frequencies , the viscous and thermal terms and become negligibly small compared to the other terms . the fluid motions , except for vanishingly small viscous and thermal boundary layers at the pore walls , become close to those of an inviscid nonconducting fluid ( , , with the thermal conductivity .we refer to this limit simply as the ` frozen limit ' ( see footnote [ frozen ] ) .accordingly , the quantities and everywhere tend ( except at the pore walls ) to the ` frozen ' fields and verifying the following equations in the pore space , ( with a periodic or stationary random field , in periodic or stationary random geometries ) , verifying at the pore walls ( is the normal on the latter ) , now assume , following johnson and allard , that the pore - surface interface appears locally plane in this asymptotic high - frequency frozen limit .this is in principle an assumption that the viscous and thermal boundary layer thicknesses and eventually become small compared to a characteristic radius of curvature of the pore surface ) become meaningful much more rapidly . ] .then , the functions and expand in integral power series of these thicknesses , which allows us writing that the case of fractal geometry which modifies the exponent in the first correction terms see is excluded by the assumption that the pore walls appear locally flat at the scale of the boundary layer thickness ; the presence of sharp edges which modifies the exponent in the second correction terms see is also excluded by this assumption .] , geometrical parameters and ( dimensionless ) and and ( dimension of ) must be some pore averages constructed with the frozen fields and . the next order geometrical parameters and ( dimension of ) are dependent on other fields involved in the asymptotic limit andhave not been worked out so far .detailed calculations made by johnson and allard , to which we refer the reader , show that the parameters , , and may be written as follows ( see also , in the most detailed manner for , ) : , denotes the pore walls and denotes the connected pore volume .parameter is brown s electric formation factor brown ( represents either an inviscid - fluid scaled velocity or acceleration field for the problem of incompressible inviscid fluid flow , accelerating under the action of external bulk force or applied pressure drop , or else , an electric scaled field for the problem of electrical conduction in the bulk fluid , ) .parameter is an effective pore radius for dynamically connected pore sizes which was introduced by johnson for the problem of electrical conduction in the bulk fluid , perturbed by a thin , different conducting layer at the pore walls .parameter is a length characterizing a simpler effective pore radius twice the fluid - volume to fluid - surface ratio sometimes referred to as the kozeny - carman radius ; allard identified it as the thermal counterpart of parameter . 
an incomplete reasoning to obtain the parameter , leading to an incomplete expression ,is often made , e.g. ; the reasoning inaccuracy outside the viscous boundary layer or the perturbed conducting layer , of a perturbation contribution due to a perturbed ideal - fluid or electrical bulk flow field orthogonal to the leading bulk flow field , and having , contrary to the latter , nonzero normal components at the pore walls .] was clarified and corrected in along a line tentatively sketched in ( appendix d ) . in the opposite relaxed limit of low frequencies , the viscous and thermal terms and eventually become much greater than the inertial terms and and the boundary layers extend to the whole fluid .accordingly , the fields and everywhere tend to the d.c . or ` relaxed ' velocity and excess temperature fields and verifying , in the pore space ( with a periodic or stationary random field , in periodic or stationary random geometries ) , verifying , at the pore walls , in this limit the functions and expand in laurent s series , two first intervening geometrical parameters , ( darcy s viscous permeability ) and ( its thermal counterpart lafargeth , lafarge ) on one hand , both having dimension of , and ( viscous tortuosity ) and ( its thermal counterpart ) on the other hand , both being dimensionless , are pore averages constructed with the relaxed fields and .indeed , simple calculations show that they may be written norr , lafargeth , that , among other things it is possible to show that , whatever the geometry , equalities being satisfied only for the case of aligned cylindrical pores .as explained before , it is only for sufficiently simple geometries that the assumed long - wavelength nature of the considered fields , automatically imply that the microscopic flow - field can be considered divergence - free for the purpose of determining the density ( eq .( [ d2 ] ) ) , and likewise , the excess pressure field can be considered gradient - free for the purpose of determining the compressibility .now , johnson have made the important observation that , because the velocity field is locally divergence - free , the singularities of functions and poles , zeros , and branch points necessarily are purely imaginary ( see appendix a ) .this characteristics of the response functions is explicitly apparent in avellaneda and torquato s solution of principle of the problem ( [ d1]-d3 ) , written in terms of the stokes operator s eigenmodes and relaxation times . for the functions and , it was similarly shown that , because the excess pressure field can be considered gradient - freesee footnote [ 5f ] ] , the singularities of functions and also are purely imaginary ( see appendix c ) . again , this characteristics of the response functions is explicitly apparent in a solution of principle of the problem ( [ d1]-[d3 ] ) , written in terms of the laplace operator s eigenmodes and relaxation times lafargeth , in exactly the same manner as .the wanted functions must therefore have simple smooth behaviors on the real axis .indeed , the drt ( distribution of relaxation times ) formalism of avellaneda and torquato may be used to explicitly show that in the laplace domain ( ) the hydrodynamic drag function is always a strictly decreasing positive function on the real axis , and the same holds true in the same manner for the corresponding thermal function see lafargeth . 
alternatively ,more recently , the strictly decreasing nature of functions and versus real frequency has been shown in elegant manner by a variational formulation making use of the divergence - free nature ( [ d2 ] ) of the microscopic flow fields . finally ,as observed by johnson , due to the special location of their singularities and the adjacent hypothesis of the absence of very different pore sizes , the functions and as well as the functions and , may be seeked as the simplest ones satisfying both the frozen and relaxed limits . to proceed ,let us introduce a stokes number constructed using johnson s concept of dynamically connected pore size , : has proposed the following expression of dynamic tortuosity , a dimensionless shape factor associated to the geometry .this expression is the simplest analytical ansatz that yields the exact first two terms at high frequencies ( eq . ( [ hf ] ) ) and the exact first leading term at low frequencies ( eq . ( [ bf ] ) ) , and automatically satisfies the condition on singularities . as such , and for the general reasons discussed before ,it is expected to provide a reasonable description of the exact function .factors of 8 are introduced for convenience in eqs .( j14-[j15a ] ) , so that for cylindrical circular pores .similarly , to describe the function let us introduce a second stokes number corresponding to thermal effects: ratio is of the order of , not far from unity .proceeding as did johnson , lafarge proposed to write , the thermal counterpart of shape factor ( for cylindrical circular pores ) , giving in turn a definite model for the dynamic compressibility this model is the elaboration of allard s original attempt to transpose johnson s modelling to thermal effects , we refer to the combined modeling of functions and eqs .( [ 12]15 ) as to johnson - allard s ( ) . subsequently , pride , while studying oscillating viscous flow in convergent - divergent channels , found that the simple formula ( [ j14 ] ) may significantly underestimate the imaginary part of dynamic permeability at low frequencies . to remedy this , they proposed different modified formulas .in essence these are formulas capable to account for the exact value of parameter , which , in such channels , may be significantly increased as compared to johnson s .notwithstanding , this parameter is not singled out in pride .its identification by eq .( [ lfp2 ] ) ( see and lafargeth ) shows that it is constructed like the tortuosity , this time for the ` poiseuille'-like velocity pattern .therefore it is a measure of ` disorder ' of the ` poiseuille ' flow , and as such it is increased not only by the convergent - divergent mechanism considered by pride , but also , , by irregularities in the distribution of solid inclusions leading to the existence of privileged flow paths .whatever the cause of the enhanced ` poiseuille disorder ' , a significant increase of factor as compared to johnson s value , will make it more necessary to modify johnson s formula .similar formal considerations hold true also , , for thermal effects and the function . 
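Before turning to the modified formulas, it may help to see the combined johnson-allard modelling in executable form. The sketch below evaluates the standard Johnson-Champoux-Allard-Lafarge expressions for the dynamic visco-inertial and thermal tortuosities, which are equivalent to the johnson and lafarge expressions described above up to notation; the parameter names (porosity `phi`, tortuosity `alpha_inf`, static viscous and thermal permeabilities `k0` and `kp0`, characteristic lengths `Lam` and `Lamp`), the default air constants, and the e^{+j omega t} sign convention are our own assumptions, not taken from the source.

```python
import numpy as np

def ja_dynamic_tortuosities(omega, phi, alpha_inf, k0, kp0, Lam, Lamp,
                            rho0=1.2, eta=1.8e-5, Pr=0.71, gamma=1.4):
    """Visco-inertial and thermal dynamic tortuosities (e^{+j omega t} convention).

    omega must be nonzero (the d.c. limit is handled separately by the caller).
    Returns (alpha, alpha_th, beta), where beta = gamma - (gamma - 1)/alpha_th is
    the normalized dynamic compressibility.
    """
    jw = 1j * omega
    # viscous (Johnson et al.) branch: exact leading low- and high-frequency terms
    Fv = np.sqrt(1.0 + jw * rho0 * (2.0 * alpha_inf * k0 / (phi * Lam))**2 / eta)
    alpha = alpha_inf * (1.0 + eta * phi / (jw * rho0 * k0 * alpha_inf) * Fv)
    # thermal (Lafarge) branch, built in strict analogy with the viscous one
    Ft = np.sqrt(1.0 + jw * rho0 * Pr * (2.0 * kp0 / (phi * Lamp))**2 / eta)
    alpha_th = 1.0 + eta * phi / (jw * rho0 * Pr * kp0) * Ft
    beta = gamma - (gamma - 1.0) / alpha_th
    return alpha, alpha_th, beta
```

The normalized compressibility beta gives the effective bulk modulus as gamma*P0/beta(omega) and the dynamic density follows as rho0*alpha(omega), which is all that is needed later to form the equivalent-fluid wavenumber.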
here also , a significant increase of may result when replacing a regular distribution of solid inclusions by an irregular one ( which leaves unchanged the thermal characteristic length ) .now , among the different modifications proposed in the first was the simplest one , capable to yield the exact first two terms at high and low frequencies , and simultaneously , to automatically fulfil the condition on singularities whatever the values of parameters , , , , and . finally , this formula was expressed by lafarge in terms of the parameter and the same description was then immediately transferable to thermal effects .the corresponding pride - lafarge s ( ) model formulas are: and are as before , and and are the new shape factors given by , these expressions reduce to s by setting and are simple transformations of the latter : they apply the simple group transformation to the basic johnson s square root function .two successive applications of the transformation ( with parameters and ) yield another same transformation ( with parameter ) which preserves both low- and high - frequency limits and . in this waythere is some unicity in the modification , which is not extendible in obvious very simple manner .it must be realized that , as a side result of the strong constraints imposed by the special location of singularities and as long as the geometry is relatively simple , the expressions constructed with one more exact term than s at low frequencies , will also describe in a slightly more accurate manner all of the viscous and thermal relaxation description essentially improves the low frequencies if often made in literature . in reality because of the pole in and the way is related to see ( [ be ] ) , the small departures between and are mainly perceptible in the region of intermediate frequencies . ] .nevertheless , it should not be hoped that it is possible to gain , by means of this description , very meaningful information on the frozen parameters and . if the model expressions ( [ 14]-[15b ] ) were exact , the comparison of their high - frequency expansions with the exact ones ( [ hf ] )would imply hence giving a fixed relation between the set of relaxed and frozen parameters .but the expressions ( [ 14]-[14p ] ) are not exact , and the numbers and in ( [ 15b]-[qpridem ] ) , obtained by low- or high - frequency matching of these non - exact expressions , will not be the same in general .thus we should not hope that the high - frequency expansions obtained with model : anything precise for the terms .the example of cylindrical circular tubes may serve to illustrate this in quantitative manner . for a material with cylindrical circular pores of identical radius ( say for simplicity , all parallel and aligned along the direction of propagation ) , the two boundary value problems eqs .( [ d1][d3 ] ) and ( [ d1][d3 ] ) determining and , are easily entirely stated and solved . 
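A corresponding sketch of the Pride-type correction is given below; the extra low-frequency shape parameter is expressed through the static viscous tortuosity, and the thermal branch would be modified in strict analogy using the static thermal tortuosity and permeability. Whether these shape-factor definitions match eqs. ([14])-([15b]) term for term cannot be checked against the garbled equations, so treat this as an illustrative parametrization that reduces to Johnson's formula when the extra parameter equals one.

```python
import numpy as np

def pride_lafarge_tortuosity(omega, phi, alpha_inf, alpha0, k0, Lam,
                             rho0=1.2, eta=1.8e-5):
    """Pride-type modification of Johnson's formula (e^{+j omega t} convention).

    The extra parameter b is fixed so that alpha(omega -> 0) tends to the
    prescribed static viscous tortuosity alpha0 (which must exceed alpha_inf);
    b = 1, i.e. alpha0 = alpha_inf * (1 + M/4), recovers Johnson's expression.
    """
    M = 8.0 * alpha_inf * k0 / (phi * Lam**2)            # Johnson's shape factor
    b = M * alpha_inf / (4.0 * (alpha0 - alpha_inf))     # low-frequency shape factor
    wbar = 1j * omega * rho0 * k0 * alpha_inf / (eta * phi)   # dimensionless frequency
    F = 1.0 - b + b * np.sqrt(1.0 + M * wbar / (2.0 * b**2))
    return alpha_inf * (1.0 + F / wbar)
```

For cylindrical circular pores the relaxed and frozen parameters entering these expressions take simple exact values, which is what makes the tube geometry a convenient benchmark in what follows.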
in effect , these are nothing but the problems considered by zwikker and kosten in simplifying ( on account of the wide separation between wavelength and tube radius ) the governing equations of the full kirchhoff s theory of sound propagation in a cylindrical circular tube .we may say , in this respect , that the conventional equivalent - fluid theory neglecting spatial dispersion and expressed by eqs .( [ 11][d4 ] ) , is the direct generalization to the case of arbitrary geometry , of zwikker and kosten s classic theory .now , zwikker and kosten s result is that , and express via bessel functions as follows : is the following relaxation function , (smoothly varying , in a sort ` davidson - cole ' pattern , from relaxed value 1 at low frequencies to frozen value 0 at high frequencies ) . using the known small - arguments and large - arguments series and asymptotic expansions of bessel functions ( or kelvin functions ) , it is simple to derive the following low - frequency and high - frequency exact behaviors : low frequencies : high frequencies : comparing these results with the general low- and high - frequency expansions eqs .( [ bf ] ) and ( [ hf ] ) , the following parameters values are easily obtained : from eqs .( [ 15 ] ) and ( [ 15b ] ) , the circular - tube shape factors , and , are identified as: for the modified model ( [ qpridem ] ) the latter are : let us now examine how far the models are consistent with the limits ( lfc-[hfc ] ) and collection of parameters ( [ 19 ] ) . when johnson s values are used in the model expressions ( [ 14]-[14p ] ) , no terms appear in the high - frequency limit ( [ hf ] ) : the characteristic surfaces and predicted by model are given infinite values whatever the geometry ( this can be seen also on eqs .( [ qpridem ] ) ) . simultaneously , in the low - frequency limit , the relaxed parameters , in ( [ bf ] ) are given as , .consider specifically the increments and , the differences {relaxed}-[% \alpha]_{frozen} ] made by the model on these increments , is a = -25% error .when the and values ( [ 19b ] ) ( ( [ 15b ] ) ) are used , the latter increments are exactly described , but a 50% overestimation still exists for the characteristic surfaces : putting the values ( [ 19b ] ) in ( [ qpridem ] ) yields . when the modified and values ( [ 19bm ] ) ( ( [ qpridem ] ) ) are used , the characteristic surfaces are exactly described but there remain now a = + 20% error on the viscous and thermal increments .finally , when the values of and are taken as the arithmetic mean of the and modified ones , simultaneous but reduced errors are made : the characteristic surfaces are given with 20% overestimation ( instead of with ) , while the viscous and thermal increments are given with 9% error ( instead of 20% with modified ) .in general , the and models unsatisfactory account of parameters and , will be at the origin of some errors in the description of the propagation of transients at relatively short times or short distances , whereas the modified model description unsatisfactory account of parameters and , will be at the origin of errors in the description of transients at longer times and distances . 
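Since the cylindrical-pore case serves as the exact benchmark, a compact implementation of the Zwikker and Kosten response functions is worth recording; it uses the e^{+j omega t} convention and our own variable names, with the Bessel functions evaluated at complex arguments.

```python
import numpy as np
from scipy.special import jv  # Bessel J_0 and J_1 of complex argument

def zwikker_kosten(omega, R, rho0=1.2, eta=1.8e-5, Pr=0.71, gamma=1.4):
    """Exact dynamic tortuosity and normalized compressibility of a circular tube
    of radius R (e^{+j omega t} convention); omega must be nonzero."""
    sv = R * np.sqrt(-1j * omega * rho0 / eta)        # viscous argument
    st = sv * np.sqrt(Pr)                             # thermal argument
    Fv = 1.0 - 2.0 * jv(1, sv) / (sv * jv(0, sv))     # relaxation function, viscous
    Ft = 1.0 - 2.0 * jv(1, st) / (st * jv(0, st))     # relaxation function, thermal
    alpha = 1.0 / Fv
    beta = gamma - (gamma - 1.0) * Ft
    return alpha, beta
```

For any of the response models above, the equivalent-fluid wavenumber of eq. ([23]) then follows as k(omega) = (omega/c_a) sqrt(alpha(omega) beta(omega)) with c_a = sqrt(gamma p_0/rho_0); this is the quantity fed to the time-domain comparisons below. (At extremely large arguments the Bessel-function ratio may overflow, in which case the high-frequency asymptotic forms should be substituted.)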
to study thiswe compare in what follows different exact ( fully exact or asymptotically exact ) and modelled time green s functions for the case of cylindrical circular tubes .let us first define the green s functions and write the exact asymptotic results that have been described in introduction and are more detailed in appendix .a general method of defining and calculating a green s function in an infinite medium is by means of the effective frequency - dependent wavenumber in this medium .let us define our green s function ( or impulse response ) as a propagated dirac delta impulsive signal imposed at , or more precisely , as the inverse fourier transform of the propagation transfer function : .\label{green}\ ] ] setting to zero the viscosity and thermal conduction coefficients , no frequency dispersion arises .the wavenumber writes where , defined by the first eq .( [ 1a ] ) , is the frozen speed of sound .the green s function ( [ green ] ) coincides with a dirac delta propagated at this velocity : .setting to nonzero values the viscosity and thermal conduction coefficients the medium wavenumber writes, , \label{23}\]]making apparent a complex function that describes frequency dispersion induced by the viscous and thermal relaxation processes .the green s function ( [ green ] ) writes , \text { . } \label{green2}\ ] ] this green s function now extends and flattens when observed at positions more and more remote from the origin .explicit asymptotic expressions of the green s function defined in this manner will now be obtained by considering high - frequency asymptotic expansions of the wavenumber ( meaning high - frequency asymptotic expansions of the function ) .it will also be convenient to express these functions in scaled form , as functions of a dimensionless delay time and a dimensionless position variable . to take but one example , before being fourier - transformed , the green s function where is say given by the model , may be viewed as a function of : ( _ i _ ) one dimensionless frequency variable , e.g. where is the characteristic viscous relaxation time given by , (hence ) , ( _ ii _ ) one dimensionless position variable , e.g. (hence can be regarded as a time domain stokes number , when replacing by in eq .( 12 ) ) .( _ iii _ ) a number of dimensionless parameters characteristic of the form of the porous space but not of its absolute dimensions ( , , , , , and ) , and , ( _ iv _ ) two dimensionless parameters characteristic of the fluid ( and ) .suppose that the parameters ( _ iii _ ) and ( _ iv _ ) of both the medium and the fluid are held constant .function then reduces to a function of which is parameterized by .there follows that the shape of the corresponding time - domain function will depend , in scaled form , on only , provided the time is counted in a dimensionless manner , for the time elapsed after the first arrival of the signal , function has the dimension of the inverse of time . therefore , a scale - invariant function that depends on the form of the pore space but not on its absolute dimensions: .\label{k2}\]]evidently , function will depend on the description of wavenumber or function : model , model , only some high - frequency terms retained , or complete exact zwikker and kosten s .two successive closed - form analytical exact asymptotic expressions can be derived for the frozen limit , using the series expansion of the wavenumber for the case where the function is represented by its first , and second order terms on . 
in the first case , using ( eqs .( [ hf ] ) and ( [ 23 ] ) ) , we write : , obtain , , \text { . } \label{maino1'}\]]in the second case we write , obtain , yields an attenuation without distortion , corresponding to the asymptotic results indicated in eqs . ([ o2][decay l ] ) .this analytical form of the green s function , exhibiting a superimposed exponential decay , is also a known result of musical acoustics . concerning the relaxed limit ,the purely diffusive green s function can be obtained by assimilating with and with its first leading term in eq .( [ bf ] ) . in this relaxed d.c .approximation , the wavenumber expands as : (we have used the relation ( [ j15a ] ) ) .the corresponding analytical form of the green s function is deduced from the known result given by ref .landau : the notation is used to avoid confusion with eq .( [ tau ] ) : . \label{diff'}\ ] ]green s functions computed numerically by inverse fft ( fast fourier transform ) of the transfer function , can always be obtained as soon as exact or approximate full - frequency expressions of the wavenumber in eq .( [ 23 ] ) are known .these fft computations are to be carefully done , however , as it is detailed below . in appendix , the different results , either analytical or numerical , corresponding to the asymptotic green s functions are summarized and compared with the exact results of the fft . for the case of arbitrary porous media ,approximate models such as or may be used for , leading to the computation , by fft , of approximate green s functions . for the case of simple workable geometries ( e.g. ,cylindrical circular aligned pores of identical radius ) exact expressions are available , leading to the computation , by fft , of exact green s functions . for later use, it will be convenient to distinguish and denote respectively by : the exact ( zwikker and kosten ) green s function ; the full green s function ( ) ; the full green s function ( ) ; the modified full green s function ( defined by s expressions with ) ; the above asymptotic green s function obtained by retaining in the wavenumber the zero and first frozen terms ; the above asymptotic green s function obtained by retaining in the wavenumber the zero , first and second frozen terms .an important point is that the analytical diffusion solution given by eq .( [ diff ] ) is used in our fft computations to improve the accuracy of calculations .indeed , the numerical computation of the full - frequency exact and model green s functions must be done by fft with some precautions : because of the importance of the higher frequencies on the response shape , subtracting the diffusive ( low frequencies ) approximation largely improves the results .thus , instead of calculating directly the inverse fourier transform of the function , we first compute the inverse fourier transform of the function the latter difference being zero at zero frequency , and increasing smoothly then decreasing rapidly toward zero at high frequency , the function is naturally windowed in the frequency domain .and then we use the relation , with as given by the eqs .( [ diff]-[diff ] ) above ( see ref .the validity of the fft computation has been checked for first order asymptotic expression , with an accuracy better than 1% . 
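A compact version of the FFT evaluation just described, with the diffusive part handled analytically, might look as follows. The wavenumber is passed in as a callable so that either the exact Zwikker-Kosten functions or any of the model functions can be used; the identification of the low-frequency diffusivity D (for the tube case, D = k_0 p_0/(eta phi) under our reading of the relaxed limit) is an assumption on our part and should be checked against eq. ([diff]).

```python
import numpy as np

def greens_function_fft(k_of_omega, x, D, dt=1e-6, N=2**16):
    """Time-domain green's function g(x, t) = F^{-1}[exp(-j k(omega) x)], computed
    by inverse real FFT with the diffusive (low-frequency) part subtracted in the
    frequency domain and added back analytically.

    k_of_omega : callable returning the complex wavenumber, e^{+j omega t} convention
    D          : low-frequency diffusivity used for the analytic diffusive part
    dt, N      : sampling interval and number of samples, chosen to resolve the response
    """
    freqs = np.fft.rfftfreq(N, d=dt)                 # 0 ... Nyquist
    omega = 2.0 * np.pi * freqs
    t = np.arange(N) * dt

    # diffusive reference: k_d = sqrt(-j omega / D), whose time response is known
    with np.errstate(divide="ignore", invalid="ignore"):
        k_full = k_of_omega(omega)
        k_diff = np.sqrt(-1j * omega / D)
        H = np.exp(-1j * k_full * x) - np.exp(-1j * k_diff * x)
    H[0] = 0.0                                       # both transfer functions -> 1 at omega = 0

    g_corr = np.fft.irfft(H, n=N) / dt               # inverse transform of the small difference
    g_diff = np.zeros_like(t)
    g_diff[1:] = x / (2.0 * np.sqrt(np.pi * D * t[1:]**3)) * np.exp(-x**2 / (4.0 * D * t[1:]))
    return t, g_diff + g_corr
```

The difference term is naturally windowed, as noted in the text: it vanishes at zero frequency by construction and both transfer functions decay strongly at high frequency, so the inverse FFT acts only on a small, smooth correction.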
for the computation ,the values of the parameters have been chosen close to those of material m1 in ref .the porosity is , the flow resistivity , the permeability , corresponding to a radius .the temperature is , and the prandtl number .the characteristic viscous relaxation time is with respect to time ( both are dimensionless ) .the characteristic viscous relaxation time is defined by eq .( [ theta ] ) .( black ) solid line : zwikker and kosten formulae ( eqs .( [ zw1 ] ) and ( [ zw2 ] ) ) .( blue ) dotted line : pride - lafarge description ( eqs .( [ 14 ] ) to ( [ 15b ] ) ) .( green ) thin solid line : diffusive limit ( eqs .( diff ) and ( [ diff ] ) ) .for very long times , the diffusive limit is reached.,width=377 ] the chosen length is ( the dimensionless length is ) .[ fig1 ] shows the results for the dimensionless green s function , obtained by fft .the pride - lafarge description is compared with the exact zwikker and kosten formula . as expected, the full description is very satisfactory for short times ( high frequencies ) and long times ( low frequencies ) .for very long times , both descriptions reach the diffusive ( analytical ) limit , the poiseuille behavior is reached . with respect to time ( zoom of figure [ fig1 ] ) .( black ) , solid line : zwikker and kosten formula .( blue ) , dotted line : formula .( red ) , dashed line : modified formula .( blue ) , mixed line : formula .( yellow ) , solid , pale line : frozen s green s function see appendix , eqs .( [ maino1][maino2 ] ) . ,width=377 ] in order to emphasize these results , fig .[ fig2 ] shows a zoom of the previous figure , and other approximations have been added .the modified green s function , denoted , can be compared to the green s function . as expected , it is more accurate for short time ( during the signal rise ) than the original function , but it is less accurate for long times , because the choice of the parameters and has been done from the frozen limit instead of the relaxed limit. otherwise the model yields less accurate results than both models .the asymptotic green s function which uses the same number of parameters as performs quite well . on fig .2 it is observed that the models and are in error mainly in the region of the maximum ; one elementary means to improve the description of the bump shape of the response would be to take the direct mean of the two models green s functions .the response modelled in this manner calculated with values of and equal to the mean of their original and modified values .] would be very close to the exact one .additional results given later on ( fig.4 ) show however that this is a favourable situation related to the value and that no very significant improvement of model is to be expected in this manner for other values . .( red ) dashed line : modified green s function ( green ) solid thin line : diffusive green s function .(blue ) dotted line : green function at this scale this curve can not be distinguished from the exact result , see fig .[ fig4 ] for details.,width=377 ] the previous results are concerned with a fixed value of the parameter , a fixed value of the thickness of the material layer . 
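As a usage illustration, the exact and modelled wavenumbers for a single circular tube can be assembled from the blocks above and passed to the FFT routine; all numerical values here are illustrative placeholders, not the material-m1 parameters used in the figures.

```python
import numpy as np

# fluid constants and an illustrative tube radius / propagation distance
rho0, eta, Pr, gamma, P0 = 1.2, 1.8e-5, 0.71, 1.4, 1.013e5
c_a = np.sqrt(gamma * P0 / rho0)
R, x = 1.0e-4, 0.05

# exact parameters of a single straight circular tube
phi, alpha_inf = 1.0, 1.0
k0 = kp0 = R**2 / 8.0
Lam = Lamp = R

def k_exact(omega):
    alpha, beta = zwikker_kosten(omega, R, rho0, eta, Pr, gamma)
    return omega / c_a * np.sqrt(alpha * beta)

def k_model(omega):
    alpha, _, beta = ja_dynamic_tortuosities(omega, phi, alpha_inf, k0, kp0,
                                             Lam, Lamp, rho0, eta, Pr, gamma)
    return omega / c_a * np.sqrt(alpha * beta)

D = k0 * P0 / (eta * phi)        # relaxed-limit diffusivity (our identification)
t, g_exact = greens_function_fft(k_exact, x, D)
_, g_model = greens_function_fft(k_model, x, D)
```

Comparing g_exact and g_model as functions of t is the kind of exercise reported in the figures; the departures concentrate around the maximum of the response, as discussed below.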
in order to compare the different descriptions for several values of , we chose to compare the maximum values of the time responses ( there is a unique maximum ) .for the asymptotic ( frozen ) expressions at the first orders , and , the maximum values are given by an analytical expression , obtained from eq .( [ maino1]): ^{3/2}e^{-(3/2)}\frac{1}{n_{1}^{2}\xi ^{2}}=\frac{0.2312}{n_{1}^{2}\xi ^{2}}% \text { ; } \label{jkj } \\max(f_{s2 } ) & = & max(f_{s1})\exp ( -n_{2}\xi ) .\label{jkk}\end{aligned}\ ] ] for both orders the time of the maximum is the same ( this illustrates the attenuation without distortion effect brought by the terms ) .[ fig3 ] shows the result for the exact green s function and the two models , and , as well as the diffusive function .it shows the natural logarithm of the maximum value of with respect to the dimensionless space variable .the two models seem to be very good ; however better insight is found by subtracting the result corresponding to the exact green s function , as shown in fig .as expected , the description is very good for long distances , while the modified description is better for small distances .otherwise both are better than the description .the transition range values of the distance is approximately between and , corresponding to a range for the time domain stokes number .this range is similar to that accepted for the frequency domain stokes number defined by eq .( [ 12 ] ) ( see e.g. ref . ) . .( red ) dashed line : modified .( blue ) mixed line : .( yellow ) solid line : frozen s green s function see appendix , eqs .( [ maino1][maino2 ] ) ., width=377 ]a simple analytic formula , eq.(1 ) , can be used to compute the asymptotic green s function in a rigid - framed porous medium .it differs from the asymptotic green s function , by an exponential factor which describes an attenuation - without - distortion effect .we have so far no rigorous statements concerning the geometrical surfaces and that determine the terms in the high - frequency limits ( [ hf ] ) and then also determine the characteristic decay length eq .( 2 ) . nevertheless these parameters are known for some geometries , such as cylindrical pores . by specializing to this particular geometry ,a contrasted situation has been highlighted : while the models fail to give the parameters , , and thus , a correct description of the attenuation - without - distortion effect , they are nevertheless capable to give especially the model a relatively precise description of the complete green s functions . indeed , it is only at very short distances that the asymptotic green s function is close to the complete green s function : its imperfect representation by the models is of no very significant consequence . in connection with this , we note that , when the normalized distance decreases , the maximum error of s model occurs around , precisely when the green s function eventually starts to be valid ( see fig .this suggests that the small remaining errors of s model illustrated in fig .4 are mainly due to the misrepresentation of parameters , , and that the model would become almost exact if modified to properly account for the latter parameters .the question of the modification to be done remains open .a problem of major interest is the use of the present investigation for the inverse problem , the determination of the parameters of a given material . 
regarding this ,we have illustrated one simple fact : the description of the time domain green s functions is much more precise using the full model expressions than using the asymptotic expressions , as often done in practice .this suggests that there is also an important potential of improvement of the inverse methods of characterization based on recording transmitted and reflected pulses on different thicknesses of a material , provided the full expressions are exploited in the analysis we emphasize , in this respect , the importance of the substraction of the diffusive solution when computing the inverse fft .recall that , since we assume a smooth pore - surface interface , in the frozen limit the product in eq .( [ 23 ] ) expands in successive integral powers of ( see footnote [ fract ] ) .this has been done with dimensionless variables in section [ ae ] .the two coefficients and have been obtained using the a priori expansions of the functions and ( eqs .( [ hf])) for the case of cylindrical circular pores , the result for has been given by keefe , using the zwikker and kosten solution:(eq ( [ 45 ] ) also follows by putting the values ( [ 19 ] ) in eq .( [ n2 ] ) ) . for the general case, requires the missing frozen information and .it will not be given by the asymptotic expansion , eqs .( [ 16 ] ) and ( [ 18 ] ) , with either or values of and .as explained in section [ ffm ] , in the framework of the model , it is not possible to have a good estimation of the second order term , resulting in an expression for the coefficient which differs from eq .( [ n2 ] ) : the case of cylindrical circular pores allows checking this . for this case ,the latter equation becomes: expression differs from eq .( [ 45 ] ) . for standard conditions in air ,the exact result for is , while the approximated one is , more than three times smaller .notice that model would give a negative estimate of for it .these important discrepancies mean that the models and even will not be able to describe the exact attenuation effect .an alternative to the closed - form green s function obtained for this same asymptotic frozen limit can be obtained using the 1d wave equation that follows from eqs .( 1 ) , the following helmholtz equation: the high - frequencies asymptotic limits ( [ hf ] ) we get , in the time domain , the corresponding asymptotic wave equation is written as follows : between ( [ 20]-[exalim ] ) and ( [ we ] ) shows that the coefficients are given by: .\label{exaa1c}\end{aligned}\]]the relationship between the coefficients and and is: notice that by using model one would arrive in the asymptotic high - frequency limit to the same asymptotic wave eq .( [ we ] ) but with the following erroneous value of the index : .\label{a1c}\ ] ] in particular , using model , two important terms disappear as for this case one sets . using this expression for the index in ( [ 1b ] ) corresponds to using the equations ( 14 - 16 ) of fellah , who computed the green s function for an infinite medium described by the above wave equation ( [ we ] ) , by using the laplace transform method notice that in ref . there was a mistake of a factor 2 in the term under the root in eqs .( 14 ) and ( [ 14p ] ) , without influence on further equations .moreover the last term in eq .( [ 18 ] ) was omitted , resulting in a total coefficient of the term in in eq ( [ 18 ] ) equal to .here , consistent with eq .( [ 18 ] ) , an additional term has been included in the bracket in eq .( [ a1c ] ) . 
] .the fft computations of the exact green s function can be compared with the following expressions : the ( frozen ) green s function ; the ( frozen ) green s function ( eq .( [ maino2 ] ) with in eq .( [ 42b ] ) ) ; the green s function ( eq . ( [ maino2 ] ) with in eq .( [ 42b ] ) ) ; the green s function ( eq . ( [ maino2 ] ) with in eq .( [ 42b ] ) ) .finally , the fft computations can be compared to the solution of the asymptotic wave equation ( [ we ] ) , for both cases and ( we again choose to compute these solutions using fft , with eqs .( [ 23 ] ) and ( [ exalim ] ) , without expansion of eq . ( [ 23 ] ) ) : the ( frozen ) green s function ( ) ; the green s function ( ) .these solutions being are expected to be very close to the corresponding solutions obtained using the asymptotic wavenumber .results are plotted on fig .notice that , the ( frozen ) green s function , which is a very simple analytical expression , is the best approximation and leads to interesting results , except at long times . ; ; ; , the latter being also shown in fig .4.,width=377 ] the description is slightly better than the first order of the frozen asymptotic , but as expected , it is much comparable to the latter , as it severely underestimate the decay length .otherwise , for short distances , the second order of the asymptotic wave equation solution exhibits the expected convergence to the results of the solution based on wavenumber expansion at second order .this convergence is lost for the comparable model estimates and , as a result of using the faulty coefficients ( . for longer distances , the frozen asymptotic solution appears to be less accurate than the frozen asymptotic wavenumber solution : it is not easy to have an interpretation for this result. the second order solution of the wave equation , as presented in ref . differs ( by definition ) by the third order , with the solution based on wavenumber expansion at second order , the latter being simpler to use in practice looking at the calculation made in the frequency domain , the figure 3 of this paper exhibits a ratio between the description and the one ( ) that is almost independent of frequency : this is the attenuation - without - distortion effect that is described by eq .( [ maino2 ] ) in our wavenumber - based asymptotic calculation . ] .we wish to thank bruno lombard for fruitful discussions . z.e.a .fellah , c. depollier , m. fellah , w. lauriks , j .- y .chapelon , influence of dynamic tortuosity and compressibility on the propagation of transient waves in porous media , wave motion 41 ( 2005 ) 145161 .fellah , a. wirgin , m. fellah , n. sebaa , c. depollier and w. lauriks , a time - domain model of transient acoustic wave propagation in double - layered porous media , j. acoust .am . 118 ( 2005 ) 661670 .g. lefeuve - mesgouez , a. mesgouez , g. chiavassa , b. lombard , semi - analytical and numerical methods for computing transient waves in 2d acoustic / poroelastic stratified media , wave motion 49 ( 2012 ) , 667 - 680 .r. brown , connection between the formation factor for electrical resistivity and fluid solid coupling factors in biot s equations for acoustic waves in fluid - filled porous media , geophysics 45 ( 1980 ) 12691275 .
time domain responses of porous media have been studied by several authors, but the available descriptions are generally given in the frequency domain. the aim of this paper, limited to rigid-skeleton materials treated as equivalent fluids, is to compare in the time domain the johnson-allard and pride-lafarge descriptions with: (i) some analytical approximate formulas based upon asymptotic high-frequency expansion; (ii) the exact formula by zwikker and kosten for the case of cylindrical pores. the paper starts with an analysis of the construction of the two models. then, the green's function in the time domain is defined, written in scaled form, and shown to exhibit interesting properties of the materials. in particular, a so-far overlooked decay length describing a high-frequency attenuation-without-distortion effect is identified in terms of brown's tortuosity, johnson's and allard's known characteristic viscous and thermal lengths, and two characteristic viscous and thermal surfaces that are unknown in general. the numerical computation of the green's function is done by fft, with some precautions, because of the importance of the higher frequencies for the response shape: subtracting the diffusive (low-frequency) approximation largely improves the results of the fft. the pride-lafarge description is shown to be the best full-frequency general model, with small remaining discrepancies due to its unsatisfactory account of the mentioned surface parameters. keywords: pulse propagation; transient signals; porous material. pacs: 43.55 rv, 43.55 ti, 43.20 gp, 43.20 bi, 43.20 hq
in numerically solving boundary value problems for time - dependent equations , emphasis is on discretizations in time . for parabolic equations of second order ,unconditionally stable schemes are constructed using implicit approximations .two - level schemes are commonly used in computational practice , whereas multilevel schemes occur more rarely . for unconditionally stable schemes ,a time step is selected only due to the accuracy of the approximate solution .the problem of the control over a time step is relatively well resolved for the numerically solving cauchy problem for systems of differential equations .the basic approach involves the following stages .first , we perform additional calculations in order to estimate the error of the the approximate solution at a new time level . further, a time step is estimated using the theoretical asymptotic dependence of accuracy on a time step .after that we decide is it necessary to correct the time step and to repeat calculations . additional calculations for estimating the error of the approximate solution may be performed in a different way . in particular , it is possible to obtain an approximate solution using two different schemes that have the same theoretical order of accuracy .the most famous example of this strategy involves the solution of the problem on a separate time interval using a preliminary step ( the first solution ) and the step reduced by half ( the second solution ) . in numerically solving the cauchy problem for systems of ordinary differential equations , there are are also applied nested methods , where two approximate solutions of different orders of accuracy are compared . in the above - mentioned methods of selecting a time step , a posteriori estimation of accuracy is employed . in this case , we decide is this time step acceptable or it is necessary to change it for re - calculations ( increase or reduced and how much ) only after performing calculations .such strategies can be also applied to the approximate solution of unsteady boundary value problems using a more advanced a posteriori analysis . in this paper, we consider an a priori selection of a time step for the approximate solution of boundary value problems for parabolic equations . to obtain the solution at a new time level , the backward euler scheme is employed .the time step at the new time level is explicitly calculated using two previous time levels and takes into account changes in the equation coefficients and its right - hand side .the paper is organized as follows . in section 2, we consider a cauchy problem for a system of linear ordinary differential equations that is obtained from numerically solving boundary value problems for parabolic equations after discretization in space .for the approximate solution , estimates for stability are presented along with estimates for accuracy in the corresponding hilbert space .formulas for the selection of a time step are obtained in section 3 using a comparison of the problem solutions corresponding to the forward time level and backward one . 
in section 4, we show that similar expressions for a time step can be obtained via making a comparison of the solutions derived with one time step and two half steps .section 5 presents numerical results for a model boundary value problem for a one - dimensional parabolic equation obtained on the basis of the developed algorithm for selecting a time step .let us consider the cauchy problem for the linear equation supplemented with the initial condition the problem is investigated in a finite - dimensional hilbert space .assume that in .due to the non - negativity of the operator , for the problem ( [ 2.1 ] ) , ( [ 2.2 ] ) , we have the following estimate for stability with respect for the initial data and the right - hand side : the problem ( [ 2.1 ] ) , ( [ 2.2 ] ) results from finite difference , finite volume or finite element approximations ( lumped masses scheme ) for numerically solving boundary value problems for a parabolic equation of second order . in this problem, an unknown function satisfies the equation where , , .the equation is complemented by the dirichlet boundary conditions and the initial condition to solve numerically this time - dependent problem , we introduce a non - uniform grid in time : we will employ notation . for the problem ( [ 2.1 ] ) , ( [ 2.2 ] ) , we apply the fully implicit scheme , where the transition from the current time level to the next one is performed as follows : starting from the initial condition under the restriction , from ( [ 2.4 ] ) , it follows immediately that the approximate solution satisfies the level - wise estimate thus , we obtain the discrete analog of the estimate ( [ 2.3 ] ) : corresponding to the problem ( [ 2.4 ] ) , ( [ 2.5 ] ) . for the error of the approximate solution, we have the problem here stands for the truncation error : similarly to ( [ 2.6 ] ) , we get the estimate for error : therefore , to control the error , we can employ the summarized error over the interval . in this case , a value defines the same level of the error over the entire interval of integration .if we will be able to calculate the truncation error , then it will be possible to get a posteriori estimate of the error . comparing with the prescribed error level ,this makes possible to evaluate the quality of the choice of the time step .namely , if is much larger ( smaller ) than , then the time step is taken too large ( small ) , and if is close to , then this time step is optimal . 
thus , we have the problem is that we can not evaluate the truncation error , since it is determined using the exact solution that is unknown .because of this , we must focus on some estimates for the truncation error that guarantee the fulfilment of ( [ 3.1 ] ) .the following strategy is proposed for the correction of the time step .the step is selected from the conditions : forward step .: : using the explicit scheme , we calculate the solution at the time level ; backward step .: : from the obtained , applying the implicit scheme , we determine at the time level ( explicit calculations ) ; step selecting .: : the step is evaluated via closeness between and .in fact , we carry out the back analysis of the error of the approximate solution over the interval using two schemes ( explicit and implicit ) of the same accuracy .let us present the formulas for selecting a time step .the solution is determined from the equation for , we have from ( [ 3.2 ] ) , ( [ 3.3 ] ) , we immediately get the first two terms are associated with the time derivative applied to the problem operator and to the right - hand side . to evaluate them approximately, it seems reasonable to use the time step from the previous time level . but this may be inconvenient to implement . for instance , we have and therefore we have to evaluate the difference derivative of the right - hand side for .the problem is that the derivation of such estimates involves the unknown value .the simplest approach is to evaluate this derivative using the previous time step : but in this case , if , then we can not detect significant changes in the right - hand side for . to resolve the problem , it is possible to use the standard methods available to control a time step for numerically solving time - dependent problems .the first method restricts the growth of the time step with respect to the previous value .we set where is a numerical parameter .the second requirement is that the step can not be too small : where is a specified minimum time step . under the assumption ( [ 3.5 ] ) , we can estimate the time derivative of the right - hand side , putting therefore where for the last term in the right - hand side of ( [ 3.4 ] ) , in view of ( [ 2.4 ] ) , we have with accuracy up to , we put with this in mind , the equality ( [ 3.4 ] ) is replaced by the approximate equality : the value of we associate with the solution error over the interval . because of this , we set from ( [ 3.8 ] ) , we have in view of ( [ 3.5 ] ) , ( [ 3.6 ] ) , ( [ 3.9 ] ) , from ( [ 3.8 ] ) , we obtain the following formula for calculating the time step : this formula for selecting a time step reflects clearly ( see the denominator in the expression for ) corrective actions , which are related to the time - dependence of the problem operator ( the first part ) and the right - hand side ( the second part ) as well as to the time - variation of the solution itself ( the third part ) .to solve numerically the cauchy problem , the traditional strategy is to select an integration step using a comparison of the approximate solution obtained by the preliminary step with the solution calculated with the step reduced by half . for numerically solving problem ( [ 2.1 ] ) , ( [ 2.2 ] ) , we use fully implicit scheme ( [ 2.4 ] ) , ( [ 2.5 ] ) .we employ the explicit scheme over the interval in order to select the time step .the selection strategy includes : calculation with an integer step . 
: : using the explicit scheme , we determine the solution at the time level via the step ; calculation with a half - integer step . : : using the explicit scheme , we calculate the solution at the time level employing the step ; step selecting .: : the step is evaluated through the closeness between and . for , we have ( [ 3.2 ] ) , and is determined as follows : eliminating from ( [ 4.1 ] ) , ( [ 4.2 ] ) , we get because of this , we have in view of the above notation ( [ 3.5 ] ) , we employ the approximate expressions : by ( [ 2.4 ] ) , we have thus , we arrive at the right - hand side of ( [ 4.4 ] ) coincides with the right - hade side of ( [ 3.6 ] ) with an accuracy of a factor . similarly to ( [ 3.11 ] ) , we can formulate the rule for selecting the time step : in fact , we have come to the same rule for the estimation of the time step the factor 4 has not any essential matter .to demonstrate the performance of the proposed algorithm ( [ 3.5 ] ) , ( [ 3.9 ] ) for selecting a time step based on the implicit scheme for solving the problem ( [ 2.1 ] ) , ( [ 2.2 ] ) , let us consider the boundary value problem for a one - dimensional parabolic equation .let satisfies the equation as well as the boundary and the initial conditions : to solve approximately the problem ( [ 5.1])([5.3 ] ) , we apply finite difference discretization in space .let us introduce a uniform grid with a step : and is the set of interior grid points , whereas is the set of boundary points ( ) .on the set of grid functions such that , we introduce a hilbert space , where the inner product and the norm are defined as : the grid operator is written as follows : the operator is self - adjoint , and if , then it is positive definite in .thus , after the spatial discretization of ( [ 5.1])([5.3 ] ) , we arrive to the problem ( [ 2.1 ] ) , ( [ 2.2 ] ) . as a test problem, we consider the problem ( [ 5.1])([5.3 ] ) with and the discontinuous coefficient and the discontinuous defined as follows : the problem is solved on the grid with , the calculations are performed using the sufficiently small time step .first , we consider the case , where the initial condition ( [ 5.3 ] ) is taken in the following form : if we specify the error level and the parameter , then the time step produced by the algorithm ( [ 3.7 ] ) , ( [ 3.11 ] ) has the form depicted in fig .[ fig:1 ] .the total number of time steps is . in this figure, we observe essential changes in the value the time step at and , i.e. , at the time moments corresponding to discontinuities in the right - hand side and the coefficient of the equation . in accordance with the rule ( [ 3.6 ] ) , the time step increases at the initial time stage .let us decompose the correcting coefficient into three terms : figure [ fig:2 ] demonstrates their influence .the influence of the reducing error level on the convergence of the approximate solution is shown in fig .[ fig:3 ] .the approximate solution at the point is depicted in this figure . for comparison ,figure[fig:3a ] presents similar data that were obtained using the uniform grids in time .special attention should be given to the influence of the initial conditions .a typical situation is the presence of a boundary layer and this requires to use small steps at the initial time stage .for example , the behavior of the time step for our model problem with initial conditions is shown in fig . 
[ fig:4 ] .compared with fig .[ fig:1 ] ( smooth initial conditions ) , the initial time stage is calculated with essentially smaller time steps and the total number of steps is increased by more than a factor of 2 . in the region outside the neighbourhood of discontinuities of the coefficients and the right - hand side ,the time step is controlled first of all by the term ( see fig .[ fig:5 ] ) .a more difficult situation for the numerical solution is connected with inconsistent initial and boundary conditions .let we have the selection of the time step for this case is shown in fig .[ fig:6 ] . up to the calculationis carried out with the minimum time step .that is why the total number of time steps is 2183 .
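To summarize the algorithm in executable form, here is a minimal sketch for the one-dimensional model problem. It assumes homogeneous dirichlet data (the initial condition and right-hand side are taken to vanish at the boundary nodes) and replaces the explicit selection formula (3.11), whose correction terms cannot be reconstructed from the garbled source, by a plain proportional controller acting on the forward/backward discrepancy described in section 3. All function and parameter names are ours.

```python
import numpy as np

def assemble_operator(k_coeff, x, t):
    """Conservative second-order discretization of -(d/dx)(k(x,t) d/dx) on a uniform
    grid, with trivial rows at the (homogeneous dirichlet) boundary nodes."""
    h, n = x[1] - x[0], len(x)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        kp = k_coeff(0.5 * (x[i] + x[i + 1]), t)      # k at x_{i+1/2}
        km = k_coeff(0.5 * (x[i - 1] + x[i]), t)      # k at x_{i-1/2}
        A[i, i - 1], A[i, i], A[i, i + 1] = -km / h**2, (km + kp) / h**2, -kp / h**2
    A[0, 0] = A[-1, -1] = 1.0
    return A

def choose_step(y, t, tau_prev, A, f, delta, tau_min, growth=2.0):
    """Forward explicit step, then the implicit scheme evaluated explicitly backwards;
    the trial step is rescaled so the recovered solution stays within delta of y."""
    tau = growth * tau_prev                                        # cap on step growth
    y_tilde = y + tau * (f(t) - A(t) @ y)                          # forward (explicit) step
    y_hat = y_tilde - tau * (f(t + tau) - A(t + tau) @ y_tilde)    # backward (implicit) step
    err = np.linalg.norm(y_hat - y)
    if err > delta:
        tau *= delta / err                                         # heuristic rescaling
    return max(tau, tau_min)                                       # never below the minimal step

def backward_euler_adaptive(y0, x, k_coeff, rhs, T, tau0, delta, tau_min):
    """Fully implicit march in time with the explicit step-selection probe above."""
    A = lambda s: assemble_operator(k_coeff, x, s)
    f = lambda s: rhs(x, s)
    t, tau, y = 0.0, tau0, y0.copy()
    times, sol = [t], [y.copy()]
    while t < T:
        tau = min(choose_step(y, t, tau, A, f, delta, tau_min), T - t)
        y = np.linalg.solve(np.eye(len(x)) + tau * A(t + tau), y + tau * f(t + tau))
        t += tau
        times.append(t)
        sol.append(y.copy())
    return np.array(times), np.array(sol)
```

With a discontinuous k_coeff and rhs mimicking the test case, one can inspect how the selected step reacts to the discontinuity times and to non-smooth or inconsistent initial data, which is the behaviour discussed around figs. 1 and 4-6.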
this work deals with the problem of choosing a time step for the numerical solution of boundary value problems for parabolic equations. the problem solution is obtained using the fully implicit scheme, whereas the time step is selected via explicit calculations. the selection strategy consists of the following steps. first, using the explicit scheme, we calculate the solution at a new time level. next, we employ this solution to recover the solution at the previous time level (the implicit scheme, evaluated explicitly). this recovered solution should be close to the solution of our problem at that time level within a prescribed accuracy. such an algorithm leads to explicit formulas for the calculation of the time step and takes into account both the dynamics of the problem solution and changes in the coefficients of the equation and in its right-hand side. the same formulas for the evaluation of the time step are obtained by comparing two approximate solutions computed with the explicit scheme, one with the primary time step and one with the step reduced by half. numerical results are presented for a model parabolic boundary value problem; they demonstrate the robustness of the developed algorithm for time step selection. * keywords * : parabolic equation, finite difference schemes, explicit schemes, implicit schemes, time step * mathematics subject classification * : msc 65j08, msc 65m06, 65m12
let and denote by ] to endowed with its borel--field . in this article , we analyze numerical schemes for the evaluation of ,\ ] ] where * } ] is a borel measurable function that is lipschitz continuous with respect to the _ supremum _ norm .this is a classical problem which appears for instance in finance , where models the risk neutral stock price and denotes the payoff of a ( possibly path dependent ) option , and in the past several concepts have been employed for dealing with it .a common stochastic approach is to perform a monte carlo simulation of numerical approximations to the solution .typically , the euler or milstein schemes are used to obtain approximations .also higher order schemes can be applied provided that samples of iterated it integrals are supplied and the coefficients of the equation are sufficiently regular . in general , the problem is tightly related to _ weak approximation _ which is , for instance , extensively studied in the monograph by kloeden and platen for diffusions .essentially , one distinguishes between two cases .either depends only on the state of at a fixed time or alternatively it depends on the whole trajectory of . in the former case , extrapolation techniques can often be applied to increase the order of convergence , see . for lvy - driven stochastic differential equations ,the euler scheme was analyzed in under the assumption that the increments of the lvy process are simulatable .approximate simulations of the lvy increments are considered in . in this article , we consider functionals that depend on the whole trajectory .concerning results for diffusions , we refer the reader to the monograph . for lvy - driven stochastic differential equations , limit theorems in distributionare provided in and for the discrepancy between the genuine solution and euler approximations .recently , giles ( see also ) introduced the so - called _ multilevel monte carlo method _ to compute .it is very efficient when is a diffusion .indeed , it even can be shown that it is in some sense optimal , see . for lvy - driven stochastic differential equations ,multilevel monte carlo algorithms are first introduced and studied in .let us explain their findings in terms of the blumenthal getoor index ( bg - index ) of the driving lvy process which is an index in ] . denotes an -dimensional -integrable lvy process . by the lvy khintchine formula , it is characterized by a square integrable l ' evy - measure [ a borel measure on with , a positive semi - definite matrix ( being a -matrix ) , and a drift via where briefly , we call a -lvy process , and when , a -lvy martingale .all lvy processes under consideration are assumed to be cdlg . 
as is well known, we can represent as sum of three independent processes where is a -dimensional wiener process and is a -martingale that comprises the compensated jumps of .we consider the integral equation where is a fixed deterministic initial value .we impose the standard lipschitz assumption on the function : for a fixed , and all , one has furthermore , we assume without further mentioning that there are ] the lvy measure is supported on and satisfies for all with .we consider a class of multilevel monte carlo algorithms together with a cost function that are introduced explicitly in section [ sec : alg ] .for each algorithm , we denote by a real - valued random variable representing the random output of the algorithm when applied to a given measurable function \to{\mathbb{r}} ] in constant time , * one can evaluate at any point in constant time , and * can be evaluated for piecewise constant functions in less than a constant multiple of its breakpoints plus one time units . as pointed out below , in that case ,the average runtime to evaluate is less than a constant multiple of .we analyze the minimal _ worst case error _\tau\ge1.\ ] ] here and elsewhere , denotes the class of measurable functions \to{\mathbb{r}} ] and the set is constituted by the random times that are inductively defined via and clearly , is constant on each interval and one has that we can write = \sum_{k=2}^{m } { \mathbb{e}}\bigl[f\bigl ( { \upsilon}^{({k})}\bigr)-f\bigl({\upsilon}^{({k-1})}\bigr)\bigr ] + { \mathbb{e}}\bigl[f\bigl({\upsilon}^{({1})}\bigr)\bigr].\ ] ] the multilevel monte carlo algorithm identified with each expectation ] ) individually by sampling independently ( resp . , ) versions of [ and taking the average .the output of the algorithm is then the sum of the individual estimates .we denote by a random variable that models the random output of the algorithm when applied to .the monte carlo algorithm introduced above induces the mean squared error - { \mathbb{e}}\bigl[f\bigl({\upsilon}^{({m})}\bigr)\bigr]\bigr| ^2 + \sum_{k=2}^{m } \frac1{n_k } { \operatorname{var}}\bigl(f\bigl ( { \upsilon}^{({k})}\bigr)-f\bigl({\upsilon}^{({k-1})}\bigr)\bigr)\\ & & { } + \frac1{n_1 } { \operatorname{var}}\bigl(f\bigl ( { \upsilon}^{({1})}\bigr)\bigr),\end{aligned}\ ] ] when applied to . for two ] having first marginal and second marginal .clearly , the wasserstein distance depends only on the distributions of and .now , we get for , that \nonumber \\[-8pt ] \\[-8pt ] & & { } + \frac1{n_1 } { \mathbb{e}}\bigl[\bigl\| { \upsilon}^{({1})}-y_0\bigr\|^2\bigr].\nonumber\end{aligned}\ ] ] we set and remark that estimate ( [ eq0311 - 1 ] ) remains valid for the worst case error . the main task of this article is to provide good estimates for the wasserstein metric .the remaining terms on the right - hand side of ( [ eq0311 - 1 ] ) are controlled with estimates from . in order to simulate one pair , we need to simulate all displacements of of size larger or equal to on the time interval ]. then we can construct our approximation via ( [ eq0514 - 3 ] ) . 
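As a concrete illustration, the sketch below simulates one scalar approximate path (jumps larger than the cut-off entered exactly at their compound-Poisson times, the small jumps replaced by a Brownian term of matching variance, and the coefficient refreshed on a regular mesh refined by the jump times) and then combines coupled pairs of such approximations in the usual telescoping-sum estimator. The scheme analyzed in the paper treats the Gaussian surrogate of the small jumps only on a coarser sub-grid and is stated in general dimension; those refinements, and all variable names, are our own simplifications.

```python
import numpy as np

def approx_path(x0, a, T, eps, lam_h, big_jump, sigma2_small, drift, sigma_gauss, rng):
    """One scalar Euler-type path: jumps >= h enter exactly (intensity lam_h, sizes
    drawn by big_jump), jumps < h are replaced by Brownian motion of variance
    sigma2_small per unit time, and a(.) is updated on {0, eps, 2 eps, ...} refined
    by the jump times.  Any compensation of the large jumps is absorbed in `drift`.
    Returns the update times and the piecewise-constant path values."""
    n = rng.poisson(lam_h * T)
    jump_t = np.sort(rng.uniform(0.0, T, n))
    jump_x = np.array([big_jump(rng) for _ in range(n)])
    grid = np.unique(np.concatenate([np.arange(0.0, T, eps), jump_t, [T]]))
    y, t_prev, ys = x0, 0.0, [x0]
    for t in grid[1:]:
        dt = t - t_prev
        dB = np.sqrt(sigma_gauss**2 + sigma2_small) * rng.normal(0.0, np.sqrt(dt))
        dJ = jump_x[np.isclose(jump_t, t)].sum() if n else 0.0
        y = y + a(y) * (drift * dt + dB + dJ)
        ys.append(y)
        t_prev = t
    return grid, np.array(ys)

def mlmc_estimate(sample_pair, f, n_per_level, rng):
    """Telescoping-sum estimator: sample_pair(k, rng) must return a coupled pair
    (fine, coarse) of approximations of the same path on levels k and k-1, and a
    single approximation on the coarsest level k = 1."""
    total = 0.0
    for k, n_k in enumerate(n_per_level, start=1):
        acc = 0.0
        for _ in range(n_k):
            if k == 1:
                acc += f(sample_pair(1, rng))
            else:
                fine, coarse = sample_pair(k, rng)
                acc += f(fine) - f(coarse)
        total += acc / n_k
    return total
```

The coupling required by sample_pair is obtained by reusing the same Brownian increments and the same large jumps on two consecutive levels, exactly as in the mean squared error decomposition above; the per-level sample sizes are then chosen to balance the level variances against the simulation cost.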
in the real number model of computation ( under the assumptions described in the ) ,this can be performed with runtime less than a multiple of the number of entries in ] , where is , in analogy to above , the set of random times defined inductively via and the process is closely related to from section [ sec : alg ] and choosing and , implies that and are identically distributed .we need to introduce two further crucial quantities : for , let and .[ th0310 - 1 ] suppose that assumption [ assu1 ] is valid .there exists a finite constant that depends only on , and such that for ] , and ] and ] , we couple with .the introduction of the explicit coupling is deferred to section [ sec : gauss ] .let us roughly explain the idea behind the parameter .in classical euler schemes , the coefficients of the sde are updated in either a deterministic or a random number of steps of a given ( typical ) length .our approximation updates the coefficients at steps of order as the classical euler method .however , in our case the lvy process that comprises the small jumps is ignored for most of the time steps .it is only considered on steps of order of size .on the one hand , a large reduces the accuracy of the approximation . on the other hand ,the part of the small jumps has to be approximated by a wiener process and the error inferred from the coupling decreases in .this explains the increasing and decreasing terms in theorem [ th0310 - 1 ] . balancing and then leads to corollary [ cor0311 - 1 ] .we need some auxiliary processes .analogously to and , we let denote the set of random times defined inductively by and so that the mesh - size of is less than or equal to .moreover , we set ) ] , ] . the main task of the proof is to establish an estimate of the form for appropriate values . since is finite ( see , for instance , ) , then gronwall s inequality implies as upper bound : } |y_s-{{\bar y}}_s|^2\bigr ] \leq\alpha_2 \exp({\alpha_1}).\ ] ]we proceed in two steps . __ note that so that for ] and ] . given , } ] .moreover , we have on ] , + f(h ) { \mathbb{e}}\bigl[\bigl|\bar{\upsilon}_{{{\iota}}(s ) } -\bar{\upsilon}_{\eta(s)}\bigr|^2\bigr ] \bigr]\,{\mathrm{d}}s,\ ] ] where }|{\upsilon}_s-\bar{\upsilon}_{s}|^2] ] is bounded by a constant that depends only on .consider \to[0,\infty ) , \delta\mapsto\sqrt{\delta \log(e/\delta)} ] is finite too .consequently , } \bigl|{{\mathcaligr}{x}}_s-{{\mathcaligr}{x}}_{{{\iota}}(s)}\bigr|^2\bigr]\nonumber \\[-8pt ] \\[-8pt ] & & \qquad\le3 \biggl [ \bigl(|{\sigma}|^2+f(h)\bigr ) { \mathbb{e}}[\|w\|_{\varphi}^2 ] { \varepsilon}\log\frac e{\varepsilon}+ result follows immediately by using that and ruling out the asymptotically negligible terms .in this section , we prove the following theorem . [ th0217 - 1 ] let and be a -dimensional -lvy martingale whose lvy measure is supported on .moreover , we suppose that for , one has for any with , and set depending only on such that the following statement is true . for every , one can couple the process } ] such that } the proof of the theorem is based on zaitsev s generalization of the komls major tusndy coupling . in this context, a key quantity is the _ zaitsev parameter _ : let be a -dimensional random variable with finite exponential moments in a neighborhood of zero and set for all with integrable expectation. 
then the parameter is defined as latexmath:[\ ] ] [ le1121 - 1 ] let and denote a filtration .moreover , let , for , and denote nonnegative random variables such that is -measurable , and is -measurable and independent of . then one has \le{\mathbb{e}}\bigl[\max _ { j=0,\ldots , n-1}u_j \bigr]\cdot{\mathbb{e}}\bigl[\max_{j=0,\ldots , n-1 } v_j \bigr].\ ] ]
we introduce and analyze multilevel monte carlo algorithms for the computation of expectations of path functionals of the solution of a lévy-driven stochastic differential equation, the functional being lipschitz continuous with respect to the supremum norm. in the case where the driving lévy process has a gaussian component, we obtain error bounds for the multilevel estimator in terms of its computational cost.
transiting exoplanets and eclipsing binaries produce familiar u- and v - shaped lightcurves with several defining quantities , such as the mid - eclipse time , eclipse depth and duration . out of these , the transit duration is undoubtedly the most difficult observeable to express in terms of the physical parameters of the system . kipping ( 2008 )showed that the duration is found by solving a quartic equation , to which exists a well - known solution . in general , two rootscorrespond to the primary eclipse and two to the secondary but this correspondence determination has an intricate dependency on the input parameters for which there currently exists no proposed rules . as a consequence, there currently exists no single exact expression for the transit duration . in many applications ,the process of discarding unwanted roots may be performed by a computer , but naturally this can only be accomplished for case - by - case examples .the benefits of a concise , accurate and general expression for the transit duration , as we will refer to it from now on , are manifold .the solution provides lower computation times , deeper insight into the functional dependence of the duration and a decorrelated parameter set for fitting eclipse lightcurves ( see 6 ) .such a solution may also be readily differentiated to investigate the effects of secular and periodic changes in the system ( see 7 ) . with the changes in transit duration recently being proposed as a method of detecting additional exoplanets in the system ( miralda - escud 2002 ; heyl & gladman 2007 ) and companion exomoons ( kipping 2009a , b ) , there is a strong motivation to ensure an accurate , elegant equation is available . in this work , we will first propose a new approximate expression for the transit duration in 2 . in 3 , we will derive two new approximate expressions for the transit duration and discuss others found in the literature ; exploring their respective physical assumptions and derivations . in 4 , we present the results of numerical tests of the various formulae .we find that one of our new proposed expressions offers the greatest accuracy out of the candidates . in 5 and 6, we utilize the new equation to derive mappings between the observeable transit durations and the physical model parameters .these mappings may be used to obtain a decorrelated transit lightcurve fitting parameter set , which enhances the compuational efficiency of monte carlo based methods . finally ,in 7 , we differentiate the favoured solution to predict how the transit duration will change due to apsidal precession , nodal precession , in - fall and eccentricity variation .before we begin our investigation , let us first clearly define what we mean by the transit duration . there exists several different definitions of what constitutes as the transit duration in the exoplanet literature . in figure 1 , we present a visual comparison of these definitions .there exists at least 7 different duration definitions : , , , , , and . and are the definitions used by and represent the to and to contact durations respectively .these can be understood to be the total transit duration and the flat - bottomed transit duration ( sometimes confusingly referred to as the full duration ) . is the time for a planet to move from its sky - projected centre overlapping with the stellar limb to exitting under the same condition . 
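All of these durations, including the width and ingress/egress quantities defined immediately below, ultimately reduce to the time elapsed between two true anomalies on a Keplerian orbit. Once the contact true anomalies are known, for example from a numerical solution of the quartic discussed later, the short helper below converts them to a duration exactly through Kepler's equation; it computes the same quantity that the 'two-term' duration function expresses analytically. The function name and calling convention are ours.

```python
import numpy as np

def time_between_true_anomalies(f_a, f_b, P, e):
    """Exact elapsed time from true anomaly f_a to f_b on a Keplerian orbit of
    period P and eccentricity e, via the mean anomaly (wraps across pericentre)."""
    def mean_anomaly(f):
        # eccentric anomaly from the true anomaly, quadrant-safe
        E = 2.0 * np.arctan2(np.sqrt(1.0 - e) * np.sin(f / 2.0),
                             np.sqrt(1.0 + e) * np.cos(f / 2.0))
        return E - e * np.sin(E)           # Kepler's equation
    dM = mean_anomaly(f_b) - mean_anomaly(f_a)
    return P * np.mod(dM, 2.0 * np.pi) / (2.0 * np.pi)
```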
is a parameter we define in this work as the average of and which we call the transit width .we note that since when the planet s centre crosses the limb of star , less than one half of the planet s projected surface blocks the light from the star .however , many sources in literature make the approximation , which is only true for a trapezoid approximated lightcurve ( a detailed discussion on this is given in 6.3 ) . finally , and are the ingress and egress durations respectively which are often approximated to be equal to one single parameter , . throughout this paper , we assume that the planet is a black sphere emitting no flux crossing the disc of a perfectly spherical star .the consequences for oblate planets is discussed by and for hot planets with significant nightside fluxes in .we will employ the definition of for the transit duration .once the equation for is known , it is trivial to transmute it to give any of the other definitions provided in figure 1 .an exact solution for , in terms of the true anomaly , is given by integrating d/d between and ( where we use to describe true anomaly throughout this paper ) .details of the derivation can be found in k08 , but to summarize we have : \\ & d(f ) = 2 \sqrt{1-e^2 } \tan^{-1 } \big[\sqrt{\frac{1-e}{1+e } } \tan{\frac{f}{2}}\big ] - \frac{e ( 1-e^2 ) \sin f}{1+e \cos f } \\ & d(f_b ) - d(f_a ) = 2 \sqrt{1-e^2 } \tan^{-1}\bigg[\frac{\sqrt{1-e^2 } \sin f_h}{\cos f_h+e \cos f_m}\bigg ] \nonumber \\ \qquad & -\frac{2 e ( 1-e^2 ) \sin f_h ( e \cos f_h + \cos f_m)}{(1-e^2 ) \sin^2f_m + ( e \cos f_h+\cos f_m)^2}\end{aligned}\ ] ] where is the planetary orbital period , is the orbital eccentricity , is the ` duration function ' , and .it is clear that two principal terms define and hence we dub the expressions used here as the ` two - term ' transit duration equation . at this point ,the outstanding problem is solving for and , which are the true anomalies at the contact points .the k08 solution is derived using cartesian coordinates , but we consider here a simpler formulation in terms of . in order to solve , we must first consider the geometry of the system .as with k08 , an ellipse is rotated for argument of pericentre and then orbital inclination . we choose to define our coordinate system with the star at the origin with the axis pointing at the observer .we also choose to align the axis towards the ascending node of the planetary orbit , which ensures that the longitude of the ascending node satisfies and the longitude of pericentre ( of the planetary orbit ) is equal to the argument of the pericentre ( ) . in such a coordinate system , we may write the sky - projected planet - star separation ( in units of stellar radii ) : where is the semi - major axis in units of stellar radii ( ) and is the orbital inclination .the two contact points occur when is equal to unity . for a body which undergoes both primary and secondary transit, there must be at least four solutions , which already is an indication of a quartic .if we let and rearrange ( 4 ) in terms of purely terms in , we obtain the following quartic equation : equation ( 5 ) is a quartic equation not satisfying any of the special case quartics which are most easily solved ( e.g. 
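As an aside, the contact points described above are easy to obtain numerically even without untangling the quartic root correspondence. The following sketch (my own code, with illustrative parameter values and my own symbol names, not the paper's implementation) simply locates every true anomaly at which the sky-projected separation crosses a chosen threshold by bracketing sign changes on a fine grid:

```python
# A minimal numerical sketch (not the paper's code): locate transit contact
# points by finding where the sky-projected planet-star separation S(f)
# crosses a chosen threshold.  All parameter values below are illustrative.
import numpy as np
from scipy.optimize import brentq

def separation(f, a_R, e, w, inc):
    """Sky-projected planet-star separation in stellar radii."""
    r = a_R * (1.0 - e**2) / (1.0 + e * np.cos(f))      # instantaneous orbital separation
    return r * np.sqrt(1.0 - np.sin(w + f)**2 * np.sin(inc)**2)

def contact_anomalies(a_R, e, w, inc, threshold=1.0):
    """Return all true anomalies f in (-pi, pi] with S(f) = threshold."""
    f_grid = np.linspace(-np.pi, np.pi, 20001)
    g = separation(f_grid, a_R, e, w, inc) - threshold
    roots = []
    for f1, f2, g1, g2 in zip(f_grid[:-1], f_grid[1:], g[:-1], g[1:]):
        if g1 * g2 < 0.0:                                 # sign change -> bracketed root
            roots.append(brentq(lambda f: separation(f, a_R, e, w, inc) - threshold, f1, f2))
    return np.array(roots)

if __name__ == "__main__":
    a_R, e, w, inc = 10.0, 0.3, np.radians(40.0), np.radians(88.0)
    print(contact_anomalies(a_R, e, w, inc, threshold=1.0))
```

Selecting the pair of roots that bracket the mid-transit anomaly then gives the primary-transit contact points; this is essentially the case-by-case numerical root discarding referred to above.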
a biquadratic ) .since we have four roots for , there are eight roots in total for , of which only four are physical .therefore it is preferable to always work with since may be easily expressed in terms of as well .although the solutions of a quartic equation are well known , two of the roots correspond to the primary transit and two to the secondary .the correspondence of which roots relate to which contact points varies with an intricate dependency on the input parameters .unfortunately , no known rules or relations currently exist for this correspondence and we were unable to find a system either .consequently , there currently exists no single exact equation for the transit duration .we note that the problem of finding the duration of an eclipse is not a new one .an equivalent problem is considered in for the time taken for a body to move between primary and secondary transit . showed that an closed - form expression is possible by assuming .however , this assumption would be too erroneous for the purposes of finding the duration of a transit event . in order to avoid the quartic equation, we must make an approximation .a useful approximation we can make is that , i.e. the planet - star separation is approximately a constant value given by the planet - star separation at mid - transit .defining the transit impact parameter as , it may be shown that the difference between and is given by : in addition to ( 12 ) , we require . a good approximation would appear to be that , which is the true anomaly at mid - transit . is defined as the point where is a minimum .differentiating with respect to and solving for leads to a quartic expression again and thus an exact concise solution remains elusive but a good approximation is given by : by combining equations ( 1 ) & ( 3 ) with ( 12 ) & ( 13 ) , we are able to obtain a final expression for the duration , which we dub ( ` two - term ' ) . testing for the exact solutions for and provided precisely the correct transit duration for all , as expected .however , we found that using approximate entries for these terms severely limited the precision of the derived equation for large ( the results of numerical tests will be shown later in 4 ) .the source of the problem comes from equation ( 3 ) which consists of taking the difference between two terms . both terms are of comparable magnitude for large and thus we are obtaining a small term by taking the difference between two large terms .these kinds of expressions are very sensitive to slight errors . in our case ,the error is from using approximate entries for and . in this next section, we will consider possible ` one - term expressions ' which avoid the problem of taking the difference of two comparable - magnitude terms .there are numerous possible methods for finding ` one - term ' transit duration expressions .starting from equation ( 1 ) , we could consider using the same assumption which we used to derive the approximate true anomalistic duration , ; i.e. 
the planet - star separation does not change during the transit event .this would yield : where we have used equation ( 12 ) for .another derivation would be to assume the planet takes a tangential orbital velocity and constant orbital separation from the planet , sweeping out an arc of length .it is trivial to show that this argument will lead to precisely the same expression for .for a circular orbit , the task of finding the transit duration is greatly simplified due to the inherent symmetry of the problem and an exact , concise solution is possible , as first presented by ( smo03 ) . the physical origin of this expression can be seen as simply multiplying the reciprocal of the planet s tangential orbital velocity ( which is a constant for circular orbits ) , by the distance covered over the swept - out arc , . where we expand the first term as the orbital period divided by the orbital circumference , and the second term as the arc length .it may be shown that : it can be seen that our approximate expression for , presented in equation ( 12 ) is equivalent to equation ( 18 ) for circular orbits .it is also worth noting that both and can be shown to reduce down to for circular orbits . ( ts05 ) presented expressions for the duration of an eccentric transiting planet , which has been used by numerous authors since ( e.g. ford et al .2008 ; jordn & bakos 2008 ) .it is also forms the basis of a lightcurve parameter fitting set proposed by bakos et al .there are two critical assumptions made in the derivation of the ts05 formula .the first of these is that : * the planet - star separation , , is constant during the planetary transit event and equals this is the same assumption made in the derivation of the equation . under this assumption ,ts05 quote the following expression for ( changing to consistent notation ) . where ts05 define as the planet - star separation at the moment of mid - transit , as the planet s orbital velocity at the moment of mid - transit and as ` the eccentric angle between the first and last contacts of transit ' .in the standard notation , there is no such parameter defined strictly as the ` eccentric angle ' and thus we initially assumed that ts05 were referring to the eccentric anomaly .however , substiuting the revevant terms for and gives : by comparing ( 20 ) to equation ( 14 ) , it is clear ( also note equation ( 14 ) was derived under precisely the same assumptions as that assumed by ts05 at this stage of the derivation ) .we therefore conclude that the term ts05 refer to as ` eccentric angle ' infact refers to true anomaly .this is an important point to make because the derivation of the ts05 equation would otherwise be very difficult to understand by those working outside of the field .continueing the derivation from this point , the second assumption made by ts05 is : * the planet - star separation is much greater than the stellar radius , critically , this assumption was not made in the derivation of or . 
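For reference, the circular-orbit expression of the smo03 type is simple to evaluate; the sketch below uses my own notation (p the planet-to-star radius ratio, b the impact parameter, a_R the scaled semi-major axis) and gives the first-to-fourth and second-to-third contact durations, to which the one-term eccentric forms discussed here reduce as e → 0.

```python
# Minimal sketch (my notation) of the circular-orbit transit durations of the
# smo03 type.  p = Rp/R*, a_R = a/R*, and b = a_R*cos(i) is the impact parameter.
import numpy as np

def duration_circular(P, p, a_R, inc, contact="total"):
    b = a_R * np.cos(inc)
    k = (1.0 + p) if contact == "total" else (1.0 - p)   # 1st/4th vs 2nd/3rd contacts
    x = np.sqrt(k**2 - b**2) / (a_R * np.sin(inc))
    return (P / np.pi) * np.arcsin(x)

if __name__ == "__main__":
    P, p, a_R, inc = 3.5, 0.1, 8.8, np.radians(86.0)     # illustrative hot-Jupiter values
    print("T_total =", duration_circular(P, p, a_R, inc, "total"), "days")
    print("T_flat  =", duration_circular(P, p, a_R, inc, "flat"), "days")
```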
using this assumption , ts05 propose that ( replacing to remain consistent with the notations used in this work and replacing to refer to rather than the duration from contact points 1 to 4 ) : where ts05 use equation ( 22 ) rather than ( 21 ) in the final version of , ts05 effectively make a small - angle approximation for , which is a knock - on effect of assuming .we argue here that losing the function does not offer any great simplification of the transit duration equation but does lead to an unneccessary source of error in the resultant expression , in particular for close - in orbits , which is common for transits .we also note that even equation ( 21 ) exhibits differences to equation ( 12 ) .firstly , inside the function , the factor of is missing which is present in both the derivation we presented in equation ( 12 ) and the derivation of smo03 for circular orbits , equation ( 18 ) .the absence of this term can be understood as a result of the assumption . as , in order to maintain a transit event , we must have .secondly , the expression we presented for earlier in ( 12 ) has the factor of 2 present outside of the arcsin function , whereas ts05 have this factor inside the function .furthermore , the smo03 derivation also predicts that the factor of 2 should be outside of the function and this expression is known to be an exact solution for circular orbits . in a small angle approximation , , but we point out that moving the factor of 2 to within the function seems to serve no purpose except to invite further error into the expression . as a result of these differences , the ts05 expression for does not reduce down to the original smo03 equation and is given by : ( w10 ) proposed an expression for based on modification to the smo03 equation .the first change was to modify the impact parameter from , i.e. to allow for the altered planet - star separation for eccentric orbits .secondly , the altered planetary velocity should also be incorporated .w10 propose that a reasonable approximation for the transit duration is obtained by multiplying the smo03 expressions by the following ratio : }{\frac{\mathrm{d}x}{\mathrm{d}t}(f_c ) } = \frac{\varrho_c}{\sqrt{1-e^2}}\ ] ] where is given in and is the true anomaly at the centre of the transit .this yields a new transit duration equation of : ^{1/2}\bigg)\ ] ] firstly , we note an obvious improvement of the w10 expression is that we recover the original smo03 equation for .secondly , comparison to the equation for reveals that the two expressions are very similar except for the position of an extra term .indeed , the and expressions are equivalent in the small - angle approximation .insights into the robustness and accuracy of the various expressions may be obtained through numerical tests of the various approximate expressions .we here compare the accuracy of four expressions : , , and .these expressions depend only on five parameters , , , and .one of the clearest ways of visually comparing the equations is to consider a typical transiting exoplanet example with system parameters for and and vary the eccentricity parameters . 
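The point about where the factor of 2 sits can be made numerically: 2 arcsin(x) and arcsin(2x) agree only in the small-angle limit, and the discrepancy grows with the argument, which is largest for close-in planets. A minimal check, with illustrative values of the arcsin argument:

```python
# Sketch of the factor-of-2 argument: in the small-angle limit
# 2*arcsin(x) ~ arcsin(2x), but moving the factor inside the arcsin
# introduces a growing error for the larger arguments typical of
# close-in transiting planets.  Values of x below are illustrative.
import numpy as np

x = np.array([0.01, 0.05, 0.1, 0.2, 0.3, 0.4])
outside = 2.0 * np.arcsin(x)          # factor of 2 outside the arcsin
inside = np.arcsin(2.0 * x)           # factor of 2 inside the arcsin
for xi, a, c in zip(x, outside, inside):
    print(f"x = {xi:4.2f}   2*arcsin(x) = {a:.5f}   arcsin(2x) = {c:.5f}   frac. diff = {(c - a) / a:+.3%}")
```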
may be selected by simply assuming a star of solar density .we create a 1000 by 1000 grid of and values from -1 to 1 in equal steps .grid positions for hyperbolic orbits ( ) are excluded .we then calculate the transit duration through the exact solution of the quartic equation , , plus all four approximate formulae .we then calculate the fractional deviation of each equation from the true solution using : we then plot the loci of points for which the deviation is less than 1% ( i.e. ) . in figure 2 , we show four such plots for different choices of and .the plot reveals several interesting features : * consistently yields the largest loci .* is sometimes accurate and sometimes not , supporting the hypothesis that the approximation is not stable .* also yields consistently large loci .* consistently yields the smallest loci .we briefly discuss additional tests we performed for two sets of different hypothetical transiting exoplanet systems ; one for eccentricities and one for . in all cases, we randomly generated the system parameters weighted by the transit probability and calculated the deviation of the various formulae .we found that the expressions was consistently the most accurate , with the w10 of similar accuracy but higher assymmetry .we therefore find that the results yield a overall preference for the approximation .we note that authors using the formulation can also expect an extremely good approximation but for the remainder of this paper we will only consider using for the later derivations .we may define the ` improvement ' of the expression relative to the equation as : * 100\ ] ] where is measured in % .we can see that if ts05 gives a lower deviation ( i.e. more accurate solution ) , we will obtain % whereas if the candidate expression gives a closer solution we obtain % and is essentially the percentage improvement in accuracy . for the range , we find the mean value of this parameter is % and for the range we find % .we note that one caveat of these tests is that they are sensitive to the a priori inputs . for the case of kepler photometry, following the method of , we estimate that the typical measurement uncertainty on will be .1% in most cases .we find that is accurate to 0.1% or better over a range of and on average .smo03 showed that the to contact duration , , and the the contact duration , , may be used to derive , , and the stellar density , .we here consider how biased these retrieved parameters would be if we used the circular equations for an eccentric orbit . from here, we will employ the expression for the transit duration , as this equation has been shown to provide the greatest accuracy in the previous section . 
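A test of this kind is straightforward to reproduce in outline. The sketch below is my own reconstruction, not the paper's code: it computes an "exact" duration by solving Kepler's equation on a dense time grid and timing the interval during which the sky-projected separation is below unity on the near side of the orbit, and compares this against a one-term eccentric approximation with the structure favoured here. Parameter values, grid resolution and the 1% acceptance criterion are illustrative.

```python
# Sketch of an accuracy test over the (e*cos w, e*sin w) plane (my own
# reconstruction, illustrative parameters).  The "exact" duration comes from
# Kepler's equation on a dense time grid; its timing resolution is ~0.1% of
# a typical duration, well below the 1% threshold tested here.
import numpy as np

P, a_R, inc = 3.5, 8.8, np.radians(86.5)              # illustrative system

def true_anomaly(M, e):
    """Solve Kepler's equation E - e*sinE = M by Newton iteration, return f."""
    E = M + e * np.sin(M)
    for _ in range(30):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))

def exact_duration(e, w, n=50000):
    M = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    f = true_anomaly(M, e)
    r = a_R * (1.0 - e**2) / (1.0 + e * np.cos(f))
    S = r * np.sqrt(1.0 - np.sin(w + f)**2 * np.sin(inc)**2)
    front = np.sin(w + f) > 0.0                       # near side of the orbit
    return P * np.count_nonzero((S < 1.0) & front) / n

def approx_duration(e, w):
    rho_c = (1.0 - e**2) / (1.0 + e * np.sin(w))      # mid-transit separation / a
    b = a_R * rho_c * np.cos(inc)                     # eccentric impact parameter
    if b >= 1.0:
        return np.nan                                 # no centre-crossing transit
    arg = np.sqrt(1.0 - b**2) / (a_R * rho_c * np.sin(inc))
    if arg >= 1.0:
        return np.nan
    return (P / np.pi) * rho_c**2 / np.sqrt(1.0 - e**2) * np.arcsin(arg)

if __name__ == "__main__":
    ok, total = 0, 0
    for h in np.linspace(-0.9, 0.9, 21):              # h = e*cos(w)
        for k in np.linspace(-0.9, 0.9, 21):          # k = e*sin(w)
            e = np.hypot(h, k)
            if e >= 0.9:
                continue
            w = np.arctan2(k, h)
            T_ex, T_ap = exact_duration(e, w), approx_duration(e, w)
            if not (T_ex > 0.0) or np.isnan(T_ap):
                continue
            total += 1
            ok += abs(T_ap - T_ex) / T_ex < 0.01
    print(f"grid points with a transit: {total},  within 1%: {ok}")
```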
according , to smo03, the circular transit durations are given by : modification of the solution gives : using ( 28 ) and ( 29 ) , smo03 show that the impact parameter may be retrieved by using : ^ 2 = \frac{(1-p)^2 - \frac{\sin^2(t_f \pi / p)}{\sin^2(t_t \pi / p ) } ( 1+p)^2 } { 1-\frac{\sin^2(t_f \pi / p)}{\sin^2(t_t\pi / p ) } } \ ] ] using the same equations for an eccentric orbit gives : ^ 2 = 1 + p^2 + 2p \cdot \nonumber \\ & \bigg(\frac { \sin^2[\frac{\varrho_c^2}{\sqrt{1-e^2 } } \arcsin(\frac{\sqrt{(1-p)^2 - b^2}}{a_r \varrho_c \sin i } ) ] + \sin^2[\frac{\varrho_c^2}{\sqrt{1-e^2 } } \arcsin(\frac{\sqrt{(1+p)^2 - b^2}}{a_r \varrho_c \sin i } ) ] } { \sin^2[\frac{\varrho_c^2}{\sqrt{1-e^2 } } \arcsin(\frac{\sqrt{(1-p)^2 - b^2}}{a_r \varrho_c \sin i } ) ] - \sin^2[\frac{\varrho_c^2}{\sqrt{1-e^2 } } \arcsin(\frac{\sqrt{(1+p)^2 - b^2}}{a_r \varrho_c \sin i } ) ] } \bigg)\end{aligned}\ ] ] where it is understood that for terms on the right - hand side with in them , we are referring to the true impact parameter , .we plot this function in the case of , and in figure 3 . making small - angle approximations , this yields .however , for larger and values , the overall effect is to overestimate for eccentric orbits .is 0.5 but the introduction of eccentricity causes to be overestimated.__,width=8 ] in addition to the impact parameter , smo03 proposed that the parameter may be derived using : ^ 2 = \frac{(1+p)^2 - b_{\mathrm{derived}}^2}{\sin^2 ( t_t \pi / p ) } + b_{\mathrm{derived}}^2\ ] ] if we use the assumption , then this equation yields : ^ 2 = b^2 + \nonumber \\ & ( ( 1+p^2)-b^2 ) \csc^2\bigg[\frac{\varrho_c^2}{\sqrt{1-e^2 } } \arcsin\big(\frac{\sqrt{(1+p)^2-b^2}}{a_r \varrho_c \sin i}\big)\bigg]\end{aligned}\ ] ] with small - angle approximations , we have : is heavily biased by eccentricity . in this example, the true value of is 10 but the introduction of eccentricity causes to be underestimated.__,width=8 ] the term inside the square root goes to unity for circular orbits , as expected .the deviation in can be seen to become quite significant for eccentric orbits , as seen in figure 4 where the exact expression for ( 34 ) is plotted .this will have significant consequences for our next parameter , the stellar density .stellar density is related to by manipulation of kepler s laws : where the approximation is made using the assumption .we can therefore see that : ^{3/2 }\\ \rho_{*,\mathrm{derived } } & \simeq \rho _ * \psi = \bigg[\frac{(1+e \sin \omega)^3}{(1-e^2)^{3/2}}\bigg ] \rho_*\end{aligned}\ ] ] where in the second line we have assumed that .a series expansion of into first order of yields $ ] .so observers neglecting an eccentricity of may alter the stellar density by 30% . as an example , if we decreased the density of a solar type g2v star by 30% , the biased average stellar density would be more consistent with a star of spectral type k0v .indeed , asteroseismologically determined stellar densities of transiting systems could be used to infer .is heavily biased by eccentricity . in this example , the true value of is 1 but the introduction of eccentricity causes to be underestimated.__,width=8 ] this density bias , which is plotted in figure 5 , could be extremely crucial in the search for transiting planets .many discovery papers of new transiting planets have only sparse radial velocity data and usually no secondary eclipse measurement . 
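The density bias factor quoted above, Ψ = (1 + e sin ω)^3 / (1 - e^2)^{3/2}, is easy to tabulate; the following sketch evaluates its range over ω for a few eccentricities and compares with the first-order estimate 1 + 3 e sin ω.

```python
# Sketch of the stellar-density bias factor discussed here (my notation):
# psi = (1 + e*sin w)^3 / (1 - e^2)^(3/2), so a fit that wrongly assumes a
# circular orbit returns psi times the true density.  To first order in e
# this is ~ 1 + 3*e*sin(w).
import numpy as np

def density_bias(e, w):
    return (1.0 + e * np.sin(w))**3 / (1.0 - e**2)**1.5

for e in (0.05, 0.1, 0.3):
    w = np.linspace(0.0, 2.0 * np.pi, 721)
    psi = density_bias(e, w)
    print(f"e = {e:4.2f}:  bias range {psi.min():.3f} .. {psi.max():.3f}  "
          f"(first-order estimate 1 +/- {3 * e:.2f})")
```

For e = 0.1 the factor ranges over roughly 0.74 to 1.35, consistent with the ~30% figure cited above.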
as a result ,the uncertainity on the eccentricity is very large .critically , planets are often accepted or rejected as being genuine or not on the basis of this lightcurve derived stellar density . if the lightcurve derived stellar density is very different from the combination of stellar evolution and spectroscopic determination , these candidates are generally regarded as unphysical .this method of discriminating between genuine planets and blends , which may mimic such objects , was proposed by ( see 6.3 of smo03 ) but crucially only for circular orbits .since the typical upper limit on is around 0.1 in discovery papers , then the lightcurve derived stellar density also has a maximum possible error of % . in practice , the uncertainty on will result in a larger uncertainty in .typical procedure is to fix if the radial velocity data is quite poor , despite the fact the upper limit on . as a result, the posterior distribution of would be artificially narrow and erroneous if .we propose that global fits should allow to vary when analyzing radial velocity and transit photometry , as well as a fixed fit for comparison .this would allow the full range of possible eccentricities to be explored , which would result in a broader and more accurate distribution for , and critically .in the previous subsection we saw how using the circular expressions to derive and can lead to severe errors for even mildly eccentric systems .we here present expressions which will recover excellent approximations values for , and .the new equations are given by : }{\sin^2[(t_t \pi \sqrt{1-e^2})/(p \varrho_c^2 ) ] } ( 1+p)^2 } { 1-\frac{\sin^2[(t_f \pi \sqrt{1-e^2})/(p \varrho_c^2)]}{\sin^2[(t_t \pi \sqrt{1-e^2})/(p \varrho_c^2 ) ] } } \\ a_r^2 & = \frac{(1+p^2 ) - b^2}{\varrho_c^2 \sin^2[(t_t \pi \sqrt{1-e^2})/(p \varrho_c^2 ) ] } + \frac{b^2}{\varrho_c^2 } \\\rho _ { * } & = \frac{3 \pi}{g p^2 } a_r^3 - p^3 \rho_p\end{aligned}\ ] ] these expressions can be shown to reduce down to the original smo03 derivations if ( equations ( 7 ) & ( 8) of smo03 ) .the new stellar density parameter may be used with floating and values to correctly estimate the probability distribution of this critical parameter . in the previous subsection ,we have derived the physical parameters in terms of and .we may naively assume that this is interchangeable with expressions in terms of and .assuming the transit lightcurve is symmetric ( exactly valid for circular orbits and a very good approximation for eccentric orbits ) , the following relations between these two definitions exist : for the latter , for the trapezoid approximated lightcurve only .this is because is defined as when the sky - projection of the planet s centre is touching the stellar limb i.e. . at this point , the fraction of the planetary disc occulting the stellar disc is not equal to one half of the total in - transit occulted area .in contrast , we here define as the duration between the midway of the ingress to the midway of the egress .we can intuitionally see at the moment , less than half of the total area must be occulted and therefore .a further validation of this can be seen by writing down the equations for and and combining them using arcsin trigometric identities . for the simple case of a circular orbit, the resultant expression would give : ^{1/2 } [ 1- \nonumber \\ & ( 1-p^2 ) + b^2]^{1/2 } - [ ( 1-p)^2 - b^2]^{1/2 } [ 1- ( 1+p^2 ) + b^2]^{1/2}\bigg]\end{aligned}\ ] ] it may be shown that and thus . 
in the same manner as we derived , may also be written as a combination of the relevant arcsin functions . however , we can already see that such an expression will also be extremely laborious . this means that writing down the expressions for and is much more challenging than that for and , and we were unable to find an exact inversion relation . indeed , finding exact expressions for these inverse relations is unnecessary , as we may retrieve and using equations ( 40 ) and ( 41 ) . for a circular orbit and in the limit , the relative difference between and may be written as : notice how for this expression yields zero , which is expected since the planet is now of infinitesimal size . the denominator also reveals that the difference diverges rapidly for near - grazing transits , i.e. . the transit lightcurve is essentially described by four physical parameters , which form the parameter set \{ , , , } . however , fitting transit lightcurves with this parameter set is known to be highly inefficient due to large inter - parameter correlations , in particular between and . used a fisher analysis to show that for a symmetric lightcurve , which is approximated as a piece - wise linear model ( i.e. a trapezoid ) , a superior parameter set to fit for is given by \{ , , , } , where is the ingress or , equivalently , the egress duration assuming a symmetric lightcurve . in our case , these parameters become \{ , , , } . the authors reported that using this parameter set decreased the correlation lengths in a markov chain monte carlo ( mcmc ) fit by a factor of . in that analysis , the lightcurve is a symmetric trapezoid and therefore . as we have already seen , this is not true for a real lightcurve . this raises the ambiguity as to whether this fitting parameter should be or for real lightcurves . one advantage of the parameter is that it is independent of , whereas is not . with one fewer degree of freedom than , will always exhibit lower correlations and may be determined to lower uncertainty . this makes ideal for transit duration variation ( tdv ) studies , as pointed out by . however , as we saw in 5.3 , there presently exists no known expression for converting and into the physical parameter set which is used to actually generate a model lightcurve . when fitting a transit lightcurve , our hypothetical algorithm must make trial guesses for ` the fitting parameter set ' , which is then mapped into ` the physical parameter set ' . these physical parameters are then fed into a transit lightcurve model generator , allowing the goodness - of - fit between the trial model and the observations to be evaluated . this mapping procedure is unavoidable since the transit lightcurve is essentially generated by feeding the sky - projected planet - star separation , , as a function of time into a lightcurve generating code like that of . since ( equation 4 ) is a function of the physical parameter set , and not the fitting parameter set , the mapping between the two sets is a prerequisite for any lightcurve fitting algorithm .

unless an approximation is made that , there currently exists no expressions which perform this mapping proceedure for the fitting parameter set \{ , , , } ] .specifically , there currently exists no exact expression for and .therefore , \{ , , , } can not be used as a fitting parameter set unless we assume and use \{ , , , } fortunately , the consequences of making this assumption will not be severe , in most cases .this is because the trial fitting parameters serve only one function - to produce trial physical parameters .these trial physical parameters may be slightly offset from the exact mapping but this is not particularly crippling since the model lightcurve is still generated exactly based upon these trial physical parameters .the only negative consequence of using this method is that an additional correlation has been introduced into the fitting algorithm since the offset between and will be a function of and .this correlation will be largest for near - grazing transits since equation ( 47 ) tends towards as .in general , we wish to avoid such correlations as much as possible to allow the algorithm to most efficiently explore the parameter space . [ [ fitting - parameter - sets - t_c - g_1-w_1-a_1 ] ] fitting parameter sets : \{ , , , } ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ for a trapezoid approximated lightcurve , showed that the fitting parameter correlations are decreased further by using the parameter set \{ , , , } , where is the area of the trapezoid - approximated lightcurve , and is the gradient of the ingress / egress ( note we have changed the original notation from to to avoid confusion with equation 4 ) and lacked a physical interpretation ] .the area of the trapezoid lightcurve is given by where .the gradient of a trapezoid slope is given by .since then both and may be written as a function of and only , thus obviating the use of and the associated issues discussed in the previous subsection . in order to proceed , a mapping from \{ , , } \{ , , } is required for accomplishing this goal .the exact solutions for , & may be found by solving the quartic equation discussed in 2 .however , the roots of this equation yields , & whereas we need the inverse relations . 
since no concise analytic solution for the inverse relations currently exists , they would have to be calculated through numerical iteration , but such a process would need to be repeated for every single trial , leading to vastly greater computation time for a fitting algorithm . it is critical to understand that using equations for a circular orbit or an approximate eccentric expression of poor accuracy will cause fitting algorithms to wander into unphysical solutions and/or increase inter - parameter correlations for planets which are eccentric , near - grazing , very close - in , etc . it is therefore imperative to use a mapping which is as accurate as possible in order to have a robust fitting algorithm . the mappings to convert the trial \{ , , } into the physical parameters \{ , , } are given by equations ( 40 ) and ( 41 ) combined with the following replacements : we note that the favoured parameter set derived by assumed a symmetric lightcurve , which is not strictly true for . however , for an eccentric orbit , the degree of asymmetry between the ingress and egress is known to be very small ( k08 , w10 ) and thus may be neglected for the purposes of choosing an ideal fitting parameter set . fitting parameter sets : \{ , , , } the two parameter sets proposed so far have both included . since we know is a function of but is not , any parameter set using will likely exhibit larger correlations since there is an extra parameter dependency . however , a parameter set based upon would have to satisfy the condition that it can be inverse mapped into the physical parameters . , originally defined by , can be seen to be the reciprocal of one half of the transit duration as defined by ts05 . unlike for the \{ , , , } parameter set , we do not need to assume to produce an inverse mapping . by using and , an exact inverse mapping to the physical parameters is possible , which offers significant advantages . having satisfied the criteria of being both a decorrelated parameter and invertible , would appear to be an excellent candidate for lightcurve fitting . a further improvement to this parameter is possible by using the new approximation for the transit duration . let us define : despite the analytic arguments made so far , the question of which fitting parameter set to employ is most clearly resolved through numerical simulations . this may be done by considering an example system , generating a lightcurve , adding noise and then refitting using an mcmc routine which outputs the inter - parameter correlations . we note that analytic expressions for the covariances may be found through fisher information analysis , via the calculation of the relevant partial derivatives . however , the equations describing the lightcurve , as given by , are quite elaborate and such an analysis remains outside the scope of this paper . currently , there exists no exact fisher information analysis in the literature to draw upon . avoided this problem by making a trapezoid approximation of the lightcurve and then implementing a fisher analysis . as discussed earlier , this requires that we assume , which in itself introduces a host of correlations which would be missed by the fisher analysis methodology .
therefore , exact numerical testing provides a useful alternative to avoid these issues .first , we consider a super - hot jupiter on a circular orbit with a planet - star separation of from a sun - like star ( ) .we choose to consider a near - grazing transit with corresponding to an orbital inclination of .the lightcurve is generated using the algorithm with no limb - darkening and gaussian noise over a 30 second cadence .the lightcurve is then passed onto a mcmc fitting algorithm where we try several different parameter sets : * \{ , , , } : the physical parameter set . *\{ , , , } : a suggested lightcurve fitting parameter by , based upon the ts05 duration expressions . *\{ , , , } : a modified form of the fitting parameter by , accounting for the improved approximate expression for .* \{ , , , } : a suggested set by , where and are calculated using the expressions presented in this paper .* \{ , , , } : a second suggested set by , where , and are calculated using the expressions presented in this paper . in the mcmc runs , we set the jump sizes to be equal to - uncertainties from a preliminary short - run .we then start the mcmc from 5- s away from the solution for each parameter , and use 500,000 trials with a 100,000 burn - in time .we then compute the cross - correlations between the various parameters in trials which are within of ( errors rescaled such that equals number of data points minus the degrees of freedom ) .we calculate the inter - parameter correlations and construct correlation matrices for each parameter fitting set . as an example , the correlation matrix for the parameter set is given by : we then calculate the semi - principal axes of correlation ellipsoid by diagnolizing the matrices . for a completely optimal parameter set, this diagnolized matrix would be the identity matrix .we quantify the departure of each proposed parameter set from the identity matrix by calculating where is the diagnolized correlation matrix .we display the results in upper half of table 1 .._for each proposed lightcurve fitting parameter set ( left column ) , we calculate the inter - parameter correlation matrices in the examples of i ) a hypothetical near - grazing hot - jupiter on a circular orbit ii ) a system similar to the eccentric planet hd 80606b .we diagnolize the correlation matrices to give and then quantify the departure from a perfectly optimal parameter set ( right column ) , where it is understood that 0 corresponds to optimal and larger values correspond to greater inter - parameter correlations . _ [ cols="^,^ " , ]
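The correlation diagnostic can be sketched as follows; the departure metric used here (the summed absolute deviation of the eigenvalues of the correlation matrix from unity) is an assumption on my part, chosen so that a perfectly decorrelated parameter set scores zero.

```python
# Minimal sketch of the correlation diagnostic described here (assumed
# metric): build the inter-parameter correlation matrix from MCMC samples,
# diagonalize it, and measure the departure of the eigenvalue spectrum from
# the identity matrix that a perfectly decorrelated parameter set would give.
import numpy as np

def correlation_departure(samples):
    """samples: (n_steps, n_params) array of post burn-in MCMC trials."""
    C = np.corrcoef(samples, rowvar=False)            # correlation matrix
    eigvals = np.linalg.eigvalsh(C)                   # diagonalized form
    return np.sum(np.abs(eigvals - 1.0))              # 0 for an optimal set

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 50000
    x = rng.normal(size=n)
    correlated = np.column_stack([x, 0.9 * x + 0.44 * rng.normal(size=n),
                                  rng.normal(size=n), rng.normal(size=n)])
    decorrelated = rng.normal(size=(n, 4))
    print("correlated set  :", correlation_departure(correlated))
    print("decorrelated set:", correlation_departure(decorrelated))
```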
in this work , we investigate the accuracy of various approximate expressions for the transit duration of a detached binary against the exact solution , found through solving a quartic equation . additionally , a new concise approximation is derived , which offers more accurate results than those currently in the literature . numerical simulations are performed to test the accuracy of the various expressions . we find that our proposed expression yields a % improvement in accuracy relative to the most commonly employed previous expression . we derive a new set of equations for retrieving the lightcurve parameters and consider the effect of falsely using circular expressions for eccentric orbits , with particularly important consequences for transit surveys . the new expression also allows us to propose a new lightcurve fitting parameter set , which minimizes the mutual correlations and thus improves computational efficiency . the equation is also readily differentiated to provide analytic expressions for the transit duration variation ( tdv ) due to secular variations in the system parameters , for example due to apsidal precession induced by perturbing planets . keywords : techniques : photometric ; planetary systems ; eclipses ; methods : analytical ; celestial mechanics
climate archives from the north atlantic region show repeated shifts in glacial climate , the dansgaard - oeschger ( do ) events . during marine isotope stages 2 and 3 the intervals between the events exhibit a tendency to coincide approximately with multiples of 1470 years , as depicted in figure 1 .the statistical significance of this pattern and the responsible mechanism , however , is still a matter of debate .several hypotheses were proposed to explain the timing of do events .one of these relates the events to two century - scale solar cycles with periods close to 1470/7 ( = 210 ) and 1470/17 ( ) years , the so - called de vries / suess and gleissberg cycles .support for a leading solar role comes from deep - sea sediments , which indicate that during the holocene century - scale solar variability was a main driver of multi - centennial scale climate changes in the north atlantic region .recently the phase relation between solar variability ( deduced from ) and 14 do events was analyzed .a relation far from fixed was found and was interpreted as being in contradiction to braun et al.s hypothesis . while in linear systems a constant phase relation between the forcing and the response is expected , such a relation does not necessarily exist in non - linear systems .but climate records and ocean - atmosphere models , which are not yet suitable for statistical analyses on do events because of their large computational cost , suggest that the events represent switches between two climate states , consequently implying an intrinsically non - linear dynamical scenario . thus , to interpret the reported lack of a fixed phase relation betweenthe do events and solar proxies , it is crucial to analyze their phase relation in simple models .here we investigate this phase relationship in a very simple model of do events .a comprehensive description of this model has already been published before , including a detailed discussion of its geophysical motivation and its applicability , as well as a comparison with a much more detailed ocean - atmosphere model . in the simple model , which was derived from the dynamics of the events in that ocean - atmosphere model ,do events represent repeated switches between two possible states of operation of a bistable , excitable system with a threshold ( figure 2 ) .these states correspond to cold and warm periods of the north atlantic region during do cycles .the switches are assumed to occur each time the forcing function ( f ) , which mimics the solar role in driving do events , crosses the threshold ( t ) .transitions between the two states are accompanied by an overshooting of the threshold , after which the system relaxes back to its respective equilibrium following a millennial - scale relaxation .the rules for the transitions between both states are illustrated in figure 2 .it is assumed that the threshold function is positive in the interstadial ( `` warm '' ) state and negative in the stadial ( `` cold '' ) state . a switch from the stadial state to the interstadial oneis triggered when the forcing is smaller than the threshold function , i.e. when .the opposite switch occurs when . during the switches a discontinuity in the threshold functionis assumed , i.e. 
t overshoots and takes a non - equilibrium value ( during the shift into the stadial state , during the opposite shift ) .afterwards , t approaches its new equilibrium value ( in the stadial state , in the interstadial state ) following a millennial scale relaxation process : here , and represent the relaxation time in the stadial state ( s=0 ) and in the interstadial state ( s=1 ) , respectively .both the overshooting relaxation assumption and the transition rules in our simple model are a first order approximation of the dynamics of do events in a coupled ocean - atmosphere model . in that modelthe events also represent threshold - like switches in a system with two possible states of operation ( corresponding to two fundamentally different modes of deep water formation in the north atlantic ) and with an overshooting in the stability of the system during these shifts .analogous to the simple model , switches from the stadial mode into the interstadial one are triggered by sufficiently large negative forcing anomalies ( i.e. by a reduction in the surface freshwater flux to the north atlantic that exceeds a certain threshold value ) , whereas the opposite shifts are triggered by sufficiently large positive forcing anomalies ( i.e. by an increase in the freshwater flux that exceeds a certain threshold value ) .it has further been demonstrated that the simple model is able to reproduce the timing of do events as simulated with the ocean - atmosphere model , as well as the occurrence of non - linear resonance phenomena such as stochastic resonance and ghost resonance , which were shown to be properties exhibited by that model .an obvious advantage of the conceptual model compared with the ocean - atmosphere model is its low computational cost , which allows for extensive statistical analyses on the timing of do events .all model - parameter values chosen here are the same as in two earlier publications ( a msv , a msv , b msv , b msv , years , years ; 1 msv = 1 milli - sverdrup = m/s ) . : `` warm '' state ) .a switch from the cold to the warm state is triggered when , which happens in this example at time . during the transition , interpreted as the start of a do event , overshoots and relaxes back towards its new equilibrium ( ) following a millennial time scale .the events are terminated by a switch back to the cold state , which is triggered when ( at time in the figure ) .again , t overshoots and approaches its new equilibrium ( ) . ]to test the assumption of a fixed phase relationship between solar - forced do events and solar variability , we here drive our model by a simple input consisting of noise and of two sinusoidal cycles with equal amplitudes a : + \cos[\frac{2 \pi t}{t_2 } ] ) + \sigma \cdot n(t).}\ ] ] is the standard deviation of the noise and the standard unit variance white noise , with a cutoff frequency of 1/50 years ( figure 3 ) .following the cutoff is used to account for the fact that the model shows an unrealistically large sensitivity to decadal - scale or faster forcing . in analogy to the periods of the two cycles are chosen to be = 1470/7 ( = 210 ) years and = 1470/17 ( .5 ) years , i.e. close to the leading spectral components of the solar de vries and gleissberg cycles . in the simulations shown in figure 4 we use three different signal - to - noise ratios ( snr ) : a = 8 msv and = 5.5 msv ( ) , a = 5 msv and = 8 msv ( ) , a = 3 msv and = 9 msv ( ) . 
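A minimal implementation of this conceptual model is given below. The exponential form of the overshoot relaxation and all numerical values (equilibrium and overshoot thresholds, relaxation times, forcing amplitude, noise level and its 50-year block correlation) are illustrative stand-ins rather than the calibrated values of the original model.

```python
# A minimal sketch of the conceptual threshold model described above.
# All numerical values are illustrative stand-ins, not the calibrated
# parameters of the original studies; forcing units are nominally mSv.
import numpy as np

rng = np.random.default_rng(0)
dt, t_end = 1.0, 200000.0                 # years
T1, T2 = 210.0, 86.5                      # forcing periods (years)
A, sigma = 8.0, 5.5                       # forcing amplitude and noise level
A0, A1 = -18.0, 18.0                      # equilibrium thresholds: stadial / interstadial
B0, B1 = -48.0, 48.0                      # overshoot values taken right after a switch
tau0, tau1 = 1200.0, 800.0                # relaxation times (years)

t = np.arange(0.0, t_end, dt)
noise = np.repeat(rng.normal(0.0, sigma, size=t.size // 50 + 1), 50)[: t.size]  # ~50-yr cutoff
forcing = A * (np.cos(2 * np.pi * t / T1) + np.cos(2 * np.pi * t / T2)) + noise

state, t_switch, events = 0, 0.0, []      # start in the stadial ("cold") state
for i, ti in enumerate(t):
    if state == 0:
        thr = A0 + (B0 - A0) * np.exp(-(ti - t_switch) / tau0)
        if forcing[i] < thr:              # DO event onset: switch to interstadial
            state, t_switch = 1, ti
            events.append(ti)
    else:
        thr = A1 + (B1 - A1) * np.exp(-(ti - t_switch) / tau1)
        if forcing[i] > thr:              # termination: switch back to stadial
            state, t_switch = 0, ti

if len(events) > 1:
    waits = np.diff(events)
    print(f"{len(events)} simulated events, mean waiting time {waits.mean():.0f} yr")
```

A histogram of the resulting waiting times (e.g. with numpy.histogram) reproduces the kind of multi-peaked distribution, clustered near preferred values, that is discussed next.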
for all of these, the waiting time distribution of the simulated events is centered around a value of 1470 years , with several peaks of only decadal - scale width ( figure 4 ) .the relative position of these peaks is well understood in the context of the ghost resonance theory .the peaks result from constructive interference between the two sinusoidal forcing cycles which produces particularly large magnitude variations in the bi - sinusoidal forcing and when noise is added leads to favored transitions at the corresponding waiting times .depending on the relative amplitude values of the noise and the periodic forcing this synchronization is more or less efficient , as is seen for the different signal - to - noise ratios in figure 4 . even for the lowest ratio , however , the synchronization is still notable .the waiting time distributions shown in figure 4 are thus almost symmetrically centered around a preferred value of 1470 years because the sinusoidal cycles enter in phase every 1470 years , creating forcing peaks of particularly large magnitude .this 1470-year repeated coincidence of the bi - sinusoidal forcing , however , does not show up as a corresponding forcing frequency , since no sinusoidal cycle with that period is present .thus , when linear spectral analysis is performed on the forcing , only the two century - scale sinusoidal cycles are detected as outstanding components .msv ) and white noise ( msv ) .( a ) total forcing f ( grey ) and threshold function t ( black ) .( b ) forcing components ( grey ) ; from top to bottom : 210-year cycle , 86.5-year cycle , noise .dashed lines indicate the onset of the simulated do events . despite the tendency of the three events to recur approximately every 1470 years, only the first two events coincide with minima of the 210-year cycle .the third event , in contrast , occurs closely after a maximum of that cycle . ] despite the robustness of the synchronization effect , none of the two sinusoidal cycles in our forcing shows a fixed phase relationship with _ all _ of the simulated do events , due to the presence of noise and the existence of a threshold . in our model , a fixed phase relation can only be present in the low noise limit ( i.e. either for [ with a supra - threshold bi - sinusoidal forcing ] or for the lowest noise level that still enables repeated threshold crossings [ with a sub - threshold bi - sinusoidal forcing ] , thus corresponding to do events with extremely long waiting times ) , compare third column in figure 4 . even for the largest of the three signal - to - noise ratios in our simulations , the events thus only show a tendency to cluster around a preferred phase of the two sinusoidal cycles .outliers , however , can still occur in almost opposite phase , at least once over a sufficiently large number of events in the simulation .for the highest signal - to - noise ratio , for example , there is still a probability of about 35 percent to find at least one out of 14 events in opposite phase ( i.e. outside of the interval $ ] ) . and for the other two signal - to - noise ratios , the corresponding probability is even much higher ( i.e. 
92 percent and 99.5 percent , respectively ) .since a fixed phase relationship between the simulated events and the forcing cycles does not even exist in our very simple model system , it appears unrealistic to us to assume the existence of such a relationship in the climate system .thus , the reported lack of a fixed phase relationship between do events and solar variability ( deduced from ) would also be expected with the proposed ghost resonance solar forcing .we note that superimposed epoch analyses of 14 simulated events can produce forcing - response relations similar to the one reported by : the onset of the superimposed events ( at in the fourth column in figure 4 ) typically coincides with a minimum in the averaged bi - sinusoidal forcing which , however , is not more pronounced than other minima and is highly damped as compared with the unaveraged forcing .a considerable statistical spread exists in the magnitude of this damping because the small number ( 14 ) of events is not yet sufficient to infer reliable information concerning the average phase fluctuation between the input and the output .this lack of phase correlation between forcing and response is explained by the threshold character of do events : the simulated events are triggered when the total forcing ( the sum of the two sinusoidal cycles and the noise ) crosses the threshold function .some of the threshold - crossings are in the first place caused by constructive interference of both cycles .these events coincide with near - minima of the two forcing cycles .other threshold - crossings are , however , in the first place caused by constructive interference of just one cycle and noise .these events thus coincide with a near - minimum of only that cycle ( compare figure 3 ) , whereas a fixed phase - relation with the second cycle does not necessarily exist . and at least for low signal - to - noise ratios , some of the threshold crossings are in the first place caused by the noise alone .these events thus do not show a fixed phase relation with any of the two forcing cycles .the inherently nonlinear noisy synchronization mechanism exhibited by our model is not unique to do events .in fact , it has originally been proposed to explain the perception of the pitch of complex sounds and , as a general concept , has already been used to describe theoretically and experimentally similar dynamics in other excitable or multi - stable systems with thresholds , e.g. in lasers . because of the fact that the leading output frequency is absent in the input , this type of resonance is called ghost stochastic resonance .we here used a simple model of do events , driven by a bi - sinusoidal forcing plus noise , to show that a fixed phase relation between the forcing cycles and _ all _ simulated events does not exist , apart from the unrealistic low noise limit .as argued above , in this model the fluctuations in the phases between the forcing and the response are related to the process giving rise to the transition itself .each event is generated by a threshold crossing resulting from a cooperative process between the two periodic driving forces ( i.e. , the centennial - scale input cycles ) and the stochastic fluctuations . in this nonlinear scenario , as we showed explicitly in our simulations , millennial - scale events with fixed input - output phase relations are impossible for any nonzero noise amplitude . 
while one could disagree on the interpretation and the statistical significance of the pattern described in figure 1 ,our results show that the reported lack of a fixed phase relationship between 14 do events and solar proxies is consistent with the suggested solar role in synchronizing do events . at the same time our results have further implications for a second so - far unexplained oscillation during pleistocene climate , i.e. the glacial - interglacial cycles , which also show strong indications for the existence of threshold - like dynamics during glacial terminations : since the existence of a causal relation between threshold - crossing events and their quasi - periodic forcing does not necessarily imply the existence of a clear phase relation over _ all _ events , the lack of such a relation between glacial terminations on one hand and the orbital eccentricity and precession cycles on the other hand is not sufficient to reject a leading role of these cycles during terminations , in contrast to the interpretation proposed by .more insight in the cause of pleistocene climate cycles might thus be gained from more adequate statistical approaches , based e.g. on monte - carlo simulations with simple models that mimic the nonlinear dynamics which seems to be relevant during these oscillations .thanks a. ganopolski for instructive discussions on the dynamics of do events and a. svensson for his hospitality at the niels bohr institute .h. b. was funded by deutsche forschungsgemeinschaft ( project ma 821/33 ) .andersen , k. , a. svensson , s. j. johnsen , s. o. rasmussen , m. bigler , r. rthlisberger , u. ruth , m .-siggaard - andersen , j. p. steffensen , d. dahl - jensen , b. m. vinther , and h. b. clausen ( 2006 ) , the greenland ice core chronology 2005 , 15 - 42 ka .part 1 : constructing the time scale ._ , _ 25 _ , 32463257 .bond , g. ; b. kromer , j. beer , r. muscheler , m. n. evans , w. showers , s. hoffmann , r. lotti - bond , i. hajdas , and g. bonani ( 2001 ) , persistent solar influence on north atlantic climate during the holocene , _ science _ , _ 294 _ , 21302136 .braun , h. , m. christl , s. rahmstorf , a. ganopolski , a. mangini , c. kubatzki , k. roth , and b. kromer ( 2005 ) , possible solar origin of the 1,470-year glacial climate cycle demonstrated in a coupled model , _ nature _ , _ 438 _ , 208211 .wagner , g. , j. beer , j. masarik , r. muscheler , p. w. kubik , w. mende , c. laj , g. m. raisbeck , and f. yiou ( 2001 ) , presence of the solar de vries cycle ( years ) during the last ice age , _ geophys ._ , _ 28 _ , 303306
north atlantic climate during glacial times was characterized by large - amplitude switchings , the dansgaard - oeschger ( do ) events , with an apparent tendency to recur preferably in multiples of about 1470 years . recent work interpreted these intervals as resulting from a subharmonic response of a highly nonlinear system to quasi - periodic solar forcing plus noise . this hypothesis was challenged as inconsistent with the observed variability in the phase relation between proxies of solar activity and greenland climate . here we reject the claim of inconsistency by showing that this phase variability is a robust , generic feature of the nonlinear dynamics of do events , as described by a model . this variability is expected from the fact that the events are threshold crossing events , resulting from a cooperative process between the periodic forcing and the noise . this process produces a fluctuating phase relation with the periodic forcing , consistent with proxies of solar activity and greenland climate .
the statistical analysis of precipitation data is an interesting problem of major environmental importance . of particular interestare extreme events of rainfall , which can lead to floodings if a given threshold is exceeded . from a mathematical and statistical point of view, it is natural to apply extreme value statistics to measured rainfall data , but it is not so clear which class of the known extreme value statistics , if any , is applicable , and how the results differ from one spatial location to another . another interesting quantity to look at is the waiting time between rainy days .extreme events of waiting times in this context correspond to droughts .so an interesting question is what type of drought statistics is implied by the observed distribution of waiting times between rainfall periods if this is extrapolated to very long waiting times .in this paper we present a systematic investigation of the probability distributions of the amount of daily rainfall at variety of different locations on the earth , and of waiting time distributions on a scale of days and hours .our main result is that the observed distributions of the amount of rainfall are well - fitted by -exponentials , rather than exponentials , which suggests that techniques borrowed from nonextensive statistical mechanics and superstatistics could be potentially useful to better understand the rainfall statistics . an entropic exponent of is typically observed .in fact , based on the fact that -exponentials asymptotically decay with a power law , we discuss the corresponding extreme value statistics that is highly relevant in the context of floodings produced by extreme rainfall events .we also investigate the waiting time distribution between rainy events , which is much better described by an exponential , although an entropic exponent close to 1 such as seems to give the best fits .we discuss possible dynamical reasons for the occurrence of -exponentials in this context .one possible reason could be superstatistical fluctuations of a variance parameter or a rate parameter .let us explain this a bit further .superstatistical techniques have been discussed in many papers and they represent a powerful method to model and/or analyse complex systems with two ( or more ) clearly separated time scales in the dynamics . 
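A q-exponential fit of the kind described can be sketched as follows; since no data file accompanies the text, the daily amounts below are synthetic stand-ins, and with real records the array `amounts` would simply hold the measured daily totals in mm.

```python
# Sketch of fitting a q-exponential to a daily-rainfall histogram.
# The synthetic 'amounts' below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def q_exponential_pdf(x, q, x0):
    """Unnormalised q-exponential shape: [1 + (q-1) x / x0]^(-1/(q-1))."""
    return (1.0 + (q - 1.0) * x / x0) ** (-1.0 / (q - 1.0))

rng = np.random.default_rng(3)
amounts = rng.gamma(shape=0.9, scale=6.0, size=20000) * rng.lognormal(0.0, 0.6, size=20000)

hist, edges = np.histogram(amounts, bins=60, range=(0.0, 80.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
popt, _ = curve_fit(lambda x, q, x0, c: c * q_exponential_pdf(x, q, x0),
                    centres[mask], hist[mask], p0=(1.2, 5.0, 0.1),
                    bounds=([1.0001, 0.1, 0.0], [2.0, 50.0, 10.0]))
print(f"fitted q = {popt[0]:.3f}, scale x0 = {popt[1]:.2f} mm")
```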
the basic idea is to consider for the theoretical modelling a superposition of many systems of statistical mechanics in local equilibrium , each with its own local variance parameter , and finally perform an average over the fluctuating .the probability density of is denoted by .most generally , the parameter can be any system parameter that exhibits large - scale fluctuations , such as energy dissipation in a turbulent flow , or volatility in financial markets .another possibility is to regard as the rate parameter of a local poisson process , as done , for example , in .ultimately all expectation values relevant for the complex system under consideration are averaged over the distribution .many applications have been described in the past , including modelling the statistics of classical turbulent flow , quantum turbulence , space - time granularity , stock price changes , wind velocity fluctuations , sea level fluctuations , infection pathways of a virus , migration trajectories of tumour cells , and much more .superstatistical systems , when integrated over the fluctuating parameter , are effectively described by more general versions of statistical mechanics , where formally the boltzmann - gibbs entropy is replaced by more general entropy measures .the concept can also be generalized to general dynamical systems with slowly varying system parameters , see for some recent rigorous results in this direction .our main goal in this paper is to better understand the extreme event statistics of rainfall at various example locations on the earth. we will start with a careful analysis of experimentally recorded time series of the amount of rainfall measured at a given location , whose probability density is highly relevant to model the corresponding extreme event statistics .ultimately of course all this rainfall dynamics can be formally regarded as being produced by a highly nonlinear and high - dimensional deterministic dynamical system in a chaotic state , producing the occasional rainfall event , hence it is useful to keep in mind the basic results of extreme event statistics for weakly correlated events as generated by mixing dynamical systems .recently there has been much activity on the rigorous application of extreme values theory to deterministic dynamical systems and also to stochastically perturbed ones .a remarkable feature of the dynamical system approach is that there exist some correlations between events , and hence the extreme value theory used to tackle it must account for this correlation going beyond a theory that is just based on sequences of events that are statistically independent . in the superstatistics approach , correlations are also present , due to the fact that parameter changes take place on long time scales , but the relaxation time of the system is short as compared to the time scale of these parameter changes , so that local equilibrium is quickly reached . what is worked out in this paper is a comparison with experimentally measured rainfall data , to decide which extreme event statistics should be most plausibly applied to various questions related to amount of rainfall and waiting times between rainfall events .extreme value theory quite generally tells us ( under suitable asymptotic independence assumptions ) that there are only three possible limit distributions , namely , the gumbel , frchet and weibull distribution . 
but are these assumptions of near - independence satisfied for rainfall data , and if yes , which of the above three classes are relevant ? this is the subject of this paper. we will also discuss simple deterministic dynamical system models that generate superstatistical processes in this context .the paper is organized as follows . in sectionii we present histograms of rainfall statistics , extracted from experimentally measured time series of rainfall at various locations on the earth .what is seen is that the probability density of amount of rainfall is very well fitted by -exponentials .we discuss the generalized statistical mechanics foundations of this based on nonextensive statistical mechanics with entropic index , with . in section iiiwe look at waiting time distributions ( on a daily and hourly scale ) between rainy episodes . these are observed to be close to exponential functions , similar as for the poisson process .however , a careful analysis shows that a slightly better fit is again given by -exponentials , but this time with much closer to 1 .a simple superstatistical model for this is discussed in section iv , a poisson process that has a rate parameter that fluctuates in a superstatistical way .we review standard extreme event statistics in section v and then , in section vi , based on the measured experimental results of rainfall statistics , we develop the corresponding extreme value statistics . in sectionvii we analyse the ambiguities that arise for the extreme event statistics of waiting times , depending on whether we assume the waiting time distribution is either an exact exponential or a slightly deformed -exponential as produced by superstatistical fluctuations .finally , in section viii we describe a dynamical systems approach to superstatistics .we performed a systematic investigation of time series of rainfall data for 8 different example locations on the earth ( figs . 1 - 8 ) .the data are from various publicly available web sites .when doing a histogram of the amount of daily rainfall observed , a surprising feature arises .all distributions are power law rather than exponential .they are well fitted by so - called -exponentials , functions of the form which are well - motivated by generalized versions of statistical mechanics relevant for systems with long - range interactions , temperature fluctuations and multifractal phase space structure .of course the ordinary exponential is recovered for . whereas the data of most locations are well fitted by , central england and vancouverhave somewhat lower values of closer to 1.13 .one may speculate what the reason for this power law is .nevertheless , the formalism of nonextensive statistical mechanics is designed to describe complex systems with spatial or temporal long - range interactions , and -exponentials occur in this formalism as generalized canonical distributions that maximize -entropy where the are the probabilities of the microstates .ordinary statistical mechanics is recovered in the limit , where the -entropy reduces to the shannon entropy the generalized canonical distributions maximize the -entropy subject to suitable constraints . in our case the constraint is given by the average amount of daily rainfall at a given location .the way rainfall is produced is indeed influenced by highly complex weather systems and condensation processes in clouds , so one may speculate that more general versions of statistical mechanics could be relevant as an effective description . 
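As a concrete illustration of the fitting procedure described above, the short Python sketch below generates a synthetic daily-rainfall sample from a q-exponential law and recovers the entropic index by fitting the Tsallis form c·[1+(q−1)bx]^{1/(1−q)} to the empirical histogram. The sample size, bin layout and the values q = 1.3 and b = 0.4 are illustrative assumptions, not the station data analysed in this paper; with real records, the rainfall series would simply replace the synthetic array.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(x, q, b, c):
    """Tsallis form c * e_q(-b x) = c * [1 + (q - 1) b x]**(1 / (1 - q)), q > 1."""
    return c * (1.0 + (q - 1.0) * b * x) ** (1.0 / (1.0 - q))

# Synthetic stand-in for a daily-rainfall series (mm/day); real station data would
# simply replace this array.  Inverse-transform sampling of the q-exponential
# survival function S(x) = [1 + (q - 1) b x]**(-(2 - q)/(q - 1)), valid for 1 < q < 2.
rng = np.random.default_rng(0)
q_true, b_true, n_days = 1.3, 0.4, 20000
u = rng.random(n_days)
rain = (u ** (-(q_true - 1.0) / (2.0 - q_true)) - 1.0) / ((q_true - 1.0) * b_true)

# Density-normalised histogram of the amount of rainfall, as in the paper's figures.
counts, edges = np.histogram(rain, bins=80, range=(0.0, 60.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0

# Fit the q-exponential to the empirical density and report the entropic index.
popt, _ = curve_fit(q_exponential, centres[mask], counts[mask],
                    p0=(1.2, 0.3, counts[0]),
                    bounds=([1.0001, 1e-4, 1e-6], [1.99, 10.0, 10.0]))
print(f"fitted entropic index q = {popt[0]:.3f}, scale parameter b = {popt[1]:.3f}")
# On a log-log plot the fitted curve decays asymptotically as a power law with
# exponent 1/(1-q), which is what distinguishes it from an ordinary exponential.
```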
also for hydrodynamic turbulent systems and pattern forming systems these generalized statistical mechanics methods have previously been shown to yield a good effective description .the amount of rain falling on a given day is a complicated spatio - temporal stochastic process with intrinsic correlations , as rainy weather often has a tendency to persist for a while , both spatially and temporally .the actual value of for the observed rainfall statistics reflects characteristic effective properties in the climate and temporal precipitation pattern at the given location . for temperature distributions at the same locations as in fig.1 - 8 ,see .-exponential .the parameter is fitted for the various locations . for central england , . ] .another interesting observable that we extracted from the data is the waiting time distribution between rainy episodes .we did this both for a time scale of days and a time scale of hours .a given day is marked as rainy if it rains for some time during that day .the waiting time is then the number of days one has to wait until it rains again , this is a random variable with a given distribution which we can extract from the data .results for the waiting time distributions are shown in fig .what one observes here is that the distribution is nearly exponential .that means the poisson process of nearly independent point events of rainy days is a reasonably good model . at closer inspection, however , one sees that again a slightly deformed -exponential , this time with , is a better fit of the waiting time distribution . as worked out in the next section, one may explain this with a superstatistical poisson process , i.e. a poisson process whose rate parameter on a long time scale exhibits fluctuations that are -distributed , with a rather large number of degrees of freedom .we start with a very simple model for the return time of rainfall events ( or extreme rainfall events ) on any given time scale .this is to assume that the events follows a poisson process . for a poisson processthe waiting times are exponentially distributed , here , is the time from one event ( peak over threshold ) to the next one , and is a positive parameter , the rate of the poisson process .the symbol denotes the conditional probability density to observe a return time provided the parameter has certain given value .the key idea of the superstatistics approach can be applied to this simple model , thus constructing a superstatistical poisson process . in this case the parameter is regarded as a fluctuating random variable as well , but these fluctuations take place on a very large time scale .for example , for our rainfall statistics the time scale on which fluctuates may correspond to weeks ( different weather conditions ) whereas our data base records rainfall events on an hourly basis .if is distributed with probability density , and fluctuates on a large time scale , then one obtains the marginal distribution of the return time statistics as this marginal distribution is actually what is recorded when we sample histograms of the observational data . 
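As a numerical check on this construction, the sketch below simulates a superstatistical Poisson process: the rate parameter is held fixed over long blocks ("weather regimes") and drawn from a chi-square (gamma) distribution across blocks, anticipating the choice of the rate distribution worked out just below. The block length, the number of degrees of freedom and the mean rate are illustrative assumptions; the tail probabilities printed at the end preview the drought-relevant comparison with a pure exponential discussed in a later section.

```python
import numpy as np

rng = np.random.default_rng(1)

# Superstatistical Poisson process: the rate lambda is constant within long blocks
# ("weather regimes") and chi-square (gamma) distributed across blocks.
n_blocks, block_len = 2000, 200      # block_len >> 1 realises the time-scale separation
n_dof = 40                           # many degrees of freedom -> q close to 1
mean_rate = 0.5                      # mean rate <lambda>, an illustrative placeholder
rates = mean_rate * rng.chisquare(n_dof, size=n_blocks) / n_dof

# Exponential waiting times drawn with the locally constant rate of each block.
waits = np.concatenate([rng.exponential(1.0 / lam, size=block_len) for lam in rates])

# Integrating f(lambda) * lambda * exp(-lambda * tau) over this gamma distribution gives a
# marginal density proportional to a q-exponential with q = 1 + 2/(n_dof + 2).
print(f"theoretical entropic index q = {1.0 + 2.0 / (n_dof + 2.0):.3f}")

# Drought-relevant tails: P(tau > m * <tau>) for the superstatistical sample versus a
# pure exponential of the same mean (the difference grows rapidly with m).
mean_wait = waits.mean()
for m in (2, 5, 10):
    p_emp = np.mean(waits > m * mean_wait)
    print(f"m = {m:2d}:  superstatistical {p_emp:.2e}   pure exponential {np.exp(-m):.2e}")
```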
by inferring directly on a simple model for the distribution , a more complex model for the return times can be derived without much technical complexity .for example , consider that there are different gaussian random variables , that influence the dynamics of the intensity parameter as a random variable .we may thus assume as a very simple model that with and .then the probability density of is given by a -distribution : where is the number of degrees of freedom and is a shape parameter that has the physical meaning of being the average of formed with the distribution .the integral ( [ eq : marg - ss - dist ] ) is easily evaluated and one obtains the -exponential distribution : where and . to sum up , this model generates -exponential distributions by a simple mechanism , fluctuations of a rate parameter .typical -values obtained in our fits are for rainfall amount and for waiting time between rainfall events .classic extreme value theory is concerned with the probability distribution of unlikely events .given a stationary stochastic process , consider the random variable defined as the maximum over the first -observations : in many cases the limit of the random variable may degenerate when .analogously to central limit laws for partial sums , the degeneracy of the limit can be avoided by considering a rescaled sequence for suitable normalising values and .indeed , extreme value theory studies the existence of normalising values such that as , with a non - degenerate probability distribution .two cornerstones in extreme value theory are the fisher - tippet theorem and the gnedenko theorem .the former asserts that if the limiting distribution exist , then it must be either one of three possible types , whereas the latter theorem gives necessary and sufficient conditions for the convergence of each of the types .a third cornerstone in extreme value theory are the leadbetter conditions .these are a kind of weak asymptotic independence conditions , under which the two previous theorems generalize to stationary stochastic series satisfying them .let us review these results in somewhat more detail . in the case where the process is independent identically distributed ( i.i.d .) the fisher - tippett theorem states that if is i.i.d . andthere exist sequences and such that the limit distribution is non - degenerate , then it belongs to one of the following types : type i : : : for .this distribution is known as the _ gumbel _ extreme value distribution ( e.v.d . ) .type ii : : : , for ; , otherwise ; where is a parameter .this family of distributions is known as the _ frchet _ e.v.d .type iii : : : , for ; , otherwise ; where is a parameter .this family is known as the _ weibull _ e.v.d .a further extension of this result is the gnedenko theorem , which provides a characterization of the convergence in each of these cases .let be an i.i.d .stochastic process and let be its cumulative distribution function .consider .the following conditions are necessary and sufficient for the convergence to each type of e.v.d . 
:type i : : : there exists some strictly positive function such that for all real ; type ii : : : and , with , for each ; type iii : : : and , with , for each .this result implies that the extremal type is completely determined by the tail behaviour of the distribution .the rainfall data were well described by -exponentials , but waiting time distributions were observed to be close to ordinary exponentials , with only deviating by a small amount from 1 .let us now discuss the differences in extreme value statistics that arise from theses different distributions . in the case where is distributed as an ordinary exponential function with parameter , we have it is not difficult to check that the exponential distribution belongs to the gumbel domain of attraction .in other words , the extreme events associated to the exponential distribution will be gumbel distributed .recall that the -exponential function is defined as ^{1/(1-q)},\ ] ] with .a random variable is -exponential distributed ( with parameter ) if its density function is equal to . in such a case ,its hazard function is where . using the gnedenko theoremif follows that the -exponential distribution belongs to the frchet domain of attraction . in this case the shape parameter of the frchet distribution is equal to .extremely large waiting times for rainfall events correspond to droughts . clearly , it is interesting to extrapolate our observed waiting time distributions to very large time scales . however , in section [ sec : waiting - time ] we saw that in most cases it is difficult to discern if the waiting time distribution is that of a poisson process , distributed as an exponential , or if it is a -exponential with close to one .this can make a huge difference for extreme value statistics .the aim of this section is to assess the impact of choosing one or the other model .consider a constant and a random variable modelled either by an exponential or a -exponential . to normalize the problem, we can scale our analysis in terms of the mean .in other words , we look at the probability of being bigger than a multiple , say -times , the mean of .if is distributed like an exponential with parameter , its mean is equal to and its hazard function is given by ( [ eq : hazard - exp ] ) .then it is easy to check that on the other hand , if is distributed like a -exponential with parameter , its mean is equal to ( provided ) and its hazard function is given by ( [ eq : hazard - q - exp ] ) . in this casewe have [ fig : returns ] recall that the exponential distribution can be understood as the limit of the -exponential as goes to .this is also true for the probability above , which converges to as goes to . in fig 15we plotted the probability of an event of level for different values of .for instance the probability of having an observation bigger than times the mean is for , when and when . 
when we look at the probability of an observation bigger than the mean , it is , and respectively .apparently , the predicted drought statistics is very different choosing either the value or the very similar value for the observed waiting time distribution .this illustrates the general uncertainty in model building for extreme rainfall and drought events .ultimately , the weather and rainfall events at a given location can be regarded as being produced by a very high - dimensional deterministic dynamical system exhibiting chaotic properties .it is therefore useful to extend the superstatistics concept to general dynamical systems , following similar lines of arguments as in the recent paper .the basic idea here is that one has a given dynamics ( which , for simplicity , we take to be a discrete mapping on some phase space ) which depends on some control parameter .if is changing on a large time scale , much larger than the local relaxation time determined by the perron - frobenius operator of the mapping , then this dynamical system with slowly changing control parameter will ultimately generate a superposition of invariant densities for the given parameter .similarly , if we can calculate return times to certain particular regions of the phase space for a given parameter , then in the long term the return time distribution will have to be formed by taking an average over the slowly varying parameter . clearly , the connection to the previous sections is that a rainfall event corresponds to the trajectory of the dynamical system being in a particular subregion of the phase space , and the control parameter corresponds to the parameter used in the previous sections .let us consider families of maps depending on a control parameter .these can be _ a priori _ arbitrary maps in arbitrary dimensions , but it is useful to restrict the analysis to mixing maps and assume that an absolutely continuous invariant density exists for each value of the control parameter .the local dynamics is we allow for a time dependence of and study the long - term behavior of iterates given by clearly , the problem now requires the specification of the sequence of control parameters as well , at least in a statistical sense .one possibility is a periodic orbit of control parameters of length .another possibility is to regard the as random variables and to specify the properties of the corresponding stochastic process in parameter space .this then leads to a distribution of parameters . in general , rapidly fluctuating parameters will lead to a very complicated dynamics .however , there is a significant simplification if the parameters change slowly .this is the analogue of the slowly varying temperature parameters in the superstatistical treatment of nonequilibrium statistical mechanics .the basic assumption of superstatistics is that an environmental control parameter changes only very slowly , much slower than the local relaxation time of the dynamics . for mapsthis means that significant changes of occur only over a large number of iterations . 
for practical purposes one can model this superstatistical case as follows :one keeps constant for iterations ( ) , then switches after iterations to a new value , after iterations one switches to the next values , and so on .one of the simplest examples is a period-2 orbit in the parameter space .that is , we have an alternating sequence that repeats itself , with switching between the two possible values taking place after iterations .this case was given particular attention in , and rigorous results were derived for special types of maps where the invariant density as a function of the parameter is under full control , so - called blaschke products . here we discuss two important examples , which are of importance in the context of the current paper , namely how to generate ( in a suitable limit ) a superstatistical langevin process , as well as a superstatistical poisson process , using strongly mixing maps .* example 1 * * superstatistical langevin - like process * we take for a map of linear langevin type .this means is a 2-dimensional map given by a skew product of the form here denote the average of iterates of .it has been shown in that for , finite this deterministic chaotic map generates a dynamics equivalent to a linear langevin equation , provided the map has the so - called -mixing property , and regarding the initial values ] . consider a very small subset of the phase space of size , say $ ] and a generic trajectory of the binary shift map for a generic initial value , where are the digits of the binary expansion of the initial value .we define a ` rainfall ' event to happen if this trajectory enters ( of course , for true rainfall events the dynamical system is much more complicated and lives on a much higher - dimensional phase space ) .it is obvious that the above sequence of events follows poisson - like statistics , as the iterates of the binary shift map are strongly mixing , which means asymptotic statistical independence for a large number of iterations . indeed between successive visits of the very small interval , there is a large number of iterations and hence near - independence .hence the binary shift map generates a very good approximation of the poisson process for small enough , and the waiting time distribution between events is exponential .we may of course also look at a more complicated system , where we iterate a strongly mixing map which depends on a parameter , and where the invariant density of the map depends on the control parameter in a nontrivial way .examples are blaschke products , studied in detail in .if the parameter varies on a large time scale , so does the probability of iterates to enter the region , and hence the rate parameter of the above poisson - like process will also vary .the result is a superstatistical poisson - like process , generated by a family of deterministic chaotic mappings . 
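A minimal numerical illustration of this second example is sketched below. The binary shift map used in the text degenerates under double-precision arithmetic (every numerical orbit collapses to zero after roughly fifty iterations), so the sketch substitutes another strongly mixing map, the fully developed logistic map x → 4x(1−x), as a stand-in. A "rainfall" event is defined as a visit of the orbit to a small cell of the phase space, and the waiting times between visits are compared with an exponential law of the same mean. The cell position, its width and the orbit length are arbitrary illustrative choices.

```python
import numpy as np

def logistic_orbit(x0, n_iter):
    """Orbit of the fully developed logistic map x -> 4 x (1 - x), a strongly mixing map."""
    x = np.empty(n_iter)
    x[0] = x0
    for i in range(1, n_iter):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    return x

rng = np.random.default_rng(2)
orbit = logistic_orbit(rng.random(), 3_000_000)

# A "rainfall" event is a visit of the orbit to the small phase-space cell [a, a + eps).
a, eps = 0.30, 0.004
hits = np.flatnonzero((orbit >= a) & (orbit < a + eps))
waits = np.diff(hits)                 # waiting times (in iterations) between events

# For a strongly mixing map and a small cell, successive visits are nearly independent,
# so the waiting-time statistics should be close to that of a Poisson process.
mean_wait = waits.mean()
print(f"number of events: {waits.size},  mean waiting time: {mean_wait:.1f} iterations")
for m in (1, 3, 6):
    print(f"P(wait > {m} * mean):  empirical {np.mean(waits > m * mean_wait):.3e}"
          f"   exponential {np.exp(-m):.3e}")

# Letting the cell (or a map parameter) change slowly from block to block of iterations
# turns this into the superstatistical Poisson-like process described in the text.
```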
in this way we can build up a formal mathematical framework to dynamically generate superstatistical poisson processes .we started this paper with experimental observations : the probability densities of daily rainfall amounts at a variety of locations on the earth are not gaussian or exponentially distributed , but follow an asymptotic power law , the -exponential distribution .the corresponding entropic exponent is close to .the waiting time distribution between rainy episodes is observed to be close to an exponential distribution , but again a careful analysis shows that a -exponential is a better fit , this time with close to 1.05 .we discussed the corresponding extreme value distributions , leading to gumbel and frchet distributions .we made contact with a very important concept that is borrowed from nonequilibrium statistical mechanics , the superstatistics approach , and pointed out how to generalize this concept to strongly mixing mappings that can generate langevin - like and poisson - like processes for which the corresponding variance or rate parameter fluctuates in a superstatistical way .of course rainfall is ultimately described by a very high dimensional and complicated spatially extended dynamical system , but simple model systems as discussed in this paper may help .g.c . yalcin was supported by the scientific research projects coordination unit of istanbul university with project number 49338 .she gratefully acknowledges the hospitality of queen mary university of london , school of mathematical sciences , where this work was carried out .the research of p. rabassa and c. beck is supported by epsrc grant number ep / k013513/1 .silva , v.b.s ; kousky , v.e .; higgins r.w .daily precipitation statistics for south america : an intercomparison between ncep reanalyses and observations ._ journal of hydrometeorology _ * 2011 * 12(1 ) , 101 - 117 .kaniadakis , g. ; lissia , m. ; scarfone , a.m. two - parameter deformations of logarithm , exponential , and entropy : a consistent framework for generalized statistical mechanics ._ physical review e _ * 2005 * 71(4 ) , 046128 .faranda , d. ; lucarini , v. ; turchetti , g. ; vaienti , s. numerical convergence of the block - maxima approach to the generalized extreme value distribution ._ journal of statistical physics _ * 2011 * 145(5 ) , 1156 - 1180. gupta , c. ; holland , m. ; nicol , m. extreme value theory and return time statistics for dispersing billiard maps and flows , lozi maps and lorenz - like maps ._ ergodic theory and dynamical systems _ * 2011 * 31(05 ) , 1363 - 1390 .fisher , r. a. ; tippett , l. h. c. limiting forms of the frequency distribution of the largest or smallest member of a sample ._ mathematical proceedings of the cambridge philosophical society _ * 1928 * ( vol .180 - 190 ) .cambridge university press .leadbetter , m. r. on extreme values in stationary sequences ._ zeitschrift fr wahrscheinlichkeitstheorie und verwandte gebiete _ * 1974 * 28(4 ) , 289 - 303 .rabassa , p. ; beck , c. extreme value laws for superstatistics ._ entropy _ * 2014 * 16(10 ) , 5523 - 5536 .
we analyse the probability densities of daily rainfall amounts at a variety of locations on the earth . the observed distributions of the amount of rainfall are well fitted by a -exponential distribution with exponent close to . we discuss possible reasons for the emergence of this power law . in contrast , the waiting time distribution between rainy days is observed to follow a near - exponential distribution . a careful investigation shows that a -exponential with actually yields the best fit of the data . a poisson process whose rate fluctuates slightly in a superstatistical way is discussed as a possible model for this . we discuss the extreme value statistics for extreme daily rainfall , which can potentially lead to flooding . this is described by fréchet distributions , as the corresponding distributions of the amount of daily rainfall decay with a power law . on the other hand , looking at the extreme event statistics of waiting times between rainy days ( leading to droughts for very long dry periods ) , we obtain from the observed near - exponential decay of waiting times an extreme event statistics close to gumbel distributions . we discuss superstatistical dynamical systems as simple models in this context .
we consider a conditional moment restricted model where is a vector of observable random variables , and may or may not be included in . here is a one - dimensional residual function known up to .the conditional expectation is taken with respect to the conditional distribution of given and , assumed unknown .the parameter of interest is , which is infinite dimensional . moreover , suppose we observe independent and identically distributed data of . model ( [ equ11 ] ) is a very general setting , which encompasses many important classes of nonparametric and semiparametric models .[ examp11 ] consider the model assuming .let , then it can be written as the conditional moment restricted model with . [ examp12 ]consider the single index model where .the parameter of interest is , with being nonparametric .this type of model is studied by and . by defining , and , we can write . [ examp13 ]consider the nonparametric model where is an endogenous regressor , meaning that does not vanish .however , suppose we have observed an instrumental variable for which ; then it becomes a nonparametric regression model with instrumental variables ( npiv ) , studied by and .define , with .then we have the conditional moment restriction . [ examp14 ] the nonparametric quantile iv regressionwas previously studied by , and .the model is where is the unknown function of interest , and is known and fixed .assume is a continuous random variable .then the conditional moment restriction is given by if we define ^ 2 ] with } ] .then we have the risk consistency result at rate the naming of these conditions is obvious , except for ( ii ) .there , the approximation refers to the ability of the functions in ( proposed by the prior ) to approximately minimize the risk over with not - too - small prior probability .when the following condition is added , the risk consistency leads to the estimation consistency .[ t22 ] suppose there exists a sequence such that the following conditions hold : in the previous theorem ; ( distinguishing ability ) for any , then for any , we have theorem [ t21 ] is implied by lemma [ l21 ] .now we prove theorem [ t22 ] . for any , by theorem [ t21 ] , where the third inequality is implied by condition ( iv ) for all large . as a special case of these results ,note that when is point identified as the unique minimizer of on , that is , , ( [ e22 ] ) then becomes the regular posterior consistency result .in the subsequent sections , we will construct a so - called _ limited information likelihood _ and apply the previous two theorems to the conditional moment restricted model ( [ equ11 ] ) , by verifying conditions ( i)(iv ) .consider a conditional moment condition =0,\ ] ] where is the true nonparametric structural function . here is -dimensional , with fixed . for simplicity , throughout the paper , let us assume is supported on ^d ] be a partition of ] , where for each , \qquad\mbox{for some } i_l\in\{1,\ldots , k_n\}.\ ] ] we require .let .for each , define where is the indicator function .let , which is a vector . equation ( [ e31 ] ) then implies where the expectation is taken with respect to the joint distribution of conditional on . throughout the paper , the expectationis always taken conditionally on .when there are more moment conditions than the parameters , and hence ( [ e33 ] ) is a problem of many moment conditions with increasing number of moments studied by .it is straightforward to verify that for each , and , write and . 
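Before the likelihood is constructed from these moment functions in the remainder of this section, the following sketch makes the objects concrete for the NPIV model of Example 1.3: it simulates an endogenous design with a valid instrument, builds partition-based instruments phi_j(W), and evaluates the sample moment vector m_hat(g) at the true structural function and at a wrong one. The data-generating design, the indicator basis and the sieve dimension k_n are illustrative assumptions; the sketch is not the paper's quasi-Bayesian procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Illustrative NPIV design (Example 1.3): X endogenous, W a valid instrument.
w = rng.random(n)                                   # instrument on [0, 1]
u = rng.normal(size=n)                              # error, independent of W
x = np.clip(0.7 * w + 0.2 * u + 0.15 * rng.normal(size=n), 0.0, 1.0)   # endogenous regressor
g0 = lambda t: np.sin(2.0 * np.pi * t)              # true structural function (assumption)
y = g0(x) + u                                       # E[Y - g0(X) | W] = 0 but E[Y - g0(X) | X] != 0

# Partition-based instruments: phi_j(w) is the indicator of the j-th cell of a partition of [0, 1].
k_n = 20
edges = np.linspace(0.0, 1.0, k_n + 1)
cell = np.clip(np.digitize(w, edges) - 1, 0, k_n - 1)

def moment_vector(g):
    """m_hat(g)_j = (1/n) sum_i rho(Z_i, g) phi_j(W_i), with rho(z, g) = y - g(x)."""
    rho = y - g(x)
    m = np.zeros(k_n)
    np.add.at(m, cell, rho)
    return m / n

print("||m_hat(g0)|| =", np.linalg.norm(moment_vector(g0)))                       # near zero
print("||m_hat(g1)|| =", np.linalg.norm(moment_vector(lambda t: np.cos(2.0 * np.pi * t))))
# A quasi-likelihood of the form exp(-(n/2) m_hat(g)' Sigma^{-1} m_hat(g)), combined with a
# prior on the sieve coefficients of g, yields the limited-information posterior developed below.
```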
instead of , we construct the posterior for its approximating function inside . under some regularity conditions , for each fixed , would satisfy the central limit theorem : for any , as goes to infinity , this motivates a likelihood function on the sieve space , according to , the function lil can be more appropriately interpreted as the best approximation to the true likelihood function under the conditional moment restriction by minimizing the kullback leibler divergence , which is known as the _ limited information likelihood _ ( lil ) .note that lil is not feasible , as depends on the unknown function ; therefore suggested replacing with a constant matrix ( not dependent on ) , while maintaining the order of each element .for each element on the diagonal , suppose we have the integration mean value theorem : for some , provided that ^d } e[\rho(z , g_0)^2|w]<\infty ] , where , and is the sobolev norm , then for some ; see , for example , kress [ ( ) , chapter 8 ] and ; see also and for splines and orthogonal wavelets in other function spaces .[ a34 ] there exists such that , this assumption is trivially satisfied by the nonparametric iv regression in example [ examp13 ] .here we give another example that satisfies this assumption .[ ex32 ] consider the model in example [ examp14 ] , in which the conditional moment restriction is given by it is straightforward to verify that for any , \\ & & { } + e\bigl[p\bigl(g_2(x)\leq y\leq g_1(x)|x\bigr)\bigr].\end{aligned}\ ] ] suppose there exists a constant such that , the conditional c.d.f . of , satisfies for any and in the support of .then the first term on the right - hand side is bounded by &\leq & e|f_{y|x}(g_2(x))-f_{y|x}(g_1(x))|\\ & \leq & ce|g_2(x)-g_1(x)|.\end{aligned}\ ] ] likewise , \leq ce|g_2(x)-g_1(x)| ] and ^ 2\} ] for some weight function , then with proper choices of , can be used to test some special properties of , such as the monotonicity , the convexity , etc . .on the other hand , itself may have interesting meanings .for example , when denotes the inverse demand function in nonparametric regression , can be the consumer surplus [ ] . have provided conditions to point identify even if itself is not identified .[ examp32 ] suppose we want to test whether the unknown function is weakly increasing .note that any weakly increasing function must satisfy ; hence the functional of interest here is .suppose the joint distribution of has density function .by , is point identified , if there exists such that <\infty ] if and for any .[ h ] ( i ) ; ( ii ) is uniformly continuous .[ cor1 ] suppose the assumptions of theorem [ t33 ] ( if the truncated priors are used ) and theorem [ t34 ] ( if the thin - tail prior is used ) are satisfied .in addition , suppose assumption [ h ] holds .when is not necessarily point identified , , nonparametric instrumental variable regression ( npiv ) model is given by where is endogenous , which is correlated with .we consider the following parameter space and the norm : <\infty\},\qquad \|g\|_s^2=e[g(x)^2].\ ] ] in addition , suppose we observe an instrumental variable ^d ] that is bounded away from both zero and infinity .[ a42 ] there exists such that for all : , ; is lipschitz continuous with respect to on ^d ] , condition ( iii ) requires that the family is lipschitz equicontinuous on ^d ] . , , where the fact that is bounded away from infinity is guaranteed by condition ( i ) .] 
[ a43 ] there exist , with , and a positive sequence that strictly decreases to zero as such that as .( we will choose to be the projection of onto , unless otherwise noted . )examples of the rate are discussed earlier behind assumption [ a33 ] .[ t41 ] assume .then under assumptions [ iid ] , [ a41 ] , [ a42 ] , define a semi - norm , which is weaker than , as it can be easily verified that satisfies the triangular inequality , but does not necessarily imply if the conditional distribution is not complete .note that ; hence this semi - norm induces an equivalence class characterized by the identified region , such that if and only if .in other words , we can say that _ is weakly identified _ under , since for any , and are equivalent under .the following theorem is a straightforward application of theorems [ t31 ] and [ t32 ] : [ t42 ] under assumptions [ iid ] , [ a41][a43 ] , suppose is such that : for the truncated priors assuming , for the thin - tail prior with , assuming , then define <\infty\},\qquad t(g)=e(g(x)|w)\ ] ] and write . then the npiv model can be equivalently written as under assumption [ a44 ] , is a compact linear operator [ see ] , and therefore is continuous .equation ( [ e41 ] ) is usually called the _ fredholm integral equation of the first kind_. [ a44 ] the joint distribution is absolutely continuous with respect to the lebesgue measure .in addition , suppose , , denote the density functions of , and , respectively , then as described before , the problem of inference about is ill - posed in two aspects .the first ill - posedness comes from the identification , which depends on the invertibility of .if is nonsingular , in which case its null space is , can be point identified by , but not otherwise .see and for detailed descriptions of the identification issues .even when is identified , in which case exists , as pointed out by and , since is of infinite dimension , and is compact , is not bounded ( therefore is not continuous ) . as a result ,small inaccuracy in the estimation of can lead to large inaccuracy in the estimation of , which is known as the type - iii ill - posed inverse problem described in section [ sec32 ] .when is partially identified , this problem is still present when ^ 2\}=0.\ ] ] by theorems [ t33 ] , [ t34 ] and [ t42 ] , in order to achieve the posterior consistency , it suffices to verify where hence it requires us to derive a lower bound of first , and , in addition , this lower bound should decay at a rate slower than . when is point identified and a slowly growing finite - dimensional sieve is used , showed the existence of such a lower bound using the singular value decomposition of .their approach is briefly illustrated in the following example .[ ex41 ] let $ ] denote the inner product of two elements in , and be the ordered singular value system of such that suppose is nonsingular , then forms an orthonormal basis of . showed that when is used as the basis in the sieve approximation space , . 
therefore , condition ( [ e43 ] ) is satisfied if we assume .in addition , suppose decays at a polynomial rate for some ; then we require , a slowly growing sieve dimension .we impose the following assumption to derive a lower bound for and verify ( [ e43 ] ) , which , in the identified case , uses more general basis functions for the sieve space .therefore we allow the sieve basis to be different from the eigenfunctions of .a similar approach was used by chen and reiss [ ( ) , section 6.1 ] , who used the wavelets as the sieve basis functions while the eigenfunctions of form a fourier basis .[ a45 ] there is a continuous and increasing function satisfying such that , for as defined in assumption [ a43 ] and some constants : for all ; .\(1 ) this assumption implies a generalization of the relation in example [ ex41 ] . in this assumption , are the basis functions whose first terms span the sieve approximation space . in the identified case, can be a general set of basis functions that is different from the eigenfunctions of .chen and pouzo [ ( ) , section 5.3 ] identified the singular value of example [ ex41 ] as a special case of the general , in which case assumption [ a45 ] is satisfied . in its general form , assumption [ a45 ]is standard in the literature for the linear ill - posed inverse problem when the convergence rate of the estimator is studied ; see , for example , , chen and pouzo [ ( ) , assumption 5.2 ] , chen and reiss [ ( ) , section 2.1 ] , etc . as described above , however , this assumption is also needed in order to verify ( [ e43 ] ) and show consistency when general basis functions are used . provided sufficient conditions of assumption [ a45 ] for the npiv model setting .\(2 ) in the partially identified case when is not a singleton , assumption [ a45 ] is still satisfied , if we take to be the eigenfunctions of that correspond to its nonzero eigenvalues , where is the conditional expectation operator , and is its adjoint .the spectral theory of compact operators [ ] implies that for all , where represent all the ( nonzero ) eigenvalues of , and are the corresponding eigenfunctions ( the zero eigenvalues of do not contribute to the right - hand side of the spectral decomposition ) .therefore , assumption [ a45 ] remains valid with , with denoting the sequence of decreasing nonzero eigenvalues .this idea of using the spectral representation of is related to the commonly used `` general source condition '' in the literature [ and ] , where , for example , used this condition to derive the convergence rate of their kernel - based tikhonov regularized estimator in npiv regression .\(3 ) when a more general sieve basis is used in the partially identified case , condition ( i ) of assumption [ a45 ] is not generally satisfied .for example , suppose there exists , but . by the definition of , , but the right - hand side of the displayed inequality in condition ( i )is strictly positive unless are the eigenfunctions of . 
to allow for more general sieve basis in this case ,a possible approach is to assume the true in the data generating process to lie in a compact set , for example ., a sobolev ball [ ] .it is then not hard to show that is bounded away from zero .restricting inside a compact set is actually a quite common approach in nonparametric iv regression , and the literature is found in , , , etc .recently , extended this approach to the partially identified case , with the compactness restriction .we do not pursue this approach here , since our other results on posterior consistency allow a noncompact parameter space .as in , generally the degree of ill - posedness has two types : _ mild ill - posedness _ : for some ._ severe ill - posedness _ : for some . under assumption [ a45 ], it can be shown that for any ; see lemma c.5 of the supplementary material . intuitively speaking , is associated with the singular values of and is related to how severe the type - iii ill - posed inverse problem is .when the nonzero singular values decay at a polynomial rate , corresponds to the mildly ill - posed case ; when the singular values decay at an exponential rate , it corresponds to the severely ill - posed case . before formally presenting our posterior consistency result , we briefly comment on the role of condition ( ii ) of assumption [ a45 ] .assumption 5.2(ii ) is the so - called `` stability condition '' in that is required to hold only in terms of the sieve approximation error on one element in . by theorems [ t33 ] and [ t34 ] , we require . it can be easily shown that , and hence was replaced with in the condition of theorem [ t42 ] .in addition , condition ( i ) of assumption [ a45 ] implies that . with condition ( ii ) of assumption [ a45 ], it can be further shown that ( see lemma c.6 in the supplementary material ) . since , is verified . under this framework, we have the posterior consistency under : [ t43 ] under assumptions [ iid ] , [ a41][a45 ] , suppose : for the truncated priors assuming , for the thin - tail prior with , assuming , then for any , when is point identified , we can also establish the posterior consistency using normal priors for some constant . as discussed previously , by restricting to grow slowly as , we do not need a shrinking prior to function as a penalty term attached to the log - likelihood for the regularization purpose .therefore is treated to be a fixed constant that does not depend on . with the assumptions imposed in sections [ sec42 ] and [ sec43 ] , we can verify all the conditions in theorem [ t22 ] , which then leads to the following theorem : [ t44 ] assume is point identified . under assumptions [ iid ] , [ a41][a45 ] , suppose the normal prior ( [ e46 ] ) is used , and then for any , to choose that satisfy ( [ e44 ] ) ( [ e45 ] ) and ( [ e47 ] ) for each specified prior , consider the case where is decreasing as some power of [ see , e.g. 
, and ] , and grows at a polynomial rate of , that is , \\[-8pt ] \frac{k_n^{3d/2}}{\sqrt{n}}+\frac{1}{k_n}&\sim & n^{-p},\qquad 0<p\leq\frac{1}{3d+2}.\nonumber\end{aligned}\ ] ] we then have the following corollaries : [ cor41 ] suppose the truncated prior ( either uniform or truncated normal ) is used ; then the following choice of achieves the posterior consistency , for : in the mildly ill - posed case , in the severely ill - posed case , [ cor42 ] suppose the thin - tail prior is used ; then the following choice of achieves the posterior consistency , for : in the mildly ill - posed case , in the severely ill - posed case , [ cor43 ] suppose the normal prior is used , and is point identified , the following choice of achieves the posterior consistency : in the mildly ill - posed case , in the severely ill - posed case , in the conditions of these consistency results , the choice of tuning parameters ( , , ) depend on some parameters that one either knows or chooses ( , ) , as well as some parameters related to the true model ( , ) . the latter , although undesirable , can not be totally avoided when we study the frequentist convergence properties under ill - posedness .[ conditions depending on the true model are also used , e.g. , by , directly in their corollary 5.1 , and indirectly at the end of their section 3.1 . ] on the other hand , these results can still have meaningful implications that do not explicitly depend on the indexes and ( which are probably unknown in practice ) .for example , we note that in the mildly ill - posed situations , the condition on would be satisfied if it grows as any finite power of .likewise , in the severely ill - posed situations , the condition on would be satisfied if it grows as any finite power of .in addition , we will indicate in the next section that the current bayesian - flavored treatment can even allow a data - driven choice of the sieve dimension , using a posterior distribution derived from a mixed prior .as the sieve dimension plays an important role not only in dealing with the ill - posed inverse problem , but also in many applied sieve estimation methods , in this section we briefly discuss the possibility of choosing it based on a posterior distribution .this will require specifying a prior distribution on the sieve dimension first .since the conditions of a deterministic for consistency only restricts the growth rate , as a result , would also lead to consistency for a positive constant , if ensures consistency .we denote the sieve dimension by , let it be random and place a discrete uniform prior for some deterministic sequence and constant .then the prior on the sieve coefficients becomes a mixture prior where follows a prior as specified before for a given sieve dimension .the feasible limited information likelihood is , as before , denoted by .we have the joint posterior it can be shown that the uniform mixture prior can also lead to the posterior consistency .[ t51 ] for each theorem in sections [ sec3 ] and [ sec4 ] , suppose the corresponding conditions are satisfied for the deterministic sieve dimension instead of , for some . 
then all the posterior consistency results stated in sections [ sec3 ] and [ sec4 ] ( on risk consistency and on estimation consistency ) remain valid for the mixed prior ( [ e52 ] ) with random following prior ( [ e51 ] ) , with no extra conditions , with the following two exceptions : we will additionally assume that holds for the statement of theorem [ t32 ] to hold .we will additionally assume that for the statement of theorem [ t32 ] to hold .note that the uniform prior is used for , which gives zero prior probability on very large choice beyond .however , from a technical point of view , the result can be extended to the case with tails of prior on extending to infinity , as long as the tail is thin enough so that is dominated by a small enough upper bound .the marginal posterior of is given by practically , we can choose from .we studied the nonparametric conditional moment restricted model in a quasi - bayesian approach , with a special focus on the large sample frequentist properties of the posterior distribution .there was no distribution assumed on the data generating process .instead , we derived the posterior using the _ limited information likelihood _ ( _ lil _ ) , allowing the proposed procedure to be simpler than the traditional nonparametric bayesian approach which would model the data distribution nonparametrically .there are several alternative moment - condition - based likelihood functions .the empirical likelihood [ ) ] and the generalized empirical likelihood [ , and ] are typical examples .it is still possible to establish the posterior consistency if these alternative nonparametric likelihoods are used , which is left as a future research direction .the parameter space does not need to be compact .we approximate using a finite - dimensional sieve space , and the regularization is carried out by a slowly growing sieve dimension .we then studied in detail the npiv model and verified all the sufficient conditions proposed in section [ sec3 ] in order for the posterior to be consistent .it is also possible to achieve the posterior consistency using a larger sieve dimension . in this case , the regularization is carried out by a truncated normal prior with shrinking variance , and the log - prior is then a regularization penalty attached to the log - likelihood .conditions ( [ e310 ] ) , ( [ e311 ] ) and assumption [ a45 ] can be relaxed .we describe this procedure in the technical report [ ] .an interesting research direction is to derive the convergence rate .with all the tools given in this paper , it is possible to obtain the rate of convergence of our procedure . however , the rate would be sub - optimal , possibly due to the technical bound ( [ e21 ] ) used in this paper . it would be interesting to develop a method based on a bound tighter than ( [ e21 ] ) , in order to prove the nonparametric minimax optimal rate of convergence as in . in applications ,our method requires a priori choices of , and for the truncated prior .we conjecture that the finite sample behavior of the posterior is robust to the choice of .however , it should be sensitive to , as a large value of may lead to over - fitting . therefore , we proposed an approach to allow for a random sieve dimension by putting a discrete uniform prior on it and selecting it from its posterior . with the upper bound of the uniform prior growing under the same rate restriction as before , the posterior consistency is also achieved .this feature , however , requires specifying . 
in practice, one may start with a moderate level that is less than ten . in the npivsetting , recently introduced an empirical approach for selecting .moreover , developing methods of selecting in a bayesian ( or quasi - bayesian ) approach is another important research topic .this paper develops from a chapter of the first author s ph.d .dissertation at northwestern university .we are grateful to joel horowitz , elie tamer , hidehiko ichimura , jia - young fu , tom severini , xiaohong chen , anna simoni , an associate editor and two referees for many helpful comments and suggestions on this paper .we also thank the discussions with seminar participants at the 2010 summer cemmap conference on `` recent developments in nonparametric instrumental variable methods '' in london .the first author appreciates the constant encouragements from his ph.d .committee members at northwestern university .
this paper addresses the estimation of the nonparametric conditional moment restricted model that involves an infinite - dimensional parameter . we estimate it in a _ quasi - bayesian _ way , based on the limited information likelihood , and investigate the impact of three types of priors on the posterior consistency : ( i ) truncated prior ( priors supported on a bounded set ) , ( ii ) thin - tail prior ( a prior that has very thin tail outside a growing bounded set ) and ( iii ) normal prior with nonshrinking variance . in addition , is allowed to be only partially identified in the frequentist sense , and the parameter space does not need to be compact . the posterior is regularized using a slowly growing sieve dimension , and it is shown that the posterior converges to any small neighborhood of the identified region . we then apply our results to the nonparametric instrumental regression model . finally , the posterior consistency using a random sieve dimension parameter is studied . . .
frequency division multiplexing ( gfdm ) has attracted much attention in recent years as a candidate waveform of 5 g cellular systems for its low spectral leakage due to the flexibility of its pulse shaping filter .a pulse shaping filter with better spectral property , however , may cause intersymbol interference ( isi ) and inter carrier interference ( ici ) , which becomes more severe in a broadband channel and may cause problems at the receiver . among the methods in for signal recovery in the receiver for a gfdm system , matched filter ( mf )receiver maximizes the signal - to - noise ratio ( snr ) while causing self - interference from the nonorthogonality of the transmit waveform .zero - forcing ( zf ) receiver can cancel the self - interference at the price of the channel noise enhancement . to reduce the high self - interference in mf , mf with successive interference cancellation ( mf - sic ) receiveris presented in at the cost of high - complexity iterative processing .linear minimum mean square error ( mmse ) receiver can improve the performance of zf receiver .however , based on the transmitter matrix for generating the gfdm signal , these gfdm receivers have high complexities proportional to the square of the total number of the data symbols in a gfdm symbol . to obtain a low - complexity implementation in the gfdm receiver , based on fast fourier transform ( fft ) and inverse fft ( ifft ) , fft - based zf / mf , fft - based mf - sic and several techniques for mf , zf , and mmse are proposed . in the ideal channel , among the low - complexity methods ,the zf / mf receiver in can obtain the lowest complexity by splitting the multiplication of the transmitter matrix and discrete fourier transform ( dft)/inverse dft ( idft ) matrix into small blocks with fft / ifft implementation . in a broadband channel , besides the complexity of the techniques themselves , another key factor is the channel equalization that should be considered in the receiver . since the direct channel equalization in time domain in has a high complexity proportional to the square of the total number of the data symbols in a gfdm symbol , frequency domain equalization ( fde ) can be used to reduce the complexity . in this case , the proposed receivers in have lower computational cost than the low - complexity receivers in .unfortunately , compared to the orthogonal frequency multiplexing division ( ofdm ) receiver , the fde needs extra fft / ifft operations , where in , it is called zf / mf receiver directly and its complexity will be compared in details . in this paper , to simplify the gfdm receiver for a broadband channel similar to the ofdm receiver , a relationship between a gfdm signal and discrete gabor transform ( dgt ) is first investigated , similar to , , i.e , a transmit gfdm signal is an inverse dgt ( idgt ) of a data array .then , according to dgt , a frequency - domain dgt is proposed for gfdm signal recovery , which is different from the time - domain dgt in , causing high - complexity time - domain channel equalization . 
by analyzing the interference after the frequency - domain dgt for gfdm signals ,we conclude that the coherence bandwidth , related to the reciprocal of the maximum channel delay , and the roll - off factor of a transmit waveform are two key factors of the interference in a gfdm system , where high coherence bandwidth and small roll - off factor can make the gfdm signal recovered by the frequency - domain dgt much like ofdm .furthermore , to reduce the complexity of the frequency - domain dgt in the whole band , a suboptimal frequency - domain dgt in local subbands , called local dgt ( ldgt ) , is proposed .simulation results show that the frequency - domain dgt with small roll - off factor can achieve considerable bit - to - error rate ( ber ) performance close to ofdm , and ldgt significantly reduces the complexity of the frequency - domain dgt with a small ber performance degradation .the rest of the paper is organized as follows . in sectionii , gfdm signals are formulated in transmitter as idgt and in receiver as dgt , and the frequency domain dgt is proposed . in section iii ,a received gfdm signal is formulated by the frequency - domain dgt followed by analyzing the interference generated in the frequency - domain dgt , and ldgt is presented and analyzed for complexity reduction . in section iv , simulation results for the frequency - domain dgt , ldgt , and several other existing gfdm signal recovery methods are presented . finally , in sectionv , this paper is concluded .in this section , transmitted and received gfdm signals are first briefly introduced .then , based on the theory of dgt , an idgt is investigated for a transmitted gfdm signal .lastly , a frequency - domain dgt is proposed for the gfdm signal recovery . in gfdm transmitter ,bit streams are first modulated to complex symbols that are divided into sequences of _ km _ symbols long . each sequence ( as a vector ) ^{t} ] , , is spread on _ k _ subcarriers in _ m _ time slots .therein , is the transmitted data on the _ _ k__th subcarrier in the _ _ m__th subsymbol of each gfdm block .the data symbols are taken from a zero mean independent and identically distributed ( i.i.d ) process with the unit variance .each is transmitted with a pulse shaping filter where the signal sample index is with satisfying the condition of critical sampling in dgt , denotes the modulo of _ n _ , and is a prototype filter whose time and frequency shifts are . by the superposition of all the filtered , the gfdm signal in transmissionis at the receiver , the received gfdm signal is where denotes the linear convolution operation , is the channel response in the time domain , and is the awgn noise with zero mean and variance . assuming perfect synchronization and long enough cyclic prefix ( cp ) against the maximum channel delay are implemented , the frequency - domain expression of can be written as where , is the _n_-point dft of as and is the _n_-point dft of as where for is the _ n_-point dft of for , and thus the frequency and time shifts of are shown in fig .[ fig : fig1 ] . 
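The transmit-side superposition just described can be made concrete with the following time-domain sketch, which assembles a small GFDM block as the IDGT of a K × M data array and recovers the data by brute-force zero-forcing (pseudo-inverse of the modulation matrix). This is only an illustration of the signal model: the matrix inversion costs O(N^3) and is precisely what the low-complexity frequency-domain DGT receiver proposed in this paper avoids. The raised-cosine prototype, the roll-off factor and the block dimensions are illustrative assumptions.

```python
import numpy as np

K, M = 8, 5                      # K subcarriers, M subsymbols, N = K * M samples (illustrative)
N = K * M
n_idx = np.arange(N)

def rc_prototype(N, K, alpha=0.3):
    """Raised-cosine prototype pulse in time, one common GFDM choice (an assumption here)."""
    t = (np.arange(N) - N / 2) / K
    g = np.sinc(t) * np.cos(np.pi * alpha * t) / (1.0 - (2.0 * alpha * t) ** 2 + 1e-12)
    return g / np.linalg.norm(g)

g = rc_prototype(N, K)

# Modulation matrix: column (m * K + k) holds g_{k,m}[n] = g[(n - m K) mod N] exp(j 2 pi k n / K).
A = np.zeros((N, N), dtype=complex)
for m in range(M):
    for k in range(K):
        A[:, m * K + k] = np.roll(g, m * K) * np.exp(2j * np.pi * k * n_idx / K)

rng = np.random.default_rng(4)
d = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2.0)   # QPSK data

x = A @ d                                        # transmit GFDM block: the IDGT of the data array
y = x + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))    # ideal channel plus small noise

d_zf = np.linalg.pinv(A) @ y                     # brute-force zero-forcing, O(N^3), for clarity only
print("maximum symbol error after zero-forcing:", np.max(np.abs(d_zf - d)))
```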
in fig .[ fig : fig1 ] , where is a baseband - equivalent window function in the frequency domain , for example , the raised cosine ( rc ) function , the root raised cosine ( rrc ) function and the xia pulse , integer _l _ is in the finite interval ] , the variance of can be expressed by where denotes the expectation .suppose that paths are in the jakes model of the rayleigh fading channel with the discrete maximum doppler shift , the index set of the paths is , and is the average power per path in the fading channel .then , we can obtain where is the zeroth order bessel function of the first kind and ( or ) is the index of the channel path .it is noted from that the large distance between ( or ) and will increase the differences of the exponential functions .when the distance between and is smaller than or equal to the coherence bandwidth , the differences of the exponential functions are small , that is is close to .thus , the result of is small . on the contrary ,when the distance between and exceeds the coherence bandwidth , the increased difference enlarges . on the other hand , with the reduced maximum channel time delay ,that is the increased coherence bandwidth , the difference of in becomes small and also becomes small . from , we can obtain moreover , according to the local property of , we can get when . thus , the variance of is further given by eq .denotes that the variances of is influenced by and the product of and , where decreases with the increase of the channel coherence and the product of and decreases with the decrease of the roll - off factor .[ fig : fig3 ] compares the variances of with different maximum channel delays .it is shown that when the number of delayed signal samples equals to 1 , the maximum channel delay is far smaller than the length , _n _ , of a gfdm symbol , and thus approaches zero .the result is that the summation of is close to zero and the variance of approaches zeros . obviously , in awgn channel, the whole band is completely flat without channel delay , that is ( or ) , we obtain and , similar to the narrowband channel shown in fig .[ fig : fig8 ] .thus , the maximum channel delay , related to the reciprocal of the coherence bandwidth , is the key factor of the variance of . on the other hand , with the increased roll - off factor of ,the frequency - domain dgt enlarges the variance of , as shown in fig .[ fig : fig8 ] , due to the decreased time - frequency localization of and .when the roll - off factor is , the synthesis window becomes the rectangular window and its support length becomes : \ ; \text{for even \emph{m } } , \ ; l \in [ -\frac{m-1}{2 } , \frac{m-1}{2 } ] \ ; \text{for odd \emph{m } } , \\ 0 , & \text{otherwise } , \end{cases } \nonumber\ ] ] and is also the same rectangular window as . 
in this case , in and becomes _ k _ many _ m_-point dfts : for \cup[(n - km)_n , \ ; ( n-1-(k-\frac{1}{2})m+\eta)_n] ] , ^{\rm t} ] .however , the frequency - domain dgt in the whole band still causes high complexity .firstly , to get the received gfdm signal , _mk_-point fft is required with complex multiplications .then , for the frequency - domain dgt in , the number of complex multiplications required for _ k _ many _ mk_-point circular convolutions between and is .after that , based on the dft - based dgt , the frequency - domain dgt in the whole band of length _ mk _ can be implemented by _ mk_-point fft .lastly , for detecting the data in , complex multiplications are required from and modulus in .thus , for a large _ m _ or _k _ , the complexity of the frequency - domain dgt receiver is high . in order to further reduce the complexity of the frequency - domain dgt in at the receiver, the frequency - domain dgt in the local subbands is proposed below .similar to the running window processing in time domain in , a signal with a localized analysis window in the frequency domain called frequency - domain local dgt ( ldgt ) can be defined below .the ldgt of to get the ( _ k _ , _m_)-th data in the subband ] and 2__l__+1 is the support length of the analysis window . note that an analysis window function usually has lowpass property , the non - zero elements of are .the biorthogonality relationship between the synthesis window and the analysis window becomes for , , .clearly when the synthesis window is given , the local analysis window can be solved from if has solutions . by rearranging into a matrix vector form and deleting the all - zero rows , becomes where is a matrix with , and is the non - zero length of the synthesis window , and , respectively , denote the number of all nonzero rows and the number of all - zero rows in , and thus the element of is for , ] , ^{\rm t} ] is a vector with its first element equal to 1 .the support length of always satisfies , as an example , for the rc window shown in fig .[ fig : fig1 ] , where for an even _m _ and for an odd _m_. since , we can obtain , where the equal sign can be obtained when . as mentioned above , when , the analysis window becomes a rectangular window the same as with the support length _ m_. in this case , the frequency - domain dgt becomes _ k _ many _m_-point dfts .then , becomes an dft matrix and has a unique solution .thus , the data easily recovered by a dft is unique and is also with the least - squared error . on the contrary , when , we can obtain .then , we have for , which means that there are more equations than unknowns in .therefore , in general , the system of linear equations does not have a solution .we next focus on the case of in the gfdm system in the following . in this case , we find in by using the following least squares criterion : whose solution is the pseudoinverse of , i.e. , where ^{\rm t} ] , ^{\rm t} ] , ] and the all - zero matrix , is the shifted version of where the ( _ _ m__+1)th row of is the cyclic shift of the ( _ _ m__+1)th row of by _ km _ for , the matrix is defined by \ ] ] with from the block - cyclic format of , we just need to study any sub - matrix in . by deleting the all - zero matrix , only in is left . in the following ,we prove that and its shifts in have the same optimal solution . 
for simplicity , by replacing _m _ and _ k _ with _ k _ and in , corresponding to the data indices of and , the element of is given by where ] is a vector with its element equal to 1 for .since has the same numbers of rows and columns as , similar to , we can also formulate with the optimal solution where the element of is expressed by for \cup[k-\alpha+1 , k-1] ] and .from , one can see that it is independent of _ m_. as a result , which proves that the optimal solution with the least - squared error in is identical to the optimal solution with the least - squared error in .thus , using the optimal for the ldgt of the gfdm signal , for any other of length 2__l__+1 , according to , we can obtain we assume that . since for all _ k _ and _ m _ are i.i.d. , based on , we have where can be obtained by replacing in with .it is concluded by that the data demodulated by the ldgt with the optimal analysis window has the least - squared error compared to the original data among all analysis window functions of length 2__l__+1 as above . in this case , the channel is ideal . in the receiver for a broadband channel , similar to - , the ldgt for the received gfdm signal in in the subband ] . in this way, we can also obtain a fast gfdm receiver with the same complexity as the ldgt , which can be expressed by where and we next give the optimal analysis window with the least - squared error in the ldgt for the received gfdm signal when the channel statistics is known .firstly , based on - , the average - squared error between and is expressed by where the channel matrix is defined as ^{\rm t} ] with =\tilde{\mathbf{\gamma}}^{\rm h}_{2l+1}\mathbf{h}_{0 , 2l+1}\mathbf{b}^{\rm t} ] , ] , is shifted version of where the ( _ _ m__+1)th row of is the cyclic shift of the ( _ _ m__+1)th row of by _ km _ for . 
considering the property of , we can further rewrite as - \tilde{\mathbf{\gamma}}^{\rm h}_{2l+1 } \mathbf{h}_{k , 2l+1}\mathbf{b}^{\rm t}\right ) \nonumber \\ & \qquad\qquad\cdot \left(h(km)\left [ \mathbf{i}_m \ ; \mathbf{0}_{m\times(2\alpha-2)m}\right ] - \tilde{\mathbf{\gamma}}^{\rm h}_{2l+1 } \mathbf{h}_{k , 2l+1}\mathbf{b}^{\rm t}\right)^{\rm h } \big\}\big\ } + n\sigma^2\sum\limits^{k-1}_{k=0}{{\rm tr}\left\{\tilde{\mathbf{\gamma}}^{\rm h}_{2l+1}\tilde{\mathbf{\gamma}}_{2l+1}\right\ } } \nonumber \\ & = \sum\limits^{k-1}_{k=0}{\sum\limits^{m-1}_{m=0}{{\rm tr}\left\ { { \rm e}\left\ { \left(h(km)\tilde{\mathbf{e}}^{\rm t}_{m+1}- \tilde{\mathbf{\gamma}}^{\rm h}\mathbf{\phi}^{*}_{m } \mathbf{h}_{k,2l+1 } \mathbf{b}^{\rm t } \right ) \left(h(km)\tilde{\mathbf{e}}^{\rm t}_{m+1}- \tilde{\mathbf{\gamma}}^{\rm h}\mathbf{\phi}^{*}_{m } \mathbf{h}_{k,2l+1 } \mathbf{b}^{\rm t } \right)^{\rm h}\right\}\right\ } } } \nonumber \\ & \quad + n \sigma^2 \sum\limits^{k-1}_{k=0}{\sum\limits^{m-1}_{m=0}{\left\| \tilde{\mathbf{\gamma}}^{\rm h } \mathbf{\phi}^{*}_{m } \right\|^2_2 } } \nonumber \\ & = \sum\limits^{k-1}_{k=0 } \sum\limits^{m-1}_{m=0 } \big({\rm e}\left\ { |h(km)|^2 \right\ } - \tilde{\mathbf{g}}^{\rm h}_0{\rm e}\left\{h(km)\mathbf{h}^{*}_{k,2l+1}\right\}\tilde{\mathbf{\gamma } } - \tilde{\mathbf{\gamma}}^{\rm h}{\rm e}\left\{\mathbf{h}_{k,2l+1 } h^{*}(km ) \right\ } \tilde{\mathbf{g}}_0 \nonumber \\ & \qquad\qquad\quad + \tilde{\mathbf{\gamma}}^{\rm h}{\rm e}\left\{\mathbf{\phi}^{*}_{m}\mathbf{h}_{k,2l+1}\mathbf{b}^{\rm t}\mathbf{b}^{*}\mathbf{h}^{*}_{k,2l+1}\mathbf{\phi}_{m}\right\}\tilde{\mathbf{\gamma}}\big ) + n^2 \sigma^2 \left\|\tilde{\mathbf{\gamma}}\right\|^2_2 \nonumber \\ & = \sum\limits^{k-1}_{k=0 } \sum\limits^{m-1}_{m=0 } \big({\rm e}\left\ { |h(km)|^2 \right\ } - \tilde{\mathbf{g}}^{\rm h}_0{\rm e}\left\{h(km)\mathbf{h}^{*}_{k,2l+1}\right\}\tilde{\mathbf{\gamma } } - \tilde{\mathbf{\gamma}}^{\rm h}{\rm e}\left\{\mathbf{h}_{k,2l+1 } h^{*}(km ) \right\ } \tilde{\mathbf{g}}_0 \nonumber \\ & \qquad\qquad\quad + \tilde{\mathbf{\gamma}}^{\rm h}{\rm e}\left\{\mathbf{h}_{k,2l+1}\mathbf{b}^{\rm t}\mathbf{b}^{*}\mathbf{h}^{*}_{k,2l+1}\right\}\tilde{\mathbf{\gamma}}\big ) + n^2 \sigma^2 \left\|\tilde{\mathbf{\gamma}}\right\|^2_2 \end{aligned}\ ] ] under the constraint of the constant , to minimize the error in , we just need to minimize the first term of . thus , for obtaining the analysis window with the least - squared error, we formulate where by , we have therefore , the optimal solution is for simplicity , by replacing _k _ and _ m _ of in with _m _ and _ k _ , according to , the element of in is expressed by for \cup[k-\alpha+1 , k-1] ] , and .meanwhile , according to , the element of in is eqs . andshow that in is related to the synthesis window and its shifts and the channel covariance .when the channel is ideal , there is one channel delay , i.e. , . in this case, we have and .thus , the optimal analysis window in is the same as the optimal analysis window in . by decreasing the length of the analysis window to ,the complexity of the ldgt can be reduced compared to the frequency - domain dgt .mk_-point fft , the number of the complex multiplications of the convolutions between and is reduced to , and the number of multiplications based on fft for the ldgt is reduced to in . 
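The end result of the derivation above has the familiar structure of a regularized least-squares (Wiener-type) solution: the plain pseudoinverse is replaced by normal equations that contain the channel second-order statistics and a noise penalty proportional to the noise variance. The toy sketch below illustrates only this algebraic structure; the matrices are random placeholders rather than the paper's channel and window matrices.

```python
import numpy as np

# Toy sketch of the algebraic structure of the statistics-aware window design:
# minimizing ||b - A @ gamma||^2 + penalty * ||gamma||^2 replaces the pseudoinverse
# by regularized normal equations.  A and b are random stand-ins for the combined
# channel/synthesis-window matrix and the target response.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 21)) + 1j * rng.standard_normal((64, 21))
b = rng.standard_normal(64) + 1j * rng.standard_normal(64)
penalty = 64 * 0.1                       # plays the role of N * sigma^2 (assumed value)

gamma_ls = np.linalg.pinv(A) @ b         # noise-free / ideal-channel optimum
gamma_mmse = np.linalg.solve(A.conj().T @ A + penalty * np.eye(21),
                             A.conj().T @ b)
print("||gamma_ls||   =", np.linalg.norm(gamma_ls))
print("||gamma_mmse|| =", np.linalg.norm(gamma_mmse))  # smaller norm: less noise enhancement
```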
the same as the frequency - domain dgt receiver , the data detection in after the ldgtis also used .thus , for , the complexity of the ldgt receiver is lower than the complexity of the frequency - domain dgt receiver .table [ table2 ] compares the complexities of several gfdm receivers in a broadband channel , where indicates the span of a receiver filter in the neighborhood of each subcarrier band in and is the number of iterations in the sic algorithm . according to , and considered for the mf / mf - sic and zf receivers .considering the channel equalization in ofdm , for fair complexity comparison , fde is used as the channel equalization in the zf receiver in , the fft - based zf / mf receiver in , the mf - sic receiver in , and the zf / mf receiver for gfdm in .the fde for the channel of length _ mk _ in the gfdm receivers has complex multiplications caused by a pair of fft and ifft and zf / mf . for simplicity ,uncoded systems are considered here .j _ be the size of the constellation . for ,the ldgt in can make a fast implementation of gfdm signal recovery . as shown in fig .[ fig : fig6 ] , for small , the zf / mf receiver for gfdm in has the lowest complexity , while the ldgt receiver has the complexity close to the zf / mf receiver in and the fft - based mf receiver in and better than the fft - based zf receiver in . on the contrary , when , the ldgt receiver has the lowest complexity among the gfdm receivers . [ ht ] .computational complexities of different gfdm receiver techniques in a broadband channel [ cols="^,^",options="header " , ] in fig .[ fig : fig4 ] , the ber performances of the frequency - domain dgt , the truncated frequency - domain dgt and the ldgt with varying lengths of the analysis window and varying roll - off factors are depicted in rayleigh fading channel .it is shown that the ldgt can obtain better ber performance than the truncated frequency - domain dgt , such as for and _ _ l__=9 in qpsk and and _ _ l__=20 in 16qam .compared to the frequency - domain dgt , the ldgt has the system performance degradation for the inaccurate , which is the analysis window in the local subband , obtained by the least squares criterion in , but with the increased _, the ldgt can obtain better ber performance than the frequency - domain dgt for the improved accuracy of and the removal of the part of the channel noise due to the local property of .for example , when and _ _ l__=9 in qpsk and , _ _ l__=20 in 16qam , the ldgt can obtain better ber performance than the frequency - domain dgt , while the truncated frequency - domain dgt can not do .meanwhile , the complexity of the ldgt in , the same as the truncated frequency - domain dgt in , is significantly reduced compared to the frequency - domain dgt in , such as when , _ _ l__=20 in 16qam , the complexity reduction ratio is 85.5% . furthermore , with a small roll - off factor , both the ldgt and the truncated frequency - domain dgt can obtain the same ber performance as the frequency domain dgt in the whole band , such as .it is concluded that compared to to the frequency - domain dgt in the whole band , the ldgt with a small length of the analysis window has significant complexity reduction while it can achieve a similar or better error performance .[ hp ] + figs . 
[ fig : fig7 ] and [ fig : fig5 ] compare the ber performances among the zf receiver in , the fft - based mf receiver in , the mf - sic receiver in , the zf receiver in , and the ldgt receiver in a narrowband channel and a broadband channel , respectively , where qpsk is adopted . compared to the other gfdm receivers , the ldgt receiver shows the promising ber performance .the ber performance in the ldgt receiver can be significantly improved by a large _l _ or a small roll - off factor .for example , let the parameter increase from to when in the broadband channel and the performances are shown in fig .[ fig : fig5 ] . in this case , the ldgt receiver can obtain the better ber performance than the zf receiver in , the zf receiver in , and the mf - sic receiver with =1 .this is because our proposed ldgt receiver does not use a direct channel equalization or the symbol - by - symbol detection in to calculate the soft information of the channel decoder .however , before the calculation of the soft information , the other gfdm receivers in still employ channel equalization before decoding . without consideration of the complexity of the soft information calculation and the channel decoding , according to table [ table2 ] , in the coded gfdm system with , compared to the zf receiver in , the mf - sic receiver in , the zf receiver in and the fft - based mf receiver in , the complexity reduction ratios in ldgt receiver with are 99.6% , 66.5% , 50.4% , and 60.3% , respectively .thus , the ldgt receiver has the lowest complexity while maintaining considerable ber performance in the broadband channel .in this paper , the transmitted gfdm signal was first considered as the idgt in time domain and frequency domain , respectively .then , for redcing the complexity caused by the channel equalization , we proposed the frequency - domain dgt for the received gfdm signal to simplify the gfdm signal recovery similar to ofdm . by analyzing the interference caused by the frequency - domain dgt, the channel with high coherence and a small roll - off factor of the synthesis widow can lead to small interference to the received signal .based on the localized synthesis window in the frequency domain , the ldgt was proposed in the local band to further reduce the complexity of the frequency - domain dgt in the whole band .although the truncation of the frequency - domain dgt can achieve the same complexity as the ldgt , we proved that the data demodulated by the ldgt with the optimal analysis window has the least - squared error in the ideal channel and the broadband channel compared to the truncated frequency - domain dgt .simulation results showed that as the length of the optimal analysis window increases , the ldgt can obtain ber performance as good as the frequency - domain dgt , while having notable complexity reduction compared to other gfdm receivers .99 n. michailow , m. matth , i. gaspar , a. navarro caldevilla , l. l. mendes , a. festag , and g. fettweis , generalized frequency division multiplexing for 5th generation cellular networks , " _ ieee trans . on commun .9 , pp . 3045 - 3061 , 2014 .r. datta , n. michailow , m. lentmaier , and g. fettweis , gfdm interference cancellation for flexible cognitive radio phy design , " in _ proc .76th ieee vtc fall _ ,qubec city , qc , canada , sep .2012 , pp . 1 - 5 .g. wunder , p. jung , m. kasparick , t. wild , f. schaich , y. chen , s. brink , i. gaspar , n. michailow , a. festag , l. mendes , n. cassiau , d. ktenas , m. dryjanski , s. pietrzyk , b. eged , p. 
vago , and f. wiedmann , 5gnow : non - orthogonal , asynchronous waveforms for future mobile applications , " _ ieee commun . mag . _ , pp . 97 - 105 , feb . 2014 . i. gaspar , n. michailow , a. navarro , e. ohlmer , s. krone , and g. fettweis , low complexity gfdm receiver based on sparse frequency domain processing , " in _ proc . 77th ieee vtc spring _ , dresden , germany , jun . 2013 , pp . 1 - 6 .
generalized frequency division multiplexing ( gfdm ) is a recent multicarrier 5g waveform candidate with flexible pulse shaping filters . however , the flexibility of choosing a pulse shaping filter may result in inter - carrier interference ( ici ) and inter - symbol interference ( isi ) , which become more severe in a broadband channel . in order to eliminate the isi and ici , based on the discrete gabor transform ( dgt ) , in this paper a transmitted gfdm signal is first treated as an inverse dgt ( idgt ) , and then a frequency - domain dgt is formulated to recover ( as a receiver ) the gfdm signal . furthermore , to reduce the complexity , a suboptimal frequency - domain dgt called the local dgt ( ldgt ) is developed . some analyses are also given for the proposed dgt - based receivers . discrete gabor transform ( dgt ) , generalized frequency division multiplexing ( gfdm ) , inter - carrier interference ( ici ) , inter - symbol interference ( isi ) .
in order to establish a link to experiments , every physical theory needs to define the notions of observers and observables . from an experimentalist s point of view, an observation is the process of an observer performing an experiment in which he measures a number of physical quantities , called observables .each measured observable is expressed by a single number or a set of numbers . in order to understand the meaning of these numbers from a theorist s point of view , and thus in a mathematical language ,observers and observables must be modeled by mathematical objects , which can in turn be related to the outcomes of measurements .this model determines how the result of an observation depends on the observer who is performing it , and how the results obtained by different observers can be related to each other . in this workwe will focus on geometric models for these relations .we start our discussion from the viewpoint of general relativity .the most basic notion of general relativity is that of spacetime , which is modeled by a smooth manifold equipped with a pseudo - riemannian metric of lorentzian signature , an orientation and a time orientation .observers are modeled by world lines , which are smooth , future directed , timelike curves .their tangent vectors satisfy by a reparametrization we can always normalize the tangent vectors , so that in this case we call the curve parameter the proper time along the world line and denote it by the letter instead of .the proper time along a timelike curve with arbitrary parametrization is given by the arc length integral the clock postulate of general relativity states that any clock moving along the world line measures the proper time , independent of the construction of the clock .the prescription for the measurement of time is thus crucially linked to the lorentzian metric of spacetime .similarly , the metric provides a definition of rulers and the length of spacelike curves by the same expression of the arc length integral .finally , it also defines the angle between two tangent vectors at the same point as in summary , the lorentzian metric defines the _ geometry of spacetime_. closely related to the geometry of spacetime is the notion of causality .it answers the question which events on a spacetime manifold can have a causal influence on which other events on .an event at can influence an event if and only if there exists a continuous , future directed , causal ( i.e. , timelike or lightlike ) curve from to .all events which can be influenced by constitute the causal future of .conversely , all events which can influence form the causal past of .this structure , called the _ causal structure of spacetime _ , is defined by the metric geometry via the definition of causal curves .the lorentzian spacetime metric serves several further purposes besides providing a definition of spacetime geometry and causality .we have already seen that it enters the definition of observer world lines as timelike curves , whose notion is thus also relevant when we consider the measurements of observables by these observers .observables are modeled by tensor fields , which are smooth sections of a tensor bundle over .their dynamics are consequently modeled by tensorial equations , which are derived from a diffeomorphism - invariant action of the generic form where the lagrange function depends on the metric geometry , the fields and their derivatives . 
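Since the inline formulas in this passage were lost in extraction, it may help to restate them. The following are the standard expressions, written here for the signature convention (-,+,+,+):

```latex
\begin{align}
  g_{ab}\,\dot\gamma^a\dot\gamma^b &< 0
    && \text{(timelike tangent vectors)},\\
  \tau[\gamma] &= \int_{t_1}^{t_2}
    \sqrt{-g_{ab}(\gamma(t))\,\dot\gamma^a(t)\,\dot\gamma^b(t)}\,dt
    && \text{(proper time / arc length)},\\
  \cos\theta &= \frac{g_{ab}\,u^a v^b}
    {\sqrt{g_{cd}\,u^c u^d}\,\sqrt{g_{ef}\,v^e v^f}}
    && \text{(angle between spacelike $u$, $v$)},\\
  S_{\rm matter} &= \int_M \mathrm{d}^4x\,\sqrt{-g}\;
    \mathcal{L}\bigl(g_{ab},\phi,\partial\phi\bigr)
    && \text{(diffeomorphism-invariant matter action)}.
\end{align}
```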
combining the notions of observers and observables we may define an observation by an observer with world line at proper time as a measurement of the field at the point .however , this definition yields us an element of the tensor space , and not a set of numbers , as we initially presumed . we further need to choose a frame ,by which we denote a basis of the tangent space .this frame allows us to express the tensor in terms of its components with respect to .the tensor components of are finally the numeric quantities which are measured in an experiment .the frame chosen by an observer to make measurements is usually not completely arbitrary . since the basis vectors are elements of the tangent space , they are characterized as being timelike , lightlike or spacelike and possess units of time or length .we can thus use the notions of time , length and angles defined by the spacetime metric to choose an orthonormal frame satisfying the condition with one unit timelike vector and three unit spacelike vectors . the clock postulate that proper timeis measured by the arc length along the observer world line further implies a canonical choice of the timelike vector as the tangent vector to the observer world line .this observer adapted orthonormal frame is a convenient choice for most measurements .it follows immediately from this model of observables and observations how the measurements of the same observable made by two coincident observers , whose world lines and meet at a common spacetime point , must be translated between their frames of reference .if both observer frames and are orthonormalized , the condition implies that they are related by a lorentz transform .the same lorentz transform must then be applied to the tensor components measured by one observer in order to obtain the tensor components measured by the other observer , using the standard formula this close connection between observations made using different observer frames constitutes the principle of local lorentz invariance .it is a consequence of the fact that we model the geometry of spacetime , which in turn defines the notion of orthonormal frames , by a lorentzian metric .even deeper implications arise from the fact that we model both observables and geometry by tensor fields on the spacetime manifold , and observations by measurements of tensor components .if we introduce coordinates on and use their coordinate base in order to express the components of tensor fields , it immediately follows how these components translate under a change of coordinates .moreover , since we model the dynamics of physical quantities by tensor equations , they are independent of any choice of coordinates .this coordinate freedom constitutes the principle of general covariance .besides its role in providing the background geometry which enters the definition of observers , observations and causality , the lorentzian metric of spacetime has a physical interpretation on its own , being the field which carries the _gravitational interaction_. it does not only govern the dynamics of matter fields , but is also influenced by their presence .this is reflected by the dynamics of gravity , which is governed by the einstein - hilbert action which together with the matter action yields the einstein equations understanding the geometry of spacetime as a dynamical quantity , which mutually interacts with matter fields , establishes a symmetric picture between both matter and gravity . 
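For completeness, the stripped formulas referred to in this passage are the standard ones: the orthonormality condition on the observer frame, the transformation of tensor components under a local Lorentz transformation (written for a purely contravariant tensor), the Einstein-Hilbert action, and the Einstein equations:

```latex
\begin{align}
  g(e_i,e_j) &= \eta_{ij}\,,\qquad e_0 = \dot\gamma\,,\\
  T'^{\,i_1\cdots i_r} &= \Lambda^{i_1}{}_{j_1}\cdots\Lambda^{i_r}{}_{j_r}\,
    T^{\,j_1\cdots j_r}\,,\qquad \Lambda \in \mathrm{SO}_0(1,3)\,,\\
  S_{\rm EH} &= \frac{1}{2\kappa}\int_M \mathrm{d}^4x\,\sqrt{-g}\;R\,,\\
  R_{ab} - \tfrac{1}{2}R\,g_{ab} &= \kappa\,T_{ab}\,.
\end{align}
```

It is this mutual coupling between geometry and matter that constitutes the symmetric picture mentioned above.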
however , it is exactly this symmetry between gravity and matter which may lead us to new insights on the nature of spacetime geometry , and even question its description in terms of a lorentzian metric , from which we derived a number of conclusions as stated above .this stems from the fact that all known matter fields in the standard model are nowadays described by quantum theories .while the process of quantization has been successfully applied to matter fields even beyond the standard model , it is significantly harder in the case of gravity .this difficulty has lead to a plethora of different approaches towards quantum gravity , many of which suggest modifications to the geometry of spacetime , or even resolve the unity of spacetime into a time evolution of spatial geometry .main contenders which fall into this class are given by geometrodynamic theories such as loop quantum gravity and sum - over - histories formulations such as spin foam models or causal dynamical triangulations .theories of this type introduce non - tensorial quantities , which may in turn suggest a breaking of general covariance at least at the quantum level .moreover , other approaches to gravity may induce a breaking of local lorentz invariance , for example , by a preferred class of observers , or test particles , described by a future unit timelike vector field .the possible observer dependence of physical quantities beyond tensorial transformations motivates the introduction of spacetime geometries obeying a similar observer dependence , which generalize the well - known lorentzian metric geometry . in this work we review and discuss two different , albeit similar , approaches to observer dependent geometries under the aspects of observers , causality and gravity . in section [ sec : finsler ] we review the concept of finsler spacetimes .we show that it naturally generalizes the causal structure of lorentzian spacetimes , provides clear definitions of observers , observables and observations , serves as a background geometry for field theories and constitutes a model for gravity .in section [ sec : cartan ] we review the concept of observer space in terms of cartan geometry .our discussion is based on the preceding discussion of finsler spacetimes , from which we translate the notions of observers and gravity to cartan language .we finally ponder the question which implications observer dependent geometries have on the nature of spacetime .as we have mentioned in the introduction , the metric geometry of spacetime serves multiple roles : it provides a causal structure , crucially enters the definition of observers , defines measures for length , time and angles and mediates the gravitational interaction . in this sectionwe discuss a more general , non - metric spacetime geometry which is complete in the sense that it serves all of these roles .this generalized geometry is based on the concept of finsler geometry .models of this type have been introduced as extensions to einstein and string gravity . 
in this workwe employ the finsler spacetime framework , which is an extension of the well - known concept of finsler geometry to lorentzian signature , and review some of its properties and physical applications .this framework is of particular interest since , in addition to its aforementioned completeness , it can also be used to model small deviations from metric geometry and provides a possible explanation of the fly - by anomaly .the starting point of our discussion is the clock postulate , which states that the time measured by an observer s clock moving along a timelike curve is the proper time given by the arc length integral .the expression under the integral depends on both the position along the curve and the tangent vector .hence , it can be regarded as a function on the tangent bundle .the clock postulate thus states that the proper time measured by an observer s clock is given by the integral where is the function on the tangent bundle given by equation . for conveniencewe introduce a particular set of coordinates on .let be coordinates on . for then use the coordinates defined by we call these coordinates induced by the coordinates . as a further shorthand notation we use for the coordinate basis of .we now introduce a different , non - metric geometry of spacetime which still implements the clock postulate in the form of an arc length integral , but with a more general function on the tangent bundle .geometries of this type are known as finsler geometries , and is denoted the finsler function .the choice of we make here is not completely arbitrary . in order for the arc length integral to be well - defined and to obtain a suitable notion of spacetime geometry we need to preserve a few properties of the metric - induced finsler function . in particular we will consider only finsler functions which satisfy the following : . [ finsler : fpositive ] is non - negative , .[ finsler : fsmooth ] is a continuous function on the tangent bundle and smooth where it is non - vanishing , i.e. , on .[ finsler : fhomorev ] is positively homogeneous of degree one in the fiber coordinates and reversible , i.e. , property [ finsler : fpositive ] guarantees that the length of a curve is non - negative . we can not demand strict positivity here , since already in the metric case we have the notion of lightlike curves , for which .for the same reason of compatibility with the special case of a lorentzian spacetime metric we can not demand that is smooth on all of , since the metric finsler function does not satisfy this condition .it does , however , satisfy the weaker condition [ finsler : fsmooth ] , which guarantees that the arc length integral depends smoothly on deformations of the curve , unless these pass the critical region where . finally , we demand that the arc length integral is invariant under changes of the parametrization and on the direction in which the curve is traversed , which is guaranteed by condition [ finsler : fhomorev ]. one may ask whether the lorentzian metric can be recovered in case the finsler function is given by .indeed , the finsler metric which is defined everywhere on , agrees with whenever is spacelike and with when is timelike . however , for null vectors where we see that the finsler metric is not well - defined , since for a general finsler function will not be differentiable . 
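In formulas, and following conventions common in the Finsler-spacetime literature (which the stripped expressions above presumably follow), homogeneity and reversibility, the Finsler metric, and the metric limit read:

```latex
\begin{align}
  F(x,\lambda y) &= |\lambda|\,F(x,y)\,,\qquad \lambda\in\mathbb{R}\,,\\
  g^F_{ab}(x,y) &= \frac{1}{2}\,\bar\partial_a\bar\partial_b F^2(x,y)\,,\qquad
    \bar\partial_a \equiv \frac{\partial}{\partial y^a}\,,\\
  F(x,y) &= \sqrt{\bigl|g_{ab}(x)\,y^a y^b\bigr|}
    \;\;\Longrightarrow\;\;
    g^F_{ab} = \pm\,g_{ab}\ \text{on space-/timelike vectors}\,.
\end{align}
```

On the null cone, where $g_{ab}y^ay^b = 0$, the absolute value renders $F^2$ non-differentiable, so $g^F_{ab}$ is not defined there.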
as a consequence any quantities derived from the metric , such as connections and curvatures , are not defined along the null structure , which renders this type of geometry useless for the description of lightlike geodesics . in the followingwe will therefore adopt the following definition of finsler spacetimes which remedies this shortcoming : a _ finsler spacetime _ is a four - dimensional , connected , hausdorff , paracompact , smooth manifold equipped with continuous real functions on the tangent bundle which has the following properties : .[ finsler : lsmooth ] is smooth on the tangent bundle without the zero section .[ finsler : lhomogeneous ] is positively homogeneous of real degree with respect to the fiber coordinates of , and defines the finsler function via .[ finsler : lreversible ] is reversible : .[ finsler : lhessian ] the hessian of with respect to the fiber coordinates is non - degenerate on , where has measure zero and does not contain the null set .[ finsler : timelike ] the unit timelike condition holds , i.e. , for all the set with contains a non - empty closed connected component .one can show that the finsler function induced from the fundamental geometry function defined above indeed satisfies the conditions [ finsler : fpositive ] to [ finsler : fhomorev ] we required .further , the finsler metric is defined on and is non - degenerate on , where is the degeneracy set of the hessian defined in condition [ finsler : lhessian ] above .this definition in terms of the smooth fundamental geometry function will be the basis of our discussion of finsler spacetimes in the following sections , where we will see that it also extends the definitions of other geometrical structures such as connections and curvatures to the null structure .the first aspect we discuss is the causal structure of finsler spacetimes and the definition of observer trajectories . for this purposewe first examine the causal structure of metric spacetimes from the viewpoint of finsler geometry , before we come to the general case .we have already mentioned in the introduction that the definition of causal curves is given by the split of the tangent spaces into timelike , spacelike and lightlike vectors .figure [ fig : smcausal ] shows this split induced by the lorentzian metric on the tangent space .solid lines mark the light cone which is constituted by null vectors . in terms of the fundamental geometry function are given by the condition .outside the light cone we have spacelike vectors with , while inside the light cone we have timelike vectors with .the hessian therefore has the signature indicated in condition [ finsler : timelike ] inside the light cone .in both the future and the past light cones we find a closed subset with . using the time orientationwe pick one of these subsets and denote it the shell of future unit timelike vectors . in the tangent space of a metric spacetime.,width=272 ] the shell has the important property that rescaling yields a convex cone the convexity of this cone is crucial for the interpretation of the elements of as tangent vectors to observer world lines , as it is closely linked to the hyperbolicity of the dispersion relations of massive particles and the positivity of particle energies measured by an observer .we require this property also for the future light cone of a finsler spacetime . 
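For orientation, a minimal concrete instance of the above definition (given here only as an illustration) is the metric case, for which the fundamental geometry function, the shell of future unit timelike vectors and the cone of observer directions take the form:

```latex
\begin{align}
  L(x,y) &= g_{ab}(x)\,y^a y^b\,,\qquad r = 2\,,\qquad F = |L|^{1/2}\,,\\
  S_x &= \bigl\{\,y \in T_xM \;:\; |L(x,y)| = 1\,,\ y\ \text{future timelike}\,\bigr\}\,,\\
  C_x &= \bigl\{\,\lambda y \;:\; y \in S_x\,,\ \lambda > 0\,\bigr\}\,.
\end{align}
```

Whether such a convex cone also arises for genuinely Finslerian choices of $L$ is the question addressed next.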
in order to find this structure in terms of the fundamental geometry function consider the simple bimetric example with two lorentzian metrics and , where we assume that the light cone of lies in the interior of the light cone of .the sign of and the signature of on the tangent space are shown in figure [ fig : bmcausal ] .solid lines mark the null structure , while the dashed - dotted lines marks the degeneracy set of as defined in condition [ finsler : lhessian ] .the remaining dashed and dotted lines mark the unit timelike vectors as defined in condition [ finsler : timelike ] ; for these only the future directed tangent vectors are shown . the connected component marked by the dashed line is closed , while the one marked with the dotted line is not .hence , the former marks the set .as the figure indicates , the set indeed forms a convex cone for this simple bimetric example .it can be shown that condition [ finsler : timelike ] always implies the existence of a convex cone of observers , in consistency with the requirement stated above . in the tangent space of a bimetric finsler spacetime .,width=498 ]it is now straightforward to define : a physical _ observer world line _ on a finsler spacetime is a curve such that at all times the tangent vector lies inside the forward light cone , or in the unit timelike shell if the curve parameter is given by the proper time . in the following section we will discuss which of these observers are further singled out by the finsler spacetime geometry as being inertial observers . in the preceding sectionwe have seen which trajectories are allowed for physical observers .we now turn our focus to a particular class of observers who follow the trajectories of freely falling test masses .these are denoted inertial observers , since in their local frame of reference gravitational effects can be neglected . on a metric spacetimethey are given by those trajectories which extremize the arc lenth integral .in finsler geometry we can analogously obtain them from extremizing the proper time integral .variation with respect to the curve yields the equation of motion where the coefficients are given by the following definition : the coefficients of the _ cartan non - linear connection _ are given by \ ] ] and define a connection in the sense that they induce a split of the tangent bundle over , where is spanned by and is spanned by .in the case of a metric - induced finsler function the coefficients are given by where denotes the christoffel symbols .the split of into horizontal and vertical subbundles plays an important role in finsler geometry , as we will see in the following sections . for conveniencewe use the following adapted basis of : the _ berwald basis _ is the basis of which respects the split induced by the cartan non - linear connection . for the dual basiswe use the notation it induces a similar split of the cotangent bundle into the subbundles we can now reformulate the geodesic equation by making use of the geometry on . for this purposewe canonically lift the curve to a curve in .the condition that is a finsler geodesic then translates into the condition since is simply the tangent bundle coordinate , it thus follows that the canonical lift of a finsler geodesic must be an integral curve of the vector field which is defined as follows : the _ geodesic spray _ is the vector field on which is defined by we now generalize this statement to null geodesics . 
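Collecting the formulas that were stripped from this passage (standard expressions in the Finsler literature; sign and index conventions may differ from the original): the geodesic equation, the Cartan non-linear connection coefficients, their metric limit, the Berwald basis and the geodesic spray read

```latex
\begin{align}
  \ddot x^a + N^a{}_b\bigl(x,\dot x\bigr)\,\dot x^b &= 0\,,\\
  N^a{}_b &= \frac{1}{4}\,\bar\partial_b\Bigl[
      g^{F\,ac}\bigl(y^d\partial_d\bar\partial_c F^2 - \partial_c F^2\bigr)\Bigr]\,,\\
  N^a{}_b &= \Gamma^a{}_{bc}(x)\,y^c
      \qquad\text{for } F^2 = \bigl|g_{cd}(x)\,y^c y^d\bigr|\,,\\
  \delta_a &= \partial_a - N^b{}_a\,\bar\partial_b\,,\qquad
  S = y^a\,\delta_a\,.
\end{align}
```

Extending these expressions to null directions requires some care, as discussed next.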
herewe encounter two problems .first , we see that the coefficients of the non - linear connection are not well - defined for null vectors where , since is not differentiable on the null structure .we therefore need to rewrite their definition in terms of the fundamental geometry function .it turns out that it takes the same form \,,\ ] ] where has been replaced by and by .we can see that this is well - defined whenever is non - degenerate , and thus in particular on the null structure .the second problem we encounter is that we derived the geodesic equation from extremizing the action , which vanishes identically in the case of null curves .we therefore need to use the constrained action = \int_{t_1}^{t_2}\left(l(\gamma(t),\dot{\gamma}(t ) ) + \lambda(t)[l(\gamma(t),\dot{\gamma}(t ) ) - \kappa]\right)dt\ ] ] with a lagrange multiplier and a constant .a thorough analysis shows that the equations of motion derived from this action are equivalent to the geodesic equation also for null curves .the definitions of this and the preceding section provide us with the notions of general and inertial observers . in the following sectionwe will discuss how these observers measure physical quantities and how the observations by different observers can be related .as we have mentioned in the introduction , the notion of geometry in physics defines not only causality and the allowed trajectories of observers , but also their possible observations and the relation between observations made by different observers . in the case of metric spacetime geometrywe have argued that observations are constituted by measurements of the components of tensor fields at a spacetime point with respect to a local frame at . a particular class of frames singled out by the geometry and most convenient for measurements is given by the orthonormal frames .different observations at the same spacetime point , but made with different local orthonormal frames , are related by lorentz transforms . in this sectionwe discuss a similar definition of observations on finsler spacetimes and relate the observations made by different observers . as a first stepwe need to generalize the notion of observables from metric spacetimes to finsler spacetimes . in their definition in section [ subsec : finslerdef ]we have already seen that the geometry of finsler spacetimes is defined by a homogeneous function on the tangent bundle , which in turn induces a finsler function and a finsler metric .these geometric objects explicitly depend not only on the manifold coordinates , but also on the coordinates along the fibers of the tangent bundle .it therefore appears natural that also observables should be functions not on the spacetime manifold , but homogeneous functions on its tangent bundle .a straightforward idea might thus be to model observables as homogeneous tensor fields over , i.e. , as sections of a tensor bundle however , since is an eight - dimensional manifold , each tensor index would then take eight values , so that the number of components of a tensor of rank would increase by a factor of .since we do not observe these additional tensor components in nature , we will not follow this idea .instead we define observables as tensor fields with respect to a different vector bundle over , whose fibers are four - dimensional vector spaces generalizing the tangent spaces of . 
in the preceding section we have seen that the cartan non - linear connection of a finsler spacetime equips the tangent bundle of with a split into a horizontal subbundle and a vertical subbundle .the fibers of both subbundles are four - dimensional vector spaces .a particular section of , which we have already encountered and which is closely connected to finsler geodesics , is the geodesic spray .we therefore choose as the bundle from which we define observables as follows : the _ observables _ on a finsler spacetime are modeled by homogeneous horizontal tensor fields , i.e. , sections of the tensor bundle over the tangent bundle of .consequently we define observations in full analogy to the case of metric spacetime geometry : an _ observation _ of an observable by an observer with world line at proper time is a measurement of the components of the horizontal tensor with respect to a basis of the horizontal tangent space at .as we have argued in the introduction , the most natural frame an observer on a metric spacetime can choose is an orthonormal frame whose temporal component agrees with his four - velocity .if we wish to generalize this concept to finsler spacetimes , we first need to map the basis vectors , which are now elements of , to . for this purposewe use the differential of the tangent bundle map , which isomorphically maps every horizontal tangent space to .we can then orthonormalize the frame using the finsler metric , which now explicitly depends on the observer s four - velocity . taking into accountthe signature of the finsler metric on timelike vectors inside the forward light cone we arrive at the following definition : an _ orthonormal observer frame _ on an observer world line at proper time is a basis of the horizontal tangent space at which has and is orthonormal with respect to the finsler metric , an important property of metric spacetimes is the fact that any two orthonormal observer frames at the same spacetime point are related by a unique lorentz transform . together with the definition that observations yield tensor components this property implies local lorentz invariance , which means that the outcomes of measurements are related by the standard formula .we now generalize this concept to finsler spacetimes . for this purposewe consider two coincident observers whose world lines meet at together with orthonormal frames at .one immediately encounters the difficulty that and are now bases of different vector spaces and .we therefore need to find a map between these vector spaces which in particular preserves the notion of orthonormality .the canonical map given by the isomorphisms and , however , does not have this property . in the followingwe will therefore discuss a different map which will yield the desired generalization of lorentz transformations . in order to construct a map between the horizontal tangent spaces and we employ the concept of parallel transport .we thus need a connection on the horizontal tangent bundle with respect to which the finsler metric is covariantly constant , so that the notion of orthonormality is preserved . 
in finsler geometry an appropriate choice which satisfies these conditions is the cartan linear connection on the tangent bundle , which is defined as follows : the _ cartan linear connection _ is the connection on defined by the covariant derivatives where the coefficients are given by [ eqn : clcoefficients ] the cartan linear connection is adapted to the cartan non - linear connection in the sense that it respects the split into horizontal and vertical components . by restriction it thus provides a connection on the horizontal tangent bundle .given a curve \to tm ] satisfying however ,this map in general depends on the choice of the curve . we therefore restrict ourselves to a particular class of curves .note that and have the same base point in , and are thus elements of the same fiber of the tangent bundle .hence it suffices to consider only curves which are fully contained in the same fiber .curves of this type are vertical , i.e. , their tangent vectors lie in the vertical tangent bundle .we further impose the condition that is an autoparallel of the cartan linear connection .this uniquely fixes the curve , provided that is in a sufficiently small neighborhood of . using the unique vertical autoparallel defined above we can now generalize the notion of lorentz transformations to coincident observers on a finsler spacetime . consider two observers meeting at and using frames and ,i.e. , orthonormal bases of and .the map maps the horizontal basis vectors to horizontal vectors , which constitute a basis of . since is orthonormal with respect to and the cartan linear connection preserves the finsler metric , it follows that is orthonormal with respect to .since also is orthonormal with respect to the same metric , there exists a unique ordinary lorentz transform mapping to .the combination of the parallel transport along and this unique lorentz transform finally defines the desired generalized lorentz transform .the procedure to map bases of the horizontal tangent space between coincident observers further allows us to compare horizontal tensor components between these observers , so that they can communicate and compare their measurements of horizontal tensors .this corresponds to the transformation of tensor components of observables between different observer frames in metric geometry . since observables in metric geometryare modeled by spacetime tensor fields , their observation in one frame determines the measured tensor components in any other frame .this is not true on finsler spacetimes , since we defined observables as fields on the tangent bundle .they may therefore also possess a non - tensorial , explicit dependence on the four - velocity of the observer who measures them . as in metric geometry , also in finsler geometry the dynamics of tensor fields should be determined by a set of field equations which are derived from an action principle .this will be discussed in the next section . in the preceding sectionwe have argued that observables on a finsler spacetime are modeled by homogeneous horizontal tensor fields , which are homogeneous sections of the horizontal tensor bundle .we will now discuss the dynamics of these observable fields .for this purpose we will use a suitable generalization of the action to horizontal tensor fields on a finsler spacetime .this will be done in two steps .first we will lift the volume form from the spacetime manifold to its tangent bundle , then we generalize the lagrange function to fields on a finsler spacetime . 
in order to define a volume form on we proceed in analogy to the volume form of metric geometry , which means that we choose the volume form of a suitable metric on tm .we have already partly obtained this metric in the previous section when we discussed orthonormal observer frames .the definition of orthonormality we introduced corresponds to lifting the finsler metric to a horizontal metric on , which measures the length of horizontal vectors in .this metric needs to be complemented by a vertical metric , which analogously measures the length of vertical vectors in .both metrics together constitute the desired metric on the tangent bundle . the canonical choice for this metric is given by the sasaki metric defined as follows : the _ sasaki metric _ is the metric on the tangent bundle which is defined by the factor introduced here compensates for the intrinsic homogeneity of degree 1 of the one - forms , so that the sasaki metric is homogeneous of degree 0 .this intrinsic homogeneity becomes clear from the definition of the dual berwald basis , taking into account that the coefficients are homogeneous of degree 1 , as can be seen from their definition . using the volume form of the sasaki metric one can now integrate functions on the tangent bundle , if one chooses the function to be a suitable lagrange function for a physical field on a finsler spacetime , one encounters another difficulty . since all geometric structures and matter fields are homogeneous , it is natural to demand the same from the lagrange function .however , for a homogeneous function the integral over the tangent bundle generically diverges , unless the function vanishes identically .this follows from the fact that along any ray with in the value of is given by , where is the degree of homogeneity .this difficulty can be overcome by integrating the function not over , but over a smaller subset of which intersects each ray , which is not part of the null structure , exactly once , and which is defined as follows : the _ unit tangent bundle _ of a finsler spacetime is the set on which the finsler function takes the value . note that intersects each ray exactly once which is not part of the null structure .this suffices since the null structure is of measure 0 and therefore does not contribute to the integral over .the canonical metric on is given by the restriction of the sasaki metric , which finally determines the volume form .this is the volume form we will use in the generalized action integral . in the second part of our discussionwe generalize the lagrange function in the metric matter action . for simplicitywe restrict ourselves here to -form fields whose lagrange function depends only on the field itself and its first derivatives .these are of particular interest since , e.g. , the klein - gordon and maxwell fields fall into this category .the most natural procedure to generalize the dynamics of a given field theory from metric to finsler geometry is then to simply keep the formal structure of its lagrange function , but to replace the lorentzian metric by the sasaki metric and to promote the -form field to a horizontal -form field on .the generalized lagrange function we obtain from this procedure is now a function on , which we can integrate over the subset to form an action integral . using this procedure we encounter the problem that even though we have chosen to be horizontal , will in general not be horizontal . in order to obtain consistent field equationswe therefore need to modify our procedure . 
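For reference, the volume structures introduced in this section read, in formulas (the overall signs follow one common convention and may differ from the original):

```latex
\begin{align}
  G &= -\,g^F_{ab}\,\mathrm{d}x^a\otimes\mathrm{d}x^b
       \;-\; \frac{1}{F^2}\,g^F_{ab}\,\delta y^a\otimes\delta y^b\,,
  \qquad \delta y^a = \mathrm{d}y^a + N^a{}_b\,\mathrm{d}x^b\,,\\
  \Sigma &= \bigl\{\,(x,y)\in TM\setminus\{0\} \;:\; F(x,y) = 1\,\bigr\}\,,\qquad
  \mathrm{Vol}_\Sigma = \text{volume form of } G|_{T\Sigma}\,.
\end{align}
```

With these structures in place, the remaining issue is the one just noted: how to impose horizontality of the matter field in a consistent way.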
instead of initially restricting ourselves to horizontal -forms on the tangent bundle ,we let be an arbitrary -form with both horizontal and vertical components .the purely horizontal components can then be obtained by applying the horizontal projector in order to reduce the number of physical degrees of freedom to only these horizontal components we dynamically impose that the non - horizontal components vanish by introducing a suitable set of lagrange multipliers , so that the total action reads {\sigma}\,.\ ] ] variation with respect to the lagrange multipliers then yields the constraint that the vertical components of vanish .variation with respect to these vertical components fixes the lagrange multipliers .finally , variation with respect to the horizontal components of yields the desired field equations .it can be shown that in the metric limit they reduce to the usual field equations derived from the action for matter fields on a metric spacetime . in the previous sections we have considered the geometry of finsler spacetimes solely as a background geometry for observers , point masses and matter fields .we now turn our focus to the dynamics of finsler geometry itself .as it is also the case for lorentzian geometry , we will identify these dynamics with the dynamics of gravity .for this purpose we need to generalize the einstein - hilbert action , from which the gravitational field equations are derived , and the energy - momentum tensor , which acts as the source of gravity .we start with a generalization of the einstein - hilbert action to finsler spacetimes . as in the case of matter field theories detailed in the preceding sectionthis generalized action will be an integral not over spacetime , but over the unit tangent bundle , since the geometry is defined in terms of the homogeneous fundamental geometry function on tm .we have already seen that a suitable volume form on is given by the volume form of the restricted sasaki metric .this leaves us with the task of generalizing the ricci scalar in terms on finsler geometry .the most natural and fundamental notion of curvature is defined by the cartan non - linear connection , which we already encountered in the definition of finsler geodesics in section [ subsec : finslerpm ] and which corresponds to the unique split of the tangent bundle into horizontal and vertical components .this split is also the basic ingredient for the following construction .the curvature of the cartan non - linear connection measures the non - integrability of the horizontal distribution , i.e. , the failure of the horizontal vector fields to be horizontal .in fact their lie brackets are vertical vector fields , which are used in the following definition : the _ curvature of the non - linear connection _ is the quantity which measures the non - integrability of the horizontal distribution induced by the cartan non - linear connection , = ( \delta_bn^c{}_a - \delta_an^c{}_b)\bar{\partial}_c = r^c{}_{ab}\bar{\partial}_c\,.\ ] ] the simplest scalar one can construct from the curvature coefficients defined by is the contraction , so that the action for finsler gravity takes the form in the case of a metric - induced finsler function , in which the non - linear connection coefficients are given by , the expression under the integral indeed reduces to the ricci scalar , so that is a direct generalization of the einstein - hilbert action . 
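In formulas, and keeping the index placement of the fragment above, the curvature of the non-linear connection and the resulting gravitational action (up to a normalization constant, which is an assumption here) are

```latex
\begin{align}
  R^c{}_{ab} &= \delta_b N^c{}_a - \delta_a N^c{}_b\,,\\
  S_{\rm grav} &\propto \int_{\Sigma} R^a{}_{ab}\,y^b\;\mathrm{Vol}_\Sigma\,,
\end{align}
```

where the contraction $R^a{}_{ab}\,y^b$ is the simplest scalar built from the curvature coefficients referred to above.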
in order to obtain a full gravitational theory this action needs to be complemented by a matter action , such as the field theory action we encountered in the previous section .this total action then needs to be varied with respect to the mathematical object which fundamentally defines the spacetime geometry . on a finsler spacetimethis is the fundamental geometry function .consequently , the gravitational field equations are not two - tensor equations as in general relativity , but instead the scalar equation \bigg|_{\sigma } = \kappa t|_{\sigma}\end{gathered}\ ] ] on the unit tangent bundle . here denotes the energy - momentum scalar obtained by variation of the matter action with respect to the fundamental geometry function . for the field theory actionit is given by \right\}\right|_{\sigma}\,.\ ] ] it can be shown that in the metric limit the resulting gravitational field equation is equivalent to the einstein equations , whose free indices are to be contracted with .we finally remark that also the cartan linear connection we used to define generalized lorentz transformations in section [ subsec : finslerlt ] defines a notion of curvature , which may in principle be used to generalize the einstein - hilbert action .this curvature is defined as follows : the _ curvature of the cartan linear connection _ is given by }z\ ] ] for vector fields on .using the action of the cartan linear connection on the vector fields constituting the berwald basis and the coefficients one finds that its curvature can be written in the form where the coefficients are given by [ eqn : clcurvcomp ] in the metric limit the coefficient reduces to the riemann tensor , while the remaining coefficients and vanish .one may therefore consider the term as another generalization of the ricci scalar to generate the gravitational dynamics on finsler spacetimes .we do not pursue this idea further here and only remark that also other choices are possible .in the previous section we have seen that on finsler spacetimes the definitions of observers and observables are promoted from geometrical structures on the spacetime manifold to homogeneous geometrical structures on its tangent bundle , and that this homogeneity fixes quantities on when they are given on the unit tangent bundle .we have also seen that measurements by an observer probe these structures along a lifted world line in .however , it follows from the definition of physical observer trajectories that every curve is entirely confined to future unit timelike vectors , so that observations can be performed only on a smaller subset , which we denote observer space . in this sectionwe will therefore restrict our discussion to observer space and equip it with a suitable geometrical structure in terms of cartan geometry , which we derive from the previously defined finsler geometry .while cartan geometry turns out to be useful already as a geometry for spacetime in the context of gravity , it becomes even more interesting as a geometry for observer space and provides a better insight into the role of lorentz symmetry in canonical quantum gravity .we start our discussion with the definition of observer space as the space of all tangent vectors to a finsler spacetime which are allowed as tangent vectors of normalized observer trajectories , i.e. , observer trajectories which are parametrized by their proper time .this leads us to the definition : the _ observer space _ of a finsler spacetime is the set of all future unit timelike vectors , i.e. 
, the union of all unit shells inside the forward light cones .note that is a seven - dimensional submanifold of and that its tangent spaces are spanned by the vectors which satisfy .further , there exists a canonical projection onto the underlying spacetime manifold .the natural question arises which geometrical structure the finsler geometry on the spacetime manifold induces on its observer space .the structure which is most obvious already from our findings in the previous section is the restricted sasaki metric , which we defined in as the restriction of the full sasaki metric to and which we now view as a metric on the smaller set .it follows from the signature of that has lorentzian signature .another structure which we already encountered in the previous section is the geodesic spray . since it preserves the finsler function , , it is tangent to the level sets of , and thus in particular tangent to observer space .it therefore restricts to a vector field on , which we denote the reeb vector field : the _ reeb vector field _ is the restriction of the geodesic spray to , we now have a metric and a vector field on . combining these two structures we can form the dual one - form of the reeb vector field with respect to the restricted sasaki metric , which we denote the contact form : the _ contact form _ is the dual one - form of the reeb vector field with respect to the restricted sasaki metric , conversely , the reeb vector field is the unique vector field on which is normalized by and whose flow preserves , i.e. , which satisfies the naming of and originates from the notion of contact geometry . in this contexta contact form on a -dimensional manifold is defined as a one - form , which is maximally non - integrable in the sense that the -form is nowhere vanishing , hence defines a volume form , and the reeb vector field is the unique vector field satisfying .indeed it turns out that the volume form defined by is simply the volume form of the sasaki metric on .as we have seen in section [ subsec : finslerpm ] the finsler geometry induces a split of the eight - dimensional tangent bundle into two four - dimensional subbundles and , denoted the vertical and horizontal subbundles , respectively .a similar split also applies to the tangent bundle of observer space .it splits into the three subbundles which we denote the vertical , spatial and temporal subbundles , respectively .the vertical bundle is defined in analogy to the vertical tangent bundle as the kernel of the differential of the canonical projection .it is constituted by the tangent spaces to the shells of unit timelike vectors at and hence three - dimensional .its orthogonal complement with respect to the sasaki metric is the four - dimensional horizontal bundle .one can easily see that the contact form vanishes on .its kernel on defines the three - dimensional spatial bundle . finally , the orthogonal complement of in is the one - dimensional temporal bundle , which is spanned by the reeb vector field .the split of the tangent bundle has a clear physical interpretation .vertical vectors in correspond to infinitesimal generalized lorentz boosts , which change the velocity of an observer , but not his position .they are complemented by horizontal vectors in , which change the observer s position , but not his direction of motion . 
these further split into spatial translations in and temporal translations in with respect to the observer s local frame .this interpretation will become clear when we discuss the split of the tangent bundle from a deeper geometric perspective using the language of cartan geometry. we will give a brief introduction to cartan geometry in the following section . in order to describe the geometry of observer space ,we make use of a framework originally developed by cartan under the name `` method of moving frames '' .his description of the geometry of a manifold is based on a comparison to the geometry of a suitable model space .the latter is taken to be a homogeneous space , i.e. , the coset space of a lie group and a closed subgroup .homogeneous spaces were extensively studied in klein s erlangen program and are hence also known as klein geometries .cartan s construction makes use of the fact that they carry the structure of a principal -bundle and a connection given by the maurer - cartan one - form on taking values in the lie algebra of . using these structures in order to describe the local geometry of ,a cartan geometry is defined as follows : let be a lie group and a closed subgroup of .cartan geometry _ modeled on the homogeneous space is a principal -bundle together with a -valued one - form , called the _cartan connection _ on , such that . [ cartan : isomorphism ] for each , is a linear isomorphism .[ cartan : equivariant ] is -equivariant : .[ cartan : mcform ] restricts to the maurer - cartan form on vertical vectors . instead of describing the cartan geometry in terms of the cartan connection , which is equivalent to specifying a linear isomorphism for all due to condition [ cartan : isomorphism ], we can use the inverse maps . for each define a section of the tangent bundle , which we denote a fundamental vector field : let be a cartan geometry modeled on . for each _ fundamental vector field _ is the unique vector field such that .we can therefore equivalently define a cartan geometry in terms of its fundamental vector fields , due to the following proposition : let be a cartan geometry modeled on and its fundamental vector fields .then the properties [ cartan : isomorphism ] to [ cartan : mcform ] of are respectively equivalent to the following properties of : . [ cartan : isomorphism2 ] for each , is a linear isomorphism .[ cartan : equivariant2 ] is -equivariant : .[ cartan : canonical ] restricts to the canonical vector fields on .we illustrate these definitions using a physically motivated example .let be the oriented , time - oriented , orthonormal frame bundle of a lorentzian manifold .it carries the structure of a principal -bundle , where is the proper orthochronous lorentz group .the homogeneous space can be any of the maximally symmetric de sitter , minkowski or anti - de sitter spacetimes , which is achieved by choosing the group to be where is the proper orthochronous poincar group and the subscript indicates the connected component of the corresponding group . here denotes the cosmological constant on the respective maximally symmetric spacetime and does not necessarily agree with the physical cosmological constant .we further need to equip the frame bundle with a cartan connection . 
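The displayed formulas for the three defining conditions of a Cartan connection were lost in extraction. For reference, in the standard textbook form (with A the Cartan connection on the principal H-bundle P, 𝔤 = Lie(G), 𝔥 = Lie(H), and X̃ the fundamental vector field generated by x ∈ 𝔥) they read as follows; this is a hedged reconstruction of standard material, with the symbol names chosen here rather than taken from the original text.

```latex
% Defining conditions of a Cartan connection A on a principal H-bundle P,
% modeled on G/H (standard form; symbol names assumed):
\text{(i)}\quad  A_p : T_pP \to \mathfrak{g}\ \text{is a linear isomorphism for every } p\in P,\\
\text{(ii)}\quad (R_h)^{*}A = \operatorname{Ad}(h^{-1})\circ A \quad\text{for all } h\in H,\\
\text{(iii)}\quad A(\tilde{x}) = x \quad\text{for the fundamental vector field } \tilde{x}\ \text{of every } x\in\mathfrak{h}.
```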
for this purposewe introduce a component notation for elements of the lie algebra and its subalgebras .first observe that splits into irreducible subrepresentations of the adjoint representation of , these subspaces correspond to infinitesimal lorentz transforms and infinitesimal translations of the homogeneous spacetimes .we can use this split to uniquely decompose any algebra element in the form where are the generators of and are the generators of translations on .they satisfy the algebra relations = \delta^j_k\mathcal{h}_i{}^l - \delta^l_i\mathcal{h}_k{}^j + \eta_{ik}\eta^{lm}\mathcal{h}_m{}^j - \eta^{jl}\eta_{km}\mathcal{h}_i{}^m\,,\label{eqn : algebra}\\ [ \mathcal{h}_i{}^j,\mathcal{z}_k ] = \delta^j_k\mathcal{z}_i - \eta_{ik}\eta^{jl}\mathcal{z}_l\ , , \qquad [ \mathcal{z}_i,\mathcal{z}_j ] = \operatorname{sgn}\lambda\,\eta_{ik}\mathcal{h}_j{}^k\,.\nonumber\end{gathered}\ ] ] the last expression explicitly depends on the choice of the group , which can conveniently be expressed using the sign of the cosmological constant .we can now apply this component notation to the cartan connection .we first split into a -valued part and a -valued part .the latter we set equal to the solder form , which in component notation can be written as where the coordinates on the fibers of are defined as the components of the frames in the coordinate basis of the manifold coordinates , and denote the corresponding inverse frame components . for the -valued part we choose the levi - civita connection . given a curve on it measures the covariant derivative of the frame vectors along the projected curve on . for a tangent vector yields using the same component notation as above it reads where denotes the christoffel symbols .it is not difficult to check that the -valued one - form defined above indeed satisfies conditions [ cartan : isomorphism ] to [ cartan : mcform ] of a cartan connection , and thus defines a cartan geometry modeled on .equivalently , we can describe the cartan geometry in terms of the fundamental vector fields .using the notation they take the form where we have introduced the notation for tangent vectors to the frame bundle . a well - known result of cartan geometry states that the metric can be reconstructed from the cartan connection , up to a global scale factor .we finally remark that the cartan geometry provides a split of the tangent bundle which has a similar physical interpretation as the split of .this split is induced by the decomposition of the lie algebra , which is carried over to the tangent spaces by the isomorphic mappings as shown in the following diagram : {\omega } \ar@{}[r]|{\oplus } \ar@{}[dr]|{+ } & h_pp \ar@{^(->>}[d]_e \ar@{}[r]|{= } \ar@{}[dr]|{= } & t_pp \ar@{^(->>}[d]_a\\ \mathfrak{h } \ar@{}[r]|{\oplus } & \mathfrak{z } \ar@{}[r]|{= } & \mathfrak{g}}\ ] ] the vertical subbundle is constituted by the tangent spaces to the fibers of the bundle , which are given by the kernel of the differential of the canonical projection . this is a direct consequence of condition [ cartan : mcform ] on the cartan connection . 
the elements of can be viewed as infinitesimal local lorentz transformations , which change only the local frame and leave the base point unchanged .conversely , the elements of the horizontal subbundle correspond to infinitesimal translations , which change the base point without changing the orientation of the local frame .this follows from the fact that we constructed the -valued part of the cartan connection from the levi - civita connection .we will now employ cartan geometry in order to describe the geometry of observer space .hereby we will proceed in analogy to the metric spacetime example discussed in the previous section , where we constructed a cartan connection on the orthonormal frame bundle . for this purposewe refer to the definition of orthonormal observer frames in section [ subsec : finslerlt ] . if we translate this definition to the context of observer space geometry , we find that an observer frame at is a basis of the horizontal tangent space such that and the normalization holds .equivalently , we can make use of the differential of the canonical projection , which isomorphically maps to , and regard frames as bases of , in analogy to the case of metric geometry . herewe choose the latter and define : the space of _ observer frames _ of a finsler spacetime with observer space is the space of all oriented , time - oriented tangent space bases of , such that the basis vector lies in and the frame is orthonormal with respect to the finsler metric , one can now easily see that although there exists a canonical projection , which assigns to an observer frame its base point on , it does in general not define a principal -bundle , where is the lorentz group as in the preceding section .this follows from the fact that the generalized lorentz transforms discussed in section [ subsec : finslerlt ] do not form a group , but only a grupoid . however , this is not an obstruction , as it is our aim to construct a cartan geometry on and not on . indeedthe projection , which simply discards the spatial frame components , carries the structure of a principal -bundle , where by we denote the rotation group .it acts on by rotating the spatial frame components .the cartan geometry on observer space will thus be modeled on the homogeneous space instead of .we further need to equip with a cartan connection which generalizes the cartan connection on the metric frame bundle displayed in the previous section . herewe can proceed in full analogy and choose as the -valued part of the connection the solder form .the expression in component notation , agrees with the analogous expression in metric geometry . for the -valued part generalize the levi - civita connection .recall from section [ subsec : finslerlt ] that the tangent space of a finsler spacetime , and hence also its observer space , is equipped with the cartan linear connection .we can therefore replace the projection to in with the projection to and define where now denotes the cartan linear connection . in component notationthis yields the expression \nonumber\\ & = \frac{1}{2}\left(\delta_i^k\delta_l^j - \eta^{jk}\eta_{il}\right)f^{-1}{}^l_a\,df_k^a + \frac{1}{2}\eta^{jk}f_i^bf_k^c(\delta_bg^f_{ac } - \delta_cg^f_{ab})dx^a\,,\label{eqn : connectionh}\end{aligned}\ ] ] where the coefficients and are the coefficients of the cartan linear connection . 
from the cartan connection andwe then find the fundamental vector fields [ eqn : fundvecfields ] for and .one easily checks that indeed for all , so that condition [ cartan : isomorphism ] is satisfied .another simple calculation shows that also conditions [ cartan : equivariant ] and [ cartan : mcform ] are satisfied , so that defines a cartan geometry . the cartan geometry on the observer frame bundle induces a split of the tangent bundle in analogy the split we observed for the cartan geometry of a metric spacetime .since the observer space cartan geometry is modeled on instead of we first decompose the lie algebra into irreducible subrepresentations of the adjoint representation of , the subspaces we encounter here are the rotation algebra , the rotation - free lorentz boosts , as well as the spatial and temporal translations of the homogeneous spacetimes .we can decompose the cartan connection accordingly and obtain the following split of the tangent spaces : {\omega } \ar@{}[r]|{\oplus } \ar@{}[dr]|{+ } & b_pp \ar@{^(->>}[d]_b \ar@{}[r]|{\oplus } \ar@{}[dr]|{+ } & \vec{h}_pp \ar@{^(->>}[d]_{\vec{e } } \ar@{}[r]|{\oplus } \ar@{}[dr]|{+ } & h^0_pp \ar@{^(->>}[d]_{e^0 } \ar@{}[r]|{= } \ar@{}[dr]|{= } & t_pp \ar@{^(->>}[d]_a\\ \mathfrak{k } \ar@{}[r]|{\oplus } & \mathfrak{y } \ar@{}[r]|{\oplus } & \vec{\mathfrak{z } } \ar@{}[r]|{\oplus } & \mathfrak{z}^0 \ar@{}[r]|{= } & \mathfrak{g}}\ ] ] the elements of these subbundles correspond to infinitesimal rotations of observer frames in , infinitesimal rotation - free lorentz boosts in as well as translations along the spatial and temporal frame directions in and , respectively . for conveniencewe introduce a component notation for the algebra - valued one - forms , , and in the form where are the generators of rotations , lorentz boosts as well as spatial and temporal translations .the ten components are ordinary one - forms on .note that for each they are linearly independent and thus constitute a basis of . in a similar fashionwe will write the fundamental vector fields in the decomposed form where the ten components are ordinary vector fields on .they constitute bases of the tangent spaces which respect the split into the respective subspaces and are dual to the aforementioned cotangent space bases .recall from section [ subsec : observerspace ] that the tangent bundle of observer space features a split into lorentz boosts and spatial and temporal translations which is similar to the split .in fact these two splits are closely related . for each frame the differential of the bundle projection isomorphically maps the subspaces of , except the kernel , to the corresponding subspaces of , as shown in the following diagram : {\pi _ * } \ar@{}[r]|{\oplus } & b_pp \ar@{^(->>}[d]_{\pi _ * } \ar@{}[r]|{\oplus } & \vec{h}_pp \ar@{^(->>}[d]_{\pi _ * } \ar@{}[r]|{\oplus } & h^0_pp \ar@{^(->>}[d]_{\pi _ * } \ar@{}[r]|{= } & t_pp \ar[d]_{\pi_*}\\ 0 & v_{\pi(p)}o \ar@{}[r]|{\oplus } & \vec{h}_{\pi(p)}o \ar@{}[r]|{\oplus } & h^0_{\pi(p)}o \ar@{}[r]|{= } & t_{\pi(p)}o}\ ] ] we see that we obtain the split of , which we previously derived directly from finsler geometry , also by using cartan geometry .this observation brings us to the question of whether the observer space cartan geometry also yields us the geometric structures on observer space we defined in section [ subsec : observerspace ] the sasaki metric , the contact form and the reeb vector field . 
in order to relate geometric objects on to the cartan connection and the fundamental vector fields on , one naturally makes use of the bundle projection .its pushforward maps tangent vectors on to tangent vectors on , as displayed also in diagram .however , since is not injective , and thus fails to be a diffeomorphism , it does not allow us to carry vector fields or differential forms from to .we therefore need to enhance the relation between these spaces with a section .it allows us evaluate the fundamental vector fields for on the image of and apply the differential , which yields us vector fields on .note that these depend on the choice of the section . using the component notation we can define component vector fields on by it follows from that vanishes , since the vector fields lie inside the rotation subbundle and thus in the kernel of .further we find that the remaining vector fields constitute bases of the subspaces of for each .this shows that the fundamental vector fields evaluated at isomorphically map the vector space to while respecting the split into subspaces .the inverse maps therefore constitute a one - form whose components are the pullbacks of the components on the image of the section .since the one - form and fundamental vector fields defined above depend on the choice of the section , we now pose the question how they are related if we choose different sections and . recall that is a principal -bundle , so that any two sections are related by a local gauge transform , i.e. , by a function . under this gaugetransform the fundamental vector fields transform as using the irreducible subrepresentations of the adjoint representation of on .similarly , the one - forms transform as since the adjoint representation of acts trivially on the subspace it immediately follows that the component fields and are independent of the choice of the section . from the expressions and of the cartan connection and the fundamental vector fields in terms of finsler geometry we see that these are simply the contact form and the reeb vector field , we have thus expressed these structures on in terms of the cartan connection on .it further turns out that the sasaki metric takes the form and is thus also expressed in terms of the cartan connection .note that also this is invariant under changes of the section , which act as a local rotation of the component fields .the same applies to its volume form in the following sections we will make use of these structures are their expressions in terms of cartan geometry in order to provide definitions for observers and observations in analogy to those given in section [ sec : finsler ] using finsler geometry .we now come to the description of observers and their measurements in the language of cartan geometry on observer space . in the followingwe will discuss which curves on observer space correspond to the trajectories of physical observers .in particular we will define the notion of inertial observers using elements of cartan geometry . 
in section [ subsec : finslercausality ]we have discussed the notion of physical observers on a finsler spacetime .we have defined the trajectories of physical observers as those curves on a finsler spacetime , whose tangent vectors in arc length parametrization lie in the future unit timelike shell .if we lift these curves canonically to curves on , we thus see that they are entirely contained in observer space .this leads to a very simple definition of physical trajectories on observer space : a physical _ observer trajectory _ is a curve on observer space which is the canonical lift of an observer world line on the underlying finsler spacetime .we will now rewrite this condition in terms of cartan geometry .first observe that canonical lifts in are exactly those curves such that the tangent vector of the projected curve in reproduces , one can easily see that this condition does not restrict the vertical components of , which lie inside the kernel of according to the split , and fully determines its horizontal components as a function of the position in observer space .it therefore defines a horizontal vector field on , i.e. a section of the horizontal tangent bundle which has the property that is the identity on . the unique vector field which satisfies this condition is the reeb vector field defined in .hence , observer trajectories are those curves on whose horizontal tangent vector components are given by the reeb vector field .we can further rewrite this condition by introducing the projectors onto the subbundles of and obtain the form .finally , inserting the explicit formulas for and we arrive at the reformulated definition : a physical _ observer trajectory _ is a curve on observer space whose horizontal components are given by the reeb vector field , i.e. , which satisfies a particular class of observers is given by inertial observers , whose trajectories follow those of freely falling test masses . in section [ subsec : finslerpm ]we have seen that these are given by finsler geodesics , or equivalently by curves whose complete lift in is an integral curve of the geodesic spray .we have further seen that the geodesic spray is tangent to observer space and defined the reeb vector field as its restriction to observer space .it thus immediately follows that inertial observer trajectories on are simply the integral curves of the reeb vector field . comparing this finding with the aforementioned definitionwe see that inertial observer trajectories are exactly those observer trajectories whose vertical tangent vector components vanish .we thus define , using only cartan geometry : an _ inertial observer trajectory _ is an integral curve of the reeb vector field , i.e. , a curve on observer space which satisfies it appears now straightforward to translate the notions of observables and observations from finsler geometry to cartan geometry on observer space .a direct translation yields observables as sections of a horizontal tensor bundle , which is constructed from the horizontal subbundle in analogy to the horizontal tensor bundle .observations by an observer at then translate into measurements of the components of a horizontal tensor field with respect to a basis of the corresponding horizontal tangent space , which can conveniently be expressed using the vector fields . 
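For reference, the two trajectory conditions stated above, whose displayed equations were stripped, can be written as follows; here Γ denotes a curve on observer space O parametrized by proper time, r the Reeb vector field, and P_H the projector onto the horizontal (spatial plus temporal) part of TO. The symbol names are choices made here, not the original notation.

```latex
% Hedged reconstruction of the two definitions given in the text:
\text{physical observer trajectory:}\quad P_{H}\,\dot{\Gamma}(\tau) \;=\; r\big(\Gamma(\tau)\big)\,,\\
\text{inertial observer trajectory:}\quad \dot{\Gamma}(\tau) \;=\; r\big(\Gamma(\tau)\big)\,.
```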
finally ,also a translation of the matter action , where is viewed as a one - form on and the projectors are used , is straightforward .however , we do not pursue this topic here .instead we will directly move on to the gravitational dynamics in the next section . as we have already done in the case of finsler geometry in section [ subsec : finslergravity ] , we now focus on the dynamics of the cartan geometry , which we identify with the gravitational dynamics . since gravity is conventionally related to the curvature of spacetime, we will first discuss the notion of curvature in cartan geometry .we will then derive dynamics for cartan geometry from an action principle and see how this notion of curvature is involved .for this purpose we will consider two different actions , the first being the finsler gravity action we encountered before and which we now translate into cartan language , and an action which is explicitly constructed in terms of cartan geometric objects .we start our discussion of curvature in cartan geometry with its textbook definition : the _ curvature _ of a cartan geometry modeled on the homogeneous space is the -valued two - form on given by \,.\ ] ] the curvature has a simple interpretation in terms of the fundamental vector fields : it measures the failure of to be a lie algebra homomorphism .this can be seen from the relation ) - [ \underline{a}(a),\underline{a}(a ' ) ] = \underline{a}(f(\underline{a}(a),\underline{a}(a')))\,,\ ] ] which can easily be derived from the definition by making use of the standard formula )\ ] ] for any one - form and vector fields . from this general definitionwe now turn our focus to the cartan geometry on observer space modeled on , which we derived from finsler geometry in section [ subsec : cartanobs ] . in this contextthe term ] in the definition of the non - linear curvature on . indeed the similar expression on given by = f_i^bf_j^cf_k^d(\delta_cf^a{}_{bd } - \delta_bf^a{}_{cd } + f^e{}_{bd}f^a{}_{ce } - f^e{}_{cd}f^a{}_{be})\bar{\partial}^k_a\ ] ] reproduces the components of the non - linear curvature , which can equivalently be written in the form we can directly apply this result to the finsler gravity action on the unit tangent bundle . since observer space is simply the connected component of the unit tangent bundle constituted by the future timelike vectors, it is straightforward to consider the restricted action as a gravity action on .this action is still written in terms of finsler geometric objects , which we will now rewrite in terms of cartan geometry .for the volume form of the sasaki metric we have already found the expression , while for the non - linear curvature coefficients we can make use of the lie bracket of horizontal vector fields on together with the relation . in order to reproduce the scalar quantity in the finsler gravity action from this vector fieldwe further apply the boost component of the cartan connection and contract appropriately , which yields ) = r^a{}_{ab}f_0^b = r^a{}_{ab}y^b\,.\ ] ] the last equality follows from the identification of the tangent vector with the temporal frame component .note that this expression is a scalar on which is constant along the fibers of , and can thus be viewed as a scalar on .we thus finally obtain the gravitational action )\operatorname{vol}_{\tilde{g}}\,,\ ] ] which is now fully expressed in terms of cartan geometry . 
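The textbook curvature definition and the exterior-derivative identity invoked in the discussion above also lost their displayed formulas. In standard notation (A the Cartan connection, F its curvature, ω any one-form, X and Y vector fields) they are:

```latex
% Standard formulas referenced in the curvature discussion above:
F \;=\; \mathrm{d}A + \tfrac{1}{2}[A\wedge A]\,,\qquad
F(X,Y) \;=\; \mathrm{d}A(X,Y) + [A(X),A(Y)]\,,\\
\mathrm{d}\omega(X,Y) \;=\; X\,\omega(Y) - Y\,\omega(X) - \omega([X,Y])\,.
```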
another possible strategy to obtain gravitational dynamics on the observer space cartan geometryis to start from general relativity , rewrite the einstein - hilbert action in terms of the cartan connection derived from the metric geometry displayed in section [ subsec : cartanintro ] , and finally transform the action to an integral over observer space by introducing an appropriate volume form on the fibers of .we will follow this procedure for the remainder of this section .the starting point of this derivation is the action given by macdowell and mansouri . in terms of spacetime cartan geometryit takes the form here is a non - degenerate inner product on . for simplicitywe choose where is the killing form on and denotes a hodge star operator . in componentswe can write the killing form as and the hodge star operator as the two - form is given by the unique decomposition of the -valued cartan curvature into parts with values in and .finally , the tilde indicates that we need to lower this two - form on the frame bundle to a two - form on the base manifold .we now aim to lift the action to observer space . for this purposewe need to find a suitable volume form on the fibers of .recall from the definition of observer space that these are given by the future unit timelike shells for , which are three - dimensional submanifolds of .a natural metric on is thus given by the restriction of the sasaki metric on , or equivalently on , to .using our results from section [ subsec : cartanobs ] on the cartan geometry of observer space we find that the tangent spaces to are spanned by the vertical vector fields , so that the sasaki metric restricts to the euclidean metric .its volume form is given by in combination with the action lifted to observer space , which means that is now regarded as a two - form on , this yields the action in order to analyze the terms in this action we make use of the algebra relations to decompose in the form + \frac{1}{2}[e , e ] = f_{\omega } + \frac{1}{2}[e , e]\ ] ] into the curvature of and a purely algebraic term ] depends on the choice of the group , and thus on the sign of the cosmological constant on the underlying homogeneous space . applying this decomposition to the expression in the actionwe obtain the following terms : * a cosmological constant term : \wedge \star[e , e ] ) = -(\operatorname{sgn}\lambda)^2\epsilon_{ijkl}e^i \wedge e^j \wedge e^k \wedge e^l\,.\ ] ] * a curvature term : \wedge \star f_{\omega } ) = \frac{1}{6}\operatorname{sgn}\lambda\,g^{f\,ab}r^c{}_{acb}\epsilon_{ijkl}e^i \wedge e^j \wedge e^k \wedge e^l + ( \ldots)\,.\ ] ] * a gauss - bonnet term : the ellipsis in the expressions above indicates that we have omitted terms which are not horizontal , i.e. , which contain the vertical one - form . these terms do not contribute to the total action since their wedge product with the vertical volume form vanishes . 
note the appearance of the common term which , when lowered to a four - form on , combines with the vertical volume form to the volume form of the restricted sasaki metric .the total action thus takes the final form from this we see that we obtain an action based on the curvature of the cartan linear connection , as we have briefly discussed towards the end of section [ subsec : finslergravity ] , provided that we have chosen a model space for which .we also find that we always obtain a non - zero cosmological constant term .the magnitude of the physical cosmological constant can be adjusted by introducing suitable numerical factors into the algebra relations , which corresponds to a rescaling of the basis vectors . in the previous sections we have discussed the physics on finsler spacetimes in the language of cartan geometry . for this purposewe considered a principal -bundle over observer space and equipped it with a cartan connection derived from finsler geometry .this construction allowed us to reformulate significant aspects of finsler spacetime in purely cartan geometric terms : the definition of physical and inertial observers , the split of the tangent bundle into horizontal and vertical components which crucially enters the definition of observables and physical fields , the sasaki metric and its volume measure on and finally the dynamics of gravity .it should be remarked that these formulations can be applied to any cartan geometry modeled on , since they do not explicitly refer to the underlying finsler geometry , or even the spacetime manifold .this observation stipulates the question whether an underlying spacetime geometry is at all required , or may not even exist , at least as a fundamental object . in this final sectionwe will discuss this question .we first discuss whether and how we can reconstruct the finsler spacetime if we are given only its observer space cartan geometry , together with the presumption that an underlying finsler spacetime exists .recall from its definition that the observer space of a finsler spacetime is the ( disjoint ) union of the future unit timelike shells for all spacetime points .every spacetime point thus corresponds to a non - empty subset of . reconstructing the spacetime manifold from its observer space therefore amounts to specifying an equivalence relation which decomposes into subsets , and to equipping the resulting set of equivalence classes with the structure of a differentiable manifoldthis can be done by making use of the vertical distribution , which is tangent to the shells and can be expressed completely in terms of cartan geometry as the span of the vector fields defined in . 
from our presumptionthat an underlying spacetime manifold exists it follows that is integrable .the frobenius theorem then guarantees that can be integrated to a foliation of , with projection onto its leaf space , and further that carries the structure of a differentiable manifold so that becomes a smooth submersion .the aforementioned procedure allows us to reconstruct the spacetime manifold from observer space cartan geometry .if we now aim to reconstruct also its finsler geometry on , we immediately see that this will be possible at most for vectors which lie inside the forward light cones .this comes from the fact that in the construction of the cartan geometry on we used only the finsler geometry on the shells , which yields the finsler geometry on by rescaling and using its homogeneity properties .this means that we can not reconstruct the finsler geometry on spacelike or lightlike vectors , and in particular we can not reconstruct the null structure of a finsler spacetime . in order to reconstruct the finsler function on the future light cones we need to reconstruct the embedding of observer space into the tangent bundle of the spacetime manifold . for this purposewe make use of the properties of observer trajectories .recall that in section [ subsec : finslerpm ] we applied the canonical lift to a curve on in order to obtain a curve on , and concluded that the canonical lifts of observer trajectories on are exactly those curves whose horizontal tangent vector components are given by the reeb vector field in section [ subsec : cartanpm ] .we can therefore proceed as follows .for we choose an observer trajectory in so that .we then project to a curve on using the projection .the tangent vector , which we identify with via the embedding , is then related to via the differential .this relation yields the formula where we have used the fact that isomorphically maps the horizontal tangent space to .the embedding is thus simply given by finally , we obtain the finsler function on timelike vectors by imposing on the image and the homogeneity .note that can be any homogeneous function here , since is smooth when restricted to the timelike vectors .we now turn our focus to a general cartan geometry modeled on for which we do not presume the existence of an underlying finsler geometry or even a spacetime manifold .indeed the latter will in general not exist , as we can already deduce from the reconstruction of a finsler spacetime detailed above .there we have seen that spacetime naturally appears as the leaf space of a foliation , which we obtained by integrating the vertical distribution on observer space .this procedure fails if is non - integrable .further , even if integrates to a foliation of , this foliation may not be strictly simple , i.e. 
, its leaf space may not carry the structure of a differentiable manifold .this means that only a limited class of observer space cartan geometries , including those derived from finsler spacetimes , admit for an underlying spacetime manifold .further , even if a spacetime exists , it may not be a finsler spacetime , since the reconstructed metric may not be the sasaki metric induced by finsler geometry .the question arises whether we can still assign a meaningful physical interpretation to an observer space cartan geometry if its vertical distribution is non - integrable , so that there is no underlying spacetime .since any physical interpretation should be given based on the measurement of dynamical , physical quantities by observers , this amounts to the question whether these can meaningfully be defined on an arbitrary observer space cartan geometry .we have provided these definitions throughout our discussion of observer space in section [ sec : cartan ] of this work .our findings suggest that the notion of spacetime is not needed as a fundamental ingredient in the definition of physical observations , but rather appears as a derived object for a restricted class of cartan geometries .the author is happy to thank steffen gielen , christian pfeifer and derek wise for their helpful comments and discussions .he gratefully acknowledges the full financial support of the estonian research council through the postdoctoral research grant ermos115 . 00a. ashtekar , phys . rev .d * 36 * ( 1987 ) 1587 .t. thiemann , cambridge , uk : cambridge univ .( 2007 ) 819 p [ gr - qc/0110034 ] . c. rovelli and l. smolin , phys .d * 52 * ( 1995 ) 5743 [ gr - qc/9505006 ] .m. p. reisenberger and c. rovelli , phys .d * 56 * ( 1997 ) 3490 [ gr - qc/9612035 ] .j. w. barrett and l. crane , j. math .* 39 * ( 1998 ) 3296 [ gr - qc/9709028 ] .j. c. baez , class .grav . * 15 * ( 1998 ) 1827 [ gr - qc/9709052 ] .j. ambjorn and r. loll , nucl .b * 536 * ( 1998 ) 407 [ hep - th/9805108 ] .j. ambjorn , j. jurkiewicz and r. loll , nucl .b * 610 * ( 2001 ) 347 [ hep - th/0105267 ] .j. ambjorn , j. jurkiewicz and r. loll , phys . rev .d * 72 * ( 2005 ) 064014 [ hep - th/0505154 ] .j. d. brown and k. v. kuchar , phys .d * 51 * ( 1995 ) 5600 [ gr - qc/9409001 ] .t. jacobson and d. mattingly , phys .d * 64 * ( 2001 ) 024028 [ gr - qc/0007031 ] . c. pfeifer and m. n. r. wohlfarth , phys .d * 84 * ( 2011 ) 044039 [ arxiv:1104.1079 [ gr - qc ] ] . c. pfeifer and m. n. r. wohlfarth , phys .d * 85 * ( 2012 ) 064009 [ arxiv:1112.5641 [ gr - qc ] ] . c. pfeifer , desy - thesis-2013 - 049 .s. gielen and d. k. wise , j. math .* 54 * ( 2013 ) 052501 [ arxiv:1210.0019 [ gr - qc ] ] . m. hohmann , phys . rev .d * 87 * ( 2013 ) 124034 [ arxiv:1304.5430 [ gr - qc ] ] .d. bao , s. s. chern and z. shen , _ an introduction to riemann - finsler geometry _ , springer , new york , 2000 .s. i. vacaru , int .j. mod .d * 21 * ( 2012 ) 1250072 [ arxiv:1004.3007 [ math - ph ] ] .s. i. vacaru , arxiv:0707.1524 [ gr - qc ] .s. i. vacaru , hep - th/0211068 .j. d. anderson , j. k. campbell , j. e. ekelund , j. ellis and j. f. jordan , phys .* 100 * ( 2008 ) 091102 .d. raetzel , s. rivera and f. p. schuller , phys .d * 83 * ( 2011 ) 044047 [ arxiv:1010.1369 [ hep - th ] ] .cartan , _ exposs de gomtrie , v. _ ; actualits scientifiques et industrielles , * 194 * ( hermann , paris , 1935 ) . d. k. wise , class .* 27 * ( 2010 ) 155010 [ gr - qc/0611154 ] .s. gielen and d. k. wise , phys .d * 85 * ( 2012 ) 104013 [ arxiv:1111.7195 [ gr - qc ] ] .s. 
gielen and d. k. wise , gen .* 44 * ( 2012 ) 3103 [ arxiv:1206.0658 [ gr - qc ] ] . s. w. macdowell and f. mansouri , phys .* 38 * ( 1977 ) 739 [ erratum - ibid .* 38 * ( 1977 ) 1376 ] .
From general relativity we have learned the principles of general covariance and local Lorentz invariance, which follow from the fact that we consider observables as tensors on a spacetime manifold whose geometry is modeled by a Lorentzian metric. Approaches to quantum gravity, however, hint towards a breaking of these symmetries and the possible existence of more general, non-tensorial geometric structures. Possible implications of these approaches are non-tensorial transformation laws between different observers and an observer-dependent notion of geometry. In this work we review two different frameworks for observer-dependent geometries, which may provide hints towards a quantization of gravity and possible explanations for so far unexplained phenomena: Finsler spacetimes and Cartan geometry on observer space. We discuss their definitions, properties, and applications to observers, field theories and gravity.
a variety of biological phenomena have been extensively investigated in light of modern nonequilibrium physics .tissue turnover in multicellular organisms is an interesting example of stationary nonequilibrium system ( see fig .[ fig : clonal analysis](a ) ) . throughout adult life, biological tissues are constantly renewed by newly born cells from stem cell pools .the production of cells must be balanced with the death of old cells , which is called tissue homeostasis .tissue stem cells , which are able to proliferate ( i.e. , divide ) and differentiate into tissue specific cells , play a key role in tissue turnover . since differentiated cells exit from the proliferation cycle and are eventually killed , tissue homeostasis is maintained by balanced kinetics of stem cell fate ( proliferation or differentiation ) .intercellular interactions in cell fate decision kinetics are considered to be crucial during homeostasis , and therefore , nonequilibrium statistical mechanics of many - body systems is expected to play an important role to clarify the mechanism of tissue homeostasis .recent advances in experiments have enabled the tracing of cell fate dynamics ( i.e. , kinetics of proliferation and differentiation ) in adult mammalian tissues . in these experiments ,cells in the tissue are labeled by fluorescent proteins , which are inherited to their progeny .starting from isolated labeled single cells in the basal layer tissue , the cells can divide and expand its population within the basal layer , or get excluded from the basal layer through differentiation ( fig .[ fig : clonal analysis](a ) ) .the measured population of the labeled cells that survived in the basal layer ( i.e. , clone ) showed scaling behavior in statistics .most significantly , the average number of cells in surviving clones showed power - law growth , and the cumulative clone size distribution ( i.e. , the probability of having a clone with no less than cells ) showed the scaling law , which both depended on the spatial dimension of the tissue .the scaling behavior ruled out one of the classical pictures of tissue homeostasis , where stem cells always undergo asymmetric division ( one daughter cell is differentiating and the other maintains its stemness ) .the quantitative modeling approach has revealed two canonical examples of stochastic dynamics that explain the scaling behaviors .importantly , these two models reflect the different mechanisms of cell fate regulation .firstly , cell - extrinsic regulation results in the scaling form with and in one - dimension , and and in two - dimension .these scaling forms are derived from the voter model ( vm ) , in which a differentiating cell leaving the basal layer is assumed to directly trigger the proliferation of a neighboring cell to compensate for the loss ( the mechanism can be vice versa , fig . [fig : clonal analysis](b ) ) .the second scheme is the cell - intrinsic regulation , which is described by the critical birth - death process ( cbd ) ( fig .[ fig : clonal analysis](c ) ) .this model assumes that a cell stochastically chooses proliferation or differentiation with equal probability independent from other cells , which results in the scaling form with and . the extrinsic model ( i.e. , and ) was consistent with clonal labeling experiments in some one - dimensional tissues such as intestinal crypts and seminiferous tubules .on the other hand , the experimental works in skin tissues ( i.e. 
, two - dimensional tissues ) showed that both cell - intrinsic and extrinsic models are consistent with the clonal dynamics . in this paper, we propose a model of cell fate decision , focusing on the cell - cell interaction associated with a finite range . in our model , the population of cells is regarded as a self - replicating many - body langevin system , where we incorporate intercellular interaction in the self - replication process through local cell density , where density is defined with a certain length scale .both short- and long - range interaction can be realized in real tissues as consequences of different regulatory mechanisms .for example , recent experimental and theoretical works suggest that mechanical cues could be relevant in cell fate decision .long - range interaction via autocrine signaling is also crucial in skin stem cells .therefore , exploring cell fate decision processes from the point of view of cell - cell interaction range would be significant .we find that homeostasis is maintained in this model as a consequence of the interaction , meaning that global cell density is autonomously kept constant on average .furthermore , we show that the previously proposed vm and cbd scenarios are incorporated in our model as the small and large limits of the cell - cell interaction range .this indicates that the interaction range of the density - dependent replication process is a key in determining which model of the two appears in biological tissues .we find that in the case of the intermediate value of the interaction range , the clone size statistics cross over from the cbd statistics to the vm statistics as time evolves .we propose that by evaluating the timing of the crossover in experiment , we can indirectly infer the interaction range of the fate decision dynamics in the tissue .our results also reveal a natural scenario in which vm can arise in real experimental setups .this paper is organized as follows . in section[ sec : model ] , we describe our model , based on cell density dependent interaction with a finite interaction range . in section[ sec : main results ] we give the main results , showing the dynamical crossover of clone size statistics in the case of finite interaction range . in section [ sec : scaling hypothesis ] , we discuss the nature of crossover , and show the scaling hypothesis for the crossover . [ tbp ] ( a ) c , title="fig : " ] + ( b ) + , title="fig : " ] + ( c ) + ]we model the population of stem cells as an interacting many - particle system with being the position of the center of the cells existing on the basal layer at time ( see fig . [fig : models](a ) ) .we assume for simplicity that the basal layer is occupied only by a single type of stem cell which can move and divide .the irreversible differentiation of the cells is described by the stochastic exclusion of a cell from the dynamics .considering two or more types of cells coexisting in the basal layer as in the case of previous models will not change the scaling behavior discussed in the following . 
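As a point of reference for the crossover discussed below, the critical birth-death (CBD) statistics can be illustrated with a short simulation. The sketch below is a minimal Gillespie-type simulation of a critical birth-death process in which each cell divides or differentiates with equal probability; the rate, parameter values, and function names are illustrative choices made here, not taken from the cited models or experiments. For surviving clones it reproduces the linear growth of the average clone size and the exponential clone-size distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def cbd_clone_size(t_final, gamma=1.0):
    """Gillespie simulation of one clone under a critical birth-death process:
    every cell undergoes an event at rate gamma, and the event is division or
    differentiation (removal) with equal probability."""
    n, t = 1, 0.0
    while n > 0:
        t += rng.exponential(1.0 / (gamma * n))
        if t > t_final:
            break                                   # clone still alive at t_final
        n += 1 if rng.random() < 0.5 else -1
    return n

sizes = np.array([cbd_clone_size(t_final=20.0) for _ in range(20000)])
surviving = sizes[sizes > 0]
print("mean surviving clone size:", surviving.mean())              # grows ~ linearly in t_final
print("P(n >= N):", [np.mean(surviving >= N) for N in (2, 4, 8, 16)])  # ~ exponential in N
```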
although our model can be extended to higher dimensions ,we here focus on the one - dimensional case in order to study the distinct limits of asymptotic behavior .neighboring cells in a tissue are typically attached to each other by cell - cell adhesion .we incorporate this interaction by the following many - body langevin equations : here , represent the white gaussian noise satisfying for .initial positions of the cells are prepared so that the label of the cells are ordered as , and the periodic boundary condition is employed .a positive constant is the noise strength and denotes the two - body interaction , which describes the cell - cell adhesion between nearest neighbors . for simplicity , we set .the strength of the adhesive potential determines a typical time scale for spatial relaxation , and the natural length determines a typical length scale of a cell .[ tbp ] ( a ) c represents the sensitivity of the growth rate around the steady - state density , and is the effective diffusion constant that depends on the strength of the cell - cell adhesion . is the relative noise strength .the definitions of , , and are given in eq . .our clonal analysis is performed in the spatially homogeneous region .[ fig : models ] , title="fig : " ] + ( b ) ( c ) + represents the sensitivity of the growth rate around the steady - state density , and is the effective diffusion constant that depends on the strength of the cell - cell adhesion . is the relative noise strength .the definitions of , , and are given in eq . .our clonal analysis is performed in the spatially homogeneous region .[ fig : models ] ] represents the sensitivity of the growth rate around the steady - state density , and is the effective diffusion constant that depends on the strength of the cell - cell adhesion . is the relative noise strength .the definitions of , , and are given in eq . .our clonal analysis is performed in the spatially homogeneous region .[ fig : models ] ] a crucial point of our model is that the tissue homeostasis is achieved as a consequence of the cell - cell interaction . to this end , we assume that the local cell density affects the cell fate decision process .we define the local cell density as : where denotes the interaction range of cell - cell interactions .the interaction range corresponds to the biologically relevant length scale of the cell fate regulation through local cell density .for example , the case of long range interaction can be realized by external chemical control or autocrine signaling . on the other hand, describes the situation where the fate is associated with the distance of a cell from its closest neighbors , corresponding for instance to mechanical regulation . a proliferating cell ( pc )at position undergoes the following birth - death process with rates that depend on local cell density ( see fig . [fig : models](a ) ) : where denotes differentiation ( i.e. , removal from the basal layer ) , and the typical timescale is set by .we here do not explicitly assume any correlation between the fates of the two daughter cells .this means that the newly born pcs will independently choose division or differentiatiation according to eq ., as opposed to the asymmetric division scenario where the fates of siblings are strictly anti - correlated . 
.we assume that has an attractive fixed point such that and , where .an important role of the attractive fixed point is to regulate cell fate through local cell density in an autonomous fashion ( see fig .[ fig : models](b ) ) .we will see in our model that the power law and scaling law in the clone size statistics appear at the fixed point of cell density , corresponding to the situation where homeostasis is achieved .equations are discretized and numerically solved by the euler - maruyama method .the cell fate decision process is implemented as follows .when a cell undergoes proliferation , the local cell density is evaluated for newly born cells , and two lifetimes are generated from exponential distributions with rates , respectively .if , the cell undergoes proliferation after time , and otherwise it undergoes differentiation after time . in order to study the asymptotic clone size statistics ,we introduce the label degree of freedom to cells . in our numerical simulations , we initially prepare only one labelled cell and many other unlabeled cells , mimicking the induction of marker protein in experiments .the quantities of interest are the average clone size of the labeled progenies and the labeled clone size distribution .the clones here refer to the labeled descendants within the stem cell population .we remark on the stability of the model .when is sufficiently large , the population of cells tends to form spatial clustering .this is regarded as an example of the brownian bug problem , which has been observed in various models with self - replication and diffusion .we identified the parameter regimes as shown in fig .[ fig : models](c ) , in which the clustering of cells does not occur , by analyzing the structure factor ( see appendix a ) .this argument ensures that our numerical analysis is performed in this homogeneous region .we now discuss our numerical results . figure [ fig : results](a ) shows the time evolution of the average clone size for several values of .the average clone size grows linearly in the short time scale , and exhibits the power - law growth with exponent in the long time scale .figure [ fig : results](b ) shows that the clone size distribution is the exponential form in the short time , and then crosses over to the half - gaussian form in the long time scale .these results imply that the clone size statistics cross over from the cbd statistics with and to the vm statistics with and , in the course of time . in order to clarify the dynamical crossover of the clone size statistics , we consider the two opposite limits of the interaction range . since the interaction range lies between the cell size and the system size ( i.e. , the size of the tissue ) , we call the two limits and the small limit and the large limit , respectively . in our simulations , we take and . 
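The discretization scheme described above (Euler-Maruyama updates for the positions, competing exponential lifetimes for the fate decision) can be sketched as follows. This is a minimal one-dimensional illustration, not the authors' code: only the top-hat density kernel and the competing-clock fate rule follow the description in the text, while the specific regulation function in `rates`, the harmonic nearest-neighbour adhesion force, and all parameter values are assumptions chosen so that the net growth rate vanishes at an attractive fixed-point density.

```python
import numpy as np

rng = np.random.default_rng(1)
BOX, EPS, GAMMA, DT, DIFF = 200.0, 5.0, 1.0, 1e-2, 0.1   # assumed parameter values
RHO_STAR, ALPHA, K_ADH = 1.0, 1.0, 1.0                   # fixed-point density, sensitivity, adhesion

def local_density(xi, x):
    """Top-hat estimate of the cell density within distance EPS of xi (periodic box)."""
    d = np.abs((x - xi + BOX / 2) % BOX - BOX / 2)
    return np.sum(d < EPS) / (2 * EPS)

def rates(rho):
    """Illustrative division/differentiation rates: their difference (the net growth
    rate) vanishes at RHO_STAR and decreases with density, making the fixed point attractive."""
    r = np.clip(ALPHA * (RHO_STAR - rho) / RHO_STAR, -1.0, 1.0)
    return GAMMA * (1 + r) / 2, GAMMA * (1 - r) / 2

def draw_fate(rho):
    """Competing exponential clocks, as described in the text: the earlier clock fixes the event."""
    r_div, r_dif = rates(rho)
    t_div = rng.exponential(1 / r_div) if r_div > 0 else np.inf
    t_dif = rng.exponential(1 / r_dif) if r_dif > 0 else np.inf
    return (t_div < t_dif), min(t_div, t_dif)             # (will_divide, waiting time)

def adhesion_force(x):
    """Nearest-neighbour harmonic adhesion on a ring (x assumed sorted); rest lengths cancel."""
    gap_r = (np.roll(x, -1) - x) % BOX
    gap_l = (x - np.roll(x, 1)) % BOX
    return K_ADH * (gap_r - gap_l)

x = np.sort(rng.uniform(0, BOX, int(RHO_STAR * BOX)))      # initial positions at density RHO_STAR
will_divide, clock = map(np.array, zip(*[draw_fate(local_density(xi, x)) for xi in x]))

for _ in range(2000):                                      # main loop (sketch)
    x = (x + adhesion_force(x) * DT + np.sqrt(2 * DIFF * DT) * rng.normal(size=x.size)) % BOX
    order = np.argsort(x)                                  # keep all arrays co-sorted on the ring
    x, will_divide, clock = x[order], will_divide[order], clock[order]
    clock -= DT
    for i in np.flatnonzero(clock <= 0)[::-1]:             # process due fate events
        if will_divide[i]:                                  # division: daughter placed at x[i]
            x = np.insert(x, i, x[i])
            will_divide = np.insert(will_divide, i, True)
            clock = np.insert(clock, i, 0.0)
            for j in (i, i + 1):                            # both daughters redraw their fate
                will_divide[j], clock[j] = draw_fate(local_density(x[j], x))
        else:                                               # differentiation: remove the cell
            x, will_divide, clock = (np.delete(a, i) for a in (x, will_divide, clock))
```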
figures [ fig : results](c ) and [ fig : results](d ) show the time evolution of the clone size distributions for and , respectively .the inset in fig .[ fig : results](c ) or [ fig : results](d ) shows the time evolution of the average clone size .the clone size statistics asymptotically approach the cbd statistics in the large limit and to the vm statistics in the small limit .these results imply that the cbd statistics and the vm statistics are formulated in a unified view through the interaction range .[ tbp ] \(a ) ( b ) c + ( c ) + ( d ) + in the following , we consider the mechanism that gives rise to the cbd statistics and the vm statistics in the large and small limit , where the existence of the attractive fixed point plays a significant role . in the large limit , the cell fate regulation is governed by global cell density in the continuum limit as : the existence of an attractive fixed point of ensures homeostasis so that holds in the long time scale .therefore , the clone size statistics are expected to behave asymptotically as the cbd statistics : and .we emphasize that in our model , the critical clone size statistics are achieved as a result of cell - cell interactions through cell density , in contrast to cbd , where the criticality is assumed a priori .the asymptotic cbd statistics can now be interpreted as a consequence of long - range density feedback interaction .on the other hand , in the case of small , the cell - cell interaction is effectively short - ranged , because the ever - expanding average size of the surviving clones is always larger than . in this case , as soon as a proliferating cell undergoes differentiation , the proliferation rate increases locally around that position , and the neighboring cells will be likely to compensate for the loss of the adjacent cell .therefore , the resulting clone size statistics are expected to asymptotically behave as the vm statistics with and . in other words, our model implies that the vm statistics can naturally arise as a result of short - range interaction .our numerical results suggest that the time scale for the crossover increases with the interaction range .the dynamical crossover takes place due to the competition between the interaction range and the average clone size .since the average clone size is ever - increasing in time , one expects that exceeds at certain time .we refer to as the crossover time , at which the behavior of the clone size statistics change .since the clone size statistics are effectively the cbd statistics in short time scale , the crossover time is estimated from the following equation : where the average clone size is approximated by the exact expression of that of the cbd statistics : .therefore , we expect that .scaling the time by the crossover time , and scaling the average clone size by the interaction range , all curves collapse onto a single master curve , as shown in fig .[ fig : crossoverscaling ] . [ tbp ] . is scaled by and plotted against for , and .[ fig : crossoverscaling ] ] from fig .[ fig : crossoverscaling ] , we find that the average clone size has the following scaling form : which reconfirms the cbd statistics and the one - dimensional vm statistics .we now discuss the nature of crossover in the case of .the dynamical crossover takes place due to the competition between two length scales : the interaction range and the ever - expanding average clone size . 
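Because the displayed formulas for the crossover estimate and the scaling form were lost, here is a hedged reconstruction. It assumes the standard result that the average size of surviving clones in a critical birth-death process with event rate γ grows linearly in time, and it writes ε for the interaction range and ℓ for the typical cell size; these symbol names are choices made here, not the original notation.

```latex
% Hedged reconstruction: the crossover occurs when the CBD-like average surviving
% clone size reaches the number of cells spanned by the interaction range,
\langle n(t_c)\rangle_{\mathrm{surv}} \;\simeq\; 1 + \gamma t_c \;\sim\; \frac{\varepsilon}{\ell}
\qquad\Longrightarrow\qquad
t_c \;\sim\; \frac{\varepsilon}{\gamma\,\ell}\;\propto\;\varepsilon\,,
\\[1ex]
% and the collapse of the data then corresponds to a scaling form of the type
\langle n(t)\rangle \;\sim\; \frac{\varepsilon}{\ell}\,F\!\left(\frac{t}{t_c}\right),
\qquad
F(u)\sim\begin{cases} u & u\ll 1 \quad\text{(CBD-like)}\\ u^{1/2} & u\gg 1 \quad\text{(1D VM-like)}\end{cases}
```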
in the short time scale with , the cell fate regulation is effectively governed by global cell density , leading to the cbd statistics . on the other hand , the clonal dynamics in the long time scale with can be explained by coarse - graining the cell population ( see fig .[ fig : coarsegraining ] ) . in the coarse - graining scheme ,the total system is divided into a chain of boxes with width , which is the cluster size of cells that essentially feel the same cell density according to eq . .now we focus on the two boxes at the boundary between labeled and unlabeled population .since the clonal dynamics within each box approximately follows the cbd dynamics , the average time in which all the unlabeled cells in a box are replaced by the labeled cells in the neighboring box ( or vice versa ) is approximately .therefore , by scaling by and by , the clonal dynamics in terms of the coarse - grained local cell density would be similar to that in the small limit , and thus the vm statistics appear in the long time scale . scaling collapse in fig .[ fig : crossoverscaling ] ensures the validity of this discussion .this implies the dynamical crossover of the clone size statistics in one - dimension .[ tbp ] limit .[ fig : coarsegraining ] ] from the scaling form of ( eq . ) , we find that the interaction range in a tissue can be estimated from , where is obtained by fitting experimental data to the curve .since the crossover appears after the average clone size reaches the interaction range , the system size needs to be large enough compared with in order to detect the crossover in finite size tissues .our crossover detection scheme would be applicable to one - dimensional systems such as seminiferous tubules .previous clonal labeling experiments on seminiferous tubules have shown that one - dimensional vm statistics appear in the millimeter order length scale in the time scale of several months . from the viewpoint of our model, this means that there could be a length scale that is smaller than a millimeter within which the cell fate dynamics will look more like the autonomous model .thus , by examining the statistics of smaller clones at shorter time scale , it may be possible to detect the crossover before the clonal dynamics converging to the vm statistics , allowing us to estimate the finite interaction range .we presented a novel model of stochastic cell fate decision , based on cell - cell interactions through local cell density . in our model ,two previous stochastic models ( i.e. , cbd and vm ) are unified by introducing the interaction range .we numerically studied the asymptotic clone size statistics for the one - dimensional case .the asymptotic clone size statistics of cbd and vm are realized in the large and small limits of , respectively . 
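The proposed way of estimating the interaction range from experimental data can be illustrated with a short fitting sketch: given measured average clone sizes, fit a crossover form and convert the fitted crossover time into a length. The piecewise functional form and the conversion ε ≈ ℓ(1 + γ t_c) are the hedged assumptions stated above, and the function names and default parameters are illustrative, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def crossover_form(t, t_c, gamma=1.0):
    """Assumed crossover form for the mean surviving clone size: linear (CBD-like)
    growth up to t_c, then square-root (1D voter-model-like) growth, continuous at t_c."""
    n_c = 1.0 + gamma * t_c
    return np.where(t < t_c, 1.0 + gamma * t, n_c * np.sqrt(t / t_c))

def estimate_interaction_range(t_data, n_data, cell_size, gamma=1.0):
    """Fit the crossover time t_c to clone-size data and convert it into a length scale."""
    popt, _ = curve_fit(lambda t, tc: crossover_form(t, tc, gamma),
                        t_data, n_data, p0=[10.0], bounds=(1e-6, np.inf))
    t_c = popt[0]
    return cell_size * (1.0 + gamma * t_c)                 # epsilon ~ l * <n(t_c)>
```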
in the case of intermediate ,the clone size statistics cross over from that of cbd to that of vm in the course of time .furthermore , we studied the scaling hypothesis for the dynamical crossover of the average clone size .the one - dimensional tissue experiments revealed the vm statistics without its detailed mechanism , and our study clarified a natural scenario behind the emergence of the vm statistics in biological systems .although the mechanism of tissue homeostasis has not yet been revealed at the level of molecular biology , our phenomenological analysis has quantitatively elucidated the role of the time and length scales of cell - cell interactions .the density dependent mechanism of cell fate regulation could arise via either mechanical cues from surrounding cells , external cues from the niche , or long - range autocrine signaling . extracting the length scale from the crossoverwould possibly be a key to investigate the nature of regulation in cell fate decision .we expect that our analysis would provide a platform for further studies .for instance , the scenario of cell fate decision in two - dimensional tissues is still unsettled , since cbd and vm have the same asymptotic statistics , apart from the logarithmic correction in the average clone size . on the basis of our model ,it is left for future studies to discuss the spatial correlation of labeled cell configuration to clarify the cell fate decision scenario in sheet tissues .we are grateful to allon m. klein and kazumasa a. takeuchi for fruitful comments .ts is supported by jsps kakenhi grant no .25800217 and no .22340114 , by kakenhi no .25103003 fluctuation & structure " , and by platform for dynamic approaches to living system from mext , japan .in this appendix , we discuss the stability of the spatially uniform distribution of cells in our model . in our numerical simulation, we have observed that the population of cells exhibits spatial clustering , when the interaction range is sufficiently large .similar phenomena have been reported in a variety of systems , which has been known as the brownian bug problem . in the following ,we give a simple scenario of the spatial clustering , and clarify the parameter region in which the spatial clustering does not occur .we briefly review the setup to discuss the linear stability .we consider a population of proliferating cells , which are confined within the one - dimensional progenitor cell pool .let be the set of coordinates of the centers of them . here denotes the number of cells at time .we assume that the cells obey the following overdamped langevin equations : for , where s denote independent white gaussian noise terms satisfying for and with the noise intensity . denotes the interaction potential among neighboring cells .in particular , we assume the following form : , which represents stored force acting among neighboring cells , mimicking the cell adhesion forces . the two - body potential is assumed to include only short - range interaction ; for example we can take following form : . 
here , denotes a length scale which represents the inter - particle ( cell ) distance or the typical size of cells , and denotes the cutoff length , comparable with ( or slightly greater than ) . each proliferating cell ( pc ) at position undergoes the following birth - death process , where a new cell is created at the position when a birth event occurs , while the cell simply annihilates when a death event occurs . the birth and death rates depend on the local density of cells , which is defined as , where is the local density field and is the interaction kernel . the parameter , which we call the interaction range , expresses the length scale within which a cell can respond to changes in the local density .

we discuss the linear stability around the spatially uniform distribution of cells . to this end , we consider a continuum description of the many - body langevin equations . in the continuum description , the population of cells is described by the local density field . by following the argument of dean and taking into account the cell fate decision process , we obtain the dynamical equation for the local density field : , where denotes partial differentiation with respect to , and and are white gaussian noise fields satisfying , with itô's forward discretization . in the first line and and in the second line are given by .

we now linearize eq . with respect to . without the noise terms , eq . has a nontrivial fixed point . by expanding the density around the steady state as and keeping the leading - order terms , we obtain and . in the last line , we have used the following approximation : . finally , eq . is linearized as , where the parameters are given by . here , is the sensitivity of the growth rate around the steady - state density , is the effective diffusion constant , and and represent the amplitudes of fluctuations arising from the cell birth - death kinetics and the langevin motion , respectively .

we employ the structure factor analysis in the linearized langevin system at steady state . the structure factor , which quantifies the density fluctuation associated with a wavenumber , is defined as , where is the fourier component of . first we decompose eq . into the fourier mode with wavenumber . the fourier decomposition of the first term on the right - hand side of eq . takes the form of , where is the fourier component of . we then obtain , where is the fourier component of the noise terms , and has mean zero and the following correlation : . since eq . is linear , we obtain the structure factor as . the structure factor depends on the choice of the interaction kernel . we here take a gaussian kernel as well as a top - hat kernel , the latter of which is used for the numerical simulations in the main text . here , is the indicator function , which takes the value 1 if is true and 0 otherwise . thus , we obtain , respectively .
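to make the peak - detection argument that follows concrete , here is a minimal numerical sketch . since the exact coefficients of the derived structure factor were lost in extraction , the sketch assumes a generic linearized form , s ( q ) = ( a + b q^2 ) / ( 2 [ a0 ghat ( q ) + d q^2 ] ) , with a stabilizing density feedback of strength a0 > 0 , an effective diffusion constant d , noise amplitudes a and b from the birth - death and langevin contributions , and ghat ( q ) the fourier transform of the interaction kernel ; this form is consistent with linearizations of this type but is not claimed to be the paper's exact expression , and all parameter values are illustrative .

```python
# Sketch: evaluate a generic linearized structure factor for Gaussian and top-hat
# kernels and look for a nontrivial (interior) peak as the range R is increased.
# The functional form and all parameters are illustrative assumptions.
import numpy as np

def g_hat_gaussian(q, R):
    # Fourier transform of a normalized Gaussian kernel of width R
    return np.exp(-0.5 * (R * q) ** 2)

def g_hat_tophat(q, R):
    # Fourier transform of a normalized top-hat kernel of half-width R
    return np.sinc(q * R / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x)

def structure_factor(q, R, g_hat, A=4.0, B=1.0, a0=1.0, D=1.0):
    lam = a0 * g_hat(q, R) + D * q ** 2        # linear relaxation rate of mode q
    return (A + B * q ** 2) / (2.0 * lam), lam

def has_nontrivial_peak(R, g_hat, q=np.linspace(1e-3, 20.0, 4000)):
    S, lam = structure_factor(q, R, g_hat)
    if np.any(lam <= 0):                       # non-positive rate: linear instability
        return True
    interior = (S[1:-1] > S[:-2]) & (S[1:-1] > S[2:])
    return bool(np.any(interior))              # interior local maximum of S(q)

for name, g_hat in [("gaussian", g_hat_gaussian), ("top-hat", g_hat_tophat)]:
    Rs = np.linspace(0.1, 10.0, 300)
    R_c = next((R for R in Rs if has_nontrivial_peak(R, g_hat)), None)
    print(name, "kernel: smallest tested R with a nontrivial S(q) peak:", R_c)
```

with these illustrative parameters , both kernels develop an interior peak only once r exceeds a finite threshold , which mirrors the qualitative behavior described in the text .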
from the exact expression of in eq . , we discuss whether or not spatial clustering takes place in the system described by the linearized langevin equation at steady state . figure [ fig : strfac ] shows that has a nontrivial peak at as exceeds a certain threshold , while decreases monotonically when is below the threshold . the existence of a nontrivial peak indicates the enhancement of density fluctuations at mode , and thus the formation of a spatial pattern with a characteristic length scale .

[ fig : strfac ] structure factor associated with ( a ) the gaussian and ( b ) the top - hat kernel . a nontrivial peak appears after exceeds a certain threshold . we set the parameters as and . ]

by employing the stationarity condition , we derive the condition for the existence of a peak as follows : , where . inequality naturally defines a threshold of the interaction range , above which the spatial clustering of cells takes place : . the phase diagram based on eq . with the top - hat kernel is shown in fig . [ fig : models ] ( c ) .

some studies discussed the appearance of spatial clustering by linear stability analysis . however , the validity of linear stability analysis depends on the choice of kernel : a linearly unstable region appears for the top - hat kernel , while it does not for the gaussian kernel , so linear stability alone does not characterize the phenomenon well . by taking into account the fluctuations of the density field , and quantifying its two - point correlation through the structure factor , our analysis reveals that spatial clustering occurs irrespective of the choice of the interaction kernel , and specifies the parameter region in which the clustering occurs . our analysis therefore provides a better understanding of the brownian bug problem .

we take the following functional form of : with , , , , and . the numerical parameters are set as follows : , , , , , , and . we take for fig . [ fig : results ] ( c ) and ( d ) , for fig . [ fig : results ] ( a ) , ( b ) , and fig . [ fig : crossoverscaling ] , and for all cases . the discretization time step for the numerical integration of eq . is set as . the clone size distribution is calculated with independent runs .
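for completeness , the following minimal sketch ( not the authors' code ) shows one way the simulation scheme described above could be organized : overdamped langevin motion with a short - range pair force , plus a density - dependent birth - death process evaluated with a top - hat kernel of range r . the pair force , the birth and death rates , and every numerical value below are hypothetical stand - ins for the expressions and parameters listed above , which were lost in extraction .

```python
# Sketch: 1D self-replicating Langevin system with density-dependent birth-death.
# Pair force, rate functions, and all numbers are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)
L, R, sigma = 100.0, 5.0, 1.0           # domain size, interaction range, cell size
D_noise, dt, t_max = 0.1, 0.01, 50.0    # Langevin noise strength, time step, duration

def local_density(x, R):
    # top-hat kernel: cells within distance R (including self), divided by 2R
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, L - d)            # periodic distances
    return (d < R).sum(axis=1) / (2.0 * R)

def pair_force(x, sigma):
    # hypothetical short-range repulsion between cells closer than sigma
    d = x[:, None] - x[None, :]
    d -= L * np.round(d / L)            # minimum-image convention
    overlap = (np.abs(d) < sigma) & (np.abs(d) > 0)
    return np.where(overlap, np.sign(d) * (sigma - np.abs(d)), 0.0).sum(axis=1)

def birth_rate(rho, r0=0.5):
    return r0 * np.ones_like(rho)       # hypothetical constant division rate

def death_rate(rho, r0=0.5, rho0=1.0):
    return r0 * rho / rho0              # hypothetical density-dependent loss rate

x = np.sort(rng.uniform(0.0, L, 100))
for _ in range(int(t_max / dt)):
    x = (x + dt * pair_force(x, sigma)
           + np.sqrt(2.0 * D_noise * dt) * rng.standard_normal(x.size)) % L
    rho = local_density(x, R)
    born = rng.random(x.size) < birth_rate(rho) * dt
    dead = rng.random(x.size) < death_rate(rho) * dt
    x = np.concatenate([x[~dead], x[born & ~dead] + 1e-3]) % L  # daughters beside mothers

print("final cell number:", x.size, " mean local density:", local_density(x, R).mean())
```

the o ( n^2 ) pairwise loops are adequate for the small populations used here ; a neighbor - list search would be the natural optimization for larger systems .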
h. hinrichsen , adv . phys . * 49 * , 815 ( 2000 ) .
g. ódor , rev . mod . phys . * 76 * , 663 ( 2004 ) .
b. d. simons and h. clevers , cell * 145 * , 851 ( 2011 ) .
a. m. klein and b. d. simons , development * 138 * , 3103 ( 2011 ) .
c. lopez - garcia , a. m. klein , b. d. simons , and d. j. winton , science * 330 * , 822 ( 2010 ) .
a. m. klein , t. nakagawa , r. ichikawa , s. yoshida , and b. d. simons , cell stem cell * 7 * , 214 ( 2010 ) .
s. sawyer , ann . prob . * 4 * , 699 ( 1976 ) .
m. bramson and d. griffeath , zeitschrift für wahrscheinlichkeitstheorie und verwandte gebiete * 53 * , 183 ( 1980 ) .
r. a. holley and t. m. liggett , ann . prob . * 3 * , 643 ( 1975 ) .
i. dornic , h. chaté , j. chave , and h. hinrichsen , phys . rev . lett . * 87 * , 045701 ( 2001 ) .
e. clayton , d. p. doupé , a. m. klein , d. j. winton , b. d. simons , and p. h. jones , nature * 446 * , 185 ( 2007 ) .
a. m. klein , d. p. doupé , p. h. jones , and b. d. simons , phys . rev . e * 76 * , 021910 ( 2007 ) .
f. galton , j. stat . lond . * 36 * , 19 ( 1873 ) .
h. w. watson and f. galton , the journal of the anthropological institute of great britain and ireland * 4 * , 138 ( 1875 ) .
d. g. kendall , j. lond . math . soc . * 41 * , 385 ( 1966 ) .
t. e. harris , " the theory of branching processes " , dover ( 2002 ) .
g. mascré , s. dekoninck , b. drogat , k. k. youssef , s. brohée , p. a. sotiropoulou , b. d. simons , and c. blanpain , nature * 489 * , 257 ( 2012 ) .
p. rompolas , k. r. mesa , k. kawaguchi , s. park , d. gonzalez , s. brown , j. boucher , a. m. klein , and v. greco , science * 352 * , 1471 ( 2016 ) .
b. i. shraiman , proc . natl . acad . sci . * 102 * , 3318 ( 2005 ) .
j. ranft , m. basan , j. elgeti , j .- f . joanny , j. prost , and f. jülicher , proc . natl . acad . sci . * 107 * , 20863 ( 2010 ) .
e. hannezo , j. prost , and j .- f . joanny , j. r. soc . interface * 11 * , 20130895 ( 2014 ) .
f. montel , _ et al . _ , phys . rev . lett . * 107 * , 188102 ( 2011 ) .
b. coste , _ et al . _ , science * 330 * , 55 ( 2010 ) .
s. dupont , _ et al . _ , nature * 474 * , 179 ( 2011 ) .
g. t. eisenhoffer , _ et al . _ , nature * 484 * , 546 ( 2012 ) .
x. lim , _ et al . _ , science * 342 * , 1226 ( 2013 ) .
a. puliafito , l. hufnagel , p. neveu , s. streichan , a. sigal , d. k. fygenson , and b. i. shraiman , proc . natl . acad . sci . * 109 * , 739 ( 2012 ) .
w. r. young , a. j. roberts , and g. stuhne , nature * 412 * , 328 ( 2001 ) .
e. hernández - garcía and c. lópez , phys . rev . e * 70 * , 016216 ( 2004 ) .
f. ramos , c. lópez , e. hernández - garcía , and m. a. muñoz , phys . rev . e * 77 * , 021102 ( 2008 ) .
e. heinsalu , e. hernández - garcía , and c. lópez , europhys . lett . * 92 * , 40011 ( 2010 ) .
d. s. dean , j. phys . a : math . gen . * 29 * , l613 ( 1996 ) .
we study the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation . we propose a model of a self - replicating langevin system , where cells choose their fate ( i.e. , proliferation or differentiation ) depending on the local cell density . based on this model , we present a scenario for multi - cellular organisms to maintain the density of cells ( i.e. , homeostasis ) through finite - ranged cell - cell interactions . furthermore , we numerically show that the distribution of the number of descendant cells changes over time , thus unifying the two previously proposed models of homeostasis : the critical birth - death process and the voter model . our results provide a general platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical mechanics .

pacs numbers : 05.65.+b , 87.17.ee , 87.18.hf