The ability to learn the social path length to other social network users can often help individuals make informed trust and access control decisions. For instance, if attendees at a large convention could easily find other attendees with whom they share social links (e.g., a LinkedIn connection), this could help them decide whom to chat or meet up with. Similarly, travelers and commuters could more consciously decide with whom to interact, share rides, etc. In general, discovering the social path length between users is beneficial in many interesting scenarios, such as estimating the _familiarity_ of a location (which can in turn be used for context-based security), as well as for routing in delay-tolerant ad-hoc mobile networks and for anonymous communications.

*Problem statement.* The widespread adoption of online social networks (OSNs) makes it appealing to measure the length of a path between two nodes, e.g., to use this information as a signal of reciprocal trust and/or social interest. Today, a Facebook user can see the number of friends shared with another user, while LinkedIn also displays the social path length. However, as popular OSNs are centralized systems, so are the features they offer to discover social paths. As such, they do not adapt well to mobile environments where social interactions are tied to physical proximity, which severely limits the feasibility of many application scenarios: users may not always be able to connect to the Internet, or may be unwilling to reveal their location and/or interests to the provider. Relying on centralized systems to learn social path lengths also implies that, whenever Alice queries a server for the social path length to Bob, the server learns that Alice is interested in Bob, their frequency of interactions, and their locations. This prompts the need for decentralized and privacy-preserving techniques for social path length estimation. Users should only learn whether they have any common friends, without having to reciprocally reveal the identities of friends they do not share, and should discover the length of the social path between them (for paths longer than two) without learning which users are on the path.

*Technical roadmap.* Our work builds on Common Friends, a system supporting privacy-preserving common friend discovery on mobile devices: building on a cryptographic primitive called private set intersection (PSI), it allows mutual friends to be discovered by securely computing the intersection of friendship _capabilities_, which are periodically distributed from a Common Friends user to all of that user's friends who are also using the system. However, besides being limited to social paths of length two, Common Friends only discovers the subset of mutual friends who are _already_ in the system, thus suffering from an inherent bootstrapping problem. This paper introduces Social PaL, the first system that supports the privacy-preserving estimation of the social path length between any two social network users. We introduce the notion of _ersatz nodes_, created for users that are direct friends of one or more Social PaL users but are not part of the system. We guarantee that two Social PaL users will be able to discover _all_ their common friends in the OSN (i.e., all paths of length two).
We then present a hash chain-based protocol that supports the (private) discovery of social paths longer than two, and demonstrate its effectiveness by means of extensive simulations. Our work is not limited to designing protocols: we also fully implement Social PaL and deploy a scalable server architecture and an Android client library enabling developers to seamlessly integrate it into their applications.

*Contributions.* In summary, we make the following contributions: 1. We present an efficient privacy-preserving estimation of social paths of arbitrary length (Section [sec:social]). 2. We state and prove several properties of Social PaL, including the fact that, for any two users: (i) Social PaL will find all their common friends, including those who are not using it, and (ii) for each discovered path between them, Social PaL allows each party to compute the _exact_ length of the path (Section [sec:analysis]). 3. Using samples of the Facebook graph, we empirically show that even when only 40% of users use the system, Social PaL will discover more than 70% of all paths, and over 40% with just 20% of the users (Section [sec:evaluation]). 4. We support Facebook and LinkedIn integration and release the implementation of a scalable server-side architecture and a modular Android client library, allowing developers to easily integrate Social PaL into their applications (Section [sec:implementation]). We build two Android apps: a friend radar displaying the social path length to nearby users, and a tethering app enabling users to securely share their Internet connection with people with whom they have mutual friends (see Section [sec:apps]).

We start by discussing the problem of privately discovering common friends, i.e., social paths of length two. We argue that minimal security properties for this problem include both _privacy_ and _authenticity_: users should neither learn the identity of non-shared friends nor be able to claim non-existent friendships.

*Private set intersection (PSI).* A straightforward approach for privately discovering common friends is to rely on PSI, a primitive allowing two parties to learn the intersection of their respective sets (and nothing else). If friend lists are encoded as sets, then PSI can be used to privately find common friends as the set intersection. One could also limit disclosure to the _number_, but not the identity, of shared friends, using private set intersection cardinality (PSI-CA), which only reveals the size of the intersection. However, using PSI (or PSI-CA) guarantees privacy but not authenticity, as one cannot prevent users from inserting arbitrary users into their sets and claiming non-existent relationships.

*Bearer capabilities.* In order to guarantee authenticity, Nagy et al. combine bearer capabilities (a.k.a. bearer tokens) with PSI, proposing the Common Friends service, whereby users generate (and periodically refresh) a random number, the ``capability'', and distribute it to their friends via an authentic and confidential channel.
As possession of a capability serves as a proof of friendship, users can input capabilities into the PSI/PSI-CA protocol, thereby learning only the identity/number of common and authentic friends. Since capabilities are large random values, a simpler variant of PSI for private common friend discovery can be used, one that relies only on cryptographic hash functions and involves no public-key operations: parties hash each item in their set and transfer the hash outputs, and since the hash is one-way, parties cannot invert it and only learn the set intersection upon finding matching hashes. This can be further optimized using the *_Bloom filter based PSI (BFPSI)_* primitive (for high-entropy items) outlined in prior work. On the other hand, it is not clear whether the same can be done for PSI-CA, i.e., to only count the number of common friends.

*Bloom filters.* A Bloom filter (BF) is a compact data structure for probabilistic set membership testing. Specifically, a BF is an array of $m$ bits that can represent a set of $n$ elements in a space-efficient way. BFs introduce no false negatives but can yield false positives, although the false positive probability can be estimated (and bounded) as a function of $m$ and $n$. Formally, let $S$ be a set of $n$ elements, and let BF be an array of $m$ bits initialized to 0, with BF$[i]$ denoting the $i$-th bit. Let $\{H_i\}_{i=1}^{\gamma}$ be $\gamma$ independent cryptographic hash functions, salted with random, periodically refreshed nonces. For each element $s \in S$, set BF$[H_i(s)] = 1$ for $1 \le i \le \gamma$. To test whether $x \in S$, check whether BF$[H_i(x)] = 1$ for all $1 \le i \le \gamma$. An item appears to be in the set even though it was never inserted in the BF (i.e., a false positive occurs) with probability approximately $(1 - e^{-\gamma n / m})^{\gamma}$. The optimal size of the filter, for a desired false positive probability $p$, can be estimated as $m = -n \ln p / (\ln 2)^2$.

[[subsec:system-model]] In Figure [fig:architecture], we illustrate the Common Friends system: it involves a server, a set of OSN servers (such as Facebook or LinkedIn), and a set of mobile users who are members of one or more of these OSNs. The server is implemented as a social network app (i.e., a third-party server) that stores the bearer capabilities uploaded by the Common Friends application running on users' devices. It also allows a user's Common Friends application to download the bearer capabilities uploaded by that user's friends in the OSNs. Common Friends consists of three protocols: (1) user authentication, (2) capability distribution, and (3) common friend discovery. [fig:architecture]

*OSN user authentication* enables the OSN server to authenticate a user, provide the user's OSN identifier to the server, and authorize the server to access information about the user's friends in the OSN; this can be done using standard mechanisms, such as OAuth.

*Capability distribution* involves the user and the server, communicating over a secure channel with server authentication provided by the server's certificate and client authentication based on the preceding OSN user authentication step. The user generates a random capability (taken from a large space) and uploads it to the server over the established channel. The server stores the capability along with the user's social network identifier, and sends back the identifiers and corresponding capabilities of each of the user's friends who have already uploaded their own capability. The protocol needs to be run periodically to keep this information up to date, as capabilities are periodically refreshed.
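Before turning to the discovery protocol itself, the following self-contained sketch illustrates the Bloom-filter-based matching of bearer capabilities that it relies on. This is a minimal sketch, not the Common Friends implementation: the class and function names (`SaltedBloomFilter`, `bf_psi`), the use of SHA-256, and the parameter choices are illustrative assumptions, while the sizing formulas in the comments are the standard Bloom filter estimates given above.

```python
import hashlib
import math
import secrets

class SaltedBloomFilter:
    """Bloom filter with gamma salted hash functions, as described above (a sketch)."""

    def __init__(self, n_items, fp_rate, salt):
        # Standard estimates: m = -n * ln(p) / (ln 2)^2 and gamma = (m / n) * ln 2.
        self.m = max(8, math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2)))
        self.gamma = max(1, round((self.m / n_items) * math.log(2)))
        self.bits = bytearray(self.m)   # one byte per bit, for clarity
        self.salt = salt                # periodically refreshed nonce

    def _positions(self, item: bytes):
        for i in range(self.gamma):
            h = hashlib.sha256(self.salt + i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes):
        return all(self.bits[pos] for pos in self._positions(item))


def bf_psi(capabilities_a, capabilities_b, fp_rate=1e-6):
    """One party sends a BF of its capability set; the other outputs the candidate
    intersection. Capabilities are high-entropy random values, so the filter does not
    let the receiver enumerate the sender's other items; residual false positives are
    removed by the challenge-response step described in the discovery protocol below."""
    salt = secrets.token_bytes(16)
    bf = SaltedBloomFilter(len(capabilities_a), fp_rate, salt)
    for c in capabilities_a:
        bf.add(c)
    return {c for c in capabilities_b if c in bf}


if __name__ == "__main__":
    shared = [secrets.token_bytes(32) for _ in range(3)]   # common friends' capabilities
    only_a = [secrets.token_bytes(32) for _ in range(5)]
    only_b = [secrets.token_bytes(32) for _ in range(5)]
    print(len(bf_psi(shared + only_a, shared + only_b)))   # expect 3 (w.h.p.)
```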
*common friend discovery * is a protocol run between two users , and , illustrated it in figure [ fig : bffof ] , allowing and to privately discover their common ( authentic ) friends , based on bfpsi .first , and exchange their public keys ( and , respectively ) and generate a shared key ( ) used to encrypt messages exchanged as part of the protocol . to prevent man - in - the - middle attacks , ( resp ., ) cryptographically binds the dh channel to the protocol instance : ( resp . , ) extends each item in the capability set ( resp . , ) by appending dh public keys , , building effectively a new set ( resp . , ). inserts every element of the set into a bloom filter and sends it to .discovers the set of friends ( ) in common with by verifying whether each item of his input set is in .since bloom filters introduce false positives , the set may contain non - common friends .thus a simple challenge - response protocol is run , where requires to prove knowledge of items available in . at the end of the protocol , and output the set of their common friends .[ fig : bffof ]we now present the design and the instantiation of the social pal , the system to compute the social path lengthbetween two osn users in a decentralized and privacy - preserving way .* limitations of . * before introducing social pal s requirements , we discuss two main limitations of common friends , as addressing them constitute our starting point : 1 ._ bootstrapping _ : users and can discover a mutual friend ( say ) , only if has joined the common friendssystem and uploaded his capability to .that is , common friendswill only discover a subset of the mutual friends between and until all of them start using it .longer social paths _ :common friendsonly allows its users to learn whether they are friends or have mutual friends .if two users have a longer social path between them , common friendscannot detect it . to illustrate common friends s bootstrapping problem , we plot , in figure [ fig : bootstrapping ] , a simple social network with 27 nodes ( i.e. , users ) and 34 edges ( i.e. , friendship relationships ) .black circles represent users who are using common friends , and white circles those who are not .purple / solid edges represent direct friend relationships ( i.e. , social paths of length 1 ) that are discoverable by common friends .when the user base is only 40% of all osn users ( figure [ subfig:0.25-users ] ) , only 7 out of the 34 direct friend relationships are discoverable ( i.e. , coverage is approximately ) .when it increases to 60% , coverage increases to about 50% ( figure [ subfig:0.5-users ] ) .0.35 and of users using the system .black ( resp ., white ) nodes denote users ( resp . , non - users ) of the system .purple / solid edges denote a direct friendship discoverable by common friends.,title="fig:",scaledwidth=85.0% ] 0.35 and of users using the system .black ( resp ., white ) nodes denote users ( resp . , non - users ) of the system .purple / solid edges denote a direct friendship discoverable by common friends.,title="fig:",scaledwidth=85.0% ] * system model .* social pal s system model is the same as that of common friends(presented in section [ subsec : system - model ] ) .it involves a server ( which we design as a social network app ) , a set of osn servers ( such as facebook or linkedin ) , and a set of mobile users members of one or more of these osns ( cf .figure [ fig : architecture ] ) .* functional requirements . 
*ideally , social palshould allow any two users to always compute the exact length of the social path between them , even when social palis being used only by a fraction of osn users . in order to characterize how well social palmeets this requirement, we use a measure of the likelihood that any two users would discover , using social pal , an existing social path of a given length between them .we denote this measure as social pal s _ coverage_. we define social pal s functional requirements as follows : * _ ( correctness ) . _users and can determine the exact length of a social path between them ( if any ) .* _ ( coverage maximization ) ._ social palshould maximize coverage , in other words , the ratio between the number of social paths ( of length ) between and _ discovered _ by social paland the number of _ all _ social paths ( of length ) between and . * privacy requirements .* from a privacy point of view , social palshould satisfy the following requirements .let and be two social palusers willing to discover the length of the social path existing between them : 1 . and discover the set of their common friends but learn nothing about their non - mutual friends ; 2 . and do not learn any more information other than what it is already available from standard osn interfaces . in other words , social palshould allow two users to learn the social path length between them ( if any ) , but not the nodes on the path , without reciprocally revealing their social link .if a path between the users exists that is of length two ( i.e. , users have some common friends ) , then they learn the identity of the common friends ( and nothing else ) .this only pertains to interacting users , as ensuring that no eavesdropping party learn any information about users friends can be achieved by letting users communicate via confidential and authentic channels .* threat model .* we assume that the participants in social palare honest - but - curious .the osn server is trusted to correctly authenticate osn users and not to attempt posing as any osn user .the social palserver is trusted to distribute social palcapabilities only to those social palusers authorized to receive them .social palusers use the legitimate social palclient , but they might attempt to learn as much information as possible about friends of other social palusers with whom they interact .we aim to guarantee the privacy requirements discussed above in this setting , and prevent the osn server or the server to learn any information about interactions between social palusers . before presenting the details of the system , in table [ tab : fofprotocol - notation ] , we introduce some notation used throughout the rest of the paper . [tab : fofprotocol - notation ] * ersatz nodes . * one fundamental building block of social palare ersatz nodes , which we introduce to overcome the bootstrapping problem faced by services like common friends .recall from section [ subsec : system - model ] that , in the original common friendsdesign , the server stores bearer capability , uploaded by a user , together with his social network identifier .the pair constitutes s _ user node _ in the social graph maintained by .the set of s friends is the set of edges incident in the user node . 
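To make the server-side bookkeeping just recalled concrete, here is a small hypothetical sketch of the per-node state the server might keep: an OSN identifier, a bearer capability, and the set of incident friend edges. It also includes the ersatz-node placeholder creation whose details are introduced in the next paragraphs; all names, types, and the capability length are illustrative assumptions, not the actual Social PaL server code.

```python
import secrets
from dataclasses import dataclass, field

CAP_LEN = 32  # capability length in bytes; an illustrative choice

@dataclass
class Node:
    """A node in the server's view of the social graph: an OSN identifier, a bearer
    capability, and the set of identifiers of known friends (incident edges)."""
    osn_id: str
    capability: bytes
    friends: set = field(default_factory=set)
    is_ersatz: bool = False  # True if the capability was generated by the server

class NodeStore:
    """Sketch of the server-side store holding one node per known OSN user."""

    def __init__(self):
        self.nodes = {}  # osn_id -> Node

    def upload_capability(self, osn_id, capability, friend_ids):
        """Called when a Social PaL user uploads (or refreshes) a capability."""
        node = self.nodes.setdefault(osn_id, Node(osn_id, capability))
        node.capability, node.is_ersatz = capability, False
        node.friends.update(friend_ids)
        for fid in friend_ids:
            # Friends not yet in the system get an ersatz placeholder (detailed below);
            # either way, the server now knows this edge in both directions.
            friend = self.nodes.setdefault(
                fid, Node(fid, secrets.token_bytes(CAP_LEN), is_ersatz=True))
            friend.friends.add(osn_id)

    def friends_capabilities(self, osn_id):
        """The (identifier, capability) pairs returned to a user for their friends."""
        return {fid: self.nodes[fid].capability for fid in self.nodes[osn_id].friends}
```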
In Social PaL, we let the server create an _ersatz node_ for every user who has not joined the system but who is a friend of a user who has. An ersatz node is identical to a standard user node, except that its capability is generated by the server instead of by the user. Figures [subfig:0.25-users-ersatz] and [subfig:0.5-users-ersatz] show how coverage improves when ersatz nodes are added, even when only a fraction of users have joined the system. (Figure captions: fractions of users using the system, with ersatz nodes shown in grey; purple/solid edges denote a direct friendship discoverable by Social PaL.)

*Ersatz node creation.* Adding ersatz nodes requires a few changes to the capability distribution protocol of Section [subsec:system-model]. We highlight these changes in Figure [fig:ersatz-capability-updates], specifically in the blue-shaded box. Before returning the friends' capabilities to the requesting user, the server first computes the set containing the social network identifier of each ``missing user'', i.e., each friend for whom no node exists yet. Then, for each missing user, it creates an ersatz node as follows: 1. Create an _ersatz capability_, a fresh random value of the same length as a regular capability, and store it under the missing user's identifier. 2. Create an initial friend set, which at this stage contains only the requesting user. After the successful creation of all needed ersatz nodes, the server returns the capabilities of the nodes of all of the user's friends, including the ersatz nodes. [fig:ersatz-capability-updates]

*Active social graph updates.* Users of Social PaL explicitly authorize the server to retrieve their friend lists from the OSNs. Since an ersatz node does not correspond to a user of Social PaL, the server cannot learn the full set of that user's friends. Instead, it maintains an estimate based on the events it can observe from users of Social PaL. For example, when a user adds a friend, the server learns that the friend has been added to the user's friend set and can infer that the user should, in turn, be added to the friend's set. Each such update corresponds to a real user who has explicitly authorized the server to learn about that edge in the social graph.

*Turning ersatz nodes into ``standard'' nodes.* If a user for whom the server has created an ersatz node later joins Social PaL, he can simply upload his capability to the server, which (1) overwrites the old ersatz capability with the uploaded one, (2) queries the OSN for the user's friend list, and (3) updates the existing, possibly incomplete, list of the user's friends, turning the ersatz node into a standard node. Note that this operation is transparent to all users.

We now present the full details of our Social PaL instantiation: besides addressing the bootstrapping problem (using ersatz nodes), it also allows two arbitrary users to calculate the social path length between them, defined as the minimum number of hops in the social network graph that separates them.

*Intuition.* We set out to allow Social PaL to discover the social path length between users in the OSN by extending the capability distribution to include relationships beyond direct friendship (e.g., friend-of-a-friend), and by relying on capability matches to estimate the social path length. By using cryptographic hash functions, we can generate and distribute capabilities of higher order that serve as a proof of a social path between users.
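The following minimal sketch illustrates only the hash-chain mechanism behind higher-order capabilities: two parties that hold capabilities derived from the same base value, at compatible degrees, obtain a match without learning whose capability it is. The function names, the choice of SHA-256, and the toy degrees in the demo are illustrative assumptions; the exact degrees at which capabilities are distributed, and how a match is turned into a path length, are specified by the protocol below.

```python
import hashlib

def hash_chain(item: bytes, k: int) -> bytes:
    """k-fold hash: h^0(x) = x and h^k(x) = H(h^(k-1)(x)) for a cryptographic hash H."""
    for _ in range(k):
        item = hashlib.sha256(item).digest()
    return item

def derive_higher_order(capability: bytes, degree: int, max_degree: int) -> dict:
    """From a capability received at a given degree, derive the capabilities of every
    higher degree up to the maximum supported one by repeated hashing."""
    return {d: hash_chain(capability, d - degree) for d in range(degree, max_degree + 1)}

if __name__ == "__main__":
    base = bytes(32)  # some user's 0-degree capability (a random value in practice)
    # One party received the 1st-degree capability (e.g., via a friend-of-a-friend
    # relationship), the other the 0-degree capability (a direct friend):
    caps_a = derive_higher_order(hash_chain(base, 1), 1, max_degree=2)
    caps_b = derive_higher_order(base, 0, max_degree=2)
    print(sorted(d for d in caps_a if caps_a[d] == caps_b.get(d)))  # -> [1, 2]
```

* notation .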
* in the rest of the paper , we use the following notation : the hash chain of item ( of length ) corresponds to the evaluation of a cryptographic hash function performed times on .when , .specifically : a -degree capability and is defined as denotes the set of social contacts that are -hops from user . * capability distribution : * in figure [ fig : distance - capability - protocol ] , we detail social pal s protocol for capability distribution .interaction between and is identical to the capability distribution protocol from figure [ fig : ersatz - capability - updates ] , up until the creation of missing ersatz nodes is completed . in the updated protocol returnstwo sets , namely , where the set of higher order capabilities provided to by other osn members that are at least -hops from .it is composed of a number of subsets with each subset order capabilities of users in .formally , { \ensuremath{\mathit{c_{i}}}\xspace}=\ { ( i-1,c^{i-1}_{j } ) : \exists ( { \ensuremath{id_{j}}\xspace } , { \ensuremath{\mathit{c_{j}}}\xspace } ) \land { \ensuremath{id_{j}}\xspace}\in { \ensuremath{\mathit{f^{i}}}\xspace}({\ensuremath{id_{{\textsf{u}\xspace}}}\xspace } ) \}\vspace{-0.1 cm } \end{array}\ ] ] consequently , the total cardinality of : finally , generates missing higher order capabilities . for every received capability degree , hashes it times to generate a sequence of higher order capabilities of the form : all elements of such sequences are combined into one set of derived higher order capabilities .finally all capability sets are combined to form : the resulting set will be used to derive the input sets for psi during the social path lengthdiscovery protocol as explained below .the cardinality of the input set to psi is therefore : [ fig : distance - capability - protocol ] to construct of user , tracks changes in friend lists of users by using the following logical implication : if represents a friend of and is hops from and was not previously identified at less than hops from , then is hops from .formally : finally , as capabilities are meant to be short - lived ( i.e. , they should expire within a couple of days ) , the protocol needs to be run periodically in order to keep up - to - date . *social path discovery : * in figure [ fig : distance - client - protocol ] , we illustrate the social paldiscovery protocol .the protocol involves two users and , who are members of the same osn .it begins with establishing a secure channel ( via diffie - hellman key exchange ) , followed by cryptographic binding of the diffie - hellman channel to the protocol instance , which is needed to avoid man - in - the - middle attacks ., ) appends both public keys to each capability in ( resp ., ) set to form .the resulting sets are : note that the symbol in the above equations indicate that , while constructing and , the first element of each pair contained in and is ignored .next , both users execute the steps of common friends s discovery protocol , on the above input sets .specifically , they interact in a bloom - filter based psi execution and run the challenge - response part of the protocol needed to remove potential false positives ( as discussed in section [ subsec : system - model ] ) .the interactive protocol ends with parties outputting the intersection of the sets . from this point on, both users perform identical actions to calculate the social path length between them .all operations are done locally , i.e. 
, with no need to exchange data .this process consists of two phases : ( 1 ) calculating the social path length input set ( i.e. , the set containing lengths for all discovered paths between and ) , and ( 2 ) selecting the shortest length among all lengths contained in . to this end ,( ) builds set by performing following actions on every item : 1 .finding a capability such that 2 . calculating path length via matching capability ( which was obtained from some user , say ) and inserting it into : l.insert(l_{x})\vspace{-0.1cm } \end{array}\ ] ] at the end , and learn the final path length between them by finding the lowest value of items included in : if , then and have common friends between them , thus social palreturns identifiers of all these common friends as in the original common friendsservice . while we could use social palto reveal the first hop identifiers for , we do not due to the privacy requirements outlined in section [ subsec : privacy - consideration ] .this section presents the analysis of social pal , showing that it fulfills functional and privacy requirements from section [ subsec : requirements ] .lemma 1 . _if , then : _ 1 .there exists a path , for , between and , in the social graph .2 . receives the order capability from every in . proof . when , by using the logical implication for the social graph building ( see section [ sec : distance ] ) , it must hold that , in order to include in , be included in .therefore , we can recursively argue : + note that , for every , there exists a connection to and , thus , form a path between and . considering , since , then receives .theorem 1 ._ let there be a path between and in the social graph .if path is discovered by the social paldiscovery protocol , then both and can estimate the exact length of path . _ proof .let denote the highest degree of capabilities .+ if : * the set of capabilities of and are , and , respectively .( see figure [ subfig : proof-1 ] for a graphic illustration of the distribution of capabilities for and . ) * if gets a matching capability for , then it must corresponds to for .* substitutes in and receives : * gets multiple capability matches , and sets to be the minimum , which is for , and .( similar argument holds for . )if : * the set of capabilities of and are and , respectively see figure [ subfig : proof-2 ] .( capabilities for which and obtains matches are marked in green . ) * if gets a capability match for : , substitutes in and receives : * gets multiple capability matches , and sets to be the minimum , which is for , and .( similar argument holds for . )0.75 0.75 [ fig : proofs ] as discussed in section [ subsec : requirements ] , our social palinstantiation needs to provide users with strong privacy guarantees , i.e. , interaction between two users and does not reveal any information about their non - mutual friends or any other information than they could discover by gathering information from the standard osn interface . *capability intersection . *first , we review the security of the common friends discovery protocol from common friends , since it constitutes the basis of our work . 
its security , in the honest - but - curious model , reduces to the privacy - preserving computation of set intersection .that is , privacy stems from the security of the underlying private set intersection ( psi ) protocol that common friendsinstantiates to privately intersect capabilities and discover common friends .this is proven by means of indistinguishability between a real - world execution and an ideal - world execution where a trusted third party receives the inputs of both parties and outputs the set intersection . uses bloom filter based psi ( bfpsi ) and , as discussed in section [ subsec : pcd ] , this does not impact security since sets to be intersected are random capabilities , thus , high - entropy , non - enumerable items. * discovery .* now observe that the interactive part of the social pal s ( social path ) discovery protocol i.e. , the part where information leakage might occur mirrors that of common friends s discovery protocol . during the protocol execution , users andengage in a bfpsi interaction , on input , respectively , and , i.e. , the sets of their capabilities , and obtain , which is used to reconstruct the social path between and .if and are friends with each other ( or have mutual friends ) , they can find out the identity of the user(s ) corresponding to matching capabilities , thus , learning that there exists a social path of length 1 ( or 2 ) _ and _ the identity of their mutual friends , but nothing else .in fact , if an adversary could learn the identity of non - mutual friends , then , we could build an adversary breaking the common friends discovery protocol from based on bfpsi . similarly ,if there exists no social path between and , then the bfpsi interaction does not reveal any information to each other . on the other hand ,if there exists a social path between and of length , then the matching capabilities are for user nodes for which has removed identifiers . therefore , and do not learn the identity of the users yielding a social path between them , but only how many .* trust in server .* each user explicitly authorizes social palto retrieve the set s friends . requesting users to disclose their friend listsis a common practice in social network and smartphone applications .social paluses this information to have the server maintain , distribute , and , in the case of ersatz nodes , create capabilities attesting to the authenticity of friendships .this implies that gradually learns the social graph from social palusers , however , what learns is a small subset of what the osn already knows .neither nor the osn learn any additional information , e.g. , as opposed to centralized solutions , user locations or interactions between users .* authenticity of capabilities . 
* in section [ subsec : requirements ] , we assumed the use of legitimate social palclient applications : as all mobile platforms provide application - private storage , it is reasonable to assume that an adversary on a client device can not steal the capabilities downloaded on that device by the legitimate client application or otherwise manipulate the input to the protocol .alternatively , the integrity of the bloom filter could be ensured by letting sign the bloom filter along with the public key of the corresponding social palclient .the bfpsi protocol would then need to be modified accordingly so that each party checks the signature on the other party s bloom filter is valid and that the same keypair is used to establish the secure channel .we now present an empirical evaluation of social pal s coverage , using three publicly available facebook sample datasets . specifically ,we analyze how _ coverage _ attained by the social path discovery depends on the fraction of osn users who join the system , i.e. , the probability that two users discover an existing path between them in the social network .we use three datasets derived from a single dataset , created by gjoka et al . , using three different sampling techniques : \(1 ) the * social filter dataset * is our primary dataset .it contains users , a connected component derived using the forest fire " sampling method from the original dataset . as forest fire samplingdoes not preserve node degree , each node in this dataset has an average node degree of , which is significantly less than in the original dataset .to investigate the effect of the reduced node degree on coverage , we also use the two more datasets .( 2 ) the * mhrw dataset * is built using the _ metropolis - hastings random walk _ ( mhrw ) method with independent random walks .it contains the friend lists of users .we call this the set of _ sampled users_. each of them has an average of friends , including both other sampled users and those who were part of the original dataset but that were not sampled we call them _outside users_. the mhrw datasetcontains a total of million _ outside users _ ( who are friends of one or more _ sampled users _ ) .because of the nature of the mhrw sampling , the average number of connections between two _ sampled users _ in this set is only 3 , thus it is used to evaluate social pal s coverage among poorly connected users .\(3 ) the * bfs dataset * is built using _breadth first search _ ( bfs ) from independent bfs traversals .it consists of million _ sampled users _, with an average of friends .the number of _ outside users _ is million .bfs sampling results in highly connected subgraphs , and the average number of connections among _ sampled users _ is .thus , we use the bfs datasetto measure social pal s coverage among well connected users. * procedure . * to evaluate social pal s coverage on each of the three datasets , we used the following simulation procedure . first , we chose , at random , a subset of _ sampled users _ , which we call the _test set_. for the social filter dataset , we used the whole set as the _ sampled users _ set .we chose four different sizes for the _ test set _ : of the _ sampled users_. 
note that the _ test set _ represents the fraction of the users of an osn who use social pal .then , for a given social path length ( ) , we randomly selected pairs of users from the _ test set _ in such a way that at least one path of length exists .finally , we computed the fraction of user pairs for which social paldiscovers an existing path between them .we did this for two cases : social palwith support for ersatz nodes , and without it .each simulation was repeated 10 times . in total, we conducted 720 different simulations .* results . *we now present the results of our simulations for each of the three datasets .social pal s discovery coverage is presented in figure [ fig : user - coverage-1 ] andtable [ tab : user - coverage-1 ] for the the social filter dataset , figure [ fig : user - coverage-2 ] and table [ tab : user - coverage-2 ] for the mhrw dataset , and figure [ fig : user - coverage-3 ] andtable [ tab : user - coverage-3 ] for the bfs dataset .additional graphs on coverage results are available from the full version of the paper .each graph shows how the coverage ( for paths of different length ) of social palrelates to fraction of osn users who use social pal .red dotted lines indicate the performance of social palwithout ersatz node support , while black solid lines correspond to the user of ersatz nodes . without ersatz nodes ,coverage increases linearly as more users start using social pal .the rate of growth is highest for the social filter datasetand lowest for the mhrw dataset . in general , the coverage figures are low . for instance , even if 80% of osn users have social pal , the coverage for paths of length 4 ranges between 0.19% ( the mhrw dataset ) and 68.71% ( the social filter dataset ) . the introduction of ersatz nodes results in a remarkable improvement across the board in all datasets .as expected , the coverage for paths of length 2 is 100% .when 80% of osn users are in the social palsystem , the coverage is well above 80% in all cases . even when only 20% of users have social pal , coverage is still above 40% in all cases , except for the mhrw dataset . [tab : user - coverage-1 ] [ tab : user - coverage-2 ] [ tab : user - coverage-3 ] * ersatz nodes dramatically improve coverage . * with ersatz nodes , social paldiscovers 100% of social paths of length 2 , thus addressing one of the major limitations of the common friendssystem .the coverage for paths of length 3 and 4 always increases , between and , depending on the fraction of osn users in social paland the dataset used for the simulations .* variation of coverage across different datasets .* we observe better coverage results with the bfs datasetthan with the mhrw dataset .as the bfs datasetrepresents coverage among well connected users , the density of ersatz nodes between random users is higher than in the mhrw dataset , thus yielding better overall coverage .the bfs datasetmodels societies , such as most of the western societies , where the penetration of osns is high .the high coverage results with the bfs datasetsuggests that social palwill do well in this context . 
on the other hand ,the mhrw datasetmodels societies where osn connectivity is poor and , although social palis not as effective here , it may still perform reasonably well , detecting the majority of social paths even before the number of users joining social palreaches 50% .in this section , we present our full - blown implementation of the social palsystem .we aim to support scalability for increasing number of users ( in terms of cpu performance and memory ) and to enable developers to easily integrate it into their applications. * server components . * on the server side , the social palsystem extends the ` peershare`server , which allows two or more users to share sensitive data among social contacts , e.g. , friends in a social network .we use the following basic functions of ` peershare ` : ( 1 ) osn interfaces to retrieve social graph information , ( 2 ) the oauth component for user authentication , and ( 3 ) the data distribution mechanism . on top of these components , we develop a new server architecture that supports the addition of server - based applications via an extension mechanism .this design choice allows us to implement the social palfunctionality in such a way that the system can efficiently scale ( in terms of memory and cpu performance ) to support an increasingly large number of users .as illustrated in figure [ fig : server - arch ] , the server architecture consists of the following components : the _ common apps server _ , a group of applications ( e.g. , the social palapp ) , the _ osn communicator module _ , and the _ bindings database _ ( bindings db ) . the common apps serverprovides the basic functionality that is common for all applications : ( 1 ) storage of data uploaded by users in the bindings db , ( 2 ) distribution of users data to other authorized users , and ( 3 ) retrieval of basic social graph information , which is needed for enforcing the appropriate data distribution policy . the osn communicator module is a plugin - based service responsible for querying osns for social graph information . its plugin - based structure allows us to easily add support for new osns .the bindings db stores data uploaded by users and information on how to distribute them among social contacts .the components interact with each other using a number of different interfaces : _ server - osn query _, _ bindings protocol _ , _ app event _ , _ app - db updates _ and _ app - osn query _( cf . fig [ fig : server - arch ] ) .the server - osn query interface is used by the common apps serverto retrieve social graph information from the osn .the bindings protocol interface specifies communication between the common apps serverand the bindings db .these two interfaces provide basic data distribution functionality to all applications .the other three interfaces are used only by applications that need to perform specific modifications on distributed data on the server side . each application that requires logic on the server sidehas to be registered in the common apps servervia a uniform resource identifier ( uri ) .this uri is used by the app eventcallback interface to notify the application about incoming application - specific events ( e.g. , an upload of new data items ) .in addition , this interface may be used to modify data read from the bindings db before returning them via the common apps serverto the user ( e.g. 
, generation of higher order capabilities ) .the server application itself has access to the two remaining interfaces .it uses the app - db updatesto modify data stored in the bindings db or to create new data items .the app - osn query interface is used when the application needs to obtain social graph information from the osn .[ fig : server - arch ] * social palserver implementation . *the social palserver application is notified via a the app eventcallback interface about new capabilities uploaded by users .it uses a the app - db updatesinterface to create any required ersatz nodes and properly update the recipient sets of capabilities during the social graph building process .the social palapplication also uses the app eventinterface to handle capability download requests .the common apps server , instead of immediately returning data it has read from the bindings db , passes them to the social palapplication that generates any missing higher order capabilities . finally , the social palapplication returns the complete set of capabilities back to the common apps server , which completes the request handling .note that our implementation of the osn communicator module supports interactions with both linkedin and facebook .* implementation details . *our core social palserver is written in php .we support capabilities of and order , allowing to discover social paths between users that are up to hops from one another . based on relevant prior work ,a -hop distance is enough for most practical use cases . as the bindings db needs to store the capabilities of users and information about how to share those , the necessary amount of persistent storage will substantially grow if social palbecomes widely used .thus , to limit the data storage overhead , social palserver does not store any higher order capabilities in the bindings db , but generates them when requested by the requesting client. tests on our server show that generating higher order capabilities has a negligible impact on the social pal__capability distribution protocol _ _ performance ( i.e. , the server generates 1 million higher order capabilities in about 500ms using the hardware described in section [ sec : server - eval ] ) . finally , in order to implement the linkedin oauth module for the osn communicator module , we use the oauth pecl extension for php , while , for the bindings db , postgresql database server .* system scaling . 
* since social palmay generate a large number of server requests if used by a large number of users , we can take following steps to ensure that the system can scale .our proposed scaling architecture is illustrated in figure [ fig : server - scaling ] .it includes a powerful http front - end server ( such as nginx ) acting as load balancer , which terminates incoming secure https connections and forwards server requests upstream to instances of social palservers acting as request handlers .each social palserver instance will run the _ hiphop virtual machine _ ( hhvm ) daemon that handles http requests .hhvm usage can massively improve server performance , as it uses just - in - time compilation to take advantage of the native code execution instead of the interpreted one .each instance of social palserver runs , locally , a database query router ( pgpool ) providing access to the actual database cluster including multiple postgresql servers .the query router enhances the overall database access performance by keeping open connections to the database cluster , load - balancing the stored data among multiple instances of the database servers , and temporarily queuing requests for database access in case of cluster overload .note that there are no cross - dependencies between the social palserver instances for the database read access , thus , no complex control mechanism is needed to support this parallelism .* server code . *the source code of the server implementation is available from https://github.com/secures-aalto/sopal .* performance testbed .* we evaluated our server implementation in a testbed consisting of two machines : the first played the role of a single social palserver instance ( cf .figure [ fig : server - arch ] ) , while the second simulated a group of client devices .the server ran on a 4-core machine with a 2.93ghz cpu on each core and 128 gb of ram .it hosted nginx ( version 1.1.19 ) , postgresql server ( version 9.1 ) , and php5-fpm for the php 5.6 engine . to improve the overall server performance , we adjusted the default settings for nginx , postgresql and php5-fpm ( see table [ tab : server - settings ] ) ..details of server implementation settings . [ cols="^,^ " , ] inter - process communication was implemented via unix sockets .the machine running the clients had 8 cpu cores ( at 2.93 ghz ) and 64 gb of ram . to eliminate the unpredictability of network latency , we modified the server implementation by replacing the osn communicator module with the local service that provided social graph information based on the mhrw dataset .we populated the bindings db with capabilities generated for the 120,000 _ sampled users _ from the mhrw dataset .the capabilities of _ sampled users _ together with capabilities generated for ersatz nodes constituted about 10 million data items that were stored in the bindings db .finally , to minimize impact of the client - server transmission delay , we kept the server and client machines in the same network and connected them using a 1 gbit / s ethernet link via the single switch . * experiments . *we evaluated server performance by sending bursts of requests per second , for , from the client machine to fetch capabilities from the server ( i.e. , download of in figure [ fig : distance - capability - protocol ] ) for seconds .fetching capabilities involves many read operations on the bindings db , thus yielding the highest load on the server among all operations of the social palcapability distribution protocol . 
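As an illustration of this kind of load generation, below is a minimal sketch of a burst-based test client that fetches capabilities and records response latencies. The endpoint URL, rate, and duration are placeholders rather than the values or tooling used in our testbed, and the measurement logic is deliberately simplistic.

```python
import statistics
import threading
import time
import urllib.request

SERVER_URL = "https://sopal.example.org/capabilities"  # placeholder endpoint
RATE = 200       # requests issued per one-second burst (illustrative)
DURATION = 30    # number of bursts, i.e., seconds (illustrative)

def fetch(latencies, lock):
    """Issue one capability-fetch request and record its response latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(SERVER_URL, timeout=10) as resp:
            resp.read()
    except OSError:
        return  # failed or timed-out requests are simply not counted
    with lock:
        latencies.append(time.monotonic() - start)

def run_burst_test():
    latencies, lock, threads = [], threading.Lock(), []
    for _ in range(DURATION):
        burst_start = time.monotonic()
        for _ in range(RATE):
            t = threading.Thread(target=fetch, args=(latencies, lock))
            t.start()
            threads.append(t)
        # Pace the bursts: sleep for whatever remains of the current second.
        time.sleep(max(0.0, 1.0 - (time.monotonic() - burst_start)))
    for t in threads:
        t.join()
    if latencies:
        print(f"{len(latencies) / DURATION:.1f} responses/s, "
              f"median latency {statistics.median(latencies) * 1000:.1f} ms")

if __name__ == "__main__":
    run_burst_test()
```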
in each experiment , which we repeated ten times, we measured the number of received responses per second together with the latency of each response on the client machine , and cpu usage together with memory consumption on the server machine .[ fig : server - eval ] * results .* figure [ fig : server - eval ] illustrates the results of our experiments .we observe that requests per second yields a saturation point for the server .below requests / second , the number of responses per second and the response latency grow linearly .whereas , as depicted in figure [ subfig : recv - latency ] , above , we observe an exponential growth of the response latency and the constant number of received responses per second .figure [ subfig : cpu - ram ] also shows that cpu usage reaches more than above requests / second . peak memory consumption is about mb , which also shoots up significantly when the number of requests crosses the saturation point .the requests / second saturation point shows that the performance of our server implementation is in line with that emerging from studies of systems equipped with similar hardware .we also looked at the server performance when only handling request per second and observed that the average response latency is about , and the average bindings db interaction time is around .since client - server network latency is negligible , the vast majority of request handling takes place in the php interpreter , which highlights that php is the server s bottleneck .therefore , in order to improve the server performance , php5-fpm should be replaced with hhvm , which is reported to be significantly more performant .further gains could also be obtained by migrating the postgresql server to a separate machine connected over a fast link ( cf . fig .[ fig : server - scaling ] ) .we leave these as part of future work . assuming that the server handles requests per second , a total of million requests can be processed daily by a single instance of social palserver with comparable hardware capabilities . assuming that each user executes the social palcapability distribution protocol around times a day , about million social palusers can be handled by one social palserver instance . since user requests are independent of each other , and because the scalable architecture of the social palserver allows adding further instances easily ( as described in section [ sec : server - arch ] ) , the total capacity of social palsystem amounts to the cumulative number of users that can be handled across all social palserver instances . finally in order to avoid making postgresql become the bottleneck of the system ( which may be caused if many social palserver instances are added ) , the bindings db should be turned into a database cluster with data sharding and replication enabled .this guarantees that the data kept in the bindings db is synchronized and accessible with high enough availability . *client architecture . * the client - side architecture of social palis depicted in figure [ fig : client - arch ] .it consists of the _ common apps client _ , the set of osn plugins , and the _ social palclient_. the first two components are responsible for the communication with the social palserver , while the last one provides the interface for the applications . together ,these components form a mobile platform library that can be easily imported by developers into their applications . 
to facilitate support for multiple osns ,similar to our server design , we have decoupled osn - specific functionality from the common apps client and made it a plugin - based solution . we have considered two possible design choices for the client architecture : ( 1 ) designing it as a stand - alone service with applications connecting to it , using available inter - process communication mechanisms , or ( 2 ) as a service integrated into the application .we choose the latter as it supports application - private storage for capabilities ( i.e. , not accessible by other applications ) and enables each application to have its own social palserver .this choice provides additional protection against capability leakage to a malicious application and removes the requirement to deploy the global social palserver . on the other hand ,if multiple instances of social palapplication runs on the same device , we would incur increased network traffic and require more storage space in comparison to the stand - alone service approach .we argue that this tradeoff is acceptable , as the social palcapability distribution is run no more than a few times a day .also , the amount of data to store is likely to be limited in the order of tens of megabytes , which is justifiable given the clear usability and deployability advantages . * implementation details and performance .* we have implemented the client library on android , operating as an android lightweight service . to evaluate the performance on the client , we measured running times of the social paldiscovery protocolon a samsung galaxy tablet gt - p3100 running android 4.1.2 api 16 and a zte blade s6 running android 5.0 api 21 connected via bluetooth .we assumed that both parties have the same number of input items , ranging from 1000 to 35000 ( with 5000 increments ) .we also fixed the intersection of the sets to be of the set size .table [ tab : perf ] shows average computation and communication times . * social palclient api . *the social palapplication interface is used by applications to run the social paldiscovery protocol .it has been designed to be readily usable by application developers that are not cryptography experts , but are nonetheless interested in implementing privacy - preserving discovery of social paths .this allows developers to delegate the responsibility of this process to the social palclient , and requires them to integrate only four basic methods into the application code , which we present below .applications running the social paldiscovery protocol act as social palmessage relays between the two social palclient instances .the application starting the social paldiscovery session calls the ` startsopalsession ` method .this returns an opaque social palobject , which is forwarded to the remote party .from this point onward , both parties invoke ` handlesopalmessage ` for every message received .this method processes the received message , and if needed , creates a response .it also returns a flag indicating if the protocol execution is completed .if so , the application uses the ` getresult ` method to get the social path length it has to a remote party . 
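To illustrate how these calls fit together, the sketch below shows the overall call sequence an application might follow while relaying opaque Social PaL messages to the remote party. The real client library is an Android (Java) component, so this Python rendering is purely illustrative: the method names are transliterated from those listed here, and the return values and arguments are assumptions rather than the library's actual signatures. The final session-teardown call it uses, ` endsopalsession `, is described next.

```python
# Hypothetical rendering of the Social PaL client call sequence; signatures are assumed.

def run_discovery(sopal_client, send_to_peer, receive_from_peer):
    """Drive one Social PaL discovery session, relaying opaque messages to a peer."""
    first_msg = sopal_client.start_sopal_session()         # opaque Social PaL message
    send_to_peer(first_msg)
    finished = False
    while not finished:
        incoming = receive_from_peer()                      # e.g., over Bluetooth
        finished, reply = sopal_client.handle_sopal_message(incoming)
        if reply is not None:
            send_to_peer(reply)
    path_length = sopal_client.get_result()                 # social path length to the peer
    sopal_client.end_sopal_session()                        # release session resources
    return path_length
```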
finally , the application must call ` endsopalsession ` to let the social palclient release all resources from the session .besides these four basic methods , the social palclient also provides three advanced methods : ( 1 ) ` rejectsopalsession ` creates a social palmessage that can be sent by the application to the remote party if it does not want to run the discovery protocol ; ( 2 ) ` updatecapabilities ` and ( 3 ) ` renewcapability ` can be used by the application to force fetching the most recent capabilities from the server , and to generate and upload a new capability to the server , respectively .table [ tab : sopal - iface ] summarizes all the methods available in social palclient .[ fig : apps ] to illustrate social pal s relevance and practicality , we integrate it into two android apps , ` spotshare`and ` nearbypeople ` , supporting both facebook and linkedin .* ` spotshare ` * is an extension of tetheringapp , presented in .it allows a user to provide tethered internet access to other ` spotshare`users ( where access to the tethering hotspot is protected by a password ) so that access control policies can be based on social relationships .for instance , the user can decide to allow tethered access only to friends of friends : to this end , ` spotshare`uses social palin order to determine , in a privacy - preserving way , if the specified social relationship holds .if so , the password is securely and automatically sent to the requesting device .in the current version of ` spotshare ` , we do not enable discovery of social paths beyond two hops , as we assume that most users would not want to allow people with whom they have no common friends to tether off their smartphone , but removing this constraint is trivial . in figure [ subfig :spotshare ] , we present two screenshots of the app .* ` nearbypeople ` * is a `` friend radar '' app allowing users to interact with people around them and discovering common friends shared with users of nearby devices , as well as social path lengths , without having to broadcast their social profiles or rely on a central server .it relies on the privacy guarantees of social paland the scampi opportunistic router for device - to - device communication . a preliminary version of the app was successfully tested at the acm ccs workshop on smart energy grid security workshop . in figure[ subfig : nearbypeople ] , we also show two screenshots .* privately discovering social relationships .* nagy et al . introduce common friends , reviewed in section [ subsec : system - model ] , combining bearer capabilities with bfpsi / psi - ca to allow osn users to discover , respectively , the identity or the number of their common friends in a private , authentic , and decentralized way . while we build on the concept of capabilities and rely on bfpsi , recall that common friendssuffers from an inherent bootstrapping problem and is limited to the discovery of social paths to osn users that are two hops away .our work does not only address common friends s limitations via a novel methodology , but also presents the full - blown implementation of a scalable server architecture and a modular android client library enabling developers to easily integrate social palinto their applications .mezzour et al . 
also describe techniques for decentralized path discovery in social networks .they use a notion similar to capabilities to represent friendships and hashing to derive higher - order capabilities , however , their scheme distributes a _ different _ capability on behalf of a given user to every other user , while social paldistributes the same capability to all users at a given distance . s computational / communication overhead is significantly higher than that of social pal : the former requires two psi runs , with sets of size equal to the total number of paths from a node up to the maximum supported path length , whereas , the latter only requires a single bfpsi run , with input sets as big as the number of paths that have length equal to half the maximum supported path length .. ] length .table [ tab : input - study ] compares the expected number of input items for social paland based on the datasets introduced in section [ sec : evaluation ] .furthermore , incurs the same bootstrapping problem as : if a friend of user does not participate in the system , can not detect paths to some other user that go through .finally , aims to build a decentralized social network , while we aim to bootstrap the system based on existing centralized social networks .[ tab : input - study ] liao et al . present a privacy - preserving social matching protocol based on property - preserving encryption ( ppe ) , which however relies on a centralized approach .li et al . then propose a set of protocols for privacy - preserving matching of attribute sets of different osn users .similar work include , , and .private friend discovery has also been investigated in and , which do not provide authenticity as they are vulnerable to malicious users claiming non - existent friendships .while addresses the authenticity problem , it unfortunately comes at the cost of relying on relatively expensive cryptographic techniques ( specifically , a number of modular exponentiations linear in the size of friend lists and a quadratic number of modular multiplications ) .smokescreen , smile , and pike support secure / private device - to - device handshake and proximity - based communication .lentz et al .introduce sddr , which allows a device to establish a _ secure encounter _ i.e. , a secret key with every device in short radio range , and can be used to recognize previously encountered users , while providing strong unlinkability guarantees . the encore platform builds on sddr to provide privacy - preserving interaction between nearby devices , as well as event - based communication for mobile social applications . * building on social relationships . *prior work has also focused on building services on top of existing social relationships .cici et al . use osns to assess the potential of ride - sharing services , showing that these would be very successful if users shared rides with friends of their friends. sirivianos et al . propose a collaborative spam mitigation system leveraging social networks of administrators , while and use osns to verify the veracity of online assertions .freedman and nicolosi describe a system using social network for trust establishment in the context of email white - listing , by verifying the existence of common friends .besides not discovering paths longer than two , also does not address the issue of friendships authenticity unlike social pal .daly et al . 
present a routing protocol ( called simbet ) for dtn networks based on social network data . their protocol attempts to identify a routing bridge node based on the concept of centrality and transitivity of social networks . li et al . design another dtn routing protocol ( called social selfishness aware routing ) which takes into account users ' social selfishness and willingness to forward data only to nodes with sufficiently strong social ties . other works also propose adjusting message forwarding based on some social metrics . * osn properties . * another line of work has studied properties of osns . ugander et al . and backstrom et al . study the structure of the facebook social graph , revealing that the average social path length suggested by the `` small world experiment '' ( i.e. , six ) does not apply to facebook , as the majority of people are separated by a 4-hop path . gilbert et al . define the relationship between tie strengths ( i.e. , the importance of a social relationship between two users ) and various variables retrieved from the osn social graph . in , arnaboldi et al . investigate the link between the tie strength definition ( given by granovetter ) and a composition of factors describing the emotional closeness in online relationships . they demonstrate the existence of the _ dunbar number _ ( i.e. , the maximum number of people a user can actively interact with ) for facebook . in follow - up work , they also show the existence of four hierarchical layers of social relationships inside ego networks . existence of the dunbar number is also shown for twitter in . finally , saramäki et al . find an uneven distribution of tie strengths within ego networks that is characterised by the presence of a few strong and a majority of weak ties . this paper presented social pal , a system geared to privately estimate the social path length between two social network users . we demonstrated its effectiveness both analytically and empirically , showing that , for any two osn users , social pal discovers all social paths of length two and a significant portion of longer paths . using different samples of the facebook graph , we showed that even when only 20% of the osn users use the system , we discover more than 40% of all paths between any two users , and 70% with 40% of users . we also implemented a scalable server - side architecture and a modular client library bringing social pal to the real world . our deployment supports facebook and linkedin integration and allows developers to easily incorporate it in their projects . social pal can be used in a number of applications where , by relying on the ( privacy - preserving ) estimation of social path length , users can make informed trust and access control decisions . * acknowledgments . * we thank minas gjoka and michael sirivianos for sharing the facebook datasets , swapnil udar for helping with the ` spotshare ` implementation , and jussi kangasharju , pasi sarolahti , cecilia mascolo , panos papadimitratos , and narges yousefnezhad for providing feedback on the paper . simon eberz suggested the idea of signed bloom filters discussed in section [ subsec : privacy - consideration ] . this work was partially supported by the academy of finland 's `` contextual security '' project ( 274951 ) , the ec fp7 precious project ( 611366 ) , and the eit ict labs .
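as a closing illustration of the capability - based discovery discussed above , the following python sketch models the hash - chain idea in plaintext : each user distributes the same random bearer capability to all friends , users further away hold a hashed version of it , and two devices estimate their social path length from the hop counts attached to the tokens they both hold . this is not the social pal implementation ( an android library that runs a single bfpsi instance ) ; the function names , the hop limit and in particular the canonicalisation step used to match tokens received at different distances are our own simplifications for the sketch .

```python
import os
import hashlib

HOP_LIMIT = 3  # deepest chain level we canonicalise to; an assumption of this sketch


def new_capability():
    """A user's bearer capability: a fresh random token, periodically renewed."""
    return os.urandom(16)


def hashed(token, times):
    """Apply the hash chain `times` times."""
    for _ in range(times):
        token = hashlib.sha256(token).digest()
    return token


def psi_input(held):
    """`held` maps each stored token to the hop distance it represents (a token
    held at distance d is the owner's capability hashed d - 1 times). Hashing
    every token down to a common depth makes tokens originating from the same
    capability collide regardless of the distance at which they were received,
    while keeping the local hop count alongside the canonical value."""
    return {hashed(token, HOP_LIMIT - d): d for token, d in held.items()}


def path_length(a, b):
    """Plaintext stand-in for the single BFPSI run: a canonical token present
    on both sides witnesses a path whose length is the sum of the hop counts."""
    common = set(a) & set(b)
    return min(a[t] + b[t] for t in common) if common else None


# toy scenario: alice -- carol -- bob (length 2) and alice -- dave -- ? -- bob (length 3)
c_carol, c_dave = new_capability(), new_capability()
alice_held = {c_carol: 1, c_dave: 1}            # carol and dave are alice's friends
bob_held = {c_carol: 1, hashed(c_dave, 1): 2}   # carol a friend, dave a friend of a friend
print(path_length(psi_input(alice_held), psi_input(bob_held)))  # -> 2, via carol
```

in the actual protocol the intersection is computed privately , so neither party learns the tokens ( and hence the friends ) it does not share with the other ; the sketch only mirrors the bookkeeping that determines the reported path length .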
social relationships are a natural basis on which humans make trust decisions . online social networks ( osns ) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships . while most osns allow users to discover the length of the social path to other users , they do so in a centralized way , thus requiring them to rely on the service provider and reveal their interest in each other . this paper presents social pal , a system supporting the privacy - preserving discovery of arbitrary - length social paths between any two social network users . we overcome the bootstrapping problem encountered in all related prior work , demonstrating that social pal allows its users to find all paths of length two and to discover a significant fraction of longer paths , even when only a small fraction of osn users is in the social pal system , e.g. , discovering 70% of all paths with only 40% of the users . we implement social pal using a scalable server - side architecture and a modular android client library , allowing developers to seamlessly integrate it into their apps .
the control specifications of modern industrial systems are becoming successively more and more complex ; in addition to the traditional requirements of achieving stabilization , tracking and possibly some form of optimized behavior , contemporary specifications may include safety constraints , human interference , temporal logic statements , and start - up procedures , among others . in the 1990 s ,hybrid systems were introduced , in part , in order to provide a formalized system theoretic framework to handle such complex specifications .the reason for this is that hybrid systems combine both continuous dynamics and discrete events , and so they provide a suitable framework for the development of formal verification and synthesis methods for achieving complex specifications .recently , special interest has been given to an important class of hybrid systems , namely piecewise affine ( pwa ) hybrid systems , since this class of systems has desirable control theoretic properties which hold when simple verifiable conditions are satisfied , can approximate nonlinear systems with arbitrary accuracy , and can be easily identified from experimental data .interesting applications of pwa control systems can be found in .a pwa hybrid system is typically expressed by a discrete automaton such that inside each discrete mode of the automaton , the system is described by affine dynamics defined on a full - dimensional polytope .if the state trajectory of the continuous affine dynamics reaches a prescribed exit facet of the polytope , the pwa hybrid system is transferred to a new discrete mode , in which it evolves according to new affine dynamics in a successive polytope , and so on .hence , the study of pwa hybrid systems at the continuous level reduces to the study of affine dynamics on full - dimensional polytopes . in , we introduce the study of the in - block controllability ( ibc ) of affine systems on polytopes . 
in particular , for a given affine system and a given full - dimensional polytope , we study whether all the states in the interior of are mutually accessible through its interior by applying uniformly bounded control inputs .the motivation behind the ibc notion is that it formalizes controllability under safety state constraints .moreover , we show in that if one constructs a special partition / cover of the state space of pwa hybrid systems / nonlinear systems , in which each region satisfies the ibc property , then one can systematically study controllability properties and build hierarchical control structures of these complex systems .these hierarchical structures are typically used for synthesizing correct - by - design controllers enforcing formal logic specifications .one advantage of the ibc hierarchical structures is that they take into account the fact that different states in a partition / cover region typically need different inputs to be steered to neighborhood regions , possibly over different time horizons .thus , the ibc hierarchical methods do not require the partition / cover regions to be of very small size , they end up with reasonable number of regions in the partitions / covers , and hence , they may have good potential to be extended to high - dimensional systems .furthermore , the ibc notion is also useful in the context of optimal control problems .in particular , if it is required to find an optimal trajectory connecting two states in the interior of a polytope representing the system s state constraints , then it may be useful to first study ibc to verify that there exists a feasible solution trajectory connecting each pair of states .then , one may apply the pontryagin s minimum principle to find the optimal path .however we have found many examples in which the given affine system is not ibc through the interior of the given polytope , but the mutual accessibility property is achieved if we relax the problem little bit and allow trajectories starting in the interior of to visit a neighborhood of in the transient .this is acceptable in many practical scenarios in which one can distinguish between two types of constraints : soft constraints and hard constraints .the soft constraints may form the region of the nominal operating states of the system , while the hard constraints may represent the strict safety constraints which can not be violated even in the transient period . hence, it is reasonable to study mutual accessibility of the states in the interior of a polytope , formed by the soft constraints , through the interior of a bigger polytope ( ) , formed by the hard constraints .this motivates us to introduce and study the relaxed in - block controllability ( ribc ) notion in this paper .the study of ribc is also the first step in extending the hierarchical structures / controllability results in based on the new , relaxed notion .the study of controllability is fundamental to modern control systems ; as is well - known , for linear systems , algebraic conditions were provided in the 1960 s .restricting our attention here to controllability of linear systems subject to constraints , we next cite and , in which controllability under input constraints was studied . in ,controllability of continuous - time linear systems with input and/or state constraints was studied under the assumption that the transfer matrix of the system is right - invertible . 
under the same assumption , studied null controllability of discrete - time linear systems with input and/or state constraints .the ibc notion formalizes the study of controllability under state constraints .the notion was first introduced in for finite state machines , and was then extended in for continuous nonlinear systems on closed sets and in for automata . in these papers ,the ibc notion is used to build hierarchical structures of the systems , but these papers do not study conditions for the ibc property to hold .it is worth mentioning that the ibc concept and its associated between block controllability ( bbc ) notion in are entirely different from the bisimulation notion , also used for constructing system abstractions .in addition to having different axioms , the methods of utilizing these notions to construct the abstractions are different .for instance , while the bisimulation - based methods typically use overapproximation of reachable sets to calculate the abstraction , this is not needed for the ibc hierarchical abstractions . in , we provide three easily checkable necessary and sufficient conditions for ibc to hold for affine systems on polytopes .we then use the results of to study controllability and build hierarchical structures of piecewise affine hybrid systems , and to systematically achieve approximate mutual accessibility properties of nonlinear systems under safety constraints . in , we provide computationally efficient algorithms for building polytopic regions satisfying the ibc property , while in , we extend the ibc conditions to controlled switched linear systems having both continuous inputs and on / off control switches . we here extend the results concerning the ibc properties developed in to the case where there are soft and hard safety constraints as discussed above . while the ibc notion was used before in ,the ribc notion is novel . after defining ribc, we explore the geometry of the problem and provide for all the possible geometric cases necessary conditions for ribc .then , we show where these conditions are also sufficient .several illustrative examples are given to clarify the main results .for this study , we exploit some geometric tools used in the study of the controlled invariance problem and the reach control problems . in spite of using similar geometric tools in studying ribc, our problem is different . unlike the controlled invariance problem, we do not force all trajectories starting in to remain in itself , and we have the additional requirement of achieving mutual accessibility .also , unlike the reach control problem , in ribc , we do not try to force the trajectories of the affine system to exit the polytope in finite time through a prescribed facet .the paper is organized as follows .section [ sec : back ] provides some relevant , mathematical preliminaries . in section [ sec : inblock ] , we briefly review ibc . in section [ sec : rel_inblock ] , we define ribc and explore necessary and sufficient conditions for it .section [ sec : ex ] provides examples of the main results . a brief version of this paper appeared in .here we include more results and discussions .for instance , we provide here two other cases where the found necessary conditions are also sufficient .we also provide here computational aspects , complete proofs , and a section on examples with simulation results . _notation_. let be a set .the closure denotes , the interior is denoted , and the boundary is . the notation denotes the affine dimension of . 
for vectors ,the notation denotes the inner product of the two vectors .the notation denotes the euclidean norm of . for two subspaces , .the notation denotes the convex hull of , while the notation denotes the convex hull of a set of points .finally , denotes the open ball of radius centered at .we provide the relevant geometric background .a set is said to be _ affine _ if for every and every , we have . moreover , if , then is a subspace of . a _hyperplane _ is an -dimensional affine set in , and it divides into two open half - spaces . an_ affine hull _ of a set , , is the smallest affine set containing .we mean by a dimension of a set its _ affine dimension _ , the dimension of .a finite collection of vectors is called _ affinely independent _ if the unique solution to and is for all .if is affinely independent , then these vectors do not lie in a common hyperplane .-dimensional simplex _ is the convex hull of affinely independent points in , and it generalizes the triangle notion in 2d to arbitrary dimensions . an -dimensional _ polytope _ is the convex hull of a finite set of points in , with dimension . in particular , let be a set of points in , where , and suppose that contains ( at least ) affinely independent points .we denote the -dimensional polytope generated by by .note that an -dimensional simplex is a special case of with .a _ face _ of is any intersection of with a closed half - space such that none of the interior points of lie on the boundary of the half - space . according to this definition ,the polytope and the empty set are considered as trivial faces , and we call all other faces _ proper faces_. a _ facet _ of is an -dimensional face of .we denote the facets of by , and we use to denote the unit normal vector to pointing outside .an -dimensional polytope is _ simplicial _ if all its facets are simplices .it is clear that any two - dimensional convex polytope is simplicial . for higher dimensions ,convex compact sets can be approximated by simplicial polytopes with arbitrary accuracy .we conclude this section by reviewing the definition of the bouligand tangent cone of a closed set .let be a closed set .we define the distance function .the _ bouligand tangent cone _( or simply tangent cone ) to at , denoted , is defined by if is convex , so is .in this section we briefly review the in - block controllability ( ibc ) , and then we provide a motivating example for defining the relaxed in - block controllability ( ribc ) in the next section .consider the affine control system : where , , , and . in this paper, we assume that the control input is measurable and bounded on any compact time interval to ensure the existence and uniqueness of the solutions of .let , the image of , and let be the trajectory of , under a control input , with initial condition , and evaluated at time instant .we review the ibc definition ( after ) .[ prob0 ] consider the affine control system on an -dimensional polytope .we say that is in - block controllable ( ibc ) w.r.t . if there exists such that for all , there exist and a control input defined on ] , and ( ii ) . in , it is shown that studying ibc of an affine system on a given polytope is equivalent to studying ibc of a linear system on a new polytope satisfying ( without loss of generality ( w.l.o.g . ) we use the same notation for the new polytope ) .we review the main result of . 
to that end ,let and .that is , is the set of indices of the facets of , while is the set of indices of the facets of in which is a point .we define the closed , tangent cone to the polytope at as , where is the unit normal vector to pointing outside .[ thm : main_ibc ] consider the system defined on an -dimensional simplicial polytope satisfying .the system is ibc w.r.t . if and only if * is controllable .* the so - called invariance conditions of are solvable ( that is , for each vertex , there exists such that ) .* the so - called backward invariance conditions of are solvable ( that is , for each vertex , there exists such that ) . in , it is also shown that conditions ( i)-(iii ) of theorem [ thm : main_ibc ] are necessary for non - simplicial polytopes .notice that checking solvability of the invariance conditions ( or the backward invariance conditions ) can be simply carried out by solving a linear programming ( lp ) problem at each vertex of . using a straightforward convexity argument , it can be shown that solvability of the invariance conditions ( or the backward invariance conditions ) at the vertices implies that they are solvable at all the boundary points of .nevertheless , the ibc notion is a restrictive one .in particular , we have found in many examples that achieving the conditions ( ii ) , ( iii ) of theorem [ thm : main_ibc ] simultaneously is quite difficult .this motivates us to propose a relaxation of ibc in this paper .in particular , suppose that condition ( ii ) or ( iii ) of theorem [ thm : main_ibc ] is not achieved .the question arises of whether it is still possible to achieve mutual accessibility of points in through a bigger polytope . to that end , let , and define , a -scaled version of .we start with the following technical result that shows if condition ( i ) of theorem [ thm : main_ibc ] is achieved , then as expected there is always a bigger polytope through which the points of are mutually accessible .[ lem : tec1 ] consider the system and an -dimensional convex compact set satisfying .if is controllable , then there exist and such that for all , there exist and a control input defined on ] , and .see the appendix .the following example shows that not every works , and so a careful study is needed to investigate when a given has the desired properties .[ ex : mot2 ] consider the system x + \left [ \begin{array}{rr } 1\\1\end{array } \right ] u \,\ ] ] and a polytope shown in figure [ fig : ex2 ] where , , , and .first , we check whether is ibc w.r.t .it is easy to verify that is controllable .next , we check solvability of the invariance conditions and the backward invariance conditions of . at , we have .solvability of the backward invariance conditions of at requires the existence of such that .this yields and , where and as shown in figure [ fig : ex2 ] .that is , and respectively .therefore , there does not exist that satisfies the backward invariance conditions of at . from theorem [ thm : main_ibc ] , the system is not ibc w.r.t . .next , we investigate whether in this example , defined in lemma [ lem : tec1 ] , can be selected arbitrarily close to .let .we make several observations .first , we have , the perpendicular subspace to , i.e. .then , we have , which is negative for any having whatever is selected .let .it can be easily verified that and . 
since is controllable , we know from lemma [ lem : tec1 ] there exists such that all the points in are mutually accessible through by applying uniformly bounded control inputs .in particular , there exists a state trajectory connecting the origin to in finite time through .equivalently , there is a state trajectory of the backward dynamics that connects to the origin in finite time through . since and when is negative , it follows that the state trajectory of the backward dynamics starting at must cross the -axis before reaching .let the point denote the intersection of the state trajectory with the -axis . since the -component of the state trajectory of the backward dynamics is decreasing as long as is still negative , then .since has a zero -component , then the -component of must be greater than .recall that , and so clearly .we conclude can not be selected arbitrarily close to in this example .9999 inspired by the above example , we define in the next section the relaxed in - block controllability ( ribc ) . in particular , to tackle this problem, we assume that we have in hand a given polytope satisfying and study whether all the states in are mutually accessible through using uniformly bounded inputs .inspired by our discussion in the previous section , we define the relaxed in - block controllability ( ribc ) as follows .[ prob0_rel ] consider the affine control system and -dimensional polytopes such that .we say that is relaxed in - block controllable ( ribc ) w.r.t . through if there exists such that for all , there exist and a control input defined on ] , and ( ii ) .following , we use a geometric approach in studying ribc in this paper .in particular , we define the set of possible equilibria of as follows : the vector field of the system can vanish at any for a proper selection of .in fact , is the set of all possible equilibrium points of , i.e. if is an equilibrium of under feedback control , then .it can be shown that is closed and affine . also , if , then has dimension .similarly to the case of ibc , the location of the set with respect to affects ribc .thus , in order to simplify our study of ribc , we classify our study of ribc into three geometric cases based on the location of with respect to , namely ( i ) case ( a ) : , ( ii ) case ( b ) : , and ( iii ) case ( c ) : but as shown in figure [ fig : o ] .this geometric situation is shown in figure [ fig : o](a ) .for this case , the following result shows that the affine system is not ribc w.r.t . through .[ thm : casea ] consider the affine control system and -dimensional polytopes such that .if , then the system is not ribc w.r.t . through .the proof is similar to the proof of theorem 3.1 of .we provide a brief sketch of it in the appendix for completeness of this draft version of the paper .an illustrative example that clarifies the proof of theorem [ thm : casea ] is provided in section [ sec : ex ] ( see example [ ex2_sec ] ) .this geometric situation is shown in figure [ fig : o](b ) .for this case , select , and let be such that .this is always possible since .define and .the dynamics in the new coordinates are .therefore , for this geometric case , we can assume w.l.o.g . that we study conditions for ribc of the linear system w.r.t .a polytope through a polytope such that .[ thm : caseb_1 ] consider the system and -dimensional polytopes such that . if is ribc w.r.t . 
through , then is controllable .the proof is the same as the proof of theorem 4.1 of .a sketch of the proof is provided in the appendix for completeness of this draft version .next , we present a second necessary condition for ribc in this case . to that end , for a convex compact set , we say that _ the invariance conditions of are solvable _ ( w.r.t . the system ) if for each , there exists such that , the bouligand tangent cone to at ( notice that if is an -dimensional polytope , then , which is defined in the previous section ) .[ thm : caseb_2 ] consider the system and -dimensional polytopes such that . if is ribc w.r.t . through , then there exists an -dimensional compact convex set such that and the invariance conditions of are solvable . by assumption , for each , there exists a state trajectory that connects to in finite time through .therefore , we have ,~x_0\in x^{\circ}\right\}\subseteq x'^{\circ} ] such that ( where is as defined in definition [ prob0_rel ] ) , for all ] . for ,~x_0\in x^{\circ}\right\} ] . by the definition of the convex hull, there exist points ,~x_0\in x^{\circ}\right\} ] .the control sequence drives the system from to in finite time through , then keeps the system at till time . now starting at ,apply the control sequence , for all ] , we have for all ] such that and for all ] such that and for all ] , where form a basis for the controllability subspace ( is the controllability matrix ) , and form a basis for .then , let .the dynamics in the new coordinates are : = \left [ \begin{array}{cc } a_{11 } & a_{12 } \\ 0 & a_{22 } \end{array } \right ] \left [ \begin{array}{rr } z_1\\ z_2\end{array } \right ] + \left [ \begin{array}{rr } b_1\\0\end{array } \right ] u \,,\ ] ] where , , , , , and .let denote the polytopes expressed in the new coordinates . clearly , and . by assumption ,any two states are mutually accessible through .we study all the possible cases of the eigenvalues of , and for each case , we reach a contradiction . * the subsystem is unstable : let be arbitrary . by assumption , there exist bounded control inputs that connect to in finite time through .also , there exist bounded control inputs that connect to in finite time through . therefore , starting at , the state trajectory , , can be bounded by a proper selection of .this can only happen if has zero components in the directions of the eigenvectors associated with the unstable eigenvalues of .but since is arbitrary , then it must be that all points in have zero components in the directions of the eigenvectors associated with the unstable eigenvalues of , which clearly contradicts the fact that . *the matrix has an eigenvalue with a negative real part : similar to the previous case , it can be shown that starting from any , the state trajectory of the backward dynamics can be bounded by a proper selection of .but , this is impossible since the backward dynamics have an uncontrollable eigenvalue with a positive real part ( an eigenvalue of the matrix ) , and so similar to the previous case , we reach a contradiction . *the matrix has an eigenvalue : by converting the dynamics to the jordan form , it can be shown there exists such that for any and any . therefore , starting at any , the state trajectory has a fixed -component whatevercontrol input is selected . then , since is an -dimensional polytope , there exists such that .this implies is not accessible from whatever control input is selected , a contradiction . 
*all the eigenvalues of are complex with zero real part : for the repeated eigenvalues , we assume that the associated eigenvectors are linearly independent . for otherwise , the subsystem is unstable ( case ( i ) of the proof ) .then , by a standard argument , there exist two perpendicular directions such that , for all , where denotes the state trajectory of under some control input starting at a point .let be such that .this is always possible since is an -dimensional polytope .also , since , there exists such that .now , define , where is selected sufficiently large such that , and the -components of and have the same sign . then , since are perpendicular , .thus , the value of evaluated at is different from its value at .hence , starting at , the state trajectory can not reach , a contradiction . since in all the above cases we reach a contradiction, we conclude that is controllable .next , we present a second necessary condition for ribc in this case .recall that for system on a compact convex set , we say the invariance conditions of are solvable if for each , there exists such that , and we say the backward invariance conditions of are solvable if for each , there exists such that .[ thm : casec_2 ] consider the affine control system and -dimensional polytopes such that , , and . if is ribc w.r.t . through , then there exists an -dimensional compact convex set such that ( i ) , ( ii ) , ( iii ) the invariance conditions of are solvable , and ( iv ) the backward invariance conditions of are solvable . by assumption , there exists such that for every , there exist and a control input defined on ] , and .therefore , we have ,~x,~y\in x^{\circ}\right\ } \subseteq x'^{\circ}.\ ] ] since is convex , then we have let .first , we claim that . for if not , then it can be shown using an argument similar to the proof of theorem [ thm : casea ] that states of are not mutually accessible through , a contradiction .second , let , and let be such that .this is always possible since .define and .the dynamics in the new coordinates are .let , and denote the representations of the sets , and in the new coordinates , respectively . clearly , and . also , we have .next , we show there exists such that for each , there exists a control input such that and for all . for , the state trajectory can remain in for all by applying the uniformly bounded control inputs that connect to a point in , say , through , and then from apply the uniformly bounded control inputs that connect to through , and repeat this procedure for all future time . instead , if but , then using an argument similar to the one used in the proof of theorem [ thm : caseb_2 ] , there exists a control input such that ( where is as defined in definition [ prob0_rel ] ) and for all . using a similar argument , it can be also shown that there exists such that for each , there exists a control input such that and the state trajectory of the backward dynamics , denoted , satisfies for all .now let .clearly , is convex and closed . also , since , then , and so is bounded , hence compact .moreover , , and so is an -dimensional set .furthermore , , and so .next , we need to show that the invariance conditions of are solvable . aided with the property proved in the previous paragraph , this follows from the proof of theorem 4.3 and remark 4.2 of .similarly , the invariance conditions of w.r.t . 
the backward dynamics ( the backward invariance conditions ) can be proved .we conclude is an -dimensional compact convex set satisfying the conditions ( i)-(iv ) of the theorem .now we present the main result for this geometric case .[ thm : casec_main ] consider the affine control system and -dimensional polytopes such that , , and . if is ribc w.r.t . through , then the following conditions hold : * is controllable .* there exists an -dimensional convex compact set such that , and the invariance conditions of are solvable .* there exists an -dimensional convex compact set such that , and the backward invariance conditions of are solvable .* the sets and in conditions ( ii ) , ( iii ) satisfy . moreover , if are simplicial polytopes , then the conditions ( i)-(iv ) are also sufficient for ribc .the necessity of ( i ) follows from theorem [ thm : casec_1 ] .then , the necessity of ( ii)-(iv ) follows from theorem [ thm : casec_2 ] ( notice that in theorem [ thm : casec_2 ] , and ) .then , assume the conditions ( i)-(iv ) are achieved , and are simplicial polytopes .we need to show is ribc w.r.t . through .let , and let be such that .this is possible since .define and .the dynamics in the new coordinates are .let , and represent the sets , and expressed in the new coordinates , respectively . clearly , , , and .the rest of the proof is the same as the proof of the sufficiency part of theorem [ thm : caseb_main ] . similarly to remark [ rem : suff ] , it can be verified that conditions ( i)-(iv ) of theorem [ thm : casec_main ] are also sufficient under the conditions mentioned in remark [ rem : suff ] .notice that the conditions of theorem [ thm : caseb_main ] ( for case b : ) and theorem [ thm : casec_main ] ( for case c : but ) are almost the same .the difference is that in theorem [ thm : casec_main ] we need to verify that satisfy , while in theorem [ thm : caseb_main ] this condition is automatically achieved since and for the geometric case b , .[ ex : casec_2 ] consider the system x + \left [ \begin{array}{rr } 0\\1\end{array } \right ] u \,,\ ] ] and polytopes , shown in figure [ fig : ex5 ] , where and .it is required to study whether is ribc w.r.t . through .first , we calculate the set of possible equilibria .we have , the axis .thus , in this example but ( case c ) .next , it can be easily verified that is controllable .now let , and .then , let . clearly , . by solving an lp program at each vertex of , we find that the control inputs , and satisfy the invariance conditions of at the vertices .thus , the invariance conditions of are solvable .similarly , it can be shown the backward invariance conditions of are solvable .therefore , the conditions ( i)-(iv ) of theorem [ thm : casec_main ] are achieved ( with ) .we conclude from theorem [ thm : casec_main ] that is ribc w.r.t . 
through in this example .in this section , we provide three examples to show the motivations behind the relaxed ibc notion , and to clarify the main results of the paper .[ ex1_sec ] consider a cart moving on a bounded table .let denote the position of the cart , denote the input force , denote the mass of the cart , and denote the coefficient of friction .the state space model of the system is : = \left [ \begin{array}{cc } 0 & 1 \\ 0 & -\frac{b}{m } \end{array } \right ] \left[\begin{array}{cc } x_1\\ x_2 \end{array}\right ] + \left [ \begin{array}{rr } 0\\\frac{1}{m}\end{array } \right ] u.\ ] ] suppose that in this example , we have strict safety constraints : , that should not be violated even in the transient period .these safety constraints define the polytope shown in figure [ fig : sec_ex1_2 ] , where , , , and . in the presence of these strict safety constraints , kalman s controllability can not be used to study mutual accessibility of states . instead , we first study ibc of the system w.r.t . . although is controllable , at the vertex , we have , and , where , whatever is selected .thus , the invariance conditions of are not solvable at , and from theorem [ thm : main_ibc ] , the system is not ibc w.r.t .+ next , we relax our control objective as follows .suppose that the nominal operation of the system requires the soft constraints and , resulting in the polytope shown in figure [ fig : sec_ex1_2 ] , where , , , and .starting from any initial position and speed in , can we reach any final position and speed in , without violating the strict safety constraints ( through ) ? to that end , we study relaxed ibc of the system w.r.t . through . in this example, we have and .hence , ( case b ) . since is controllable , it remains to identify the sets in theorem [ thm : caseb_main ] to show relaxed ibc .notice that , and so . by following the procedure in the first part of remark [ rem : inv ], we construct the invariant polyope , where and .one can also verify that the backward invariance conditions of are solvable .so , we have , and from theorem [ thm : caseb_main ] , the system is relaxed ibc w.r.t . through .9999 [ ex2_sec ] we consider the mechanical system shown in figure [ fig : sec_ex2 ] , in which we balance the center of mass above a pivot point . examples of balance systems may include persons balancing sticks on their hands , humans standing upright , personal transporters , and rockets , among others .let and denote the position and velocity of the cart , respectively , while and denote the angle and angular rate of the structure above the cart , respectively .assuming that and are close to zero , a linearized model of the system is = \left [ \begin{array}{cccc } 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{m^2l^2g}{\mu } & \frac{-cj_t}{\mu } & \frac{-\gamma j_tlm}{\mu } \\ 0 & \frac{m_tmgl}{\mu } & \frac{-clm}{\mu } & \frac{-\gamma m_t}{\mu } \end{array } \right ] \left[\begin{array}{cccc } x_1\\ x_2 \\ x_3 \\ x_4 \end{array}\right ] + \left [ \begin{array}{rrrr } 0\\0 \\ \frac{j_t}{\mu}\\ \frac{lm}{\mu } \end{array } \right ] u,\ ] ] where is the mass of the cart , and are the moment of inertia and the mass of the system to be balanced , respectively , is the distance between the cart and the center of mass of the balanced body , is the gravitational acceleration constant ( ) , and , are coefficients of viscous friction . 
also , is the total inertia , is the total mass , and .now suppose that it is required to study whether we can mutually connect the states of the system having positive angle , without changing the sign of the angle .in particular , it is required to study mutual accessibility of the states in through to that end , we study relaxed ibc of the system w.r.t . through .first , we calculate the set of possible equilibria . in this example, it can be verified that , the -axis , and that ( case a ) . from theorem [ thm :casea ] , the system is not relaxed ibc w.r.t . through . to clarify more the obtained conclusion ,let .it can be verified that , i.e. , and .since it is always the case that , then , for all .thus , starting at a point , the -component of the state trajectory is non - increasing as long as is non - negative , whatever is selected .in particular , let and .we have , and so starting from , we can not reach through .this clarifies the proof of theorem [ thm : casea ] .9999 [ ex3_sec ] consider the linear circuit , shown in figure [ fig : sec_ex3 ] , where , , and .the state space model of the system is = \left [ \begin{array}{cccc } -1 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{array } \right ] \left[\begin{array}{cccc } x_1\\ x_2 \\x_3 \end{array}\right ] + \left [ \begin{array}{rrrr } 1\\0 \\ 0 \end{array } \right ] u.\ ] ] let , and .it is required to study mutual accessibility of the states of through .first , we calculate the set of possible equilibria .we have ( case b ) .second , it is easy to verify that is controllable .third , we identify a convex , compact set such that , and the invariance conditions of are solvable .following remark [ rem : inv ] , we solve a set of lmis to get a stabilizing feedback and a corresponding lyapunov function .we find that ] , and the lyapunov function for the backward dynamics is , where .\ ] ] let .it can be verified that the compact , convex set satisfies , and satisfies the backward invariance conditions of . collecting all of the above together, we conclude from theorem [ thm : caseb_main ] that the system is relaxed ibc w.r.t . through . + for instance, suppose that it is required to connect to in finite time through .figure [ sim](a ) shows the trajectory obtained by applying the traditional control law , where .one can see that the trajectory reaches the point , outside , which means it violates the strict safety constraints in this example . instead , by following our proposed method in remark [ rem : suff ], we first apply the control law to steer the system to a point near the origin through ( the blue trajectory ) , then use the control law to connect the points close to the origin through ( the green trajectory ) , and finally apply to steer the system to through ( the red trajectory ) .see figure [ sim](b ) .in this paper , we have extended the results of by studying the case where the given affine system is not in - block controllable with respect to the given polytope .in particular , we have introduced the notion of relaxed in - block controllability ( ribc ) which studies mutual accessibility of the states in the interior of a given polytope through the interior of a given bigger polytope by applying uniformly bounded control inputs . 
by exploring all the possible geometric cases ,we have provided necessary conditions for ribc in all these cases .moreover , we have shown when these conditions are also sufficient .several examples have been provided to clarify the main results of the paper .this appendix contains proofs included in this draft version for completeness .* proof of lemma [ lem : tec1 ] : * + let be arbitrary . since is controllable , it is well known that for any , is invertible , and the control input ,~t\in[0,t_f]\ ] ] steers the system from to in finite time . since is compact , exists , and so we can always identify a uniform upper bound such that for all ] . *proof of theorem [ thm : casea ] : * + since , then by lemma 4.1 of , there exists , the perpendicular subspace to , such that , for all and all .this implies for any and under any control input , is non - increasing as long as .then since is an n - dimensional polytope , we can identify two points such that .clearly , starting from , the system can not reach through .* sketch of the proof of theorem [ thm : caseb_1 ] : * + the idea of the proof is that since , then for any , there exists sufficiently large such that . since is accessible from by assumption and is linear , then is accessible from .alexis k. , nikolakopoulos g. , tzes a. ( 2011 ) . switching model predictive attitude control for a quadrotor helicopter subject to atmospheric disturbances. _ control engineering practice_. vol .19(10 ) , pp . 1195 - 1207 . alizadeh h. v. , helwa m. k. , boulet b. ( 2014 ) .constrained control of the synchromesh operating state in an electric vehicle s clutchless automated manual transmission ._ ieee conference on control applications _ , antibes , france , pp .623 - 628 .caines p.e ., shahid shaikh m. ( 2006 ) .optimality zone algorithms for hybrid systems : efficient algorithms for optimal location and control computation ._ hybrid systems : computation and control _ , lecture notes in computer science , vol .3927 , pp .123 - 137 .geyer t. , papafotion g. , morari m. ( 2005 ) .model predictive control in power electronics : a hybrid systems approach . _ the 44th ieee conference on decision and control , and the european control conference _ , seville , spain , pp .5606 - 5611 .habets l.c.g.j.m.,collins p.j ., van schuppen j.h .reachability and control synthesis for piecewise - affine hybrid systems on simplices ._ ieee trans .automatic control_. vol .51 ( 6 ) , pp . 938948 .heemels w.p.m.h ., camlibel m.k .null controllability of discrete - time linear systems with input and state constraints . _ the 47th conference on decision and control(cdc ) _ , cancun , mexico , pp .3487 - 3492 .helwa m.k ., broucke m.e .reach control of single - input systems on simplices using multi - affine feedback ._ the 21st international symposium on mathematical theory of networks and systems _ , groningen , netherlands .henrion d. , lofberg j. , kocvara m. , stingl m. ( 2008 ) . solving polynomial static output feedback problems with penbmi . _ the 44th ieee conf .dec . and con . , and the eur . con . conf ._ seville , pp .7581 - 7586 .juloski a. l. , heemels w. , ferrari - trecate g. , vidal r. , paoletti s. , and niessen j. ( 2005 ) comparison of four procedures for the identification of hybrid systems ._ hybrid systems : computation and control_. vol .3414 , pp .354 - 369 .schoellig a. , caines p. e. , egerstedt m. , malham e. ( 2007 ) . 
a hybrid bellman equation for systems with regional dynamics . _ the 46th conference on decision and control ( cdc ) _ , new orleans , pp . 3393 - 3398 . toker o. , ozbay h. ( 1995 ) . on the np - hardness of solving bilinear matrix inequalities and simultaneous stabilization with static output feedback . _ the american control conference _ , seattle , usa , pp . 2525 - 2526 . yordanov b. , tumova j. , belta c. , cerna i. , barnat j. ( 2010 ) . formal analysis of piecewise affine systems through formula guided refinement . _ the 49th ieee conference on decision and control _ , atlanta , pp . 5899 - 5904 .
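as a closing computational illustration : the paper notes that solvability of the invariance ( and backward invariance ) conditions can be checked by solving a linear program at each vertex of the polytope . the sketch below is a rough python / scipy rendering of that check . the system matrices are those of the linear circuit example , but the polytope ( a unit box ) , its facet normals and the input bound are placeholders chosen only for the illustration , not the sets constructed in the paper , so the printed answer refers to that hypothetical box alone .

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# circuit example matrices (the values of R, L, C are those used in the text)
A = np.array([[-1.0, -1.0,  0.0],
              [ 1.0,  0.0, -1.0],
              [ 0.0,  1.0,  0.0]])
B = np.array([[1.0], [0.0], [0.0]])

U_MAX = 5.0  # hypothetical bound on the control input


def invariance_lp(vertex, normals, sign=+1.0):
    """Feasibility LP at one vertex: find u with n_i . (sign * (A v + B u)) <= 0
    for every outward facet normal n_i active at the vertex. sign=+1 checks the
    invariance conditions, sign=-1 the backward invariance conditions."""
    N = np.asarray(normals)                 # one row per active facet
    a_ub = sign * (N @ B)                   # coefficients multiplying u
    b_ub = -sign * (N @ A @ vertex)         # constant term moved to the right-hand side
    res = linprog(c=np.zeros(B.shape[1]), A_ub=a_ub, b_ub=b_ub,
                  bounds=[(-U_MAX, U_MAX)] * B.shape[1], method="highs")
    return res.success


def box_vertices_and_normals(dim=3):
    """Vertices of the hypothetical box |x_i| <= 1, each with the outward
    normals of the facets it lies on."""
    for signs in itertools.product([-1.0, 1.0], repeat=dim):
        vertex = np.array(signs)
        normals = [s * np.eye(dim)[i] for i, s in enumerate(signs)]
        yield vertex, normals


forward = all(invariance_lp(v, n, +1.0) for v, n in box_vertices_and_normals())
backward = all(invariance_lp(v, n, -1.0) for v, n in box_vertices_and_normals())
print("invariance conditions solvable on the box:", forward)
print("backward invariance conditions solvable on the box:", backward)
```

the same feasibility check , run at every vertex of a candidate set such as a scaled lyapunov level set , is how conditions ( ii ) and ( iii ) of the main theorems would be verified in practice ; for an arbitrary box they typically fail , which is why the paper constructs the invariant sets from lyapunov functions instead .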
we consider affine systems defined on polytopes and study the cases where the systems are not in - block controllable with respect to the given polytopes . these are the cases in which we can not fully control the affine systems within the interior of a given polytope , representing the intersection of given safety constraints . instead , we introduce in this paper the notion of relaxed in - block controllability ( ribc ) , which can be useful for the cases where one can distinguish between soft and hard safety constraints . in particular , we study whether all the states in the interior of a given polytope , formed by the intersection of soft safety constraints , are mutually accessible through the interior of a given bigger polytope , formed by the intersection of hard safety constraints , by applying uniformly bounded control inputs . by exploring the geometry of the problem , we provide necessary conditions for ribc . we then show when these conditions are also sufficient . several illustrative examples are also given to clarify the main results .
graphical models are a class of statistical models which combine the rigour of a probabilistic approach with the intuitive representation of relationships given by graphs .they are composed by a set of _ random variables _ describing the quantities of interest and a _ graph _ in which each _ node _ or _ vertex _ is associated with one of the random variables in ( they are usually referred to interchangeably ) .the _ edges _ are used to express the dependence relationships among the variables in .the set of these relationships is often referred to as the _ dependence structure _ of the graph .different classes of graphs express these relationships with different semantics , which have in common the principle that graphical separation of two vertices implies the conditional independence of the corresponding random variables .the two examples most commonly found in literature are _ markov networks _ , which use undirected graphs , and _ bayesian networks_ , which use directed acyclic graphs . in principle, there are many possible choices for the joint distribution of , depending on the nature of the data .however , literature have focused mostly on two cases : the _ discrete case _ , in which both and the are multinomial random variables , and the _ continuous case _ , in which is multivariate normal and the are univariate normal random variables . in the former ,the parameters of interest are the _ conditional probabilities _ associated with each variable , usually represented as conditional probability tables ; in the latter , the parameters of interest are the _ partial correlation coefficients _ between each variable and its neighbours ( i.e. the adjacent nodes in ) .the estimation of the structure of the graph is called _ structure learning _ , and involves determining the graph structure that encodes the conditional independencies present in the data . ideally it should coincide with the dependence structure of , or it should at least identify a distribution as close as possible to the correct one in the probability space .several algorithms have been presented in literature for this problem , thanks to the application of many results from probability , information and optimisation theory . despite differences in theoretical backgrounds and terminology ,they can all be grouped into only three classes : _ constraint - based algorithms _ , that are based on conditional independence tests ; _ score - based algorithms _ , that are based on goodness - of - fit scores ; and _ hybrid algorithms _ , that combine the previous two approaches .for some examples see bromberg et al . , castelo and roverato , friedman et al . , larraaga et al . and tsamardinos et al . .on the other hand , the development of techniques for assessing the statistical robustness of network structures learned from data ( e.g. the presence of artefacts arising from noisy data ) has been limited .structure learning algorithms are commonly studied measuring differences from the true ( known ) structure of a small number of reference data sets .the usefulness of such an approach in investigating networks learned from real - world data sets is limited , since the true structure of their probability distribution is unknown .a more systematic approach to model assessment , and in particular to the problem of identifying statistically significant features in a network , has been developed by friedman et al . using bootstrap resampling and model averaging .it can be summarised as follows : 1 . for : 1 . 
sample a new data set from the original data using either parametric or nonparametric bootstrap ; 2 . learn the structure of the graphical model from .2 . estimate the probability that each possible edge , is present in the true network structure as where is the indicator function of the event ( i.e. , it is equal to if and otherwise ) .the empirical probabilities are known as _ edge intensities _ or _ arc strengths _ , and can be interpreted as the degree of _ confidence _ that is present in the network structure describing the true dependence structure of are in fact an estimator of the expected value of the random vector describing the presence of each possible edge in . as such, they do not sum to one and are dependent on one another in a nontrivial way . ] .however , they are difficult to evaluate , because the probability distribution of the networks in the space of the network structures is unknown . as a result, the value of the confidence threshold ( i.e. the minimum degree of confidence for an edge to be significant and therefore accepted as an edge of ) is an unknown function of both the data and the structure learning algorithm .this is a serious limitation in the identification of significant edges and has led to the use of ad - hoc , pre - defined thresholds in spite of the impact on model assessment evidenced by several studies .an exception is nagarajan et al . , whose approach will be discussed below .apart from this limitation , friedman s approach is very general and can be used in a wide range of settings .first of all , it can be applied to any kind of graphical model with only minor adjustments ( for example , accounting for the direction of the edges in bayesian networks , see sec .[ sec : geneprof ] ) .no distributional assumption on the data is required in addition to the ones needed by the structure learning algorithm .no assumption is made on the latter , either , so any score - based , constraint - based or hybrid algorithm can be used .furthermore , parallel computing can easily be used to offset the additional computational complexity introduced by model averaging , because bootstrap is embarrassingly parallel . in this paper, we propose a statistically - motivated estimator for the confidence threshold minimising the norm between the cumulative distribution function of the observed confidence levels and the cumulative distribution function of the confidence levels of the unknown network .subsequently , we demonstrate the effectiveness of the proposed approach by re - investigating two experimental data sets from nagarajan et al . and sachs et al .consider the empirical probabilities defined in eq .[ eq : bootconf ] , and denote them with . 
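before developing the threshold , it may help to make the averaging procedure above concrete . the sketch below estimates the confidence values of eq . [ eq : bootconf ] by nonparametric bootstrap ; the structure learner is a deliberately naive stand - in ( it declares an edge whenever the absolute marginal correlation between two variables exceeds a fixed cut - off ) so that the code is self - contained , and any constraint - based , score - based or hybrid algorithm would take its place in practice . all names and parameter values are ours .

```python
import itertools
import numpy as np


def toy_learner(data, cutoff=0.3):
    """Stand-in structure learner: returns the set of undirected edges (i, j)
    whose absolute marginal correlation exceeds `cutoff`. Any real structure
    learning algorithm could be plugged in here instead."""
    corr = np.corrcoef(data, rowvar=False)
    n = corr.shape[0]
    return {(i, j) for i, j in itertools.combinations(range(n), 2)
            if abs(corr[i, j]) > cutoff}


def edge_confidence(data, learner, m=200, rng=None):
    """Nonparametric bootstrap estimate of the edge intensities: the fraction
    of the m bootstrap networks in which each possible edge appears."""
    rng = np.random.default_rng(rng)
    n_obs, n_var = data.shape
    counts = {e: 0 for e in itertools.combinations(range(n_var), 2)}
    for _ in range(m):
        resampled = data[rng.integers(0, n_obs, size=n_obs), :]  # bootstrap sample
        for edge in learner(resampled):
            counts[edge] += 1
    return {e: c / m for e, c in counts.items()}


# synthetic data: x2 depends on x0, the other variables are independent noise
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 4))
x[:, 2] += 0.8 * x[:, 0]
conf = edge_confidence(x, toy_learner, m=200, rng=2)
for edge, p in sorted(conf.items(), key=lambda kv: -kv[1]):
    print(edge, round(p, 2))
```

the resulting dictionary plays the role of the vector of edge confidences that the threshold estimator developed in the next section takes as input .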
for a graph with nodes , . furthermore , consider the order statistic derived from . it is intuitively clear that the first elements of are more likely to be associated with non - significant edges , and that the last elements of are more likely to be associated with significant edges . the ideal configuration of would be that is the set of probabilities that characterises any edge as either significant or non - significant without any uncertainty . in other words , such a configuration arises from the limit case in which all the networks have exactly the same structure . this may happen in practice with a consistent structure learning algorithm when the sample size is large . a useful characterisation of and can be obtained through the empirical cumulative distribution functions of the respective elements , and in particular , corresponds to the fraction of elements of equal to zero and is a measure of the fraction of non - significant edges . at the same time , provides a threshold for separating the elements of , namely where is the _ quantile function _ . more importantly , estimating from data provides a statistically motivated threshold for separating significant edges from non - significant ones . in practice , this amounts to approximating the ideal , asymptotic empirical cumulative distribution function with its finite sample estimate . such an approximation can be computed in many different ways , depending on the norm used to measure the distance between and as a function of . common choices are the family of norms , which includes the euclidean norm , and csiszar s -divergences , which include kullback - leibler divergence . [ figure [ fig : ecdf ] : ( left ) , the cumulative distribution function ( centre ) and the norm between the two ( right ) , shaded in grey . ] the norm appears to be particularly suited to this problem ; an example is shown in fig . [ fig : ecdf ] . first of all , note that is piecewise constant , changing value only at the points ; this descends from the definition of empirical cumulative distribution function . therefore , for the problem at hand eq . [ eq : l1 ] simplifies to which can be computed in linear time from . its minimisation is also straightforward using linear programming . furthermore , compared to the more common norm \[ l_{2}\left(t ; \mathbf{\hat{p}_{(\cdot)}}\right ) = \int_{0}^{1 } \left [ f_{\mathbf{\hat{p}_{(\cdot)}}}(x ) - f_{\mathbf{\tilde{p}_{(\cdot)}}}(x ; t ) \right]^2 dx \] or the norm \[ l_{\infty}\left(t ; \mathbf{\hat{p}_{(\cdot)}}\right ) = \sup_{x \in [ 0 , 1 ] } \left\{ \left| f_{\mathbf{\hat{p}_{(\cdot)}}}(x ) - f_{\mathbf{\tilde{p}_{(\cdot)}}}(x ; t ) \right| \right\} , \] the norm does not place as much weight on large deviations compared to small ones , making it robust against a wide variety of configurations of . then the identification of significant edges can be thought of either as a _ least absolute deviations estimation _ or an _ approximation _ of the form \[ \hat{t } = \underset{t \in [ 0 , 1 ] } { \operatorname{argmin } } \ ; l_{1}\left(t ; \mathbf{\hat{p}_{(\cdot)}}\right ) \] followed by the application of the following rule : note that , even though edges are individually identified as significant or non - significant , they are not identified independently of each other because is a function of the whole . a simple example is illustrated below . [ ex ] consider a graphical model based on an undirected graph with node set . the set of possible edges of contains elements : , , , , and . suppose that we have estimated the following confidence values : then [ figure : and , respectively in black and grey ( left ) , and the norm ( right ) from example [ ex ] . ]
]the norm takes the form and is minimised for .therefore , an edge is deemed significant if its confidence is strictly greater than , or , equivalently , if it has confidence of at least ; only , and satisfy this condition .we tested the proposed approach on synthetic data sets using three established performance measures : _ sensitivity _ , _ specificity _ and _ accuracy_. _ sensitivity _ is given by the proportion of edges of the true network structure that have been correctly identified as significant ._ specificity _ is given by the proportion of the edges missing from the true network structure that have been correctly identified as non - significant ._ accuracy _ is given by the proportion of edges correctly identified as either significant or non - significant over the set of all possible edges . to that end, we generated data sets of varying sizes ( , , , , , , and ) from three discrete bayesian networks commonly used as benchmarks : * the alarm network , a network designed to provide an alarm message system for intensive care unit patient monitoring .its true structure is composed by nodes and edges ( of possible edges ) , and its probability distribution has parameters ; * the hailfinder network , a network designed to forecast severe summer hail in northeastern colorado . its true structure is composed by nodes and edges ( of possible edges ) , and its probability distribution has parameters ; * the insurance network , a network designed to evaluate car insurance risks .its true structure is composed by nodes and edges ( of possible edges ) , and its probability distribution has parameters . three different structure learning algorithms were considered : * the incremental association markov blanket ( iamb ) constraint - based algorithm .iamb was used to learn the markov blanket of each node as a preliminary step to reduce the number of its candidate parents and children ; a network structure satisfying these constraints is then identified as in the grow - shrink algorithm .conditional independence tests were performed using a shrinkage mutual information test with .such a test , unlike the more common asymptotic mutual information test , is valid and has been shown to work reliably even on small samples .an was also considered ; however , the results were not significantly different from and will not be discussed separately in this paper ; * the hill climbing ( hc ) score - based algorithm with the bayesian dirichlet equivalent uniform ( bdeu ) score function , the posterior distribution of the network structure arising from a uniform prior distribution .the equivalent sample size was set to .this is the same approach detailed in friedman et al . , although they considered only ( instead of ) bootstrap samples for each scenario ; * the max - min hill climbing ( mmhc ) hybrid algorithm , which combines the max - min parents and children ( mmpc ) and hc . 
the conditional independence test used in mmpc and the score functions used in hc are the ones illustrated in the previous points .the performance measures were estimated for each combination of network , sample size and structure learning algorithm as follows : 1 .a sample of the appropriate size was generated from either the alarm , the hailfinder or the insurance network ; 2 .we estimated the confidence values for all possible edges from and nonparametric bootstrap samples .since results are very similar , they will be discussed together ; 3 .we estimated the confidence threshold , and identified significant and non - significant edges in the network .note that the direction of the edges present in the network structure is effectively ignored , because the proposed approach focuses only those edges presence .significant edges were then used to build an averaged network structure ; 4 .we computed sensitivity , specificity and accuracy comparing the averaged network structure to the true one , which is known from literature .these steps were repeated times in order to estimate both the performance measures and their variability . .bars represent 95% confidence intervals , and the dotted vertical line is .,scaledwidth=70.0% ] .bars represent 95% confidence intervals , and the dotted vertical line is .,scaledwidth=70.0% ] all the simulations and the thresholds estimation were performed with the bnlearn package for r , which implements several methods for structure learning , parameter estimation and inference on bayesian networks ( including the approach proposed in sec .[ sec : approach ] ) .the average values of sensitivity , specificity , accuracy and for the networks across various sample sizes ( ) are shown in fig .[ fig : roc - iamb ] ( iamb ) , fig .[ fig : roc - hc ] ( hc ) and fig .[ fig : roc - mmhc ] ( mmhc ) . since the number of parameters is non - constant across the networks , a normalised ratio of the size of the generated sample to the number of parameters of the network ( i.e. ) is used as a reference instead of the raw sample size ( i.e. ) .intuitively , a sample of size of may be large enough to estimate reliably a small network with few parameters , say , but it may be too small for a larger network with . on a related note , denser networks( i.e. networks with a large number of edges compared to the number of nodes ) usually have a higher number of parameters than sparser ones ( i.e. networks with few edges ) . .bars represent 95% confidence intervals , and the dotted vertical line is .,scaledwidth=70.0% ] several interesting trends emerge from the estimated quantities .as expected , sensitivity increases as the sample size grows .this provides an empirical verification that the combination of hc and bde is indeed consistent , as proved by chickering .no analogous result exists for iamb or mmhc , although intuitively their sensitivity should improve as well with the sample size due to the consistency of the conditional independence tests used by those algorithms .moreover , even when is extremely low a substantial proportion of the network structure can be correctly identified .when is at least ( i.e. 
observation every parameters ) , hc successfully recovers from about ( for alarm and insurance ) to ( for hailfinder ) of the true network structure .in contrast , iamb and mmhc successfully recover from about to of hailfinder , but only about to of alarm and to of insurance .this difference in performance can be attributed to the sparsity - inducing effect of shrinkage tests , which increase specificity at the cost of sensitivity . for values of greater than ( i.e. more observations than parameters ) the increase in sensitivity slows down for all combinations of networks and algorithms , reaching a plateau .overall , sensitivity seems to have an hyperbolic behaviour , growing very rapidly for and then converging asymptotically to for .thus we expect it to increase linearly on a scale .the slower convergence rate observed for the insurance network compared to the other two networks is likely to be a consequence of its high edge density ( edges per node ) relative to alarm ( ) and hailfinder ( ) .slower convergence may also be an outcome of inherent limitations of structure learning algorithms in the case of dense networks . furthermore , both specificity and accuracy are close to for all the networks and the sample sizes considered in the analysis , even at very low ratios .such high values are a result of the low number of true edges in alarm , hailfinder and insurance compared to the respective numbers of possible edges .this is true in particular for the alarm and hailfinder networks .the lower values observed for the insurance network can be attributed again to the inherent limitations of structure learning algorithms in modelling dense networks .the sparsity - inducing effect of shrinkage tests is again evident for both iamb and mmhc ; both specificity and accuracy actually decrease slightly as grows and the influence of shrinkage decreases . ) for the alarm , hailfinder and insurance networks over .bars represent 95% confidence intervals . ]it is also important to note that , as shown in fig .[ fig : thr ] , the average value of the confidence threshold does not exhibit any apparent trend as a function of .in addition , its variability does not appear to decrease as grows .this suggests that the optimal depends strongly on the specific sample used in the estimation of the confidence values , even for relatively large samples .however , specificity , sensitivity and accuracy estimates appear on the other hand to be very stable ( all confidence intervals shown in fig . [fig : roc - iamb ] , fig .[ fig : roc - hc ] and fig .[ fig : roc - mmhc ] are very small ) . from fig .[ fig : thr ] , it is also apparent that the threshold estimate can be significantly lower than even for high values of .this behaviour is observed consistently across the three networks ( alarm , hailfinder , insurance ) .these results are in sharp contrast with ad - hoc thresholds commonly found in literature , which are usually large ( e.g. 0.8 in * ? ? 
?a large threshold can certainly be useful in excluding noisy edges , which may result from artefacts at the measurement and dynamical levels and from finite sample - size effects .however , while a large ad - hoc threshold can certainly minimise false positives , it is also expected to accentuate false negatives .such a conservative choice can have a profound impact on the network topology , resulting in artificially sparse networks .the threshold estimator introduced in sec .[ sec : approach ] achieves a good trade - off between incorrectly identifying noisy edges as significant and disregarding significant ones . as an example , the difference in sensitivity , specificity and accuracy between the estimated threshold and several large , ad - hoc ones ( ) for hc is shown in fig .[ fig : deltas ] ( the corresponding plots for iamb and mmhc are similar , and are omitted for brevity ) .the threshold systematically outperforms the ad - hoc thresholds in terms of sensitivity , in particular for low values of .the difference progressively vanishes as grows .all thresholds have comparable levels of specificity and accuracy . and several ad - hoc ones ( ) for hc over .] on a related note , false negatives across ad - hoc thresholds may also be attributed to the fact that edges are considered as separate , independent entities as far the choice of the threshold is concerned i.e. a threshold is expected to identify as significant about in edges in the network . however , in a biological setting the structure of the network is an abstraction for the underlying functional mechanisms ; as an example , consider the signalling pathways in a transcriptional network .in such a context , edges are clearly not independent , but appear in concert along signalling pathways .this interdependence is accounted for in the proposed approach ( that is based on the full set of estimated condence values ) , but it is not commonly considered in choosing ad - hoc thresholds . for instance , edges appearing with individual confidence values far below the $ ] range may not necessarily be identified as significant by an ad - hoc threshold . however , the proposed approach recognises their interplay and correctly identifies them as significant .this aspect , along with the strong dependence between the optimal and the actual sample the network is learned from , may discourage the use of an a priori or ad - hoc confidence threshold in favour of more statistically - motivated alternatives .in order to demonstrate the effectiveness of the proposed approach on experimental data sets , we will examine two gene expression data sets from nagarajan et al . and sachs et al .all the analyses will be performed again with the bnlearn package .following imoto et al . , we will consider the edges of the bayesian networks disregarding their direction when determining their significance .edges identified as significant will then be oriented according to the direction observed with the highest frequency in the bootstrapped networks .while simplistic , this combined approach allows the proposed estimator to handle the edges whose direction can not be determined by the structure learning algorithm possibly due to score equivalent structures . in a recent study the interplay between crucial myogenic ( myogenin , myf-5 , myo - d1 ) , adipogenic ( c / ebp , ddit3 , foxc2 , ppar ) , and wnt - related genes ( lrp5 , wnt5a ) orchestrating aged myogenic progenitor differentiation was investigated by nagarajan et al . 
using clonal gene expression profiles in conjunction with bayesian network structure learning techniques .the objective was to investigate possible functional relationships between these diverse differentiation programs reflected by the edges in the resulting networks .the clonal expression profiles were generated from rna isolated across 34 clones of myogenic progenitors obtained across 24-month - old mice and real - time rt - pcr was used to quantify the gene expression .such an approach implicitly accommodates inherent uncertainty in gene expression profiles and justified the choice of probabilistic models . in the same study, the authors proposed a non - parametric resampling approach to identify significant functional relationships .starting from friedman s definition of confidence levels ( eq . [ eq : bootconf ] ) , they computed the _ noise floor distribution _ of the edges by randomly permuting the expression of each gene and performing bayesian network structure learning on the resulting data sets .an edge was deemed significant if .in addition to revealing several functional relationships documented in literature , the study also revealed new relationships that were immune to the choice of the structure learning techniques .these results were established across clonal expression data normalised using three different housekeeping genes and networks learned with three different structure learning algorithms . for the myogenic progenitors data from nagarajan et al . ( on the left ) , and the network structure resulting from the selection of the significant edges ( on the right ) .the vertical dashed line in the plot of represents the threshold . ]the approach presented in has two important limitations .first , the computational cost of generating the noise floor distribution may discourage its application to large data sets .in fact , the generation of the required permutations of the data and the subsequent structure learning ( in addition to the bootstrap resampling and the subsequent learning required for the estimation of ) essentially doubles the computational complexity of friedman s approach .second , a large sample size may result in an extremely low value of , and therefore in a large number of false positives . in the present study, we re - investigate the myogenic progenitor clonal expression data normalised using housekeeping gene gapdh with the approach outlined in sec .[ sec : approach ] and the iamb algorithm .it is important to note that this strategy was also used in the original study , hence its choice .the order statistic was computed from bootstrap samples .the empirical cumulative distribution function , the estimated threshold and the network with the significant edges are shown in fig .[ fig : myogenic ] . all edges identified as significant in the earlier study across the various structure learning techniques and normalisation techniqueswere also identified by the proposed approach ( see fig .3d in ) .in contrast to fig . [fig : myogenic ] , the original study using iamb and normalisation with respect to gapdh alone detected a considerable number of additional edges ( see fig .3a in ) .thus it is quite possible that the approach proposed in this paper reduces the number of false positives and spurious functional relationships between the genes .furthermore , the application of the proposed approach in conjunction with the algorithm from imoto et al . reveals directionality of the edges , in contrast to the undirected network reported by nagarajan et al . . 
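For completeness, the bootstrap step that produces the confidence values used throughout (Friedman's definition, Eq. [eq:bootconf]) can be sketched as follows. The structure learner is abstracted behind a placeholder callable, learn_structure, which stands in for whichever algorithm is in use (IAMB in this study) and is not a real bnlearn API; the edge representation is likewise our own.

```python
import numpy as np
from itertools import combinations

def bootstrap_edge_confidence(data, learn_structure, n_boot=200, rng=None):
    """Friedman-style confidence values: the fraction of bootstrap networks in
    which each (undirected) edge appears.  `learn_structure` is a placeholder
    for the structure learning algorithm of choice; it is expected to return an
    iterable of (i, j) node pairs for a data matrix with one column per node."""
    rng = np.random.default_rng(rng)
    n_obs, n_var = data.shape
    counts = {frozenset(e): 0 for e in combinations(range(n_var), 2)}
    for _ in range(n_boot):
        resample = data[rng.integers(0, n_obs, size=n_obs)]  # rows with replacement
        for edge in learn_structure(resample):
            counts[frozenset(edge)] += 1
    return {edge: c / n_boot for edge, c in counts.items()}

# the resulting confidence values can be fed directly to the threshold
# estimator sketched earlier, e.g. l1_threshold(list(conf.values()))
```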
in a landmark study ,sachs et al . used bayesian networks for identifying causal influences in cellular signalling networks from simultaneous measurement of multiple phosphorylated proteins and phospholipids across single cells .the authors used a battery of perturbations in addition to the unperturbed data to arrive at the final network representation . a greedy search score - based algorithm that maximises the posterior probability of the network andaccommodates for variations in the joint probability distribution across the unperturbed and perturbed data sets was used to identify the edges .more importantly , significant edges were selected using an arbitrary significance threshold of ( see fig .3 , ) .a detailed comparison between the learned network and functional relationships documented in literature was presented in the same study . for the flow cytometry data from sachs et al . ( on the left ) , and the network structure resulting from the selection of the significant edges ( on the right ) .the vertical dashed line in the plot of represents the threshold . ]we investigated the performance of the proposed approach in identifying significant functional relationships from the same experimental data .however , we limit ourselves to the data recorded without applying any molecular intervention , which amount to observations for variables .we compare and contrast our results to those obtained using an arbitrary threshold of 0.85 . the combination of perturbed and non - perturbed observations studied in sachs et al . can not be analysed with our approach , because each subset of the data follows a different probability distribution and therefore there is no single `` true '' network .analysis of the unperturbed data using the approach presented in sec .[ sec : approach ] reveals the edges reported in the original study .the resulting network is shown in fig . [fig : sachs ] along with and the estimated threshold . from the plot of can clearly see that significant and non - significant edges present widely different levels of confidence , to the point that any threshold between and results in the same network structure .this , along with the value of the estimated threshold ( ) , shows that the noisiness of the data relative to the sample size is low . in other words ,the sample is big enough for the structure learning algorithm to reliably select the significant edges .the edges identified by the proposed method were the same as those identified by using general stimulatory cues excluding the data with interventions ( see fig .4a in , supplementary information ) .in contrast to , using imoto et al . approach in conjunction with the proposed thresholding method we were able to identify the directions of the edges in the network .the directions correlated with the functional relationships documented in literature ( tab . 
3, supplementary information) as well as with the directions of the edges in the network learned from both perturbed and unperturbed data (Fig. 3). Graphical models and network abstractions have enjoyed considerable attention across the biological and medical communities. Such abstractions are especially useful in deciphering the interactions between the entities of interest from high-throughput observational data. Classical techniques for identifying significant edges in the resulting graph rely on ad-hoc thresholding of the edge confidence estimated across multiple independent realisations of networks learned from the given data. Large ad-hoc threshold values are particularly common, and are chosen in an effort to minimise noisy edges in the resulting network. While useful in minimising false positives, such a choice can accentuate false negatives, with a pronounced effect on the network topology. The present study overcomes this limitation by proposing a more straightforward and statistically motivated approach for identifying significant edges in a graphical model. The proposed estimator minimises the L1 norm between the cumulative distribution function of the observed confidence levels and the cumulative distribution function of their asymptotic, ideal configuration. The effectiveness of the proposed approach is demonstrated on three synthetic data sets and on gene expression data sets from two different studies. The approach is, however, defined in a more general setting and can be applied to many classes of graphical models learned from any kind of data. This work was supported by the BBSRC & Technology Board grant TS/I002170/1 (Marco Scutari) and the National Library of Medicine grant R03LM008853 (Radhakrishnan Nagarajan). Marco Scutari would also like to thank Adriana Brogini for proofreading the paper and providing useful suggestions.
Graphical models such as Bayesian networks have been used successfully to capture dependencies between entities of interest from observational data sets. Graphical abstractions can also provide system-level insights, and are hence of great interest across a spectrum of disciplines. Identifying significant edges in the resulting graph has traditionally relied on the choice of an ad-hoc threshold, which can have a pronounced impact on the network topology and on the conclusions drawn from it. In the present study, a statistically motivated approach that obviates the need for an ad-hoc threshold is proposed for identifying significant edges. Several aspects of the proposed approach are investigated across synthetic as well as experimental data sets. Keywords: graphical models, model averaging, L1 norm, gene networks.
positron emission tomography ( pet ) and single photon emission computed tomography ( spect ) are two modern imaging techniques with a wide range of medical applications .although these techniques were originally developed for the study of the _ functional _ characteristics of the brain , they are now used in many diverse areas of clinical medicine .for example a recent editorial in the new england journal of medicine emphasized the importance of pet in oncologic imaging .other medical applications of pet and spect are presented in .the first step in pet is to inject the patient with a dose of a suitable radiopharmaceutical .for example in brain imaging a typical such radiopharmaceutical is flurodeoxyglucose ( fdg ) , which is a normal molecule of glucose attached artificially to an atom of radioactive fluorine .the cells in the brain which are more active have a higher metabolism , need more energy , thus will absorb more fdg .the fluorine atom in the fdg molecule suffers a radioactive decay , emitting a positron . when a positron collides with an electron it liberates energy in the form of _ two _ beams of gamma rays travelling in _ opposite _ direction , which are picked by the pet scanner .spect is similar to pet but the radiopharmaceuticals decay to emit a _single _ photon . in both pet andspect the radiating sources are inside the body , and the aim is to determine the distribution of the relevant radiopharmaceutical from measurements made outside the body of the emitted radiation . if is the attenuation coefficient of the body , then it is straightforward to show that the intensity outside the body measured by a detector which picks up only radiation along the straight line is given by where is a parameter along , and denotes the section of between the point and the detector .the attenuation coefficient is precisely the function measured by the usual computed tomography .thus the basic mathematical problem in spect is to determine the function from the knowledge of the `` transmission '' function ( determined via computed tomography ) and the `` emission '' function ( known from the measurements ) . in petthe situation is simpler . 
indeed , since the sources eject particles _ pairwise _ in _ opposite _ directions and the radiation in opposite directions is measured _ simultaneously _ , equation ( [ lineint ] ) is replaced by where , are the two half lines of with endpoint .since , equation ( [ inew ] ) becomes we recall that the line integral of the function along is precisely what is known from the measurements in the usual computed tomography .thus since both and the integral of are known ( from the measurements of spect and of computed tomography respectively ) , the basic mathematical problem of pet is to determine from the knowledge of its line integrals .this mathematical problem is identical with the basic mathematical problem of computed tomography .\(i ) a point of a line making an angle with the is specified by the three real numbers , where is a parameter along , , is the distance from the origin to the line , , and .\(ii ) the above parameterization implies that , for a fixed , the cartesian coordinates can be expressed in terms of the _ local coordinates _ by the equations ( see section [ math ] ) a function rewritten in local coordinates will be denoted by , thus and will denote the attenuation coefficient and the distribution of the radiopharmaceutical , rewritten in local coordinates .( iii ) the line integral of a function is called its _ radon transform _ and will be denoted by . in order to compute , we first write in local coordinates and then integrate with respect to , the line integral of the function with respect to the weight appearing in equation ( [ lineint ] ) is called the _ attenuated radon transform _ of ( with the attenuation specified by ) and will be denoted by . in order to compute , we write both and in local coordinates and then evaluate the following integral the basic mathematical problem of both computed tomography and pet is to reconstruct a function from the knowledge of its radon transform , i.e. to solve equation ( [ radon ] ) for in terms of . the relevant formula is called the _ inverse radon transform _ and is given by where , and denotes principal value integral . a novel approach for deriving equation ( [ irt ] ) was introduced in , and is based on the analysis of the equation where is a complex parameter different than zero . the application of this approach to a slight generalization of equation ( [ fnovel ] )can be used to reconstruct a function from the knowledge of its attenuated radon transform , i.e. this approach can be used to solve equation ( [ art ] ) for in terms of and . the relevant formula , called the _ inverse attenuated radon transform _ ,was obtained by r. novikov by analysing , instead of equation ( [ fnovel ] ) , the equation in section [ math ] we first review the analysis of equation ( [ fnovel ] ) , and then show that if one uses the basic result obtained in this analysis , it is possible to construct immediately the inverse attenuated radon transform . 
in section [ nume ]we present a new numerical reconstruction algorithm for both pet and spect .this algorithm is based on approximating the given data in terms of cubic splines .we recall that both the exact inverse radon transform as well as the exact inverse attenuated radon transform involve the hilbert transform of the data functions .for example , the inverse radon transform involves the function existing numerical approaches use the convolution property of the fourier transform to compute the hilbert transform and employ appropriate filters to eliminate high frequencies .it appears that our approach has the advantage of simplifying considerably the mathematical formulas associated with these techniques .furthermore , accurate reconstruction is achieved , for noiseless data , with the additional use of an averaging or of a median filter .several numerical tests are presented in section [ tests ] .one of these tests involves the shepp logan phantom , see figure [ petphan](c ) . numerical algorithms based on the filtered back projection are discussed in , while algorithms based on iterative techniques can be found in .[ math ] we first review the basic result of .it will be shown later that using this result it is possible to derive both the inverse radon as well as the inverse attenuated radon transforms in a straightforward manner .define the complex variable by where , are the real cartesian coordinates , , and is a complex variable , .assume that the function has sufficient decay as .let satisfy the equation as well as the boundary condition as .let and denote the limits of as it approaches the unit circle from inside and outside the unit disc respectively , i.e. then where denotes the radon transform of , denotes in the local coordinates ( see the notation in section [ intro ] ) , denote the usual projection operators in the variable , i.e. and denotes the principal value integral .[ firstprop ] * proof*. before deriving this result , we first note that equation ( [ defz ] ) is a direct consequence of equation ( [ fnovel ] ) . indeed ,equation ( [ fnovel ] ) motivates the introduction of the variable defined by equation ( [ defz ] ) . taking the complex conjugate of equation ( [ defz ] ) we find equations ( [ defz ] ) and ( [ conz ] ) define a change of variables from to .using this change of variables to compute and in terms of and , equation ( [ fnovel ] ) becomes ( [ mueq ] ) .we now derive equation ( [ finmu ] ) .the derivation is based on the following two steps , which have been used extensively in the field of nonlinear integrable pdes , see for example .\(i ) in the first step ( sometimes called the direct problem ) , we consider equation ( [ mueq ] ) as an equation which defines in terms of , and we construct an integral representation of in terms of , for _ all complex _ values of .this representation is indeed , suppose that the function satisfies the equation as well as the boundary condition as . then pompieu s formula ( see for example ) implies in our case thus equation ( [ pomp ] ) becomes ( [ repre ] ) .\(ii ) in the second step ( sometimes called the inverse problem ) , we analyze the analyticity properties of with respect to , and we find an _ alternative _ representation for .this representation involves certain integrals of called spectral functions . for our problem , this representation is equation ( [ finmu ] ) . 
indeed ,since is an analytic function of for and since as , we can reconstruct the function if we know its `` jump '' across the unit circle : where thus we need to compute the limits of as tends to . as , substituting this expression in the definition of ( equation ( [ defz ] ) ) and simplifying , we find the right hand side of this equation can be rewritten in terms of the local coordinates , , , : let and denote two unit vectors along the line and perpendicular to this line , respectively . then or hence and are given by equations ( [ x1x2 ] ) .inverting these equations we find thus equation ( [ zz ] ) becomes substituting this expression in equation ( [ repre ] ) and using the fact that the relevant sign equals 1 , we find using the change of variables defined by equations ( [ x1x2 ] ) and ( [ taurho ] ) , and noting that the relevant jacobian is 1 , i.e. we find that the right hand side of equation ( [ musim ] ) equals in order to simplify this expression we split the integral over in the form and note that in the first integral , while in the second integral .thus , using the second set of equations ( [ p+- ] ) the expression in ( [ intint ] ) becomes finally , adding and subtracting the integral we find the first two terms in the right hand side of this equation equal , hence we find ( [ finmu]) .the derivation of equation ( [ finmu]) is similar . 0.3 cm using equation ( [ finmu ] ) it is now straightforward to derive both the inverse radon and the inverse attenuated radon transforms . in this respectwe note that the result of proposition [ firstprop ] can be rewritten in the form where equations ( [ finmu ] ) yield equation ( [ murad ] ) implies substituting this expression in equation ( [ fnovel ] ) we find replacing in this equation by the right hand side of equation ( [ capj ] ) we find equation ( [ irt ] ) .equation ( [ gnovel ] ) can be rewritten in the form where is defined by equation ( [ nu ] ) .hence \right ) = \frac{g}{\nu } \exp \!\!\left [ \partial_{\bar{z}}^{-1 } \left ( \frac{f}{\nu } \right ) \right],\ ] ] or = \partial_{\bar{z}}^{-1 } \left ( \frac{g}{\nu } \exp \!\!\left [ \partial_{\bar{z}}^{-1 } \left ( \frac{f}{\nu } \right ) \right ] \right).\ ] ] replacing in this equation by the right hand side of equation ( [ parbar ] ) we find for the computation of the right hand side of this equation we use again equation ( [ parbar ] ) , where is replaced by times the two exponentials appearing in the above relation .hence note that the term ] , i.e. we suppose that are known .moreover , in each interval ] , and for , or for . the definitions ( [ p+- ] ) become moreover = \exp\ ! \left [ \pm \frac{1}{2 } \hat{f}(\rho,\theta ) \right ] \left ( \cos \frac{h(\rho,\theta)}{2 \pi } - \mathrm{i } \sin \frac{h(\rho,\theta)}{2 \pi } \right ) , \\ & & \exp\ ! \left [ -p^\pm \hat{f}(\rho,\theta ) \right ] = \exp\ ! \left [ \mp \frac{1}{2 } \hat{f}(\rho,\theta ) \right ] \left ( \cos \frac{h(\rho,\theta)}{2 \pi } + \mathrm{i } \sin \frac{h(\rho,\theta)}{2 \pi } \right).\end{aligned}\ ] ] we introduce the following notation : using this notation and setting , after some calculations , equation ( [ jump ] ) becomes we now set thus equation ( [ rel2 g ] ) becomes we denote the right hand side of this equation by . 
taking the real part of in ( [ gx1x2 ] ) , we obtain where and are given by ( [ taurho ] ) and for the numerical calculation of the hilbert transform we write if or the integral in the right hand side of ( [ newnewh ] ) can be written thus , after some calculations , we obtain if and the integral in the right hand side of ( [ newnewh ] ) can be written and after some calculation we obtain \!\right\ } \!\ ! , \label{newint2}\end{aligned}\ ] ] where is the right hand side of ( [ newint1 ] ) . in order to calculate numerically for any , , , we use relations ( [ finalf ] ) and ( [ taurho]b ) . thus and consequently where and are given from ( [ taurho ] ) and from ( [ newhp ] ) .we can now calculate following the procedure outlined in the previous section .we then calculate using relation ( [ int ] ) if , alternatively the relation \ ] ] if . for the numerical calculation of the integrals appearing in ( [ int ] ) and ( [ intminus ] ) we use the gausslegendre quadrature with two functional evaluations at every step , i.e. where the abscissas , and the weights , are given by we also notice that we have tried subdivision of the interval into several intervals and the improvement is very minor .therefore we use just one interval , i.e. two function evaluations per quadrature , since the major increase in running time of the program implicit in using panel quadrature is not justified by the modest improvement in accuracy .for the numerical calculation of the integrals in ( [ finalp ] ) and ( [ capf ] ) we use again formula ( [ closed ] ) , resulting in spectral convergence . for the numerical calculation of the partial derivatives and in ( [ finalp ] ) we use the forward difference scheme for the first half of the interval ] , while the points are equally spaced in $ ] .the density plots presented below were drawn by using ` mathematica ` .the dark color represents zero ( or negative ) values while the white color represents the maximum value of the original ( or reconstructed ) function .first we tested the pet algorithm for the three different phantoms shown in figures [ petphan ] .figures ( a ) and ( b ) were taken from and , respectively .these figures depict the attenuation coefficient for a function modelling a section of a human thorax .the small circles represent bones and the larger ellipses the lungs .figure ( c ) is the well known shepp logan phantom , which provides a model of a head section .all these phantoms consist of different ellipses with various densities . using the radontransform ( [ radon ] ) , we computed the data function for 200 points for and 100 points for .this computation was carried out by using ` mathematica ` .we then used these data in the numerical algorithm to reevaluate .furthermore , in order to remove the effect of the gibbs wilbraham phenomenon , we applied an averaging filter as follows : we first found the maximum value ( ) of in the reconstructed image .we then set to zero those values of which were less than .finally we applied the averaging filter with averaging parameter .this filtering procedure was applied five times , with the additional elimination of those values of which were less than at the end of the procedure . 
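A minimal version of the post-processing just described is sketched below: values below a fraction of the reconstructed maximum are zeroed and a small averaging (or, for the SPECT case, median) filter is applied repeatedly. The cutoff fraction and kernel size are placeholders of our choosing, since the exact values used above are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def clean_reconstruction(img, cutoff=0.1, passes=5, size=3, use_median=False):
    """Suppress Gibbs-Wilbraham ringing in a reconstructed image: zero out
    values that are small relative to the maximum, then smooth, repeating
    the procedure a fixed number of times."""
    out = np.array(img, dtype=float)
    for _ in range(passes):
        out[out < cutoff * out.max()] = 0.0
        out = median_filter(out, size=size) if use_median else uniform_filter(out, size=size)
    out[out < cutoff * out.max()] = 0.0   # final elimination of small values
    return out
```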
in figures [ petphan1 ] and [ petphan2 ]we present the results before and after the filtering procedure , respectively .the reconstruction took place in a grid .-0.3 cm ( a ) 4.3 cm ( b ) 4.3 cm ( c ) 0.3 cm -0.3 cm ( a ) 4.3 cm ( b ) 4.3 cm ( c ) 0.3 cm -0.3 cm ( a ) 4.3 cm ( b ) 4.3 cm ( c ) we then tested the spect algorithm for the three different phantoms shown in figures [ spectphan ] .figures ( a ) and ( b ) were taken from . in these cases the function is given by figure [ petphan](a ) .figure ( c ) was taken from .the white ring represents the distribution of the radiopharmaceutical at the myocardium . in this case the function is given by figure [ petphan](b ) . by using the radon transform ( [ radon ] ) , and the attenuated radon transform ( [ art ] ) , we computed the data functions and for 200 values of and 100 points of ( again using ` mathematica ` ) .we consequently used these data in our program to re evaluate . in order to remove the effect of the gibbs wilbraham phenomenon , a median filter was used , with the additional elimination of those values of which were less than before and after the application of the filter .the results are shown in figures [ spectphan1 ] and [ spectphan2 ] , before and after the filtering procedure respectively .the reconstruction took place in a grid .-0.3 cm ( a ) 3.9 cm ( b ) 3.9 cm ( c ) 0.3 cm -0.3 cm ( a ) 3.9 cm ( b ) 3.9 cm ( c ) 0.3 cm -0.3 cm ( a ) 3.9 cm ( b ) 3.9 cm ( c ) for the above phantoms it seems that even a rough estimation of is sufficient for an accurate reconstruction .this means that , in order to compute numerically using ( [ capf ] ) , it is sufficient to use ten equally spaced points for , rather than .this reduces considerably the reconstruction time .v.m . was supported by a marie curie individual fellowship of the european community under contract number hpmf - ct-2002 - 01597 .we are grateful to professor b. hutton for useful suggestions .guillement , f. jauberteau , l. kunyansky , r. novikov , r.trebossen , on single photon emission computed tomography imaging based on an exact formula for the nonuniform attenuation correction , inv.prob . * 18 * , l11 ( 2002 ) .j. nuyts , j.a .fessler , a penalized likelihood image reconstruction method for emission tomography , compared to post smoothed maximum likelihood with mached spatial resolution , ieee trans.med .* 22 * , 1042 ( 2003 ) .
The modern imaging techniques of positron emission tomography (PET) and single photon emission computed tomography (SPECT) are not only two of the most important tools for studying the functional characteristics of the brain, but they now also play a vital role in several areas of clinical medicine, including neurology, oncology and cardiology. The basic mathematical problems associated with these techniques are the construction of the inverse of the Radon transform and of the inverse of the so-called attenuated Radon transform, respectively. We first show that, by employing mathematical techniques developed in the theory of nonlinear integrable equations, it is possible to obtain analytic formulas for these two inverse transforms. We then present algorithms for the numerical implementation of these analytic formulas, based on approximating the given data in terms of cubic splines. Several numerical tests are presented which suggest that our algorithms are capable of producing accurate reconstructions for realistic phantoms such as the well-known Shepp-Logan phantom.
nmr - based ensemble quantum information processing ( qip ) devices have provided excellent testbeds for controlling non - trivial numbers of qubits .a solid - state nmr qip architecture builds on this success by incorporating the essential features of the liquid - state devices while offering the potential to reach unit polarization and thus control more qubits . in this architecture , the abundant nuclear spins withpolarization p form a large heat - capacity spin - bath that can be either coupled to , or decoupled from , a dilute , embedded ensemble of spin - labelled isotopomers that comprise the qubit register .bulk spin - cooling procedures such as dynamic nuclear polarization are well known and capable of reaching polarizations near unity . this architecture is one realization within a large class of possible solid - state qip systems in which coherently controlled qubits can be brought into contact with an external system that behaves as a heat bath .the principles and methods applied in solid - state nmr qip will therefore apply to many other systems .an additional motivation is development of control techniques that future quantum devices will utilize . for this experiment, we develop a novel technique to implement the controlled qubit - bath interaction , and also report the first application of strongly - modulating pulses to solid - state nmr for high - fidelity , coherent qubit control .the three - qubit quantum information processor used here is formed by the three spin- nuclei of isotopically labelled malonic acid molecules , occupying a dilute fraction of lattice sites in an otherwise unlabeled single - crystal of malonic acid ( unlabeled , with the exception of naturally occurring isotopes at the rate of ) .the concentration of labelled molecules was .malonic acid also contains abundant spin- nuclei , which comprise the heat - bath .figure [ fig : fig1 ] shows the - decoupled , - nmr spectrum for the crystal ( and crystal orientation ) used in this work .the spectrum shows the nmr absorption peaks of both the qubit spins ( quartets ) and natural abundance spins ( singlets ) , the latter being inconsequential for qip purposes . the table in fig .[ fig : fig1 ] lists the parameters of the ensemble qubit hamiltonian obtained from fitting the spectrum , and also includes couplings involving the methylene protons calculated for this crystal orientation from the known crystal structure .experiments were performed at room temperature at a static magnetic field strength of t , where the thermal polarization is .+ - malonic acid spin system . ( below ) - decoupled , spectrum near the [ 010 ] orientation with respect to the static magnetic field .the blue - dashed line is the experimental nmr absorption spectrum , and the solid - red line is a fit .multiplet assignments are indicated by the labels , and .the central peaks in each multiplet correspond to natural abundance in the sample , which are inconsequential for qip purposes .the peak height differences in the - molecule peaks indicate the strong coupling regime , i.e. 
the - intramolecular dipolar couplings are significant compared to the relative chemical shifts .( above ) table showing the rotating - frame hamiltonian parameters ( chemical shifts along diagonal ; dipolar coupling strengths off - diagonal ; all values in khz ) obtained from the spectral fit .it also includes calculated dipolar couplings involving the methylene protons based on the atomic coordinates and the crystal orientation obtained from the spectral fit .[ fig : fig1],height=359 ]in this orientation , the methylene carbon has a dipolar coupling of khz to of the methylene pair , whereas no other - dipolar coupling in the system is larger than khz .therefore , a spin - exchange hamiltonian of the form + that couples the two nuclear species will generate dynamics dominated by the large - coupling at short times ( the are - dipolar couplings , indices run over , nuclei , respectively , and is the -axis pauli operator for spin ) . starting from the natural coupling hamiltonian , + we applied a multiple - pulse time - suspension sequence synchronously to both and spins to create the effective spin - exchange hamiltonian ( in the toggling frame ) , to lowest order in the magnus expansion of the average hamiltonian .application of the sequence for the - exchange period results in an approximate swap gate ( state exchange ) between the and spins . with an initial bulk polarization , this procedure yields a selective dynamic transfer of polarization to , where and ideally .we define the effective spin - bath temperature to be that which corresponds to the experimentally obtained under this procedure , and refer to this transfer as a refresh operation .we obtained experimentally , and found that repeated refresh operations showed no loss in efficiency given at least a ms delay for - equilibration .however , we observed a decay of as a function of the number of repetitions , due to accumulated control errors , which lead to an identical loss in the refresh polarization .the experiment consists of the first six operations of the partner - pairing algorithm ( ppa ) on three qubits : three refresh operations , and three permutation gates that operate on the qubit register .this is described in the quantum circuit diagram of fig .[ fig : fig2 ] . during the register operations ,the polarization is first rotated into the transverse plane , and then spin - locked by a strong , phase - matched rf field that both preserves the bulk polarization and decouples the - dipolar interactions . since - dipolar interactions are merely scaled by a factor under spin - locking , is allowed to equilibrate with the bulk nuclei via spin diffusion .this occurs on a timescale longer than the transverse dephasing time ( ) , but much shorter than the spin - lattice relaxation time ( ) of .hence , plays the role of the fast - relaxing qubit described in the protocol of schulman et al . .the first two register operations are swap gates ; the third is a three - bit compression ( ) gate that boosts the polarization of the first qubit , , at the expense of the polarizations of the other two qubits . ideally , the protocol builds a uniform polarization on all three qubits corresponding to the bath polarization ( first five steps ) , then selectively transfers as much entropy as possible from the first qubit to the other two ( last step ) . 
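As a toy illustration of the refresh operation described earlier in this section, the following two-spin simulation evolves a 13C-1H pair under an idealised flip-flop (zero-quantum) exchange Hamiltonian and checks that the 1H z-polarization is transferred to 13C after a quarter period. The convention H = (d/2)(XX + YY) and the coupling value are our own choices for illustration; the experiment instead engineers this interaction as the leading term of an average Hamiltonian via the multiple-pulse sequence. The toy also ignores chemical shifts and all other couplings.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

d = 2 * np.pi * 8.0e3                            # assumed C-H coupling (rad/s), illustrative only
H = 0.5 * d * (np.kron(X, X) + np.kron(Y, Y))    # flip-flop exchange: C (left factor), H (right factor)

rho = np.kron(I2, Z)                             # deviation density matrix: polarization on 1H only
U = expm(-1j * H * np.pi / (2 * d))              # quarter period: ideal SWAP of the Zeeman order
rho = U @ rho @ U.conj().T

pol_C = np.trace(rho @ np.kron(Z, I2)).real / 4  # -> 1: polarization now resides on 13C
pol_H = np.trace(rho @ np.kron(I2, Z)).real / 4  # -> 0
```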
the last step ( ) leads to a polarization boost by a factor of on the first qubit .subsequently , the heated qubits can be re - cooled to the spin - bath temperature , and the compression step repeated , iteratively , until the asymptotic value of the first - bit polarization is reached .this limiting polarization depends only on the number of qubits and the bath polarization , and is ideally for three qubits( for qubits it is in the regime , and in the regime ( refs .the first six steps carried out here should yield a polarization of on , assuming ideal operations .+ bc ) gate is shown here decomposed as control - not gates and a control - control - not ( toffoli ) gate .the gate sequence corresponds to the first six steps of the partner - pairing algorithm on three qubits .the input state is a collective polarization of the bulk spins .the refresh operation is approximately in duration , whereas the register operations are between and ms in duration .thermal contact takes place during spin - locking pulses that begin just prior to the register operations , and extend an additional ms after each operation . can be thought of as an additional special purpose qubit in this experiment ; despite non - selective control ( due to bulk hydrogenation ) , the refresh and thermal contact operations could be performed using collective control .thus , serves as a fast - relaxing qubit and the bulk - bath as a large heat - capacity thermal bath .[ fig : fig2],height=302 ] the control operations performed herein are quantum control operations : state - independent unitary rotations in the hilbert space .however , it should be noted that the hbac gates are all permutations that map computational basis states to other computational basis states .therefore , gate fidelities were measured with respect to correlation with these known states , rather than the manifold of generic quantum states .we took advantage of this property to further optimize the control parameters of the gates ( register operations ) for the state - specific transformations of the protocol .these operations were carried out using numerically optimized control sequences referred to as strongly - modulating pulses .such pulses drive the system strongly at all times , such that the average rf amplitude is comparable to , or greater than , the magnitude of the internal hamiltonian .this allows inhomogeneities in the ensemble qubit hamiltonian to be efficiently refocused , so that ensemble coherence is better maintained throughout the gate operations .+ in this set of experiments , the qubit spins are initialized to infinite temperature ( a preceding broadband excitation pulse is followed by a dephasing period in which dipolar fields effectively dephase the polarization ) . following the fifth step , polarizations ( in units of ) of , and ( )are built up on , and , respectively .the final operation yields , a boost of compared to the average polarization ( ) following step five . despite control imperfections that effectively heat the qubits at each step, we are able to cool the qubit ensemble well below the effective spin - bath temperature . 
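The ideal bookkeeping behind these numbers can be reproduced with a few lines of classical probability, since each compression is a permutation of the populations of a diagonal (product) state. The sketch below recovers the single-round boost of 3/2 and then iterates refresh-plus-compression; the bath polarisation value is a placeholder, and the quoted asymptote is simply the fixed point of this idealised iteration rather than a measured quantity.

```python
import numpy as np
from itertools import product

def ppa_compress(biases):
    """Ideal 3-qubit compression: sort the eight populations of the diagonal
    product state and return the best achievable bias on the target qubit."""
    probs = []
    for bits in product((0, 1), repeat=3):
        p = 1.0
        for b, eps in zip(bits, biases):
            p *= (1 + eps) / 2 if b == 0 else (1 - eps) / 2
        probs.append(p)
    probs.sort(reverse=True)
    return 2 * sum(probs[:4]) - 1          # target bias after the sorting permutation

eps_bath = 0.05                            # placeholder bath polarisation
eps1 = ppa_compress([eps_bath] * 3)        # steps 1-6: refresh/SWAPs then one compression
print(eps1 / eps_bath)                     # ~1.5, the ideal single-round boost

for _ in range(20):                        # keep re-cooling qubits 2 and 3, then compressing
    eps1 = ppa_compress([eps1, eps_bath, eps_bath])
print(eps1, 2 * eps_bath / (1 + eps_bath**2))   # approaches ~2 * eps_bath for small eps_bath
```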
+the results are summarized in fig .[ fig : fig3 ] ; in ( a ) are shown the spectral intensities corresponding to spin polarizations following each of the six steps , and in ( b ) the integrated intensities are graphed in comparison with the ideal values .we note that the overall fidelity of the experiment , , implies an error per step of .this error rate is only about a factor of two larger than the average error per two - qubit gate obtained in a benchmark liquid - state nmr qip experiment .furthermore , the state - correlation fidelity of the gate over the polarizations on all three qubits is . from fig .[ fig : fig3](b ) , it can be seen that the fidelity of the refresh operation drops off roughly quadratically in the number of steps ; this is consistent with the loss of bulk polarization due to pulse imperfections both in the multiple - pulse refresh operations and in the spin - locking sequence .since the broadband pulses have been optimized for flip - angle in these sequences , we suspect that the remaining errors are mainly due to switching transients that occur in the tuned rf circuitry of the nmr probehead , and to a lesser extent off - resonance and finite pulse - width effects that modify the average hamiltonian .similar effects lead to imperfect fidelity of the control . with suitable improvements to the resonant circuit response and by incorporating numerical optimization of the multiple - pulse refresh operations, we expect that several iterations of the protocol could be carried out and that the limiting polarization of could be approached in this system .the same methodologies should also be applicable in larger qubit systems with similar architecture .for a -qubit system using the ppa , a bath polarization would be sufficient , in principle , to reach a pure state on one qubit .such bulk nuclear polarizations are well within reach via well - known dynamic nuclear polarization techniques ; for example , unpaired electron spins at defects ( ) in a field of t and at temperature k are polarized to . 
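As a rough sanity check on that last point, the equilibrium polarisation of an ensemble of spin-1/2 particles is p = tanh(h nu / (2 k_B T)) with nu = (gamma / 2 pi) B. The snippet below evaluates it for electron and 1H spins at field and temperature values chosen by us purely for illustration; the specific numbers quoted above are elided in this copy and are not reproduced here.

```python
import numpy as np

h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K

def spin_half_polarization(gamma_hz_per_T, B, T):
    """Thermal (Boltzmann) polarization of a spin-1/2: tanh(h*nu / (2*kB*T))."""
    return np.tanh(h * gamma_hz_per_T * B / (2 * kB * T))

# illustrative values: 7 T field, 4 K for the electron, room temperature for 1H
print(spin_half_polarization(28.0e9, 7.0, 4.0))      # electron: ~0.8
print(spin_half_polarization(42.58e6, 7.0, 293.0))   # 1H: ~2.5e-5
```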
+this work demonstrates that solid - state nmr qip devices could be used to implement active error correction .given a bath polarization near unity , the refresh operation implemented here would constitute the dynamic resetting of a chosen qubit .this would allow a new nmr - based testbed for the ideas of quantum error correction and for controlled open - system quantum dynamics in the regime of high state purity and up to qubits .+ c spectra and their integrated intensities .( a ) readout spectra obtained following each of the six steps in the protocol .the integrated peak intensities for each multiplet correspond to the ensemble spin polarizations .the natural abundance signal that appears at each refresh step ( adding to the intensity of the central peaks ) should be ignored ; we are only interested in the part of the signal arising from the - qubit molecules , which can be seen clearly in the and spectral regions .( b ) bars indicate ideal qubit polarizations at each step ; experimental values obtained from integration of the above spectra are shown as shaded bands , whose thickness indicates experimental uncertainty.[fig : fig3],height=377 ]nmr experiments were carried out at room temperature on a bruker avance solid - state spectrometer operating at a field of t , and home - built dual channel rf probe .the sample coil had an inner diameter of mm , and the employed broadband pulse lengths were and for and , respectively .the sample was a mm single crystal of malonic acid grown from aqueous solution with a molecular fraction of labelled molecules .spectra were obtained by signal averaging for scans .the proton spin - lattice relaxation time was , so the delay between scans was set to .design of the strongly - modulating pulses followed very closely the methodology described in ref . , and penalty functions were adjusted to favor average rf amplitudes comparable to or greater than the magnitude of the rotating - frame hamiltonian .these pulses were optimized and simulated over a -point distribution of rf amplitude corresponding to the measured distribution over the spin ensemble ( in rf amplitude ) .the time - suspension sequence applied synchronously to and was a -pulse subsequence of the cory -pulse sequence .the delays between pulses were adjusted so that the total length of the sequence was . - locking / decoupling was carried out at rf amplitude of khz .the spectra in fig .[ fig : fig3 ] were obtained by applying a broadband pulse to read out the spin polarizations .the readout pulses were preceded by a ms delay in which decoupling was on but any off - diagonal terms in the spin density matrix would significantly dephase ( ) .the absolute value of the refresh polarization was determined by comparing the initial refresh polarization on and the thermal equilibrium polarization measured in a separate experiment .these yield the ratio of to , and to using the fact that .
The counter-intuitive properties of quantum mechanics have the potential to revolutionize information processing by enabling efficient algorithms with no known classical counterparts. Harnessing this power requires developing a set of building blocks, one of which is a method to initialize the set of quantum bits (qubits) to a known state. Additionally, fresh ancillary qubits must be available during the course of computation to achieve fault tolerance. In any physical system used to implement quantum computation, one must therefore be able to selectively and dynamically remove entropy from the part of the system that is to be mapped to qubits. One such method is an ``open-system'' cooling protocol in which a subset of qubits can be brought into contact with an external large heat-capacity system. Theoretical efforts have led to an implementation-independent cooling procedure, namely heat-bath algorithmic cooling (HBAC). These efforts culminated in the proposal of an optimal algorithm, the partner-pairing algorithm (PPA), which was used to compute the physical limits of HBAC. We report here the first experimental realization of multi-step cooling of a quantum system via HBAC. The experiment was carried out using nuclear magnetic resonance (NMR) of a solid-state ensemble three-qubit system. It demonstrates the repeated repolarization of a particular qubit to an effective spin-bath temperature and alternating logical operations within the three-qubit subspace to ultimately cool a second qubit below this temperature. Demonstration of the control necessary for these operations is an important milestone in the control of solid-state NMR qubits and toward fault-tolerant quantum computing.
reciprocity evaluates the tendency of vertex pairs to form mutual connections between each other and is an important object to study in complex networks , such as email networks , see e.g. newman et al . , world wide web , see e.g. albert et al . , world trade web , see e.g. gleditsch , social networks , see e.g. wasserman and faust , and cellular networks , see e.g. jeong et al . . in networks that aggregate temporal information, reciprocity provides a measure of the simplest feed - back process occurring on the network , i.e. , the tendency of one stimulus , a vertex , to respond to another stimulus , another vertex .reciprocity is important because most complex networks are directed and it is the main quantity characterizing feasible dyadic patterns , namely possible types of connections between two nodes .one example is the email network . just because user b s email address appears in user a s address bookdoes not necessarily mean that the reverse is also true , although it often is , see e.g. newman et al .another example is the social network .reciprocity captures a basic way in which different forms of interaction take place on a social network like twitter .when two users a and b interact as peers , one expects that messages will be exchanged between them in both directions .however , if user a sends messages to user b , who is a celebrity or news source , it is likely that user b will not send messages in return , see e.g. cheng et al .therefore , it is not enough to just understand the _ edge density _ of a directed network , the _ reciprocal density _ needs to be studied as well . in garlaschelli and loffredo , it was discovered that detecting nontrivial patterns of reciprocity can reveal mechanisms and organizing principles that help explain the topology of the observed network .they also proposed a measure of reciprocity and studied how strong it is for different complex networks , and found that reciprocity is strongest in the world trade web .people often treat complex networks as undirected for simplicity , and reciprocity can help quantify the information loss induced by projecting a directed network into an undirected one . using the knowledge of reciprocity , significant directed information can be retrieved from an undirected projection , and the error introduced when a directed network is treated as undirected may be estimated , see e.g. garlaschelli and loffredo .directed networks consisting of nodes can be modeled by directed graphs on vertices , where a graph is represented by a matrix with each . here , means there is a directed edge from vertex to vertex ; otherwise , .we assume that so that there are no self - loops .give the set of such graphs the probability ,\ ] ] where and are parameters , and is the appropriate normalization . note that and , defined in , respectively represent the directed _ edge density _ and the _ reciprocal density_. in the literature , and are sometimes referred to as the _ single edge _ and the _reciprocal edge_. this belongs to the class of exponential random graph models called models of holland and leinhardt .further extensions include models , see e.g. lazega and van duijn and van duijn et al .more general types of exponential models have also been introduced and studied .see besag , newman , rinaldo et al . , robins et al . , snijders et al . 
, wasserman and faust , and fienberg for history and a review of developments .the exponential random graph models have popular counterparts in statistical physics : a hierarchy of models ranging from the _ grand canonical ensemble _ , the _ canonical ensemble _ , to the _ microcanonical ensemble _ , with particle density and energy density in place of and , and temperature and chemical potential in place of and . in the grand canonical ensemble, the reciprocal model ( [ 1 ] ) in this case , no prior knowledge of the graph is assumed . in the canonical ensemble ,partial information of the graph is given .for instance , the edge density of the graph is close to or the reciprocal density is close to . in the microcanonical ensemble , completeinformation of the graph is observed beforehand , say in the reciprocal model , both the edge density and the reciprocal density are specified .it is well - known that models in this hierarchy have a very simple relationship involving legendre transforms and , more importantly , the _ free energy density _ ( of the grand canonical ensemble ) , the _ conditional free energy density _ ( of the canonical ensemble ) , and the _ entropy density _ ( of the microcanonical ensemble ) encode important information of a random graph drawn from the model .see illustration below . as one goes down the hierarchy ,the model is understood from varying perspectives : the free energy and conditional free energy densities characterize the macroscopic and mesoscopic configurations of the system respectively , while the entropy density describes the degree to which the probability of the system is spread out over different possible microstates .various objects of interest are obtained by differentiating these densities with respect to appropriate parameters and phases are determined by analyzing the singularities of the derivatives . in particular , they serve as a measure of how close the system is to equilibrium , namely perfect internal disorder , and their monotonicity sheds light on the relative likelihood of each configuration following the philosophy that the higher the entropy the greater the disorder . since real - world networksare often very large in size , the infinite - size asymptotics of these quantities have received exponentially growing attention in recent years . see e.g. aristoff and zhu , chatterjee and dembo , chatterjee and diaconis , kenyon et al . , kenyon and yin , lubetzky and zhao , radin and sadun , radin et al . , radin and yin , yin , yin et al . , and zhu .it may be worth pointing out that most of these papers utilize the theory of graph limits as developed by lovsz and coworkers . .1trueinthe hierarchy .1truein the rest of this paper is organized as follows . in section [ microcanonical ] we derive the exact expression for the normalization constant ( partition function ) of the reciprocal model ( the grand canonical ensemble ) and analyze the asymptotic features of its associated microcanonical ensemble .our main results are : an exact expression for the limiting entropy density ( theorem [ exact ] ) and some discussion on its monotonicity ( remark [ mono1 ] ) . 
in section [ canonical ] we investigate the asymptotic features of two canonical ensembles associated with the reciprocal model , one conditional on the edge density and the other conditional on the reciprocal density .our main results are : exact expressions for the two limiting conditional free energy densities ( theorem [ exact2 ] ) and some discussion on their monotonicity ( remark [ mono23 ] ) . in section [ grand ]we take another look at the reciprocal model and examine its asymptotic features .our main results are : a joint central limit theorem describing convergence of the edge density and the reciprocal density ( proposition [ clt ] ) , exact scalings for the limiting normalization constant ( theorem [ scaling ] ) and the mean of the limiting probability distribution in the sparse regime ( proposition [ sparsemean ] ) .lastly , in section [ discussion ] we extend our analysis to more general reciprocal models whose sufficient statistics , besides single edge and reciprocal edge , also include reciprocal -star and reciprocal triangle .large deviations techniques are used throughout this paper .we refer the readers to the works of chatterjee and diaconis and chatterjee and varadhan for more details of this framework .after extracting the exponential factor in the reciprocal model ( [ 1 ] ) , each possible configuration of the directed graph is weighted equally .this amounts to taking as iid bernoulli random variables having values and each with probability . denote the associated probability measure and the associated expectation by and respectively .define and by letting go to zero , we are interested in the limit the quantity in will be called the _ limiting entropy density_. via the theory of large deviations , it is directly connected to the _ limiting free energy density _ [ lyapunov ] recall that by assumption , .thus this implies that \\ & = 2^{n(n-1)}\mathbb{e}_n\left[e^{\beta_{1}\sum_{1\leq i < j\leq n}(x_{ij}+x_{ji } ) + 2\beta_{2}\sum_{1\leq i < j\leq n}x_{ij}x_{ji}}\right ] \nonumber \\ & = 2^{n(n-1)}\prod_{1\leq i< j\leq n}\mathbb{e}_n\left[e^{\beta_{1}(x_{ij}+x_{ji } ) + 2\beta_{2}x_{ij}x_{ji}}\right ] \nonumber \\ & = \left(1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}\right)^{\binom{n}{2}}. \nonumber\end{aligned}\ ] ] hence we draw the conclusion . [ cor ] from the proof of theorem [ lyapunov ] , \\=\frac{1}{2}\log\left(\frac{1}{4}+\frac{1}{2}e^{\beta_{1}}+\frac{1}{4}e^{2\beta_{1}+2\beta_{2}}\right),\end{gathered}\ ] ] which up to a constant is essentially ( [ chi ] ) , and is finite for any and differentiable in both and .the result then follows from grtner - ellis theorem in large deviations theory , see e.g. dembo and zeitouni , which states that the entropy may be obtained as the legendre transform of the free energy .\(i ) note that and + , which implies that if ] .\(ii ) note that , which implies that if .\(iii ) note that which implies that if .[ exact ] for ] , it is easy to see that the supremum in ( [ sup ] ) can not be obtained at , and must attain its extremum at finite . at optimality , dividing by , we get substitute this back into , which implies that the conclusion thus follows .it is straightforward to compute that this is consistent with the law of large numbers and the maximal entropy principle . along the erds - rnyi curve , , is the entropy of a bernoulli random variable and is minus the rate function of the large deviations for the edge density .obtained from theorem [ exact ] . 
on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono1 ] . in region , is decreasing in both and ; in region , is increasing in and decreasing in ; in region , is increasing in both and ; in region , is decreasing in and increasing in .the boundaries are given by and .,width=228,height=171 ] obtained from theorem [ exact ] .on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono1 ] . in region , is decreasing in both and ; in region , is increasing in and decreasing in ; in region , is increasing in both and ; in region , is decreasing in and increasing in . the boundariesare given by and .,width=228,height=171 ] [ mono1 ] let us analyze the monotonicity of the limiting entropy density .on one hand , which implies that if and only if , which is equivalent to . on the other hand , which implies that if and only if , which is equivalent to , i.e. , is increasing in below the erds - rnyi curve and decreasing in above the erds - rnyi curve .( see for a similar phenomenon across the erds - rnyi curve in the ( undirected ) edge - triangle model . )as in aristoff and zhu , kenyon and yin and zhu , we are interested in the asymptotic features of constrained models .the probability measure is given by 1_{|e(x)-\epsilon|<\delta}\ ] ] if conditional on the edge density , and by 1_{|r(x)-r|<\delta}\ ] ] if conditional on the reciprocal density , where , \\ & \psi_{n,\delta}(\beta_{1},r ) = \frac{1}{n^{2}}\log \mathbb e_n\left [ \exp\left(n^2\beta_{1}e(x)\right)1_{|r(x)-r|<\delta}\right ] .\nonumber\end{aligned}\ ] ] we shrink the interval around ( or ) by letting go to zero : the quantities in ( [ conditional ] ) will be called the _ limiting conditional free energy densities_. [ exact2 ] for any , , where for any , , by using varadhan s lemma , see e.g. dembo and zeitouni , by , the optimal in satisfies which is equivalent to when , has one solution . when , since has two solutions when , one solution of is positive and the other is negative .we check that and and thus the optimal . when , both solutions of are positive .we check that and and thus the optimal .this is indeed the optimizer following the mean value theorem , since and + . by , the optimal in satisfies which has one solution is indeed the optimizer following the mean value theorem , since and .obtained from theorem [ exact2 ] . on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono23 ] . is always increasing in . in region , is increasing in ; in region , is decreasing in . the boundary is specified in remark [ mono23].,width=228,height=171 ] obtained from theorem [ exact2 ] . on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono23 ] . is always increasing in . in region , is increasing in ; in region , is decreasing in . the boundary is specified in remark [ mono23].,width=228,height=171 ] [ mono23 ] let us analyze the monotonicity of the two limiting conditional free energy densities .we have \frac{\partial r^*}{\partial \beta_2}=r^ * , \\ \frac{\partial\psi(\beta_{1},r)}{\partial\beta_{1}}=\epsilon^{\ast}+\left[\beta_1+\frac{\partial \lambda(\epsilon^*,r)}{\partial \epsilon^*}\right]\frac{\partial \epsilon^*}{\partial \beta_1}=\epsilon^*. 
\nonumber\end{aligned}\ ] ] therefore and are increasing in and respectively .moreover , we have \frac{\partial r^{\ast}}{\partial\epsilon } + \frac{\partial\lambda(\epsilon , r^{\ast})}{\partial\epsilon } \\ & = -\log\left(\frac{\epsilon - r^{\ast}}{1+r^{\ast}-2\epsilon}\right ) .\nonumber\end{aligned}\ ] ] therefore is increasing in if and only if .this is equivalent to when ; while for , this is equivalent to which can be simplified to and can be further simplified to or alternatively similarly , \frac{\partial\epsilon^{\ast}}{\partial r } + \frac{\partial\lambda(\epsilon^{\ast},r)}{\partial r } \\ & = -\frac{1}{2}\log\left(\frac{r(1+r-2\epsilon^{\ast})}{(\epsilon^{\ast}-r)^{2}}\right ) .\nonumber\end{aligned}\ ] ] therefore is increasing in if and only if .this is equivalent to or alternatively obtained from theorem [ exact2 ] .on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono23 ] . is always increasing in . in region , is decreasing in ; in region , is increasing in . the boundary is specified in remark [ mono23].,width=228,height=171 ] obtained from theorem [ exact2 ] . on the right hand side , we specify the regions of monotonicity as obtained in remark [ mono23 ] . is always increasing in . in region , is decreasing in ; in region , is increasing in . the boundary is specified in remark [ mono23].,width=228,height=171 ]a crucial observation on the reciprocal model is that the probability measure ( [ 1 ] ) may be alternatively written as and is equivalent to an erds - rnyi type measure which assigns the following joint distribution iid for every pair with : this tractable feature of the model has been partially used in earlier sections of the paper where we study the microcanonical and canonical ensembles . in this sectionwe will explore further the consequences of the iid structure on the grand canonical ensemble .as seen in corollary [ cor ] , the entropy and the free energy are related by the legendre transform .an explicit connection between and is given in theorem [ exact ] ( see ( [ eqni ] ) and ( [ eqnii ] ) ) .the next proposition , which easily follows from ( [ alt ] ) and ( [ observe ] ) , calculates the mean of the edge and reciprocal densities when the parameters are fixed .[ mean ] for any , in the reciprocal model , the number of directed edges from a given vertex is binomial with parameter given by ( [ binom1 ] ) and the number of reciprocal edges from a given vertex is binomial with parameter given by ( [ binom2 ] ) .this leads to a host of results in large deviations theory .for example , where the rate function \right\}\\ & = \epsilon\log \frac{\epsilon}{e^{\beta_1}+e^{2\beta_1 + 2\beta_2}}+(1-\epsilon)\log \frac{1-\epsilon}{1+e^{\beta_1}}+\log \left(1 + 2e^{\beta_1}+e^{2\beta_1 + 2\beta_2}\right ) .\nonumber\end{aligned}\ ] ] note that when , ( [ crate ] ) reduces to the rate function under the uniform measure , coinciding with ( [ rate ] ) .we can further study the fluctuations of the edge and reciprocal densities around their mean , i.e. 
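before moving on to the grand canonical ensemble, here is a small numerical companion to the model and to the closed forms sketched above; since most displayed formulas lost their symbols in extraction, the sketch rests on explicit assumptions: (i) the measure factorizes over unordered pairs {i,j}, each pair taking the states (0,0), (1,0), (0,1), (1,1) with weights 1, e^{beta1}, e^{beta1}, e^{2 beta1 + 2 beta2}, as in the partition-function computation; (ii) the edge and reciprocal densities are normalized per ordered pair; (iii) the limiting free energy density is psi(beta1, beta2) = (1/2) log[(1 + 2 e^{beta1} + e^{2 beta1 + 2 beta2})/4], with the limiting entropy density obtained from it by the legendre transform s(eps, r) = inf over (beta1, beta2) of {psi - beta1*eps - beta2*r}.

```python
import numpy as np

def sample_reciprocal_model(n, beta1, beta2, rng):
    """Directed graph sample; the measure factorizes over unordered vertex pairs,
    each pair taking states (0,0), (1,0), (0,1), (1,1) with weights
    1, e^b1, e^b1, e^(2*b1 + 2*b2) (assumed normalization)."""
    w = np.array([1.0, np.exp(beta1), np.exp(beta1), np.exp(2 * beta1 + 2 * beta2)])
    x = np.zeros((n, n), dtype=int)
    iu, ju = np.triu_indices(n, k=1)
    s = rng.choice(4, size=len(iu), p=w / w.sum())
    x[iu, ju] = ((s == 1) | (s == 3)).astype(int)
    x[ju, iu] = ((s == 2) | (s == 3)).astype(int)
    return x

def densities(x):
    """Edge and reciprocal densities, normalized per ordered pair (assumed convention)."""
    n = x.shape[0]
    return x.sum() / (n * (n - 1)), (x * x.T).sum() / (n * (n - 1))

def psi(beta1, beta2):
    """Assumed closed form of the limiting free energy density."""
    return 0.5 * np.log((1.0 + 2.0 * np.exp(beta1) + np.exp(2 * beta1 + 2 * beta2)) / 4.0)

def entropy_density(eps, r, grid=np.linspace(-8.0, 8.0, 801)):
    """Brute-force Legendre transform: s(eps, r) = inf { psi - beta1*eps - beta2*r }."""
    b1, b2 = np.meshgrid(grid, grid)
    return float(np.min(psi(b1, b2) - b1 * eps - b2 * r))

beta1, beta2 = -1.0, 0.5
rng = np.random.default_rng(1)
a, b = np.exp(beta1), np.exp(2 * beta1 + 2 * beta2)
W = 1.0 + 2.0 * a + b
e_sim, r_sim = densities(sample_reciprocal_model(500, beta1, beta2, rng))
print("edge density:       sim %.4f  limit %.4f" % (e_sim, (a + b) / W))
print("reciprocal density: sim %.4f  limit %.4f" % (r_sim, b / W))
print("free energy density at the uniform measure:", psi(0.0, 0.0))        # = 0
print("entropy density on the Erdos-Renyi curve:", entropy_density(0.25, 0.0625))
```

under these conventions the entropy density is nonpositive and attains its maximum, zero, at the uniform edge and reciprocal densities (eps, r) = (1/2, 1/4), consistent with the maximal entropy principle invoked earlier.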
, the central limit theorem .[ clt ] under the grand canonical measure ( [ 1 ] ) , in distribution as , where where note that when , proposition [ clt ] gives the central limit theorem for and under the uniform measure and we have the drift term in proposition [ clt ] is due to the definition of and in .if one defines and as instead , then proposition [ clt ] will hold with minor modifications : though definitions ( [ es ] ) and lead to a difference of the drift term in the central limit theorem , they are indistinguishable as regards the limiting entropy and free energy densities . for any , \\ & = \frac{\mathbb{e}_n\left[e^{n^2\left((\frac{\theta_{1}}{n}+\beta_{1})e(x)+(\frac{\theta_{2}}{n}+\beta_{2})r(x)\right)}\right ] e^{-\frac{e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } \theta_{1}n -\frac{e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\theta_{2}n } } { \mathbb{e}_n\left[e^{n^2\left(\beta_1e(x)+\beta_2r(x)\right)}\right ] } \nonumber \\ & = \frac{\left(1 + 2e^{\frac{\theta_{1}}{n}+\beta_{1}}+e^{\frac{2\theta_{1}}{n}+\frac{2\theta_{2}}{n}+2\beta_{1}+2\beta_{2}}\right)^{\frac{n(n-1)}{2 } } e^{-\frac{e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } \theta_{1}n -\frac{e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\theta_{2}n } } { \left(1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}\right)^{\frac{n(n-1)}{2 } } } \nonumber \\ & = \left(1+\frac{a}{n}+\frac{b}{n^{2}}+o(n^{-3})\right)^{\frac{n(n-1)}{2 } } e^{-\frac{e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } \theta_{1}n -\frac{e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\theta_{2}n } , \nonumber\end{aligned}\ ] ] where since we have this implies that \\ & \rightarrow\exp\bigg\{-\frac{e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } \theta_{1 } -\frac{e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\theta_{2 } \nonumber \\ & \qquad\qquad + \frac{1}{2}\left(\frac{e^{\beta_{1}}+2e^{2\beta_{1}+2\beta_{2 } } } { 1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } -2\left(\frac{e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\right)^{2}\right)\theta_{1}^{2 } \nonumber \\ & \qquad\qquad + \left(\frac{2e^{2\beta_{1}+2\beta_{2 } } } { 1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } -\frac{2e^{2\beta_{1}+2\beta_{2}}(e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}})}{\left(1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}\right)^{2}}\right)\theta_{1}\theta_{2 } \nonumber \\ & \qquad\qquad + \frac{1}{2}\left(\frac{2e^{2\beta_{1}+2\beta_{2 } } } { 1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2 } } } -2\left(\frac{e^{2\beta_{1}+2\beta_{2}}}{1 + 2e^{\beta_{1}}+e^{2\beta_{1}+2\beta_{2}}}\right)^{2}\right)\theta_{2}^{2 } \bigg\ } \nonumber\end{aligned}\ ] ] as .since convergence of moment generating functions implies convergence in distribution , the proof is complete .similar to the analysis in yin and zhu , we can also study directed graphs where the parameters depend on the number of vertices .assume that and , where and are fixed , is positive and as . with some abuse of notation, we will still denote the associated normalization constant and probability measure by and respectively .from the proof of theorem [ lyapunov ] , which yields the following asymptotics for the normalization . 
[ scaling ] ( i )when and , .\(ii ) when and , .\(iii ) when and , .\(iv ) when and , .\(v ) when and , .\(vi ) when and , .\(vii ) when and , .\(viii ) when and , .since many networks data are sparse in the real world , we are more interested in the situation where a random graph sampled from this modified model is sparse , i.e. , the probability that there is an edge between vertex and vertex goes to as .one natural question to ask is for what set of parameters will this happen ? anda natural follow - up question is what is the speed of the graph towards sparsity when this indeed happens ?we give some concrete answers to these questions .[ sparsemean ] for any , \(i ) when , .\(ii ) when , .\(iii ) when , . from proposition[ mean ] , as only when and .the rest of the proof easily follows .we have studied directed graphs whose sufficient statistics are given by edge and reciprocal densities .now let us generalize these ideas and analyze directed graphs whose sufficient statistics also include densities of _ reciprocal -stars _ and _ reciprocal triangles_. reciprocal triangles are sometimes called _cyclic triads _ in the literature , see e.g. robins et al . . they are used to model the situation where you have three vertices , and and there are bilateral relations between and , and , and and , i.e. , .similarly , reciprocal -stars have generated significant interest as well .we define the densities of reciprocal triangles and reciprocal -stars respectively as and as for the less complicated reciprocal model investigated earlier , we are interested in the _ limiting free energy density _ \nonumber\end{aligned}\ ] ] for the grand canonical ensemble and the _ limiting entropy density _ for the microcanonical ensemble , where .in contrast to the reciprocal model discussed in sections [ microcanonical]-[grand ] , in this generalized model , different pairs of vertices are no longer independent , and this sophistication renders a concrete analysis of the model rather difficult .we hope the partial answers presented in this section will provide insight into its intrinsic structure and help us better understand the nature of reciprocity .previously , we derive the limiting free energy density ( [ chi ] ) and then obtain the limiting entropy density ( [ sup ] ) using the legendre transform . we take an opposite approach here .below we find an expression for the limiting entropy density ( [ microlimit ] ) and then apply the inverse legendre transform to develop an expression for the limiting free energy density ( [ macrolimit ] ) .we examine the limiting entropy density first .a key observation is that we can define so that are iid random variables with and . then densities of reciprocal edges , reciprocal triangles , and reciprocal -stars may be alternatively written as using chatterjee and varadhan s large deviations results for the erds - rnyi random graph , see e.g. 
chatterjee and varadhan , this gives ^{2}\rightarrow[0,1],g(x , y)=g(y , x ) \\r(g)=r , t(g)=t , s(g)=s}}\frac{1}{2}i_{\frac{1}{4}}(g ) , \nonumber\end{aligned}\ ] ] where ^{2}}g(x , y)dxdy , \\ & t(g):=\iiint_{[0,1]^{3}}g(x , y)g(y , z)g(z , x)dxdydz , \nonumber \\ & s(g):=\int_{0}^{1}\left(\int_{0}^{1}g(x , y)dy\right)^{p}dx , \nonumber\end{aligned}\ ] ] and ^{2}}i_{\frac{1}{4}}(g(x , y))dxdy ] and .this shows that may be equivalently viewed as the limiting free energy density of an undirected model whose sufficient statistics are given by ( undirected ) edge , triangle , and -star densities .the parameters allow one to adjust the influence of these different local features on the limiting probability distribution and thus expectedly should impact the global structure of a random graph drawn from the model .it is therefore important to understand if and when the supremum in ( [ formulatwo ] ) is attained and whether it is unique .many people have delved into this area .a particularly significant discovery was made by chatterjee and diaconis , who showed that the supremum in ( [ formulatwo ] ) is always attained and a random graph drawn from the model must lie close to the maximizing set with probability vanishing in .when , yin further showed that the -parameter space would consist of a single phase with first - order phase transition(s ) across one ( or more ) surfaces , where all the first derivatives of exhibit ( jump ) discontinuities , and second - order phase transition(s ) along one ( or more ) critical curves , where all the second derivatives of diverge .the second special situation is when , ^{2}\rightarrow[0,1 ] \\g(x , y)=g(y , x)}}\left\{\eta(\beta_{1},r(g))+\beta_{2}r(g)+\beta_{4}s(g)-\frac{1}{2}\left(i_{\frac{1}{4}}(g)-2\log 2\right)\right\}. \nonumber\end{aligned}\ ] ] we can derive the euler - lagrange equation for this variational problem , and it is given by where .solving for and then integrating over , we get following similar arguments as in kenyon et al . , we conclude that can take only finitely many values .the optimal graphon is multipodal and phase transitions are expected .the authors are very grateful to the anonymous referees for their invaluable suggestions that greatly improved the quality of this paper .mei yin s research was partially supported by nsf grant dms-1308333 .she appreciated the opportunity to talk about this work in the special session on spectral theory , disorder , and quantum many body physics at the 2015 ams central spring sectional meeting , organized by peter d. hislop and jeffrey schenker .lazega , e. and m. a. j. van duijn .position in formal structure , personal characteristics and choices of advisors in a law firm : a logistic regression model for dyadic network data ._ * 19 * , 375 - 397 .
reciprocity is an important characteristic of directed networks and has been widely used in the modeling of the world wide web, email, social, and other complex networks. in this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities of the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by the edge and reciprocal densities. the sparse case is also studied for the grand canonical ensemble. extensions to more general reciprocal models, which also include reciprocal triangle and star densities, are likewise discussed.
keywords: reciprocity, entropy, free energy, directed network, exponential random graph
the multi - level monte carlo method proposed in approximates the expectation of some functional applied to some stochastic processes like e. g. solutions of stochastic differential equations ( sdes ) at a lower computational complexity than classical monte carlo simulation , see also .multi - level monte carlo approximation is applied in many fields like mathematical finance , for sdes driven by a lvy process or by fractional brownian motion .the main idea of this article is to reduce the computational costs further by applying the multi - level monte carlo method as a variance reduction technique for some higher order weak approximation method . as a result, the computational effort can be significantly reduced while the optimal order of convergence for the root mean - square error is preserved .+ + the outline of this paper is as follows .we give a brief introduction to the main ideas and results of the multi - level monte carlo method in section [ section2:mlmc - simulation - original ] .based on these results , in section [ sec3:improved - mlmc - estimator ] we present as the main result a modified multi - level monte carlo algorithm that allows to reduce the computational costs significantly .depending on the relationship between the orders of variance reduction and of the growth of the costs , there exists a reduction of the computational costs by a factor depending on the weak order of the underlying numerical method . as an example , the modified multi - level monte carlo algorithm is applied to the problem of weak approximation for stochastic differential equations driven by brownian motion in section [ sec4:numerical - examples - sdes ] .let be a probability space with some filtration and let denote a stochastic process on the interval ] and claim that then , we can calculate from the bias and we have to solve the minimization problem under the constraint that . as a result of this, we obtain the following values for and : + + let and , for and for some ,1[\, ] holds provided that , and . in case of and then ( [ main - prop - improvement - aussage1 ] ) holds if in addition and .further , for it holds if and .+ b. in case of and and if , , we get and if and .+ c. in case of and we obtain if .if the parameter ,1[\, ] follows . from the lower bound ( [ proof - main - prop - lower - bound - ieq - alg ] ) for and the upper bound ( [ proof - main - prop - upper - bound - ieq - alg ] ) for we get the estimate in the following , we make us of and , i.e. we have and as .+ + multiplying both sides of ( [ proof - main - prop - difference - ieq - alg ] ) with and taking into account the assumptions and results in h_{l_p}^{-\frac{\min \{\beta-\gamma,\beta_{l_p}-\gamma_{l_p}\}}{2}}\ , .\label{proof - main - prop - difference2-ieq - alg } \ ] ] as a result of ( [ proof - main - prop - difference2-ieq - alg ] ) follows that in the case of there exists some such that for all ,\varepsilon_0] ] if and .finally , follows from ( [ proof - main - prop - upper - bound - ieq - alg ] ) . 
+ +in case of and , we have to compare the dominating terms as .therefore , we get from the lower bound that and from the upper bound making use of these two estimates ( [ proof - main - prop - lower - bound - ieq - simp ] ) and ( [ proof - main - prop - upper - bound - ieq - simp ] ) , this results in the estimate ( [ main - prop - improvement - aussage3 ] ) where because we require that .+ + in general , it follows that from the upper bound ( [ proof - main - prop - upper - bound - ieq - alg ] ) for and any , .further , there is an asymptotically optimal choice for the parameter ,1[\, ] such that ,1 [ \ , } c \varepsilon^{-2-\frac{\gamma-\beta}{p } } \frac{q^{\frac{\beta-\gamma}{2p}}}{1-q}\ ] ] for all .solving this minimization problem leads to which is asymptotically the optimal choice for ,1[\, ] with step size .further , we denote by the approximation at time . in case of the multi - level monte carlo estimator we apply on each level the euler - maruyama scheme on the grid given by and where and for .the euler - maruyama scheme converges with order in the mean - square sense and with order in the weak sense to the solution of the considered sde at time .+ + on the other hand , for the modified multi - level monte carlo estimator the euler - maruyama scheme is applied on levels whereas on level a second order weak stochastic runge - kutta ( srk ) scheme ri6 proposed in is applied . the srk scheme ri6 on level is defined on the grid by , where and for with stages where and based on independent random variables with .thus , we have and for the modified multi - level monte carlo estimator in the following .further , for both schemes the variance decays with the same order as the computational costs increase , i. e. .then , the optimal order of convergence attained by the multi - level monte carlo method is due to theorem [ main - theorem - giles ] . for the presented simulations , we denote by mlmc em the numerical results for based on the euler - maruyama scheme only and by mlmc srk the results for based on the combination of the euler - maruyama scheme and the srk scheme ri6 .+ + as a first example , we consider the scalar linear sde with given by using the parameters and .we choose and apply the functionals and , see figure [ bild - test1 ] .the presented simulations are calculated using the prescribed error bounds for . in figure [ bild - test1 ]we can see the significantly reduced computational effort for the estimator ( mlmc srk ) compared to the estimator ( mlmc em ) in case of a linear and a nonlinear functional .( left ) and ( right).,title="fig:",width=207 ] ( left ) and ( right).,title="fig:",width=207 ] the second example is a nonlinear scalar sde with given by we apply the functional .then , the approximated expectation is given by . here , the results presented in figure [ bild - test2 ] ( left ) are calculated for applying the prescribed error bounds for . 
here, the improved estimator performs much better than also for nonlinear functionals and a nonlinear sde .finally , we consider a nonlinear multi - dimensional sde with a dimensional solution process driven by an dimensional brownian motion with non - commutative noise : with initial condition .then , the approximated first moment of the solution is given by for .the simulation results calculated at for the error bounds for are presented in figure [ bild - test2 ] ( right ) .again , in the multi - dimensional non - commutative noise case the proposed estimator needs significantly less computational effort compared to the estimator which reveals the theoretical results in proposition [ main - prop - improvement ] .in this paper we proposed a modification of the multi - level monte carlo method introduced by m. giles which combines approximation methods of different orders of weak convergence .this modified multi - level monte carlo method attains the same mean square order of convergence like the originally proposed method that is in some sense optimal .however , the newly proposed multi - level monte carlo estimator can attain significantly reduced computational costs . as an example , there is a reduction of costs by a factor for the problem of weak approximation for sdes driven by brownian motion in case of .this has been approved by some numerical examples for the case of and where four times less calculations are needed compared to the standard multi - level monte carlo estimator . here , we want to point out that there also exist higher order weak approximation schemes , e. g. in case of sdes with additive noise , that may further improve the benefit of the modified multi - level monte carlo estimator .future research will consider the application of this approach to , e.g. , more general sdes like sdes driven by lvy processes or fractional brownian motion and to the numerical solution of spdes .further , the focus will be on numerical schemes that feature not only high orders of convergence by also minimized constants for the variance estimates .
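as a concrete point of reference for the numerical comparisons reported above, here is a minimal sketch of the baseline estimator: plain multi-level monte carlo with euler-maruyama on every level for a geometric brownian motion, together with the standard level-wise sample allocation that the cost analysis relies on. the coefficients, the eps^2/2 variance budget, and the fixed number of samples per level in the demo call are placeholder choices, and the sketch deliberately omits the second-order weak srk scheme ri6 that defines the modified estimator.

```python
import math
import numpy as np

def mlmc_sample_sizes(eps, variances, costs):
    """N_l minimizing sum(N_l*C_l) subject to sum(V_l/N_l) <= eps^2/2 (assumed budget split)."""
    s = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [max(1, math.ceil(2.0 * eps ** -2 * math.sqrt(v / c) * s))
            for v, c in zip(variances, costs)]

def coupled_euler(x0, mu, sigma, T, M, nf, rng):
    """Fine/coarse Euler-Maruyama pair for dX = mu*X dt + sigma*X dW, driven by the
    same Brownian increments (nf fine steps, nf // M coarse steps)."""
    hf = T / nf
    dW = rng.normal(0.0, math.sqrt(hf), size=nf)
    xf = x0
    for k in range(nf):
        xf += mu * xf * hf + sigma * xf * dW[k]
    if nf == M:                                   # coarsest level: no coarse path
        return xf, 0.0
    nc = nf // M
    hc = T / nc
    xc = x0
    for k in range(nc):
        xc += mu * xc * hc + sigma * xc * dW[k * M:(k + 1) * M].sum()
    return xf, xc

def mlmc_euler(x0=1.0, mu=0.05, sigma=0.2, T=1.0, M=2, L=5, N=10000, seed=0):
    """Multi-level estimate of E[X_T] with a fixed number of samples per level;
    in practice N would come from mlmc_sample_sizes with estimated V_l and C_l."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l in range(L + 1):
        nf = M ** (l + 1)
        diffs = np.empty(N)
        for i in range(N):
            xf, xc = coupled_euler(x0, mu, sigma, T, M, nf, rng)
            diffs[i] = xf if l == 0 else xf - xc
        estimate += diffs.mean()
    return estimate

print(mlmc_sample_sizes(0.01, [2.0 ** -l for l in range(6)], [4.0 ** l for l in range(6)]))
print(mlmc_euler(), "exact:", np.exp(0.05))   # E[X_T] = x0*exp(mu*T) for these coefficients
```

the telescoping sum over the level corrections leaves only the bias of the finest discretization, while most samples are spent on the cheap coarse levels, which is where the cost saving comes from.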
the multi-level monte carlo method proposed by m. giles (2008) approximates the expectation of some functional applied to a stochastic process with the optimal order of convergence for the mean-square error. in this paper, a modified multi-level monte carlo estimator with significantly reduced computational costs is proposed. as the main result, it is proved that the modified estimator reduces the computational costs asymptotically by a factor depending on the orders of the two weak approximation methods applied, in the case where the computational costs grow at the same order as the variances decay.
keywords: multi-level monte carlo, monte carlo, variance reduction, weak approximation, stochastic differential equation
msc 2000: 65c30, 60h35, 65c20, 68u20
traditionally , firms had few avenues when trying to market their products . and the most important of these avenues television , newspapers , billboards are notoriously inflexible and inefficient from the firms point of view . essentially , a firm has to pay to reach even those people who would never form a part of its target demographic ( ) . from the consumers point of view, they are continuously bombarded with advertisements of products , a vast majority of which do not interest them .in such a scenario , there is even a possibility that a significant fraction of consumers might just _ tune - off _ and become insensitive to every advertisement .the idea of _ direct marketing _ tried to overcome some of these problems by the construction of a database of the buying patterns and other relevent information of the population , and then targeting only those who are predisposed to get influenced by a particular marketing campaign ( ) .however , targeting the most responsive customers individually can be expensive and thus limits the reach of direct marketing .moreover , it precludes the possibility of positive externalities such as a favorable shift in preferences of a demographic segment previously thought to be unresponsive .the penetration of internet and the emergence of huge online social networks in the last decade has radically altered the way that people consume media and print , leading to an ongoing decline in importance of conventional channels and consequently , marketing through them .this radical shift has brought in its wake a host of opportunities as well as challenges for the advertisers . on the one hand, firms finally have the possibility to reach in a cost - effective way not only the past responsive customers , but indeed all the potentially responsive ones .the importance of this new marketing medium is witnessed by the fact that most of the big corporations , particularly those providing services or producing consumer goods , now have dedicated _ fan - pages _ on social networks to interact with their loyal customers .these , in turn , help the firms spread their new marketing campaign to a large fraction of the network , reliably and at a fraction of the cost incurred through traditional channels . on the other hand , new firms without a loyal fan - base have found it a hit - or - miss game to gain attention through the new medium . even though the marketing through network is mostly a miss for these firms , but when it is a hit , it is a spectacular one .this makes it tempting for firms to keep waiting for that spectacular hit while their marketing budget inflates beyond the point of no return .the _ fat - tail _ uncertainty of viral marketing makes it inherently different from conventional marketing and calls for a fundamentally different approach to decision - making : which individuals , and how many , to initially target in the online network ?what amount of resources to spend on these initially targeted _pioneers _ ? and most importantly ,when to stop , admit the inefficacy of the current campaign and develop a new one ? 
in this paper , we introduce a generalized diffusion dynamic on configuration model , which serves as a very useful approximation of an online social network , particularly when one does not have an access to a detailed information about the network structure .the diffusion dynamic that we study on this underlying random network is essentially this : any individual in the network influences a random subset of its neighbours , the distribution of which depends on the effectiveness of the marketing campaign .we illustrate large - network - limit results on this model , rigorously proved in .the empirical distribution of the number of friends that a person influences in the course of a marketing campaign is taken as a measure of the effectiveness of the campaign .we present a condition depending on network degree - distribution ( the emperical distribution of the number of friends of a network member ) and the effectiveness of a marketing campaign which , if satisfied , will allow , with a non - negligible probability , the campaign to go viral when started from a randomly chosen individual in the network . given this condition , we present an estimate of the fraction of the population that is reached when the campaign does go viral .we then show that under the same condition , the fraction of good pioneers in the network , i.e. , the individuals who if targeted initially will lead the campaign to go viral , is non - negligible as well , and we give an estimate of this fraction .we analyze in detail the process of influence propagation on configuration model having two types of degree - distribution : poisson and power law .three examples illustrating the dynamic of influence propagation on these two networks are considered : ( 1 ) bernoulli transmissions ; ( 2 ) node percolation ; ( 3 ) coupon - collector transmissions .based on the above analysis , we offer a practical decision - making guide for marketing on online networks which we think would be particularly useful to firms with no prior access to detailed network structure .specifically , we consider the nave strategy of picking some number of pioneers at random from the population , spending some fixed amount of resources on each of them and waiting to see if the campaign goes viral , picking another batch if it does not .for this strategy , we suggest what statistical data the firm should collect from its pioneers , and based on these , how to estimate the effectiveness of the campaign and make a cost - benefit analysis .while the public imagination is captured by a new viral video of a relatively unknown artist , researchers have been trying to understand this phenomenon much before the emergence of online social networks .it was first studied in the context of the spread of epidemics on complex networks , whence the term _ viral _ marketing originates ( , ) .the impact of social network structure on the propagation of social and economic behavior has also been recognized ( , ) and there is growing evidence of its importance ( ) . 
in the context of viral marketing , broadly speaking , two approaches have developed in trying to exploit the network structure to maximize the probability of marketing campaign going viral for each dollar spent .the first approach tries to locate the most influential individuals in the network who can then be targeted to _ seed _ the campaign ( ) .this idea has been developed into a machine learning approach which relies on the availability of large databases containing detailed information regarding the network structure and the past instances of influence propagation to come up with the best predictor of the most influential individuals who should be targeted for future campaigns ( ) .our approach , although fundamentally based on the analysis of the most influential network members , whom we call pioneers , differs in its philosophy of how to apply this to make marketing decisions . we do not rely on locating the pioneers by data - mining the network since the tastes of online network members shift at a rapid rate and the past can be an unreliable predictor for the current campaign .moreover , the network database is not necessarily accessible to every firm .therefore , we favor a strategy which enables one to gain exposure to positive fat - tail events while covering his / her back . however , since we suggest a way to measure the current campaign s effectiveness based on its ongoing diffusion in the network , it can be used to develop better predictors even when the network information is freely accessible .the second approach that has become popular in this context does not focus on locating influential network members but instead on giving incentives to members to act as a conduit for the diffusion of the campaign ( ) .various mechanisms for determining the optimal incentives have been proposed and analyzed on random networks ( ) as well as on a deterministic network ( ) .this approach can be particularly effective for web - based service providers , e.g. , movie - renting business , where the non - monetary incentive of using the service freely or cheaply for some period of time can motivate people to proactively advertise to their friends .however , it is not always possible to come up with non - monetary expensive while offering monetary incentives is not cost - effective . 
in such cases ,our approach can offer a more cost - effective alternative by leveraging the inherent tendency of a social network to percolate information without external incentives .a variety of marketing strategies have been conceived combining the two broad approaches that we described above .we hope that our approach would enrich the spectrum and further help in understanding and exploiting the phenomenon of viral marketing .in this section , we introduce our model and informally describe the results which are rigorously proved in .consider that the only information available to you about an online social network is the number of friends that a subset of network members have , a realistic assumption if you are dealing with the biggest and the most important social networks out there .in such a case , the best you can do is to work with a uniform random network which agrees with the statistics that you can obtain from the available information .such a uniform random network is obtained by constructing what is known as _ configuration model _ ( cm ) ; cf .this random network is realized by attaching half - edges to each vertex corresponding to its degree ( which represents here , the number of friends ) and then uniformly pair - wise matching them to create edges .we assume this model of the social network throughout the paper and will use interchangeably the terms `` social graph '' and `` random network '' meaning precisely the cm .we call the vertices of this graph `` nodes '' or `` users '' and graph neighbours `` friends '' .we consider a marketing campaign started from some initial target called _ pioneer _ in this network .the most natural propagation dynamic to assume in the absence of any other information is that a person influences a random subset of its friends who further propagate the campaign in the same manner .the number of friends that a person influences depends on a particular campaign . to model this dynamic, we enhance the configuration model by partitioning the half - edges into _ transmitter _ half - edges , those through which the influence can flow and _ receiver _ half - edges which can only receive influence .so , if a person a influences his friend b in the network , then in our representation , a has a transmitter half - edge matched to the transmitter or receiver half - edge of b. let and denote the empirically observed distributions of total degree and transmitter degree respectively .empirical receiver degree distribution , , is therefore .then we have the following large - network - limit results , rigorously proved in , but only informally stated here .[ cl1 ] starting from a randomly selected pioneer , the campaign can go viral , i.e. , reach a strictly positive fraction of the population , with a strictly positive probability if and only if > \mathbb{e}[d^{(t)}+d].\ ] ] note that > \mathbb{e}[d^{(t)}+d] ] , , which we tacitly assume throughout the paper ] for the existence of a ( unique ) connected component of the underlying social graph , called _ big component _ , encompassing a strictly positive fraction of its population ; cf .obviously , our campaign can go viral only within this big component .call _ good pioneers _ the pioneers from which the campaign can go viral . 
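as a quick numerical illustration of claim [cl1] (the remaining claims and the worked examples follow after this sketch), the code below simulates bernoulli transmissions on a configuration model with poisson degrees and records how often a uniformly chosen pioneer makes the campaign go viral and what fraction of the population is then reached. the 1% cutoff for declaring an outcome viral and all parameter values are arbitrary choices, not quantities taken from the text.

```python
import numpy as np
from collections import defaultdict

def poisson_configuration_model(n, lam, rng):
    """Uniform pairing of half-edges; self-loops and multi-edges are simply kept."""
    stubs = np.repeat(np.arange(n), rng.poisson(lam, size=n))
    if len(stubs) % 2:
        stubs = stubs[:-1]                   # drop one stub so a perfect matching exists
    rng.shuffle(stubs)
    adj = defaultdict(list)
    for a, b in zip(stubs[::2], stubs[1::2]):
        adj[a].append(b)
        adj[b].append(a)
    return adj

def influenced(adj, pioneer, p, rng):
    """Bernoulli transmissions: each informed user passes the message to each friend
    independently with probability p."""
    reached, stack = {pioneer}, [pioneer]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reached and rng.random() < p:
                reached.add(v)
                stack.append(v)
    return len(reached)

rng = np.random.default_rng(3)
n, lam, p = 10000, 4.0, 0.4                  # lam * p > 1: a viral outcome is possible
adj = poisson_configuration_model(n, lam, rng)
sizes = np.array([influenced(adj, int(rng.integers(n)), p, rng) / n for _ in range(100)])
print("fraction of pioneers going viral (>1%% reached): %.2f" % np.mean(sizes > 0.01))
print("typical reached fraction when viral:            %.3f" % sizes[sizes > 0.01].mean())
```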
[ c1 ] if ( [ e.va ] ) is satisfied then the population reached is , more or less , the same irrespective of the good pioneer chosen initially .let denote the population reached by the campaign when started from a good pioneer and the set of good pioneers .if ( [ e.va ] ) is satisfied then the set of good pioneers also forms a strictly positive fraction of the population .the next claim gives the estimates on the size of and .let x^2-\mathbb{e}[d^{(r ) } ] x-\mathbb{e}[d^{(t)}x^d]\ ] ] and x^2-\mathbb{e}[d^{(t)}x^{d^{(t ) } } ] - \mathbb{e}[d^{(r)}x^{d^{(t)}}]x.\ ] ] if condition ( [ e.va ] ) is satisfied then and have unique zeros in .call them and respectively .denote also by ] the probability generating function ( pgf ) of and , respectively .[ c.alpha ] if ( [ e.va ] ) is satisfied and denotes the size of network population , then for large , and note that can be interpreted as the probability that the campaign goes viral when started from a randomly chosen pioneer . in the appendix we sketch the main arguments allowing to prove the above claims ; see for formal statements and proofsrecall also from that under assumption ( [ e.ca ] ) the size of the big network component satisfies for large where is the unique zero of ^2-xg'_d(x)\ ] ] in , with denoting the derivative of the pgf of .let us consider the results of section [ s.model ] in the context of a few illustrative network examples .let us assume some arbitrary distribution of the degree satisfying ( [ e.ca ] ) ( to guarantee the existence of the big component of the social graph ) .suppose that each user decides independently for each of its friends with probability ] and x^2-g'_d(1-p(1-x)) ] .bernoulli transmissions imply the set of good pioneers and influenced population of the same size .the power - law degree with leads to positive fraction of good pioneers and influenced component for all , while for the poisson degree distribution one observes the phase transition at .that is , the fractions of good pioneers and the influenced component are strictly positive if and only if .figure [ f.pbpl ] shows again the model with bernoulli transmissions on cm with poisson and power - law degree distribution , this time however for \approx 1.35 ] .note that the influenced components have the same size as for bernoulli transmissions , however good components are smaller .the critical values of for the phase transition are also the same as for bernoulli transitions .note that estimation of the node percolation model is more difficult than the bernoulli transmissions because of higher variance of the estimators .finally , figure [ f.pccpl ] shows that the coupon collector dynamics ( `` absentminded users '' ) on cm produces bigger sets of good pioneers than the influenced population .what does the analysis presented up to now suggest in terms of strategy for a firm which is just about to start a new marketing campaign on an online social network without having any prior information about the network structure ?if the fraction of good pioneers in the network is non - negligible , the firm has a strictly positive probability of picking a good pioneer even when it picks a pioneer uniformly at random from the network .now when is the fraction of good pioneers non - negligible ?since the firm has no prior information about the network structure and the campaign effectiveness , the best it can do is to collect information from its pioneers regarding the number of friends that they have ( total degree ) and the number of friends they influence in 
this campaign ( transmitter degree ) , and then assume that the network is a uniform random network having the sampled total degree and transmitter degree distributions . the collected information , denote it by , , allows to estimate various quantities relevant to the potential development of the ongoing campaign , as we did in [ ss.numerical ] .more precisely the results presented in section [ ss.claims ] suggest the following approach . [[ network - fragmentation ] ] network fragmentation + + + + + + + + + + + + + + + + + + + + + the first and foremost question is whether the network is not too fragmented to allow for viral marketing .this is related to condition ( [ e.ca ] ) . in order to answer this questionone considers the following estimator of ] if the value of this estimator is sharply larger than zero then the firm can assume that there is a realistic chance of picking a good pioneer via random sampling and make the campaign go viral .otherwise , the previous phase of the campaign can be considered as non - efficient .[ [ cost - benefit - analysis ] ] cost - benefit analysis + + + + + + + + + + + + + + + + + + + + + if the firm deems the campaign to be effective , it can then , exactly as we did in [ sss.estimation ] , come up with the estimates of the relative fractions of good pioneers and population vulnerable to influnce , and do a cost - benefit analysis .what we have described is an outline which can be used by the firms to come up with a rational methodology for making decisions in the context of viral marketing .and bernoulli transmissions with probability .the set of good pioneers and the influenced population are of the same size .their fraction is strictly positive for .[ f.pb ] ] and bernoulli transmissions with probability .the set of good pioneers and the influenced population are of the same size .their fraction is strictly positive for .[ f.pb ] ] and bernoulli transmissions with probability .the set of good pioneers and the influenced population are of the same size .their fraction is strictly positive for .[ f.pb ] ] ( corresponding to \approx 2,\,4,\,6 ] and bernoulli transmissions with probability .the set of good pioneers and the influenced population are of the same size .their fraction is strictly positive for all whenever .[ f.plb ] ] ( corresponding to \approx 2,\,4,\,6 ] ( and ) and bernoulli transmissions .the set of good pioneers and the influenced population are of the same size for each model .one observes the phase transition in both models , at and , respectively .[ f.pbpl ] ] \approx 1.35 ] ( and ) . the influenced component and the critical values for are equal to these for the cm with bernoulli transmissions .the set of good pioneers is smaller than the influenced population .we do not observe the phase transition for the power - law model since .[ f.pnppl ] ] \approx 2 ] ( and ) .the set of good pioneers is bigger than the influenced population .[ f.pccpl ] ] \approx 2 ] , which agrees with the right - hand - side of ( [ e.alphabar ] ) .
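to make the campaign-evaluation step of this section concrete: since the displayed quantity being estimated lost its symbols, the sketch below assumes it is the empirical mean of d^(t)*d - d^(t) - d over the sampled pioneers, which is consistent with the viability condition of claim [cl1] if its missing left-hand side is the cross-moment E[D^(t)D]. a value sharply above zero relative to its standard error would then indicate viral potential; the toy data are purely illustrative.

```python
import numpy as np

def viability_statistic(total_deg, transmitter_deg):
    """Empirical mean and standard error of d_t*d - d_t - d over the sampled pioneers
    (the exact estimator in the text is assumed to have this form)."""
    d = np.asarray(total_deg, dtype=float)
    dt = np.asarray(transmitter_deg, dtype=float)
    vals = dt * d - dt - d
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))

# toy pioneer survey: number of friends, and how many of them were influenced
rng = np.random.default_rng(4)
d = rng.poisson(6.0, size=40) + 1
dt = rng.binomial(d, 0.35)
mean, se = viability_statistic(d, dt)
print("estimate %.2f +/- %.2f  -> campaign %s" % (mean, se,
      "has viral potential" if mean > 2 * se else "looks subcritical"))
```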
with the growing importance of corporate viral marketing campaigns on online social networks , the interest in studies of influence propagation through networks is higher than ever . in a viral marketing campaign , a firm initially targets a small set of pioneers and hopes that they would influence a sizeable fraction of the population by diffusion of influence through the network . in general , any marketing campaign might fail to go viral in the first try . as such , it would be useful to have some guide to evaluate the effectiveness of the campaign and judge whether it is worthy of further resources , and in case the campaign has potential , how to hit upon a good pioneer who can make the campaign go viral . in this paper , we present a diffusion model developed by enriching the generalized random graph ( a.k.a . configuration model ) to provide insight into these questions . we offer the intuition behind the results on this model , rigorously proved in , and illustrate them here by taking examples of random networks having prototypical degree distributions poisson degree distribution , which is commonly used as a kind of benchmark , and power law degree distribution , which is normally used to approximate the real - world networks . on these networks , the members are assumed to have varying attitudes towards propagating the information . we analyze three cases , in particular ( 1 ) bernoulli transmissions , when a member influences each of its friend with probability ; ( 2 ) node percolation , when a member influences all its friends with probability and none with probability ; ( 3 ) coupon - collector transmissions , when a member randomly selects one of his friends times with replacement . we assume that the configuration model is the closest approximation of a large online social network , when the information available about the network is very limited . the key insight offered by this study from a firm s perspective is regarding how to evaluate the effectiveness of a marketing campaign and do cost - benefit analysis by collecting relevant statistical data from the pioneers it selects . the campaign evaluation criterion is informed by the observation that if the parameters of the underlying network and the campaign effectiveness are such that the campaign can indeed reach a significant fraction of the population , then the set of good pioneers also forms a significant fraction of the population . therefore , in such a case , the firms can even adopt the nave strategy of repeatedly picking and targeting some number of pioneers at random from the population . with this strategy , the probability of them picking a good pioneer will increase geometrically fast with the number of tries .
recent theoretical and experimental advances in dealing with small quantum systems has led to a growing interest in their mechanics and thermodynamics . a certain amount of progress has been connected with studies of the jarzynski equality and related fluctuation theorems .recent attention is mainly focused on the quantum version of these results .quantum analogues of the jarzynski equality were first studied by kurchan and tasaki . since then various topics connected with the fluctuation relations and the range of their validity and applicability were investigated .there exist many ways to approach the jarzynski equality .most of them are based on a dynamical description within an infinitesimal time scale . making use of the perturbation approach , the author of ref . quantum fluctuation and work - energy theorems that focus on the time - reversal symmetry .we will advocate here a different approach applicable for systems which can be described by discrete quantum operations .the formalism of quantum operations is one of the basic tools in studying dynamics of open quantum systems .fluctuation theorems for open quantum systems were recently considered in refs . . in particular ,some results have been shown to be valid in the case of unital quantum operations , while the general case of quantum systems with time evolution described by nonunital stochastic maps remained not fully understood .the main goal of this study is to relax the assumption of unitality and to generalize previous results for the entire class of stochastic maps , also called quantum channels .another task of the work is to introduce a model discrete quantum dynamics acting on a -dimensional system , which forms a useful generalization of the amplitude damping channel acting on a two - level system .this nonunital map channel and its extensions describe effects of energy loss in quantum systems due to an interaction with an environment .investigation of possible effects due to deviations from unitality of the map become relevant in the context of possible experimental tests of quantum fluctuation theorems. experimental study of fluctuation relations is easier in the classical regime .original formulations of the jarzynski equality and the crooks theorem were tested in experiments . on the other hand ,experimental investigation of quantum fluctuation relations is still forthcoming , although some possible experimental schemes were already discussed .existing proposals often deal with a single particle undergoing an unitary time evolution .furthermore , current efforts to construct devices able to process quantum information might offer new possibilities to test quantum fluctuation relations . notably , quantum systems are very sensitive to interaction with an environment . in this regard ,fluctuations in systems with an arbitrary form of quantum evolution deserve theoretical analysis .therefore , we do not focus our attention on a specific class of unital channels , but we study the most general form of arbitrary quantum operations .the original formulations of the jarzynski equality and the tasaki crooks fluctuation theorem remain valid under the assumption that changes of the system state are represented by a unital quantum operation .attention to bistochastic maps is natural , when we deal with the tasaki crooks fluctuation theorem .indeed , its formulation involves both the forward quantum channel and its adjoint . 
if the latter channel preserves the trace , then the former one is necessarily unital .meantime , nonunital quantum channels are of interest in various respects . in this workwe provide a formulation of the jarzynski equality for arbitrary quantum operations .the contribution of our paper is twofold .first , we formulate a generalization of the jarzynski equality for a nonunital quantum channel .second , we investigate the problem for which the standard jarzynski equality remains valid nonunital quantum channels . this paper is organized as follows . in section [ sec2 ], we introduce basic definitions and recall relevant results .the special case of unital channels and bistochastic maps is analyzed in sec .[ sec3 ] . in sec .[ sec4 ] , we characterize nonunitality of an arbitrary quantum stochastic map while in sec .[ sec5 ] we generalize the corresponding jarzynski equality for this class of maps and derive eq .( [ jareq0 g ] ) a key result of the paper .several examples of nonunital quantum channels acting on two and three - level system are analyzed in sec .we investigate also a general model of nonunitary dynamics described in an arbitrary finite - dimensional hilbert space which can be considered as a generalization of the amplitude damping channel .let denote the space of linear operators on -dimensional hilbert space .by and , we respectively mean the real space of hermitian operators and the set of positive ones . for arbitrary , we define their hilbert schmidt inner product by this product induces the norm . for any , we put as a unique positive square root of .the eigenvalues of counted with multiplicities are the singular values of , written .for all real , the schatten norm is defined as this family includes the trace norm for , the hilbert schmidt ( or frobenius ) norm for , and the spectral norm for . for all , we have this relation is actually a consequence of theorem 19 of the classical book of hardy , littlewood , and polya . for any state of the -level system we are going to use the bloch - vector representation , as it might be linked to experimental data . by , , we denote the generators of which satisfy and the factor in eq .( [ lij2 ] ) is rather traditional and may be chosen differently .each traceless operator can be then represented in terms of its bloch vector as thus , we represent a traceless hermitian by means of the corresponding -dimensional real vector . for the case , the generators are the standard pauli matrices , where . in the case , the eight gell - mann matrices are commonly used . in -dimensional space ,the completely mixed state is expressed as where is the identity operator on . for a given density matrix ,the operator is traceless , whence a bloch representation of follows refs . .let us consider a linear map that takes elements of to elements of .this map is called positive if whenever . to describe physical processes , linear maps have to be completely positive .let be the identity map on , where the space is assigned to a reference system .the complete positivity implies that the map is positive for any dimension of the auxiliary space .the authors of ref . 
examined an important question , whether the dynamics of open quantum systems is always linear .further , we will consider only completely positive linear maps .a completely positive map can be written by an operator - sum representation , here , the kraus operators map the input space to the output space .when physical process is closed and the probability is conserved , the map preserves the trace , .this relation satisfied for all is equivalent to the following constraint for the set of the kraus operators : here denotes the identity operator on the input space . by the cyclic property and the linearity of the trace , formula ( [ prtr ] )implies for all . to each linear map , one assigns its adjoint map , . for all and ,the adjoint map is defined by for a completely positive map ( [ opsm ] ) , its adjoint is written as . if this adjoint is trace preserving , the kraus operators of eq .( [ opsm ] ) satisfy the condition in other words , we have . in this case, the map is said to be unital .if a quantum map is completely positive and the kraus operators satisfy properties ( [ prtr ] ) and ( [ prtr1 ] ) the map is called bistochastic , as it can be considered as an analog to the standard bistochastic matrix , which acts in the space of probability vectors .a quantum map can be characterized using the norm let us quote here one of useful results concerning the norm of a map .if a map is positive , then see bhatia , item 2.3.8 . in terms of the completely mixed state ( [ cmab ] ), we have .the jamiokowski isomorphism leads to another convenient description of completely positive maps .we recall its formulation for the symmetric case , if both dimensions are equal , .the principal system is extended by an auxiliary reference system of the same dimension .let be an orthonormal basis in .making use of this basis in both subspaces we define a maximally entangled normalized pure state for any linear map , we assign an operator which acts on the extended space . the matrix is usually called _ dynamical matrix _ or _choi matrix _ . for any , the action of the map can be recovered from by means of the relation , in which is the transpose operator to .the complete positivity of is equivalent to the positivity of the dynamical matrix . the map preserves the trace , if and only if its dynamical matrix satisfies in a shortened notation , we will write the dynamical matrix and the rescaled one as and , respectively . substituting the completely mixed state into eq .( [ chjis ] ) , we obtain in the subsequent section we will examine a nonunitality operator , closely related with the partial trace ( [ cmis ] ) .we will consider the case , in which a thermostatted system is operated by an external agent .it is assumed that this agent acts according to a specified protocol .hence , the hamiltonian of the system is time dependent . to formulate the jarzynski equality , a special kind of averaging procedureis required .initially , we describe this procedure for arbitrary two hermitian operators .let us consider operators and . in terms of the eigenvalues and the corresponding eigenstates ,spectral decompositions are expressed as the eigenvalues in both decompositions are assumed to be taken according to their multiplicity . 
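the operator-sum formalism above lends itself to a direct numerical check. the sketch below (illustrative only, not code from the paper) represents a channel by its kraus operators, tests trace preservation and unitality, and builds the choi (dynamical) matrix, whose partial trace over the output factor gives back the trace-preservation condition. the amplitude-damping strength and the dephasing example are arbitrary demonstration choices.

```python
# Minimal sketch of a channel in the operator-sum (Kraus) representation:
# trace preservation sum_k K_k^dag K_k = 1, unitality sum_k K_k K_k^dag = 1,
# and the (unnormalized) dynamical / Choi matrix D = sum_ij |i><j| (x) Phi(|i><j|).
import numpy as np

def apply_channel(kraus, rho):
    """Phi(rho) = sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def is_trace_preserving(kraus):
    d = kraus[0].shape[1]
    return np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(d))

def is_unital(kraus):
    d = kraus[0].shape[0]
    return np.allclose(sum(K @ K.conj().T for K in kraus), np.eye(d))

def choi_matrix(kraus):
    """D = sum_k vec(K_k) vec(K_k)^dagger with column-stacking vectorization."""
    d = kraus[0].shape[0]
    D = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus:
        v = K.flatten(order="F")
        D += np.outer(v, v.conj())
    return D

gamma = 0.3                                    # damping strength (arbitrary demo value)
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
            np.array([[0, np.sqrt(gamma)], [0, 0]])]
dephase = [np.sqrt(0.5) * np.eye(2),
           np.sqrt(0.5) * np.diag([1.0, -1.0])]

for name, kraus in [("amplitude damping", amp_damp), ("dephasing", dephase)]:
    print(name, " TP:", is_trace_preserving(kraus), " unital:", is_unital(kraus))
    d = 2
    D = choi_matrix(kraus).reshape(d, d, d, d)       # indices [i, m, j, n]
    tr_out = np.einsum("imjm->ij", D)                # partial trace over the output factor
    print("  Tr_out(Choi) = identity:", np.allclose(tr_out, np.eye(d)))
```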
in this regard ,we treat and as the labels for vectors of the orthonormal bases and .let evolution of the system in time be represented by a quantum channel .if the input state is described by an eigenstate , then the output of the channel is .suppose that we measure the observable in this output state .the outcome occurs with the probability this quantity can also be interpreted as the conditional probability of the outcome given that the input state is .the trace - preserving condition implies that the standard requirement on conditional probabilities is thus satisfied for any quantum channel .furthermore , we suppose that the input density matrix has the form where .according to bayes s rule , one defines the joint probability distribution with elements this is the probability that we find the system in the -th eigenstate of at the input and in the -th eigenstate of at the output .let be a function of two eigenvalues .following ref . , we define the corresponding average double angular brackets in the left - hand side denote the averaging over the ensemble of possible pairs of measurement outcomes .a pair of single angular brackets denotes an expectation value of an observable in a state , in consistence with the standard notation common in quantum theory , .more general forms of the described scenario were considered in refs . .the jarzynski equality relates an averaged work with the difference between the equilibrium free energies .since the notion of work pertains to a process , it can not be represented as a quantum observable . a more detailed discussion of the notion of work in the context of quantum fluctuation theorems was recently provided by van vliet .in any case , the energy can be measured twice , at the initial and the final moments .the difference between outcomes of these two measurements describes the work performed on the system in a particular realization .therefore , the averaging of the form ( [ favm ] ) is used with respect to two hermitian operators : the initial and the final hamiltonians and .fluctuation theorems are usually obtained under the assumption that the work is determined by projective measurements at the beginning and the end of each run of the protocol . in several casesone applies , however , much broader classes of quantum measurements. recently venkatesh _ et al . _ analyzed fluctuation theorems for protocols in which generalized quantum measurements are used .in this paper we discuss the most general case of a discrete nonunitary dynamics and consider arbitrary measurements which are error - free in the following sense : with each outcome of a generalized measurement , we can uniquely identify the corresponding eigenstate of an actual hamiltonian .the system under investigation is initially prepared in the state of the thermal equilibrium with a heat reservoir .it is convenient to denote the inverse temperatures of the reservoir at the beginning and at the end of the protocol , by and , respectively . in principle , these two temperatures may differ , but in the following we will eventually discuss the case in which both temperatures are equal .the initial density matrix reads where is the corresponding partition function .we further suppose that the transformation of states of the system is represented by a quantum channel , which maps the set of density matrices of size onto itself .in general , the final density matrix differs from the matrix corresponding to the equilibrium at the final moment . 
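the two-measurement averaging just defined is easy to mimic numerically. the following sketch builds the joint distribution p(n,m) from a thermal input state and a channel, checks its normalization, and evaluates double-bracket averages such as <<W>> and <<exp(-beta W)>>; the hamiltonians, temperature and damping strength are arbitrary demo values, not the paper's.

```python
# Two-point-measurement averaging: p(m|n) = <m'| Phi(|n><n|) |m'>, p(n,m) = p(m|n) p(n),
# <<f>> = sum_{n,m} f(E_n, E'_m) p(n,m), with a thermal distribution over the initial eigenstates.
import numpy as np

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def tpm_joint(H_init, H_fin, kraus, beta):
    """Joint distribution p[n, m] for projective energy measurements before/after the channel."""
    E0, V0 = np.linalg.eigh(H_init)                 # columns of V0 are the initial eigenstates |n>
    E1, V1 = np.linalg.eigh(H_fin)                  # columns of V1 are the final eigenstates |m'>
    p_init = np.exp(-beta * E0) / np.exp(-beta * E0).sum()
    p = np.zeros((len(E0), len(E1)))
    for n in range(len(E0)):
        out = apply_channel(kraus, np.outer(V0[:, n], V0[:, n].conj()))
        for m in range(len(E1)):
            p[n, m] = p_init[n] * np.real(V1[:, m].conj() @ out @ V1[:, m])
    return E0, E1, p

beta, gamma = 1.0, 0.4                              # arbitrary demo values
H0 = np.diag([0.0, 1.0])
H1 = np.diag([0.0, 1.5])
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]

E0, E1, p = tpm_joint(H0, H1, kraus, beta)
print("normalization of p(n,m):", p.sum())          # equals 1 for a trace-preserving map
work = E1[None, :] - E0[:, None]                    # W = E'_m - E_n for each outcome pair
print("<<W>>            =", np.sum(work * p))
print("<<exp(-beta W)>> =", np.sum(np.exp(-beta * work) * p))
```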
here , the partition function corresponds to the state of the thermal equilibrium with the final hamiltonian .eigenvalues of the hamiltonians and will be denoted by and , respectively .let channel be unital .using notation ( [ favm ] ) for a function of two eigenvalues , we then obtain this result was recently derived in ref . and earlier by tasaki under a weaker assumption of a unitary evolution .formula ( [ tasth ] ) directly leads to the jarzynski equality formulated for unital quantum channels . in the approachconsidered the term is naturally identified with the external work performed on the principal system during the process . in the case , formula ( [ tasth ] ) gives where the equilibrium free energies read .expression ( [ jareq0 ] ) relates , on average , the nonequilibrium external work with the difference between the equilibrium free energies , .thus the above statement can be interpreted as a version of the original jarzynski equality , which holds for an arbitrary unital quantum channel . some other approaches to obtaining the quantum jarzynski equalitywere recently considered by vedral and albash _ et al .furthermore , formula ( [ jareq0 ] ) was derived in directly from eq .( [ tasth ] ) for any bistochastic channel . in the followingwe shall relax the unitality condition and generalize this reasoning for nonunital quantum maps .in this section , we introduce a notion useful to analyze the jarzynski equality for quantum stochastic maps . in order to characterize deviation from unitality, we are going to use the following operator . for any trace - preserving map , one assigns a traceless operator where and .this operator is hermitian , i.e. , , whenever the map is hermiticity preserving .let us derive a useful statement about the above nonunitality operator . for given two operators , and real parameters , , we introduce the density matrices functional forms of such a kind pertain to equilibrium in the gibbs canonical ensemble .we will consider average of the type ( [ favm ] ) with respect to the density matrices ( [ htva ] ) and ( [ htvb ] ) at the input and output , respectively .the following statement holds true .[ lm1 ] let , , and let and be real numbers .if the input state is described by density matrix ( [ htva ] ) , then the average defined in eq .( [ favm ] ) reads * proof . *using the linearity of the map and eq . ( [ htxd ] ) ,we obtain taking in eq .( [ pabji ] ) and using eq .( [ sump ] ) , we represent the left - hand side of eq .( [ prp1 ] ) in the form the latter term is easily rewritten as the right - hand side of eq .( [ prp1 ] ) . if the operator is proportional to , we have by the trace preservation . in this case , the right - hand side of eq .( [ htxd ] ) becomes zero .then relation ( [ prp1 ] ) is reduced to the previous result given for unital channels in ref . . a deviation from unitality can be quantified by norms of the operator ( [ htxd ] ) . in the following , the case will be considered .that is , the input and output hilbert spaces have the same dimension .note that two different quantum channels may lead to the same nonunitality observable .this hermitian operator is traceless and belongs to the space of dimensionality . 
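as a sanity check of the unital case, the short script below builds a random mixture of unitaries (which is trace preserving and unital) and verifies numerically that <<exp(-beta W)>> coincides with Z_1/Z_0 = exp(-beta deltaF), independently of the chosen hamiltonians. everything here is an illustrative stand-in rather than the paper's computation.

```python
# Numerical check of the unital-channel Jarzynski equality with a random unitary mixture.
import numpy as np

rng = np.random.default_rng(1)
d, beta = 4, 0.7                                    # dimension and inverse temperature (demo values)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def random_unitary(d):
    Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

H0, H1 = random_hermitian(d), random_hermitian(d)
probs = rng.dirichlet(np.ones(3))
kraus = [np.sqrt(p) * random_unitary(d) for p in probs]   # unital, trace-preserving channel

E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()

lhs = 0.0
for n in range(d):
    out = sum(K @ np.outer(V0[:, n], V0[:, n].conj()) @ K.conj().T for K in kraus)
    for m in range(d):
        p_nm = (np.exp(-beta * E0[n]) / Z0) * np.real(V1[:, m].conj() @ out @ V1[:, m])
        lhs += p_nm * np.exp(-beta * (E1[m] - E0[n]))

print("<<exp(-beta W)>>        =", lhs)
print("exp(-beta DeltaF)=Z1/Z0 =", Z1 / Z0)
```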
due to the jamiokowski isomorphism , the set of quantum channels is isomorphic with the set of their dynamical matrices satisfying eq .( [ trpn ] ) .as the latter set has real dimensions , there is no one - to - one correspondence between quantum channels and the nonunitality operators .let us estimate the hilbert schmidt norm of the operator from above .the following bound holds .[ lem1]proposition [ lm2 ] let be -dimensional hilbert space . if is positive and trace preserving , then * proof .* as the map is positive and trace preserving , we have and .hence , the squared hilbert schmidt norm is expressed as by positivity , we obtain .lemma 3 of ref . states that for all . combining these points with eq .( [ fsnrm ] ) finally leads to the claim ( [ scnrm1 ] ) follows from eqs .( [ pnrm1 ] ) and ( [ scnrm ] ) . using eq .( [ cmis ] ) we rewrite eq .( [ scnrm1 ] ) in the form in terms of the map norm , one characterizes a deviation of the partial trace of the rescaled dynamical matrix from the completely mixed state .if the map is unital one has and . using eq .( [ pnrm1 ] ) , the right - hand side of eq .( [ scnrm1 ] ) vanishes for unital maps . on the other hand, the condition immediately leads to the relation .the latter is equivalent to , since the norm can not be equal to zero for a nonzero matrix , which completes the reasoning .[ lem1]corollary [ lm3 ] let be positive and trace preserving .if , then is unital , i.e. . it is instructive to compare this result with the russo dye theorem .one of its formulations says that if a positive map is unital , then ( see , e.g. , point 2.3.7 of ref . ) . in a certain sense ,corollary [ lm3 ] is a statement in the opposite direction .namely , if a trace - preserving positive map obeys , then it is necessarily unital .note that this conclusion pertains to all trace - preserving positive maps and not only to completely positive ones .although legitimate quantum operations are completely positive , positive maps without complete positivity are often used as an auxiliary tool in the theory of quantum information .for instance , one of the basic methods to detect quantum entanglement is formulated in terms of entanglement witnesses and positive maps .due to ( [ scnrm1 ] ) , a deviation from unitality is characterized by the difference between the norm and unity .it is possible to find an upper bound for this quantity in terms of the dimension of the hilbert space . from eq .( [ npqr ] ) , we obtain . combining this with eqs .( [ pnrm1 ] ) and ( [ scnrm1 ] ) we get this inequality is valid for all trace - preserving positive maps , including quantum channels with the same input and output spaces . rescaling the above bound by the dimensionality , we have thus , the hilbert schmidt norm of the operator is strictly less than .the left - hand side of eq .( [ rscmn ] ) can be interpreted as the hilbert schmidt distance between and .this distance is maximal if inequality ( [ rscmn ] ) is saturated so the output state is pure .this is the case for the map generated by kraus operators , where the is an orthonormal basis in .such a map represents a complete contraction to a pure state : for any state one has .taking as a ground state one can describe in this way the process of spontaneous emission in atomic physics .systems near the thermal equilibrium can be treated as ergodic in the following sense : any quantum state can be reached , directly or indirectly , from any other state . 
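a small numerical illustration of the nonunitality operator may be useful here. the sketch below computes eta = Phi(1/d) - 1/d (our reading of eq. ([htxd])) for a complete contraction to a pure state, for which the hilbert-schmidt norm equals sqrt(1 - 1/d), and for a mixture of unitaries, for which eta vanishes; the channels are demonstration choices.

```python
# Nonunitality operator eta = Phi(I/d) - I/d, its Hilbert-Schmidt norm, and the
# extreme case of a complete contraction to a pure state (||eta||_2 = sqrt(1 - 1/d)).
import numpy as np

rng = np.random.default_rng(0)

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def nonunitality(kraus):
    d = kraus[0].shape[0]
    return apply_channel(kraus, np.eye(d) / d) - np.eye(d) / d

def hs_norm(A):
    return np.sqrt(np.real(np.trace(A.conj().T @ A)))

def random_unitary(d):
    Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

d = 3
contraction = [np.outer(np.eye(d)[:, 0], np.eye(d)[:, i]) for i in range(d)]   # K_i = |0><i|
unitary_mix = [np.sqrt(0.5) * random_unitary(d), np.sqrt(0.5) * random_unitary(d)]

for name, kraus in [("complete contraction", contraction), ("unitary mixture", unitary_mix)]:
    eta = nonunitality(kraus)
    print(name, "  ||eta||_2 =", hs_norm(eta), "  traceless:", np.isclose(np.trace(eta), 0))
print("sqrt(1 - 1/d) =", np.sqrt(1 - 1 / d))
```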
in this regard ,the completely contracting channel has opposite properties , as for any initial state only a single state can be reached during the process . using representation ( [ blrx ] ) , the nonunitality operator can be represented in terms of its generalized bloch vector with components .therefore we arrive at a handy expression for the nonunitality operator , , where denotes the -dimensional vector of generators of .we also obtain an upper bound for the modulus of the bloch vector , it follows from eq .( [ rscmn ] ) and the expression for the squared hilbert schmidt norm . in the case , the bound ( [ rscmnt ] ) gives for all quantum channels , as in the normalization used in this work the set of one - qubit states forms the bloch ball of radius one .the right - hand side of eq .( [ rscmnt ] ) tends to for large .we now apply eq .( [ prp1 ] ) in a physical set - up corresponding to the context of the jarzynski equality .one assumes that a thermostatted system is acted upon by an external agent , which operates according to the prescribed protocol .the principal system is assumed to be prepared initially in the state of the thermal equilibrium with a heat reservoir . as before we denote the initial and the final inverse temperatures of the reservoir by and .therefore , the input state is described by density matrix ( [ inidm ] ) . according to the actual process, the final density matrix may differ from the state ( [ findm ] ) , which corresponds to the equilibrium at the final moment .considering the same system , we also assume that both dimensions are equal , . by substitutions ,relation ( [ prp1 ] ) leads to the equality for unital quantum channels , the term is zero , so formula ( [ tasthg ] ) forms an extension of the previous result ( [ tasth ] ) and for a unitary evolution it reduces to the result of tasaki . the right - hand side of eq .( [ tasthg ] ) depends not only on equilibrium properties of the system but also on the realized process .the authors of ref . emphasized such a feature in connection with nonequilibrium relations for the exponentiated internal energy and heat . note also that concrete details of the realized quantum process are represented by means of a single operator . to formulate fluctuation relation ( [ tasthg ] ) , no additional characterization of the map is required . in this regard, we need not to specify a kind of coupling between the principal system and its environment .the quantity can be identified with the external work performed on the principal system during a process . for the case ,formula ( [ tasthg ] ) leads to a generalized form of the jarzynski equality for arbitrary , nonunital quantum channels , this is the central result of the present work .the correction term characterizes a deviation induced by the nonunitality of a map . in general, this term can be positive , equal to zero , or negative .if the channel is unital , then the operator and the correction term are equal to zero , so the standard form ( [ jareq0 ] ) of the jarzynski equality is recovered .it is essential to note that the correction term may vanish also for nonunital quantum channels , provided . for a convex function , the jensen inequality implies . combining this with eq .( [ jareq0 g ] ) gives this inequality provides a lower bound on the average work performed on a driven quantum system . 
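the structure of the correction term can be checked numerically. using Phi(1) = 1 + d*eta, the two-point-measurement average evaluates to <<exp(-beta W)>> = (Z_1/Z_0)(1 + d tr[exp(-beta H_1) eta]/Z_1); this identity follows directly from the definitions above, and we read eq. ([jareq0 g]) as a statement of this form, although the paper's exact normalization may differ. the script below verifies the identity for an amplitude-damping channel with arbitrary demo parameters; when eta = 0 the correction vanishes and the standard equality is recovered.

```python
# Generalized Jarzynski relation with a nonunitality correction, checked numerically.
import numpy as np
from scipy.linalg import expm

beta, gamma, d = 0.8, 0.35, 2                       # demo values
H0 = np.diag([0.0, 1.0])
H1 = np.diag([0.2, 1.7])
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()

# left-hand side: direct two-point-measurement average of exp(-beta W)
lhs = 0.0
for n in range(d):
    out = apply_channel(kraus, np.outer(V0[:, n], V0[:, n].conj()))
    for m in range(d):
        p_nm = (np.exp(-beta * E0[n]) / Z0) * np.real(V1[:, m].conj() @ out @ V1[:, m])
        lhs += p_nm * np.exp(-beta * (E1[m] - E0[n]))

# right-hand side: exp(-beta DeltaF) times (1 + correction term)
eta = apply_channel(kraus, np.eye(d) / d) - np.eye(d) / d
correction = d * np.real(np.trace(expm(-beta * H1) @ eta)) / Z1
print("direct average :", lhs)
print("with correction:", (Z1 / Z0) * (1 + correction))
print("correction term:", correction)
```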
if the correction term is strictly negative , the right - hand side of eq .( [ lbww ] ) is strictly larger than .the latter bound is commonly known and takes place for quasistatic processes . on the other hand , positivity of the correction termwill reduce this bound .it is an evidence for the fact that the averaged external work may , in principle , be less than , provided the macroscopic process investigated is sufficiently far from unitality .it is instructive to discuss limiting cases of high and low temperatures . for sufficiently high temperatures ,if with some typical value , the correction term can be expanded as since the nonunitality operator is traceless , the expansion starts with the first - order term with respect to . if , the right - hand side of eq .( [ hhtm ] ) also vanishes in the first order . within this approximation , the standard form ( [ jareq0 ] ) andits consequences remain valid . for very low temperatures ,the correction term can be expressed in terms of the ground - state energy , .if this state is nondegenerate , we approximately write neglected terms are of the order of , where and is a typical distance between nearest - neighbor levels , for instance , the energy difference between the ground state and the first excited state . up to a high accuracy the deviation from the unitalityis represented by a single matrix element , as probabilities of excited states becomes negligible for low temperatures . in general , this matrix element characterizes a difference of the matrix element from the equiprobable value .if the ground state is not involved in the undergoing process , the correction term vanishes .thus , in the low - temperature limit the standard form ( [ jareq0 ] ) may be adequate , even if the process itself is generally far from equilibrium .we also observe that the right - hand side of eq .( [ lltm ] ) does not depends on the temperature .the above results can be put into the context of the heat transfer between two quantum systems .the composite hilbert space is a tensor product of the hilbert spaces and of individual systems .let us rewrite eq .( [ prp1 ] ) so the initial state of the composite system reads where and .using an observable , we may rewrite operator ( [ abvr ] ) as assume now that the evolution of the composite system is represented by a quantum channel . by corresponding substitutions in eq .( [ prp1 ] ) , we obtain where .the operator vanishes for unital channels and the right - hand side of eq .( [ abvr2 ] ) becomes equal to unity as discussed in ref .we now consider the following situation .two separated systems are initially prepared in equilibrium with the inverse temperatures and , respectively .then the combined system is initially described by the tensor product . making use of eq .( [ abvr2 ] ) we obtain following ref . we introduce the quantity as the terms and are average variations of self - energies of the two subsystem , quantity ( [ abvs ] ) describes a contribution of these variations into a change of the total entropy . combining eq .( [ abvr3 ] ) with the jensen inequality finally gives a bound .if variations of the inverse temperatures are sufficiently small and the contributions of interaction energy are negligible then the quantity provides an estimate of changes of the total entropy of the system . 
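the low-temperature behaviour discussed above is also easy to see numerically: with a nondegenerate ground state |g>, the correction term d tr[exp(-beta H_1) eta]/Z_1 approaches d <g|eta|g> and becomes temperature independent. the values below are demonstration choices only.

```python
# Low-temperature limit of the nonunitality correction term.
import numpy as np
from scipy.linalg import expm

gamma, d = 0.35, 2
H1 = np.diag([0.2, 1.7])                           # nondegenerate ground state |0>
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]
eta = sum(K @ (np.eye(d) / d) @ K.conj().T for K in kraus) - np.eye(d) / d

for beta in [0.1, 1.0, 5.0, 20.0]:
    Z1 = np.trace(expm(-beta * H1))
    corr = d * np.trace(expm(-beta * H1) @ eta) / Z1
    print(f"beta = {beta:5.1f}   correction = {corr:.6f}")
print("low-temperature prediction d*<g|eta|g> =", d * eta[0, 0])
```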
if the correction term is strictly negative , then .negativity of the correction term also implies .since contributions on the interaction energy are small enough , a perturbative description of the process is reasonable . on the other hand , positivity of the correction term implies .in such a case , contributions of the interaction can be relevant , so quantity ( [ abvs ] ) does not provide a legitimate estimate for changes of the total entropy .in this section we discuss some simple nonunital quantum channels and analyze the correction term present in the jarzynski equality ( [ jareq0 g ] ) .analyzed channels describe the effects of the energy loss from an interacting quantum system and can be considered as a generalization of the amplitude damping channel .consider the simplest case representing a one qubit system .let a magnetic moment with spin and a charge be in contact with a thermal bath at the inverse temperature .the corresponding hamiltonian reads here , is the bohr magneton , is the vector of an external field , and is the vector of the three pauli matrices .suppose that the time evolution of the system is represented by the amplitude damping channel described by the kraus operators with ] .the map is described by a set of three kraus operators .it contains a single diagonal matrix , and two nondiagonal matrices , for this channel we find the nonunitality observable . as the choice leads to the identity map , we assume in the following that or . for a massive particle with spin and charge , we write the final hamiltonian , let us take the axis such that the component commutes with , i.e. , . in their common eigenbasis ,the two other components of the spin are expressed as in this case , an expression for the correction term is more complicated than eq .( [ cort2 ] ) . for sufficiently high temperatures , when is small , one has a simple approximation , where is the unit vector of the axis .if , the correction term vanishes in the first order with respect to . in this respect, expression ( [ cort3 ] ) is analogous to eq .( [ cort2 ] ) . on the other hand , for a generic value of the temperature ,the correction term typically differs from zero .when we fix and , then the correction term in the first order becomes maximal for .as in the above case , this choice gives a complete contraction to some pure state .in the low - temperature limit , we can rewrite eq .( [ cort3 ] ) in the form where is the angle between and . as mentioned above , the right - hand side of eq .( [ cort3l ] ) neglects contributions of order with very large values of the exponent .consider the following process defined for an arbitrary -dimensional space .let and be two sets of indices such that and .the map is described by a set of the kraus operators .the first of them is chosen to be diagonal in the eigenbasis of the hamiltonian , for given and arbitrary , we define further operators , for which and . for all impose a restriction , whence and .hence , condition ( [ prtr ] ) is satisfied , i.e. , the considered quantum operation is trace preserving . for brevity , we put positive numbers for each .an explicit form of all kraus operators allows us to find the image of the identity operator , therefore the nonunitality observable is diagonal with elements for and elements for . the correction term in eq .( [ jareq0 g ] ) can be then written as note that the right - hand side of eq .( [ xxnn ] ) represents the correction term for arbitrary . 
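for the one-qubit amplitude-damping example the correction term has a simple closed form. with eta proportional to sigma_z and a zeeman-like hamiltonian written here simply as eps*(b.sigma) (a simplified stand-in for the hamiltonian in the text, up to signs and prefactors), the correction is -gamma*tanh(beta*eps) for a field along z and vanishes exactly for a field perpendicular to z, in line with the discussion above. the script checks this numerically with arbitrary demo values.

```python
# Correction term for a damped qubit versus the orientation of the field defining H.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

gamma, beta, eps = 0.4, 1.3, 0.9                   # demo values
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
         np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]
eta = sum(K @ (np.eye(2) / 2) @ K.conj().T for K in kraus) - np.eye(2) / 2
print("eta = (gamma/2) sigma_z:", np.allclose(eta, (gamma / 2) * sz))

def correction(H):
    Z = np.trace(expm(-beta * H)).real
    return 2 * np.trace(expm(-beta * H) @ eta).real / Z

print("field along z        :", correction(eps * sz),
      "  vs -gamma*tanh(beta*eps):", -gamma * np.tanh(beta * eps))
print("field along x        :", correction(eps * sx))
print("field tilted by 45deg:", correction(eps * (sx + sz) / np.sqrt(2)))
```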
in this case, we merely replace with the diagonal matrix element with respect to the hamiltonian eigenbasis .the correction term is not uniquely defined by a given quantum channel .hence , effects of nonunitality in the jarzynski equality and related fluctuation relations in some cases may be modeled by a generalized amplitude damping channel in the described form .it is possible , provided the diagonal element of the operator can be represented in terms of the above introduced numbers and .the above two damping channels acting on and systems are particular cases of the general scheme .for instance , matrices ( [ kk12 ] ) are obtained for and with , .matrices ( [ kk123 ] ) and diagonal are recovered by setting and with , , , .another version of the damping channel for a three - level system is described for and .taking , , , we obtain the kraus matrices , these operators lead to the diagonal matrix . therefore operators ( [ kk1233 ] ) allow a controlled shift of the population between the levels of the system .in this work we formulated jarzynski equality ( [ jareq0 g ] ) for a quantum system described by an arbitrary stochastic map . this is a direct generalization of earlier results obtained for unital maps , for which the maximally mixed state is preserved .we derived a correction term which compensates the nonunitality of the map and attempted to estimate its relative size .furthermore , it was shown that the correction term vanishes if the nonunitality observable is perpendicular , in the sense of the hilbert schmidt scalar product , to the hamiltonian of the system . hence ,expression ( [ jareq0 ] ) obtained previously remains valid also for certain cases of nonunital maps provided the nonunitality does not influence the average energy of the system .the results are exemplified on a simple model of the damping channel .for a two - level system , the correction term depends on the nonunitality measured by the length of the translation vector and its orientation with respect to the vector of magnetic field .the latter determines the hamiltonian of the system .when other parameters are fixed , the translation vector is the longest in the case of complete contraction to a pure state . for the considered two - level example, such a map leads to the maximal relative size of the correction term . however ,if the translation vector is perpendicular to the field , the correction term vanishes irrespectively of the length of the translation vector . as a by - product of our study we introduced the nonunitality operator associated with a given quantum operation and analyzed its properties .some useful bounds for its norm have been established .furthermore , we presented a broad class of nonunitary dynamics acting in the set of quantum states of an arbitrary finite dimension , which can serve as a generalization of the one - qubit amplitude damping channel .the authors are very grateful to peter hnggi for a fruitful correspondence , several valuable comments , and access to his unpublished notes .we are thankful to daniel terno for useful discussions . financial support by the polish national science centre , grant no. dec-2011/02/a / st1/00119 ( k. . ) , is gratefully acknowledged .j. liphardt , s. dumont , s. b. smith , i. tinoco , and c. bustamante , equilibrium information from nonequilibrium measurements in an experimental test of jarzynski s equality .science * 296 * , 18321835 ( 2002 ) .s. toyabe , t. sagawa , m. ueda , e. muneyuki , and m. 
sano, experimental demonstrations of information-to-energy conversion and validation of the generalized jarzynski equality. nature phys. * 6 *, 988-992 (2010).
jarzynski equality and related fluctuation theorems can be formulated for various setups . such an equality was recently derived for nonunitary quantum evolutions described by unital quantum operations , i.e. , for completely positive , trace - preserving maps , which preserve the maximally mixed state . we analyze here a more general case of arbitrary quantum operations on finite systems and derive the corresponding form of the jarzynski equality . it contains a correction term due to nonunitality of the quantum map . bounds for the relative size of this correction term are established and they are applied for exemplary systems subjected to quantum channels acting on a finite - dimensional hilbert space .
the idea of a common data format for visible - wavelength and infrared astronomical interferometry ( hence referred to collectively as `` optical interferometry '' ) arose through discussions at the june 2000 nsf - sponsored meeting in socorro .since 2001 , t.p . and j.s.y .have been responsible , under the auspices of the international astronomical union ( iau ) , for coordinating discussion on this topic , and for producing and maintaining the specification document for a format based on the flexible image transport system ( fits ) .the motivation for a creating a data format specific to optical interferometry was two - fold .firstly , existing formats designed for radio interferometry such as uv - fits and fits - idi do not describe optical data adequately .secondly , the more popular uv - fits format is poorly defined its content has evolved and the format is partly defined by reference to the behaviour of the aips software and based on the deprecated fits `` random group '' conventions .the principal shortcoming of the radio interferometry formats is that fourier data is stored as complex visibilities .the noise models for radio and optical interferometry are quite different ; optical interferometers must usually integrate incoherently to obtain useful signal - to - noise .thus the `` good '' observables in the optical are calibrated fringe power spectra ( squared visibility amplitudes ) and bispectra ( triple product amplitudes and closure phases ) , rather than complex visibilities . a file format based on complex visibilitiescan only encode ( optical ) closure phases if corresponding visibility amplitude and closure phase measurements are made simultaneously , and even then the uncertainties on the closure phases are not stored ( radio interferometry analysis software assumes zero closure phase error ) .there is also no way of saving bispectrum ( triple product ) amplitudes .preliminary drafts of the specification for an optical interferometry format were discussed by the community at the 198th meeting of the american astronomical society in june 2001 , and the august 2001 meeting of the iau working group on optical / ir interferometry .the discussion continued by email amongst various interested parties , and the document was revised .a public pre - release of the format specification and accompanying example c code was made in march 2002 , followed by a second pre - release of the document in april 2002 .this draft was discussed at the august 2002 iau working group meeting .comments were presented by participants from the european southern observatory and the interferometry science center ( now the michelson science center ) . since the iau working group meeting there were two further pre - releases of the format specification , prior to the first `` production '' release .the standard was frozen on 7 april 2003 ( release 5 of the format specification ) , meaning that subsequent changes would require increments of the revision numbers for the changed tables .the history of the format specification is summarized in table [ tab : history ] . the standard is now supported by the majority of optical interferometer projects , including coast , npoi , iota , chara , vlti , pti , and the keck interferometer .this paper contains the formal definition of the standard , in sec .[ sec : start ] onwards . as earlier versions of the format specification have only been distributed via the world - wide - web ,this paper formalises the standard and serves as a published reference for it. 
further discussion explaining the design decisions and providing explicit pointers to existing software for reading and writing the format can be found in , which is intended as a `` companion '' to the official format specification given here . a first application of the exchange format , a `` beauty contest '' to compare the performance of a number of image reconstruction software packages , is described by .the contest demonstrated that four different international groups could faithfully reconstruct images from simulated datasets in the oifits format .no alterations have been made to the standard since release 5 of the format specification .the text in sec .[ sec : start ] onwards is substantially the same as that in release 5 , apart from minor changes to clarify certain points and to conform to the journal style .the revision numbers of all tables defined by the standard remain at unity .by defining and maintaining a common data format , we hope to encourage the development of common software for analysis of data from optical stellar interferometers , and to facilitate the exchange of data between different research groups . in this way, the limited fourier coverage of current instruments can be ameliorated by combining data from several interferometers .an example of this is given in the paper by .the format is intended to support the storage of calibrated , time - averaged data .such data can be prepared from the raw fringe measurements without using information about the detailed structure of the target ( i.e. without doing any astrophysical interpretation ) , yet once the data is in the format , it can be analysed without knowing the details of the instrument .calibrated data from different interferometers can be treated in the same way ( provided there are no residual systematic errors ) .the standard includes the information needed to perform `` imaging '' from a calibrated data - set . hereimaging refers loosely to any process for making inferences about the sky brightness distribution from the data .the standard explicitly allows extra binary tables or columns , beyond those it defines , to be used to store additional information . in this way, the standard can be used for other purposes besides imaging , e.g. for astrometry or as an archive format .we suggest that the common data format be referred to as the `` oi exchange format '' , or the `` exchange format '' when the context is clear .a suitable very short moniker is `` oifits '' ( or `` oi - fits '' ) .in what follows we use the fits binary table nomenclature of keywords and column headings .the values associated with the keywords can be considered as scalars , while each column can be simply an array , or an array of `` pointers '' to other arrays .the following data types are used in the standard : i = integer ( 16-bit ) , a = character , e = real ( 32-bit ) , d = double ( 64-bit ) , l = logical . in the tablesbelow , the number in parentheses is the dimensionality of the entry .the table names given below correspond to the values of the extname keyword .other mandatory keywords describing the structure of the fits binary tables ( see * ? ? ?* ) have been omitted from this document . 
also describes various extensions to binary tables that are not part of the fits standard .none of these are currently used in this format .the definitions of all tables have been `` frozen '' since april 2003 .all revision numbers are currently equal to one .any future changes will require increments in the revision numbers of the altered tables .the conventions described here are generally identical to those used in radio interferometry and described in standard textbooks ( e.g. * ? ? ?the baseline vector between two interferometric stations a and b whose position vectors are and is defined as . is the east component and is the north component of the projection of the baseline vector onto the plane normal to the direction of the phase center , which is assumed to be the pointing center .the basic observable of an interferometer is the complex visibility , which is related to the sky brightness distribution by a fourier transform : and are displacements ( in radians ) in the plane of the sky ( which is assumed to be flat ) .the origin of this coordinate system is the phase center . is in the direction of increasing right ascension ( i.e. the axis points east ) , and is in the direction of increasing declination ( the axis points north ) . with and defined , the above equation defines the sign convention for complex visibilities that should be used in the data format .the visibility is normalized by the total flux from the target , which is assumed to remain constant over the time spanned by the measurements in the file .neither the field of view over which the `` total '' flux is collected , or the field of view over which fringes are detected ( i.e. the limits of the above integral ) , can be inferred from the data file .the squared visibility is simply the modulus squared of the complex visibility : the triple product , strictly the bispectrum , is the product of the complex visibilities on the baselines forming a closed loop joining three stations a , b and c. in the following expression , is the projection of ab and is the projection of bc ( and hence is the projection of ac ) : the data are assumed to be complex triple products averaged over a large number of `` exposures '' . in such a case , the noise can be fully described in terms of a gaussian noise ellipse in the complex plane .photon , detector and background noise tend to lead to noise ellipses that are close to circular . on the other hand , fluctuating atmospheric phase errors across telescope apertures typically cause fluctuations in the amplitude of the triple product which are much larger than the fluctuations in the phase .thus the `` atmospheric '' contribution to the noise ellipse is elongated along the direction of the mean triple product vector in the argand diagram , as shown in fig .[ fig : t3noise ] .such noise needs to be characterised in terms of the variance perpendicular to the mean triple product vector and the variance parallel to .we can parameterize the perpendicular variance in terms of a `` phase error '' .the phase error gives an approximate value for the rms error in the closure phase in degrees .we denote as the `` amplitude error '' . 
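a small worked example may make the observables concrete. the script below computes the normalized complex visibility of a toy binary model, the squared visibilities, and the triple product on a closed triangle of baselines, and then shows that station-based phase errors cancel in the closure phase. the exp(-2*pi*i*(u*l+v*m)) sign convention and all numbers are illustrative choices, not a statement of the convention fixed by the standard.

```python
# Complex visibility, squared visibility, triple product and closure phase for a toy binary.
import numpy as np

def visibility(uv, components):
    """uv in wavelengths; components = list of (flux, l, m) with l, m in radians."""
    total = sum(f for f, _, _ in components)
    return sum(f * np.exp(-2j * np.pi * (uv[0] * l + uv[1] * m))
               for f, l, m in components) / total

mas = np.pi / 180 / 3600 / 1000
binary = [(1.0, 0.0, 0.0), (0.8, 5 * mas, 2 * mas)]            # components separated by ~5 mas

u_ab, u_bc = np.array([4.1e6, 1.3e6]), np.array([-1.7e6, 2.9e6])
u_ac = u_ab + u_bc                                             # closed triangle: AC = AB + BC

V_ab, V_bc, V_ac = (visibility(u, binary) for u in (u_ab, u_bc, u_ac))
print("squared visibilities:", abs(V_ab) ** 2, abs(V_bc) ** 2, abs(V_ac) ** 2)

t3 = V_ab * V_bc * np.conj(V_ac)
print("closure phase (deg):", np.degrees(np.angle(t3)))

# add random station-based phase errors; baseline phases change, the closure phase does not
phi = np.random.default_rng(0).uniform(0, 2 * np.pi, 3)        # errors at stations A, B, C
t3_err = (V_ab * np.exp(1j * (phi[0] - phi[1]))
          * V_bc * np.exp(1j * (phi[1] - phi[2]))
          * np.conj(V_ac * np.exp(1j * (phi[0] - phi[2]))))
print("closure phase with station errors (deg):", np.degrees(np.angle(t3_err)))
```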
in many cases ,the observer may be interested primarily in the closure phase and not the triple product amplitude , and therefore may choose not to calibrate the amplitude .such a case can be indicated in the above notation as an infinite amplitude error and a finite phase error .the data format specifies that such a case should be indicated by a null value for the amplitude ( the amplitude error value is then ignored ) .there was much discussion on the ( now defunct ) ` oi-data.nrl.navy.mil ` email list of the representation to use for complex visibilities in the standard .a number of different classes of data can be represented as complex visibilities , including several varieties of differential phase data . in all cases the standardshould only be used to store averaged data .thus , as with triple products , we must consider the shape of the noise ellipse in the complex plane .it has been demonstrated that both circularly - symmetric noise , and noise ellipses elongated parallel to or perpendicular to the mean vector can occur in practice .thus far there has been no evidence for noise ellipses elongated parallel to the real or imaginary axes , although examples of some classes of data have yet to be presented .hence an amplitude / phase representation of complex visibilities , mirroring that used for triple products , has been adopted in the current version of the standard .a valid exchange - format fits file must contain one ( and only one ) oi_target table , plus one or more of the data tables : oi_vis , oi_vis2 , or oi_t3 .each data table must refer to an oi_wavelength table that is present in the file. there may be more than one of each type of data table ( e.g. oi_vis2 ) . one or more oi_array tables ( or equivalent e.g. for aperture masking , in future releases of the standard ) may optionally be present . where multiple tables of the same extname are present, each should have a unique value of extver ( this according to the fits standard however the example c code and j.d.m.s idl software do not require extver to be present ) . the tables can appear in any order .other header - data units may appear in the file , provided their extnames do not begin with `` oi _ '' .reading software should not assume that either the keywords or the columns in a table appear in a particular order .this is straightforward to implement using software libraries such as cfitsio .any of the tables may have extra keywords or columns beyond those defined in the standard .it would facilitate the addition of new keywords and columns in future releases of the standard if the non - standard keywords and column names were given a particular prefix e.g. 
`` ns _ '' , to avoid conflicts .as defined , this table is aimed at ground - based interferometry with separated telescopes .alternative tables could be used for other cases .these must have at least an arrname keyword , for cross - referencing purposes .each oi_array - equivalent table in a file must have a unique value for arrname .each data table must refer ( via the insname keyword ) to a particular oi_wavelength table describing the wavelength channels for the measurements .each data table may optionally refer , via the arrname keyword , to an oi_array table .the value in the time column shall be the mean utc time of the measurement in seconds since 0h on date - obs .note this may take negative values , or values seconds , and hence the epoch of observation for the particular table is not restricted to date - obs .the value in the mjd column shall be the mean utc time of the measurement expressed as a modified julian day. it might be appropriate to use the mjd values instead of the time values when dealing with long time - spans , but the standard makes no stipulation in this regard .the exchange format will normally be used for interchange of time - averaged data .the `` integration time '' is therefore the length of time over which the data were averaged to yield the given data point .ucoord , vcoord give the coordinates in meters of the point in the uv plane associated with the vector of visibilities .the data points may be averages over some region of the uv plane , but the current version of the standard says nothing about the averaging process .this may change in future versions of the standard. it may be useful to allow for some optional tables .for example , there might be one that contains instrument specific information , such as the backend configuration .another optional table could contain information relevant to astrometry .the extnames of additional tables should not begin with `` oi _ '' .development of the format was performed under the auspices of the iau working group on optical / ir interferometry , and has the strong support of its members .we would like to thank peter r. lawson ( chair of the working group ) for his encouragement and support .the authors would like to thank david buscher , christian hummel , and david mozurkewich for providing material for the format specification and the oifits website .we would like to take this opportunity to thank all those in the community who have contributed to the definition of the format and to its take - up by many interferometer projects .cotton , w. d. , et al .1990 , going aips : a programmer s guide to the nrao astronomical image processing system ( national radio astronomy observatory , charlottesville , va ) , 14.714.10 , http://www.aoc.nrao.edu/aips/goaips.html lawson , p. r. , cotton , w. d. , hummel , c. a. , monnier , j. d. , zhao , m. , young , j. s. , thorsteinsson , h. , meimon , s. c. , mugnier , l. , besnerais , g. l. , thibaut , e. , & tuthill , p. g. 2004 , in , vol . 5491 , new frontiers in stellar interferometry , ed . w. traub , j. d. monnier , & m. schller , 2125 jun .2004 , glasgow ( spie press ) , 886 pauls , t. a. , young , j. s. , cotton , w. d. , & monnier , j. d. 2004 , in , vol . 5491 , new frontiers in stellar interferometry , ed . w. traub , j. d. monnier , & m. 
schller, 21-25 jun. 2004, glasgow (spie press), 1231

[figure fig:t3noise - noise model for the triple product: an argand diagram showing the mean triple product with its associated noise ellipse, elongated parallel to the mean triple product vector. see text for explanation.]

[table tab:history - history of the format specification]
2002/03/25 - pre-release of document and example code; release 1 of format specification
2002/04/25 - minor correction to document; release 2 of format specification
2002/11/26 - post-iau-wg-meeting release of document and example; release 3 of format specification
2003/02/17 - release of document, c and idl code to fix problem in oi_target table; release 4 of format specification
2003/04/07 - freeze of format specification, revision numbers of all tables set to unity; release 5 of format specification
2005 - this paper, no changes to exchange format or table revision numbers
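as an illustration of the file-structure rules described above (exactly one oi_target table, at least one of the oi_vis / oi_vis2 / oi_t3 data tables, and each data table referring via insname to an oi_wavelength table present in the file), the sketch below walks through a file with a generic fits reader. it assumes the astropy package is available and uses a placeholder file name; it is not part of the released example code.

```python
# Rough structure check for an exchange-format file using generic FITS binary-table access.
from astropy.io import fits

def check_oifits_structure(path):
    with fits.open(path) as hdulist:
        names = [hdu.name for hdu in hdulist]
        problems = []
        if names.count("OI_TARGET") != 1:
            problems.append("expected exactly one OI_TARGET table, found %d"
                            % names.count("OI_TARGET"))
        data_tables = [hdu for hdu in hdulist if hdu.name in ("OI_VIS", "OI_VIS2", "OI_T3")]
        if not data_tables:
            problems.append("no OI_VIS, OI_VIS2 or OI_T3 data table found")
        wavelength_ids = {hdu.header.get("INSNAME")
                          for hdu in hdulist if hdu.name == "OI_WAVELENGTH"}
        for hdu in data_tables:
            insname = hdu.header.get("INSNAME")
            if insname not in wavelength_ids:
                problems.append("%s (EXTVER %s) refers to unknown INSNAME %r"
                                % (hdu.name, hdu.header.get("EXTVER"), insname))
        return problems

for message in check_oifits_structure("example_oifits.fits"):   # placeholder file name
    print("WARNING:", message)
```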
this paper describes the oi exchange format , a standard for exchanging calibrated data from optical ( visible / infrared ) stellar interferometers . the standard is based on the flexible image transport system ( fits ) , and supports storage of the optical interferometric observables including squared visibility and closure phase data products not included in radio interferometry standards such as uv - fits . the format has already gained the support of most currently - operating optical interferometer projects , including coast , npoi , iota , chara , vlti , pti , and the keck interferometer , and is endorsed by the iau working group on optical interferometry . software is available for reading , writing and merging oi exchange format files .
during commissioning of the leda rfq , we found that the beam behaved in the high energy beam transport ( hebt ) much as predicted .thus the actual rfq beam must have been close to that computed by the parmteqm code .the hebt included only limited diagnostics but we were able to get additional information on the rfq beam distribution using quadrupole scans .an good understanding of the rfq beam and beam behavior in the hebt will be helpful for the upcoming beam halo experiment .the problems with the quad scan measurements were the strong space effects and the almost complete lack of knowledge of the longitudinal phase space .also , our simulation codes , which served as the models for the data fitting , did not accurately reproduce the measured beam profiles at the wire scanner .the hebt transports the rfq beam to the beamstop and provides space for beam diagnostics . here, we discuss hebt properties relevant to beam characterization . _ design has weak focusing ._ ideally , the hebt would have closely - space quadrupoles at the upstream end until the beam is significantly debunched , i.e. , for about one meter .after this point , we could use any kind of matching scheme with no fear of spoiling the beam distribution with space - charge nonlinearities .our hebt design uses four quadrupoles , which is the minimum that provides adequate focusing for the given length .any fewer than four quadrupoles results in the generation of long gaussian - like tails in the beam , which would be scraped off in the hebt ._ good tune is important . _if a tune has a small waist in the upstream part of the hebt , the beam will also acquire gaussian - like tails .simulations showed that good tunes existed for our four - quadrupole beamline and were stable ( slight changes in magnet settings or input beam did not lead to beam degradation ) .beam size control ._ in our design , increasing the strength of the last quadrupole ( q4 ) increases the beam size in both and by about the same amount .this is because there is a crossover in just downstream of q4 and a ( virtual ) crossover just upstream of q4 in .if the beam turns out to not be circular , this can be adjusted by q3 , which moves the upstream crossover point . _ emittance growth in hebt ._ simulations showed that the transverse emittances grew by about 30% in the hebt .however , this did not affect final beam size . at the downstream end of the hebt and in the beamstop ,the beam is in the zero - emittance regime ( very narrow phase - space ellipses ) .simulations with trace 3-d , which has no nonlinear effects , and a 3-d particle code that included nonlinear space - charge predicted almost identical final beam sizes .near the beamstop entrance , there is a collimator with a size less than 3 times the rms beam size .initial runs showed beam hitting the top and bottom of the the collimator , indicating the beam was too large in .this was fixed by readjusting q3 and slightly reducing q4 to reduce the beam size .after these adjustments , beam losses were negligible .this indicated the hebt was operating as predicted and the rfq beam was about as predicted .there were no long tails generated in the hebt that were being scraped off .thus our somewhat risky design , having only four quadrupoles , worked as designed .only the first two quadrupoles were used . for characterizing the beam in , q1 , which focuses in ,was varied and the beam was observed at the wire scanner , which was about 2.5 m downstream . 
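the quadrupole-scan idea can be summarized with a zero-current linear-optics sketch: propagate the second-moment matrix through a thin-lens quadrupole and the drift to the wire scanner, and watch the rms size pass through a minimum as the gradient is varied. the code below is only this skeleton; it ignores the space-charge forces that dominate the real beam, and all numbers (drift length, emittance, twiss parameters) are made-up demonstration values.

```python
# Zero-current forward model of a quadrupole scan: rms size at the wire scanner versus
# integrated quadrupole strength, using sigma-matrix transport through a thin lens and a drift.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(kappa_L):
    """Thin-lens quadrupole; kappa_L > 0 focuses in this plane."""
    return np.array([[1.0, 0.0], [-kappa_L, 1.0]])

def sigma_matrix(eps, beta, alpha):
    gamma = (1 + alpha ** 2) / beta
    return eps * np.array([[beta, -alpha], [-alpha, gamma]])

eps, beta0, alpha0 = 2.0e-7, 0.5, 1.0      # demo rms emittance [m rad] and Twiss values
L_drift = 2.5                              # quad -> wire scanner distance [m], demo value

for kappa_L in np.linspace(0.5, 4.0, 8):   # integrated quad strength [1/m], demo range
    M = drift(L_drift) @ thin_quad(kappa_L)
    sigma = M @ sigma_matrix(eps, beta0, alpha0) @ M.T
    print(f"k*L = {kappa_L:4.2f} 1/m   rms size at scanner = {1e3 * np.sqrt(sigma[0, 0]):.3f} mm")
```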
the value of the q2 gradient was chosen so that the beam was contained in the direction for all values of q1 . for characterizing ,q2 was varied .as the quadrupole strength is increased , the beam size at the wire scanner goes through a minimum . at the minimum, there is a waist at approximately the wire - scanner position . for larger quadrupole strengths , the waist moves upstream in the beamline .quadrupole scans were done a number of times for a variety of beam currents for both the and directions .the minimum beam size at the wire scanner was near 2 mm , which was almost equal to the size of the steering jitter .approximately ten quadrupole settings were used for each scan .data were recorded and analyzed off line . to determine the phase - space properties of the beam at the exit of the rfq, we needed a model that could predict the beam profile at the wire scanner , given the beam at the rfq exit .we parameterized the rfq beam with the courant - snyder parameters , , and in the three directions .we used the simulation codes trace 3-d and linac as models for computing rms beam sizes in our fitting .the trace 3-d code is a sigma - matrix ( second moments ) code that includes only linear effects but is 3-d .the linac code is a particle in cell ( pic ) code that has a nonlinear - space charge algorithm .figure [ t1 ] shows the rms beam size in the direction as a function of q1 gradient .the experimental numbers are averages from a set of quad scan runs .the other curves are simulations using the trace 3-d , linac , and impact codes .the impact code is a 3-d pic code with nonlinear space charge .the initial beam ( at the rfq exit ) for all simulations is the beam determined by the fit to the linac model .( this is why there is little difference between the experimental points and the linac simulation . 
) there are significant differences among the codes in the predictions of the rms beam size. table 1 shows the emittances we obtained when fitting to the trace 3-d and linac models.

table 1: emittances obtained when fitting to the trace 3-d and linac models, compared with the parmteqm prediction (two columns: x and y)
prediction (parmteqm): 0.245, 0.244
measured (trace 3-d fit): 0.400, 0.401
measured (linac fit): 0.253, 0.314

since only the impact code has nonlinear 3-d space charge, we would expect that this code would be the most accurate and should be used to fit to the data. both nonlinear and 3-d effects are large in the quad scans. however, we found that the impact code (as well as linac) could not predict well the beam profile at the wire scanner. figure [f3] shows the projections onto the axis for two points of the quad scan, corresponding to q1 gradients of 7.52 and 11.0 t/m. the agreement for 11 t/m, which is to the right of the minimum of the quad scan curve, is especially poor. we see that the experimental curve (solid) has a narrower peak, with more beam in the tail than the impact simulation predicts. figure [f1] shows the phase space just after q2 for two points in the quad scan. after q2, space charge has little effect and the beam mostly just drifts to the end (there is little change in the maximum value of). the graph on the left is for a q1 value to the left of the quad scan minimum (9.5 t/m). the graph at the right shows the situation to the right of the minimum (10.9 t/m). the distribution in the left graph is diverging, while the one on the right is converging. it is this convergence that apparently leads to the strange tails we see in the experimental profiles at the wire scanner.
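for reference, the fitting procedure itself can be illustrated in the zero-current limit, where the measured mean-square size at the scanner is linear in the three unknown second moments at the quadrupole entrance, so an ordinary least-squares fit recovers them (and hence the emittance and twiss parameters). the real analysis had to fit to the space-charge codes instead; the synthetic example below only shows the idea, and every number in it is made up.

```python
# Zero-current quad-scan fit: recover the input second moments from rms sizes at the scanner.
import numpy as np

L = 2.5                                    # drift quad -> wire scanner [m], demo value
k_values = np.linspace(0.5, 4.0, 10)       # integrated quad strengths [1/m]

def design_row(k):
    m11, m12 = 1.0 - L * k, L
    return [m11 ** 2, 2 * m11 * m12, m12 ** 2]

# synthetic "truth" and noisy measurements of the mean-square size at the scanner
true_moments = np.array([1.0e-7, -1.5e-7, 4.0e-7])    # sigma11 [m^2], sigma12 [m rad], sigma22 [rad^2]
A = np.array([design_row(k) for k in k_values])
rng = np.random.default_rng(3)
measured = A @ true_moments * (1 + 0.02 * rng.normal(size=len(k_values)))

fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
s11, s12, s22 = fit
emittance = np.sqrt(s11 * s22 - s12 ** 2)
print("fitted moments      :", fit)
print("rms emittance       :", emittance, "m rad")
print("Twiss beta, alpha   :", s11 / emittance, -s12 / emittance)
```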
using the courant - snyder parameters of the parmteqm beam yielded similar results .varying these parameters in various ways did not make the beam look any closer to the experimentally observed one .we tried various distortions of the input beam such as enhancing the core or tail and distorting the phase space by giving each particle a kick in direction proportional to or .these changes had little effect , even for very severe distortions .kicks proportional to were more effective .these are more like space - charge effects in that the distortion is larger near the origin and smaller near the tails .in general , we found that any structure we put into the input beam tended to disappear because of the strong nonlinear space - charge forces at the hebt front end .multipole errors were investigate using a version of marylie with 3-d space charge. we could generate tails that looked like the experimentally observed ones , but this took multipoles that were about 500 times as large as were measured when the quadrupoles were mapped .quadrupole rotation studies also yielded negative results .we investigated various currents and variations in space charge effects along the beamline , as could be generated by neutralization or unknown effects .we had practically no knowledge of the beam in the longitudinal direction except that practically all of the beam is very near the 6.7 mev design energy .since the transverse beam seems to be reasonably predicted by the rfq simulation code , we do not expect the longitudinal phase space to be much different from the prediction .we tried various longitudinal phase - space variations and none led to profiles at the wire scanner that looked similar to the experimental ones .in the upstream part of the hebt the beam size profiles ( and as functions of ) for the quad scan tune are not much different from those of the normal hebt tune . the differences occurs quite a way downstream . but here , space charge effects are small and are unlikely to explain the differences we see in the beam profiles at the wire scanner .this is a mystery that is still unresolved .if we succeed in simulating profiles at the wire scanners that look more like the ones seen in the measurement , then it will be reasonable to fit the data to the 3-d impact simulations . in that case, we will use all the wire - scanner data , taking into account the detailed shape of the profile and not just the rms value of the beam width , as we did for the trace 3-d and linac fits . while we were able to use a personal computer to run the hpf version of impact for most of the work described here, the fitting to the impact model will have to be done on a supercomputer .we thank robert ryne and ji qiang for providing the impact code and for help associated with its use .smith , jr . and j.d .schneider , `` status update on the low - energy demonstration accelerator ( leda ) , '' this conference . l.m .young , et al ., `` high power operations of leda , '' this conference .gilpatrick , et al . , leda beam diagnostics instrumentation : measurement comparisons and operational experience , " submitted to the beam instrumentation workshop 2000 , cambridge , ma , may 8 - 11 , 2000 .schulze , et al ., `` beam emittance measurements of the leda rfq , '' this conference . w.p .lysenko , j.d .gilpatrick , and m.e .schulze , `` high energy beam transport beamline for leda , '' 1998 linear accelerator conference .
quadrupole scans were used to characterize the leda rfq beam . experimental data were fit to computer simulation models for the rms beam size . the codes were found to be inadequate in accurately reproducing details of the wire scanner data . when this discrepancy is resolved , we plan to fit using all the data in wire scanner profiles , not just the rms values , using a 3-d nonlinear code .
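as a small addendum to the quad - scan discussion above, the sketch below illustrates the generic rms - fitting idea in a zero - current setting: transport an assumed input sigma matrix through a thin - lens quadrupole and a drift, and least - squares fit the emittance and courant - snyder parameters to measured rms sizes. this is not the trace 3-d, linac, or impact procedure used in the paper ( in particular it ignores space charge, which the text shows to be important ), and all element lengths, strengths, and scan values are made - up placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def transport_sigma(sigma0, k, lq, ld):
    """propagate a 2x2 transverse sigma matrix through a thin-lens quadrupole
    of strength k (1/m^2) and length lq, followed by a drift of length ld."""
    quad = np.array([[1.0, 0.0], [-k * lq, 1.0]])   # thin-lens approximation
    drift = np.array([[1.0, ld], [0.0, 1.0]])
    m = drift @ quad
    return m @ sigma0 @ m.T

def rms_size_model(params, k_values, lq, ld):
    eps, beta, alpha = params                        # emittance and courant-snyder parameters
    gamma = (1.0 + alpha ** 2) / beta
    sigma0 = eps * np.array([[beta, -alpha], [-alpha, gamma]])
    return np.array([np.sqrt(transport_sigma(sigma0, k, lq, ld)[0, 0])
                     for k in k_values])

def fit_quad_scan(k_values, rms_measured, lq=0.10, ld=2.0):
    """least-squares fit of (eps, beta, alpha) to measured rms sizes."""
    residual = lambda p: rms_size_model(p, k_values, lq, ld) - rms_measured
    guess = np.array([1e-6, 1.0, 0.0])               # starting point may need tuning
    bounds = ([1e-9, 1e-3, -50.0], [1e-3, 1e3, 50.0])
    return least_squares(residual, guess, bounds=bounds)

# illustrative usage with made-up scan points (no space charge anywhere here)
k_scan = np.linspace(5.0, 12.0, 8)                   # hypothetical quad strengths, 1/m^2
fake_sizes = rms_size_model([0.25e-6, 2.0, -1.5], k_scan, 0.10, 2.0)
print(fit_quad_scan(k_scan, fake_sizes).x)           # recovered (eps, beta, alpha)
```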
natural systems often exhibit irregular and complex behavior that at first looks erratic but in fact possesses scale - invariant structure ( e.g., * ? ? ? * ; * ? ? ? * ). in many cases this nontrivial structure points to long - range temporal correlations, which means that events that are far apart in time are actually ( statistically ) correlated with each other. long - range correlations are usually characterized by scaling laws where the scaling exponents quantify the strength of these correlations. however, it is clear that the two - point long - range correlations reveal just one aspect of the complexity of the system under consideration and that higher order statistics are needed to fully characterize the statistical properties of the system. the two - point correlation function is usually used to quantify the scale - invariant structure of a time series ( long - range correlations ), while the -point correlation function quantifies the higher order correlations. in some cases the -point correlation function is trivially related to the two - point correlation function, and the scaling exponents of different moments depend linearly on the second moment scaling exponent. these kinds of processes are termed `` linear '' and `` monofractal '', since a single exponent that determines the two - point correlations ( and thus the linear correlations ) quantifies the entire spectrum of order scaling exponents. in other cases, the -point correlation function has a nontrivial relation to the two - point correlation function, and a ( nontrivial ) spectrum of scaling exponents is needed to quantify the statistical properties of the system ; these processes are called `` nonlinear '' and `` multifractal ''. the classification into linear and nonlinear processes is important for understanding the underlying dynamics of natural time series and for model development. moreover, the nonlinear properties of natural time series may have practical diagnostic use ( e.g., * ? ? ? * ). recently a simple measure for the nonlinearity of a time series was suggested. given a time series, the correlations in the magnitude series ( volatility ) may be related ( in some cases ) to the nonlinear properties of the time series ; basically, when the magnitude series is correlated, the time series is nonlinear. it was also shown that the scaling exponent of the magnitude series may be related, in some cases, to the width of the multifractal spectrum. however, these observations are empirical and the reasons underlying them remain unclear. here we develop an analytical relation between the scaling exponent of the original time series and the scaling exponent of the magnitude time series for linear series. we show that when the original time series is nonlinear, the corresponding scaling exponent of the magnitude series is larger than ( or in some cases equal to ) that of a linear series, and that the correlations in the magnitude series increase as the nonlinearity of the original series increases. these relations may help to identify nonlinear processes and quantify the strength of the nonlinearity.
based on these results we suggest a generic model for multifractality in which random signs are multiplied by long - range correlated noise, and show that the multifractal spectrum width and the volatility exponent increase as these correlations become stronger. the paper is organized as follows : in section [ sec : background ] we present some background regarding nonlinear processes and magnitude ( volatility ) series correlations. in section [ sec : linear ] we develop an analytical relation between the original time series scaling exponent and the magnitude series exponent ; we confirm the analytical relation using numerical simulations. we then study in section [ sec : multifractal ] the relation between volatility correlations and the multifractal spectrum width of several multifractal models, and introduce a simple model that generates multifractal series by explicitly inserting long - range correlations in the magnitude series. a summary of the results is given in section [ sec : summary ]. the long - range correlations of a time series can be evaluated using the two - point correlation function ( stands for expectation value ) ; when is long - range correlated and stationary, the two - point correlation function is ( ). it is possible to obtain a good estimate of the scaling exponent using various methods, such as the power spectrum, fluctuation analysis ( fa ), detrended fluctuation analysis ( dfa ), the wavelet transform, and others ; see for more details. the different techniques characterize the linear two - point correlations in a time series with a scaling exponent which is related to the scaling exponent . in this study we use the fa method for the analytical derivations, since this method is relatively simple. in the fa method the sequence is treated as the steps of a random walk ; then the variance of its displacement, , is found by averaging over different time windows of length . the scaling exponent of the series ( also referred to as the hurst exponent ) can be measured using the relation where is the variance ; the scaling exponent is related to the correlation exponent by . a more complete description of a stochastic process with zero mean is given by its multivariate distribution : . it is equivalent to the knowledge stored in the correlation functions of different orders : , , , , etc . in many cases it is useful to use the cumulants of different orders, which are related to the order correlation function by : and so on. for a linear process ( sometimes referred to as a `` gaussian '' process ), all cumulants above the second are equal to zero ( wick's theorem ). thus, in this case, the two - point correlation fully describes the process, since all correlation functions ( of positive and even order ) may be expressed as products of the two - point correlation function. processes that are nonlinear ( or `` multifractal '' ) have nonzero higher order cumulants. the nonlinearity of these processes may be detected by measuring the multifractal spectrum using advanced techniques, such as the wavelet transform modulus maxima or the multifractal dfa ( mf - dfa ). in mf - dfa we calculate the order correlation function of the profile, and the partition function is . for time series that obey scaling laws the partition function is . thus, the `` spectrum '' of scaling exponents characterizes the correlation functions of different orders. for a linear series, the exponents will all give a single value for all .
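as a concrete illustration of the fa method just described, the following minimal numpy sketch estimates the scaling exponent from the growth of the rms displacement of the profile with window size. the scale range, seed, and series length are arbitrary assumptions, and for nonstationary or trended data one would use dfa, as noted above.

```python
import numpy as np

def fluctuation_analysis(x, scales):
    """fa: treat x as increments of a random walk and measure how the rms
    displacement of the profile grows with the window size."""
    y = np.cumsum(x - np.mean(x))                 # profile (random-walk position)
    return np.array([np.sqrt(np.mean((y[s:] - y[:-s]) ** 2)) for s in scales])

def scaling_exponent(x, scales):
    """estimate alpha from the power-law fit f(s) ~ s**alpha."""
    f = fluctuation_analysis(x, scales)
    slope, _ = np.polyfit(np.log(scales), np.log(f), 1)
    return slope

# uncorrelated noise should give an exponent close to 0.5
rng = np.random.default_rng(0)
scales = np.unique(np.logspace(0.5, 3.0, 20).astype(int))
print(scaling_exponent(rng.standard_normal(2 ** 16), scales))
```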
a known example of the use of volatility correlations ( defined below ) is econometric time series. econometric time series exhibit irregular behavior such that the changes ( logarithmic increments ) in the time series have a white noise spectrum ( they are uncorrelated ). nonetheless, the _ magnitudes _ of the changes exhibit long - range correlations, reflecting the fact that economic markets experience quiet periods with clusters of less pronounced price fluctuations ( up and down ), followed by more volatile periods with pronounced fluctuations ( up and down ). this type of correlation is referred to as `` volatility correlations ''. given a time series , the magnitude ( volatility ) series may be defined as . the scaling exponent of the magnitude series is the volatility scaling exponent. correlations in the magnitude series are observed to be closely related to nonlinearity and multifractality. in this paper we refer to `` volatility '' with two small differences. first, we consider the _ square _ of the series elements rather than their absolute values. according to our observations, this transformation has a negligible effect on the scaling exponent, but it substantially simplifies the analytical treatment. second, for simplicity, we also consider the series itself rather than the increment series. that is, the volatility series is defined as rather than . note that in most applications the absolute values of the increment series are considered instead of the absolute values of the series itself, since the original series is mostly nonstationary ( defined below ) ; here we overcome this problem by first considering stationary series. series with are _ stationary _, that is, their correlation function depends only on the _ difference _ between points and , i.e. , . their variance is a finite constant that does not increase with the sequence length. sequences with are _ non - stationary _ and have a different form of correlation function, where the correlation function depends also on the absolute indices and , ; see . scaling exponents of nonstationary series ( or series with polynomial trends ) may be calculated using methods that can eliminate constant or polynomial trends from the data. we proceed to study the relation between the volatility correlation exponent and the original scaling exponent for linear processes, both numerically and analytically. we generate artificial long - range correlated linear sequences with different values of in the range ] with intervals of . thus, looking only at the positive frequencies, the minimal frequency ( without loss of generality ) is . the variance of the signal is the total area under the power spectrum : assuming , for the variance is , thus the variance diverges logarithmically for . * non - stationarity : * for the variance _ diverges _ with the sequence length, because of the singularity in the power spectrum, and the sequence is non - stationary. for the divergence is a power law, i.e.
, while at the divergence is logarithmic. * finite size effects : * for the variance converges to a finite constant, so the sequence is stationary, but as this convergence becomes slower. this means that as , larger and larger sequence lengths are required for the variance to converge to a constant value ( see fig. [ fig : ro_large ] ). this argument also holds for other values of the correlation function , , although in a more moderate way. the strong finite size effects around and the non - stationarity at have to be taken into account when calculating the magnitude series scaling exponent. this is done by dividing the volatility fluctuation function by the variance of the sequence given in eq. ( [ equ : var_finite_n ] ). for finite size effects disappear and converges to its theoretical value ( see fig. [ fig : vol_linear ] ). this convergence is extremely slow and becomes weaker as we approach . for completeness, we show in fig. [ fig : ro_small ] the correlation coefficients for . in this regime the sequence exhibits short - range anti - correlations, as can be seen in fig. [ fig : ro_small ]. the expression for the correlation function for is approximately note that in the dfa notation the expressions for should be larger by one than those of ; however, since here we consider the increments, the series exponent is reduced back by one, thus compensating for the dfa integration. fa and dfa actually measure the scaling of the fluctuations for window sizes ranging from 1 to t. fluctuations for windows of size t are given by , while fluctuations for windows of size 1 are actually the variance of the sequence. thus, the scaling exponent is approximated by $[\ln \langle x_t^2 \rangle^{1/2} - \ln \mathrm{var}(u_i)^{1/2}]/\ln t = \ln [ \langle x_t^2 \rangle / \mathrm{var}(u_i)]/(2 \ln t)$. therefore, in cases where the variance is not constant, the fluctuation function should be normalized by the variance.
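to make the numerical procedure described above concrete, here is a hedged sketch that generates linear long - range correlated sequences by fourier filtering and compares the fa exponent of the series with that of its squared ( volatility ) series. the normalization, scale range, and the chosen exponent are illustrative assumptions rather than the exact settings used for the figures, and the finite - size effects discussed above are not treated.

```python
import numpy as np

def fa_exponent(x, scales):
    """crude fa estimate of the scaling exponent (see the earlier sketch)."""
    y = np.cumsum(x - x.mean())
    f = [np.sqrt(np.mean((y[s:] - y[:-s]) ** 2)) for s in scales]
    return np.polyfit(np.log(scales), np.log(f), 1)[0]

def correlated_noise(n, alpha, rng):
    """linear long-range correlated noise by fourier filtering: random phases
    with amplitudes f**-(alpha - 0.5), i.e. a power spectrum s(f) ~ f**(1 - 2*alpha)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(alpha - 0.5))
    x = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(freqs.size)), n=n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
scales = np.unique(np.logspace(0.7, 3.3, 25).astype(int))
x = correlated_noise(2 ** 16, alpha=0.8, rng=rng)
print(fa_exponent(x, scales))       # should be close to the input alpha = 0.8
print(fa_exponent(x ** 2, scales))  # volatility exponent of this linear series
```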
previous studies indicate that nonlinear properties of gaussian time series with long - range correlations, , can be detected and quantified by studying the correlations in the magnitude series, i.e., the `` volatility ''. however, the origin of this empirical observation remains unclear, and the exact relation between the correlations in and the correlations in is still unknown. here we find analytical relations between the scaling exponent of a linear series and that of its magnitude series. moreover, we find that nonlinear time series exhibit stronger ( or equal ) correlations in the magnitude time series compared to linear time series with the same two - point correlations. based on these results we propose a simple model that generates multifractal time series by explicitly inserting long - range correlations in the magnitude series ; the nonlinear multifractal time series is generated by multiplying a long - range correlated time series ( which represents the magnitude series ) with an uncorrelated time series ( which represents the sign series ). our results on magnitude series correlations may help to identify linear and nonlinear processes in experimental records.
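a minimal sketch of the sign - times - magnitude construction summarized above : random, uncorrelated signs multiplied by the magnitude of a long - range correlated series. the generator, the value of the magnitude exponent, and the series length are arbitrary assumptions ; measuring the resulting multifractal spectrum ( e.g. with mf - dfa ) is left out for brevity.

```python
import numpy as np

def lrc_noise(n, alpha, rng):
    """gaussian series with power spectrum s(f) ~ f**(1 - 2*alpha) (fourier filtering)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(alpha - 0.5))
    x = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(freqs.size)), n=n)
    return (x - x.mean()) / x.std()

def sign_magnitude_model(n, alpha_mag, rng):
    """toy multifractal generator: uncorrelated random signs multiplied by the
    magnitude of a long-range correlated series."""
    magnitude = np.abs(lrc_noise(n, alpha_mag, rng))
    signs = rng.choice([-1.0, 1.0], size=n)
    return signs * magnitude

rng = np.random.default_rng(2)
series = sign_magnitude_model(2 ** 16, alpha_mag=0.9, rng=rng)
# per the text, increasing alpha_mag strengthens the magnitude correlations and
# should widen the multifractal spectrum measured, e.g., with mf-dfa.
```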
many future technologies will be based on quantum systems manipulated to achieve engineering outcomes .quantum feedback control forms one of the key design methodologies that will be required to achieve these quantum engineering objectives .examples of quantum systems in which quantum control may play a key role include the quantum error correction problem ( see ) which is central to the development of a quantum computer and also important in the problem of developing a repeater for quantum cryptography systems , spin control in coherent magnetometry ( see ) , control of an atom trapped in a cavity ( see ) , the control of a laser optical quantum system ( see ) , control of atom lasers and bose einstein condensates ( see ) , and the feedback cooling of a nanomechanical resonator ( see ) .attention is now turning to more general aspects of quantum control , particularly in the development of systematic quantum control theories for quantum systems .for example in and it was shown that the linear quadratic gaussian ( lqg ) optimal control approach to controller design can be extended to linear quantum systems .also , in , it was shown that the optimal control approach to controller design can be extended to linear quantum systems .these theoretical results indicate that systematic optimal control methods of modern control theory have the potential of being applied to quantum systems .such modern control theory methods have the advantage that they are strongly model based and provide systematic methods of designing multivariable control systems which can achieve excellent closed loop performance and robustness .experimental demonstrations of some of these theoretical results now appear viable .for example , ref . presents the first experimental demonstration of the design and implementation of a coherent controller from within this formalism .one particular systematic approach to control is the lqg optimal control approach to design .lqg optimal control is based on a linear dynamical model of the plant being controlled which is subject to gaussian white noise disturbances ; e.g , see . in lqg optimal control ,a dynamic linear output feedback controller is sought to minimize a quadratic cost functional which encapsulates the performance requirements of the control system .a feature of the lqg optimal control problem is that its solution involves the use of a kalman filter which provides estimates of the internal system variables .furthermore , in many applications integral action is required in order to overcome low frequency disturbances acting on the system being controlled .this issue is addressed here by using a version of lqg optimal control referred to as integral lqg control which forces the controller to include integral action ; see . in this paper, we consider the application of systematic methods of lqg optimal control to the archetypal quantum optical problem of locking the resonant frequency of an optical cavity to that of a laser .homodyne detection of the reflected port of a fabry - perot cavity is used as the measurement signal for an integral lqg controller . in our case, the linear dynamic model used is obtained using both physical considerations and experimentally measured frequency response data which is fitted to a linear dynamic model using subspace system identification methods ; e.g. , see . 
the integral lqg controller design is discretized and implemented on a dspace digital signal processing ( dsp ) system in the laboratory and experimental results were obtained showing that the controller has been effective in locking the optical cavity to the laser frequency .we also compare the step response obtained experimentally with the step response predicted using the identified model .this paper is structured as follows : in section [ sec : problem ] the quantum optical model of an empty cavity is formulated in a manner consistent with the lqg design methodology ; section [ sec : sysid - cavity ] outlines the subspace system identification technique used to arrive at the linear dynamic model for the cavity system ; section [ sec : lqg ] presents the lqg optimal controller design methodology as applied to the problem of locking the frequencies of a laser and of an empty cavity together ; section [ sec : expt ] presents experimental results ; and we conclude in section [ sec : conclusion ] .a schematic of the frequency stabilization system is depicted in fig . [fig : hjp - laser1-fig1 ] . the cavity can be described in the heisenberg picture by the following quantum stochastic differential equations ; e.g. , see and section 9.2.4 of : here , the annihilation operator for the cavity mode is denoted by and the annihilation operator for the coherent input mode is denoted by , both defined in an appropriate rotating reference frame . here is quantum noise .we have written where , and quantify the strength of the couplings of the respective optical fields to the cavity , including the losses .the input to the cavity is taken to be a coherent state with amplitude and which is assumed to be real without loss of generality . denotes the frequency detuning between the laser frequency and the resonant frequency of the cavity .the objective of the frequency stabilization scheme is to maintain .the detuning is given by where is the resonant frequency of the cavity , is the laser frequency , is the optical path length of the cavity , is the speed of light in a vacuum and is a large integer indicating that the longitudinal cavity mode is being excited . the cavity locking problem is formally a nonlinear control problem since the equations governing the cavity dynamics in ( [ cavity-1 ] ) contain the nonlinear product term .in order to apply linear optimal control techniques , we linearize these equations about the zero - detuning point .let denote the steady state average of when such that .the perturbation operator satisfies the linear quantum stochastic differential equation ( neglecting higher order terms ) the perturbed output field operator is given by which implies .we model the measurement of the quadrature of with homodyne detection by changing the coupling operator for the laser mode to , and measuring the real quadrature of the resulting field .the measurement signal is then given by where is the intensity noise of the input coherent state .as we shall see in section [ sec : lqg ] , the lqg controller design process starts from a state - space model of the plant , actuator and measurement , traditionally expressed in the form : where represents the vector of system variables ( we shall not use the control engineering term `` states '' to avoid confusion with quantum mechanical states ) , is the input to the system , is the measured output , and is a gaussian white noise disturbance acting on the system . 
also, represents the system matrix, is the input matrix, is the output matrix, and and are the system noise matrices. the cavity dynamics and homodyne measurement are expressed in state - space form in terms of the quadratures of the operators as
\begin{aligned}
\left[ \begin{array}{c} \dot{\tilde{q}} \\ \dot{\tilde{p}} \end{array} \right] &= \left[ \begin{array}{cc} -\frac{\kappa}{2} & 0 \\ 0 & -\frac{\kappa}{2} \end{array} \right] \left[ \begin{array}{c} \tilde{q} \\ \tilde{p} \end{array} \right] + \left[ \begin{array}{c} 0 \\ 2\alpha \end{array} \right] \delta \\
&\quad - \sqrt{\kappa_0} \left[ \begin{array}{cc} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{array} \right] \left[ \begin{array}{c} q_0 \\ p_0 \end{array} \right] - \sqrt{\kappa_1} \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} q_1 \\ p_1 \end{array} \right] - \sqrt{\kappa_l} \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} q_l \\ p_l \end{array} \right] ; \\
y &= k_2 \sqrt{\kappa_0} \left[ \begin{array}{cc} \cos\phi & \sin\phi \end{array} \right] \left[ \begin{array}{c} \tilde{q} \\ \tilde{p} \end{array} \right] + k_2 \left[ \begin{array}{cc} 1 & 0 \end{array} \right] \left[ \begin{array}{c} q_0 \\ p_0 \end{array} \right] + w_2 \\
&= z + k_2 \left[ \begin{array}{cc} 1 & 0 \end{array} \right] \left[ \begin{array}{c} q_0 \\ p_0 \end{array} \right] + w_2
\label{cavity-quad-linear}
\end{aligned}
with noise quadratures for ( all standard gaussian white noises ). here, is the homodyne detector output, in which we have included an electronic noise term . also, represents the transimpedance gain of the homodyne detector, including the photodetector quantum efficiency. the state - space model of the cavity given in ( [ cavity - quad - linear ] ) is incomplete, as it does not include an explicit model, including the actuation mechanism, for the dynamics of the detuning , . the dynamics of the detuning and actuation mechanism are sufficiently complex that direct measurement is a more experimentally reasonable approach than _ a priori _ modeling of the system. following these measurements, an approach called subspace system identification is used to obtain the complete state - space model. it is at this point that the controller design process diverges from traditional, pre-1960s control techniques. specifically, in the traditional approach, a controller is designed using root - locus or frequency response methods, based on measurements of the plant transfer function. in our modern control approach, the subspace identification method determines a state - space model from the input - output frequency response data and generates the system matrices , and in ( [ eqn : cavity - sys ] ) ; see . the system matrices are used in the lqg design process, as we shall see in section [ sec : lqg ]. the transfer function of the cavity and measurement system ( or alternatively the `` plant '' ) is identified under closed - loop conditions. we use an analog proportional - integral ( pi ) controller to stabilize the system for the duration of the measurement. the frequency response data thus obtained for the plant is plotted in fig. [ fig : plant_sysid ].
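before turning to the measured frequency response, a brief numerical aside : the sketch below builds only the deterministic part of the quadrature model above ( detuning in, homodyne quadrature out ) and evaluates its frequency response directly from the matrices. all parameter values ( linewidth, coupling split, gain ) are placeholders, not the identified or measured ones.

```python
import numpy as np

# placeholder parameters (not the identified or measured values)
kappa = 2 * np.pi * 1.0e6      # total cavity linewidth, rad/s
kappa0 = 0.5 * kappa           # input-coupler contribution, hypothetical split
alpha = 1.0                    # steady-state coherent amplitude, arbitrary units
phi = np.pi / 2                # homodyne phase chosen to read the phase quadrature
k2 = 1.0                       # detector transimpedance gain, arbitrary units

A = np.array([[-kappa / 2, 0.0],
              [0.0, -kappa / 2]])
B = np.array([[0.0], [2.0 * alpha]])                          # detuning drives the p quadrature
C = k2 * np.sqrt(kappa0) * np.array([[np.cos(phi), np.sin(phi)]])

def freq_response(A, B, C, omegas):
    """h(i w) = C (i w I - A)^-1 B for the deterministic part of the model."""
    eye = np.eye(A.shape[0])
    return np.array([(C @ np.linalg.solve(1j * w * eye - A, B))[0, 0] for w in omegas])

omegas = 2 * np.pi * np.logspace(2, 7, 200)                   # 100 hz .. 10 mhz
H = freq_response(A, B, C, omegas)
# |H| behaves as a first-order low pass with corner kappa/2, as stated in the text
```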
from the data, at least three resonances can be clearly identified , occurring at frequencies of about 520 , 2100 and 5000 hz respectively .an -order anti - aliasing filter with a corner frequency of 2.5 khz ( chosen because it is far greater than the unity - gain bandwidth of the controller and far less than the 50 khz sampling frequency ) was placed immediately prior to the digital lqg controller .this is treated mathematically as augmenting the plant such that the augmented plant has a frequency response that is the product of the plant data gathered previously and the anti - aliasing filter which is identified separately .the frequency response data obtained is then fitted to a order model using a subspace identification method ; see .the algorithm accommodates arbitrary frequency spacing and is known to provide good results with flexible structures .this makes it suitable for our application which includes a piezo - electric actuator coupled to the cavity mirror .[ fig : hjp - laser1-sysid ] compares the gain ( in db ) and the phase ( in ) of the measured frequency data for the augmented plant with that of its identified system model .the lqg optimal control approach to controller design begins with a linear state - space model of the form ( [ eqn : cavity - sys ] ) . note that this model involves the use of gaussian white noise disturbances although a more rigorous formulation of the lqg optimal control problem involves the use of a wiener process to describe the noise rather than the white noise model ( [ eqn : cavity - sys ] ) ;e.g. , see .however , for the purposes of this paper , a model of the form ( [ eqn : cavity - sys ] ) is most convenient . in the model ( [ eqn : cavity - sys ] ) , the term corresponds to the process noise and the term corresponds to the measurement noise .the lqg optimal control problem involves constructing a dynamic measurement feedback controller to minimize a quadratic cost functional of the form dt \right]\ ] ] where and are symmetric weighting matrices .the term in the cost functional ( [ eq : lqr ] ) corresponds to a requirement to minimize the system variables of interest and the term corresponds to a requirement to minimize the size of the control inputs .the matrices and are chosen so that the cost functional reflects the desired performance objectives of the control system .the great advantage of the lqg optimal control approach to controller design is that it provides a tractable systematic way to construct output feedback controllers ( even in the case of multi - input multi - output control systems ) .also , numerical solutions exist in terms of algebraic riccati equations which can be solved using standard software packages such as matlab ; e.g. 
, see .a feature of the solution to the lqg optimal control problem is that it involves a kalman filter which provides an optimal estimate of the vector of system variables based on the measured output .this is combined with a `` state - feedback '' optimal control law which is obtained by minimizing the cost functional ( [ eq : lqr ] ) as if the vector of system variables was available to the controller .note that the lqg controller design methodology can not directly handle some important engineering issues in control system design such as robustness margins and controller bandwidth .these issues can however be taken into account in the controller design process by adding extra noise terms to the plant model ( over and above the noise that is present in the physical system ) and by suitably choosing the quadratic cost functional ( [ eq : lqr ] ) .the dynamics of the cavity and measurement system can be subdivided as shown in fig .[ fig : sys ] and comprises an electro - mechanical subsystem and an electro - optical subsystem ( the optical cavity and homodyne detector ) .the control objective is to minimize the cavity detuning , which is not available for measurement . instead, the measurement signal is the output of the homodyne detector and to include in the performance criterion , we need to relate to , the mean value of .it can be seen from ( [ cavity - quad - linear ] ) that the transfer function of the optical cavity from to is a first - order low - pass filter with a corner frequency of .physically , this arises from the well - known ( see for example and the references therein ) transfer function of the optical cavity from to a phase shift , which is then measured by the homodyne detector . in the experimental systemdescribed herein hz , which is well beyond the frequency range of interest for the integral lqg controller .hence we can consider to be proportional to under these conditions , and therefore minimizing variations in can be regarded as being equivalent to minimizing variations in .the lqg performance criterion to be used for our problem is chosen to reflect the desired control system performance .that is , ( i ) to keep the cavity detuning small ( ideally zero ) , and ( ii ) to limit the control energy .however , these requirements are not sufficient to generate a suitable controller as the system is subject to a large initial dc offset and slowly varying disturbances .our application requires the elimination of such effects .this can be achieved by using integral action and is the reason for our use of the integral lqg controller design method .we include integral action by adding an additional term in the cost function which involves the integral of the quantity .moreover , we include the `` integral state '' as another variable of the system .the new variable is also fed to the kalman filter , which when combined with an optimal state - feedback control law leads to an integral lqg optimal controller .this controller will then meet the desired performance requirements as described above ; e.g. 
, see .[ fig : sysloop ] shows the integral lqg controller design configuration .the overall system can be described in state - space form as follows : ; \label{sysid}\end{aligned}\ ] ] where \;\;\mathrm{and}\;\ ; \tilde{y } = \left [ \begin{array}{c } y_1 \\ y_2 \end{array } \right].\ ] ] here the matrices are constructed from the matrices as follows : , \quad \tilde{b } = \left [ \begin{array}{c } b \\ 0 \end{array } \right],\ ; \mathrm{and } \;\ ; \tilde{c } = \left [ \begin{array}{cc } c & 0\\0 & i\end{array } \right].\ ] ] section [ sec : sysid - cavity ] outlines the technique used to determine . in equation ( [ sysid ] ) , the quantity represents mechanical noise entering the system which is assumed to be gaussian white noise with variance .the quantity represents the sensor noise present in the system output , which is assumed to be gaussian white noise with variance .the quantity is included to represent the sensor noise added to the quantity .this is assumed to be gaussian white noise with variance and is included to fit into the standard framework for the lqg controller design .the parameters and are treated as design parameters in the lqg controller design in sec .[ sec : lqg - values ] .the integral lqg performance criterion can be written as : dt \right ] \label{lqg - cost}\ ] ] where we choose the matrices and such that where and are also treated as design parameters .the first term of the integrand in ( [ lqg - cost ] ) ensures that the controlled variable goes to zero , while the second term forces the integral of the controlled variable to go to zero .also , the third term serves to limit the control input magnitude .the expectation in ( [ lqg - cost ] ) is with respect to the classical gaussian noise processes described previously , and the assumed gaussian initial conditions . given our system as described by ( [ sysid ] ) , the optimal lqg controller is given by ( e.g. , see ) where is the solution of the following matrix riccati equation and \tilde{c}.\end{aligned}\ ] ] here is an optimal estimate of the vector of plant variables obtained via a steady state kalman filter which can be described by the state equations .\label{lqg - filter}\ ] ] for the case of uncorrelated process and measurement noises , the steady state kalman filter is obtained by choosing the gain matrix in ( [ lqg - filter ] ) as where is the solution of the matrix riccati equation here \quad \mathrm{and } \quad v_2 = \left [ \begin{array}{cc } \epsilon_2 ^ 2 & 0 \\ 0 & \epsilon_3 ^ 2 \end{array } \right]\end{aligned}\ ] ] define the covariance of the process and measurement noises respectively . 
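for readers who want to reproduce the generic machinery behind the controller and filter gains, the following scipy sketch solves the two algebraic riccati equations described above. the plant used here is a toy double integrator, not the identified cavity / actuator model, and the integral augmentation of the state is omitted for brevity.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C, Q, R, V1, V2):
    """solve the two algebraic riccati equations behind an lqg controller:
    a state-feedback gain from (A, B, Q, R) and a kalman gain from (A, C, V1, V2)."""
    X = solve_continuous_are(A, B, Q, R)           # control riccati equation
    K = np.linalg.solve(R, B.T @ X)                # state feedback u = -K x_hat
    Y = solve_continuous_are(A.T, C.T, V1, V2)     # filter (dual) riccati equation
    L = Y @ C.T @ np.linalg.inv(V2)                # kalman filter gain
    return K, L

# toy double-integrator plant, standing in for the identified model
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K, L = lqg_gains(A, B, C,
                 Q=np.diag([1.0, 0.1]), R=np.array([[1e-2]]),
                 V1=np.diag([1e-4, 1e-4]), V2=np.array([[1e-3]]))
# the resulting controller has the observer / state-feedback structure described
# in the text:  x_hat' = (A - B K - L C) x_hat + L y,   u = -K x_hat
```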
in designing the lqg controller, the parameters ( the mechanical noise variance ), ( the sensor noise variance of ), ( the variance of the sensor noise added to ), ( the control energy weighting in the lqg cost function ) and ( the integral output weighting in the lqg cost function ) were used as design parameters and were adjusted for good controller performance. this includes a requirement that the control system have suitable gain and phase margins and a reasonable controller bandwidth. the specific values used for the design are shown in table [ tab : designvalues ] ( design parameter values ). these parameter values led to a order lqg controller, which is reduced to a order controller using a frequency - weighted balanced controller reduction approach. we reduce the order of the controller to decrease the computational burden ( and hence time - delay ) of the controller when it is discretized and implemented on a digital computer. the new lower order model for the controller is determined via a controller reduction technique which minimizes the weighted frequency response error between the original controller transfer function and the reduced controller transfer function ; see . this is illustrated in fig. [ fig : cont_fr ], which shows bode plots of the full - order controller and the reduced - order controller. formally, the reduced - order controller is constructed so that the quantity is minimized. here , , is the plant transfer function matrix and is the original full controller transfer function matrix. also, is the reduced dimension controller transfer function matrix. the notation refers to the norm of a transfer function matrix, which is defined to be the maximum of , where ] refers to the maximum singular value of the matrix. note that this approach to controller reduction does not guarantee the stability of the closed loop system with the reduced controller. we check for closed loop stability separately after the controller reduction process. the reduced controller is then discretized at a sampling rate of 50 khz, and the corresponding bode plot of the discrete - time loop gain is shown in fig. [ fig : loop_gain ]. this discretized controller provides good gain and phase margins of 16.2 db and respectively. ( fig. [ fig : loop_gain ] caption : the controller provides a gain margin of 16 db and phase margin. ) the discrete controller is implemented on a dspace ds1103 power pc dsp board. this board is fully programmable from a simulink block diagram and possesses 16-bit resolution. the controller successfully stabilizes the frequency of the optical cavity, locking its resonance to that of the laser frequency , ; see . this can be seen from the experimentally measured step response shown in fig. [ fig : stepresp ]. this step response was measured by applying a step disturbance of magnitude 0.1 v to the closed - loop system as shown in figure [ fig : e_to_r ]. here is the step input signal and is the resulting step response signal which was measured. in this paper, we have shown that a systematic modern control technique such as lqg integral control can be applied to a problem in experimental quantum optics which has previously been addressed using traditional approaches to controller design.
from frequency response data gathered ,we have successfully modeled the optical cavity system , and used an extended version of the lqg cost functional to formulate the specific requirements of the control problem .a controller was obtained and implemented which locks the resonant frequency of the cavity to that of the laser frequency . to improve on the current system, one might consider using additional actuators such as a phase modulator situated within the cavity or an additional piezo actuator to control the driving laser .additional sensors which could be considered include using a beam splitter and another homodyne detector to measure the other optical quadrature and an accelerometer or a capacitive sensor to provide additional measurements of the mechanical subsystem .in addition , it may be useful to control the effects of air turbulence within the cavity and an additional interferometric sensor could be included to measure the optical path length adjacent to the cavity .such a measurement would be correlated to the air turbulence effects within the cavity .all of these additional actuators and sensors could be expected to improve the control system performance provided they were appropriately exploited using a systematic multivariable control system design methodology such as lqg control .one important advantage of the lqg technique is that it can be extended in a straightforward way to multivariable control systems with multiple sensors and actuators .moreover , the subspace approach to identification used to determine the plant is particularly suited to multivariable systems .it is our intention to further the current work by controlling the laser pump power using a similar scheme as the one used in this paper .this work is expected to pave the way for extremely stable lasers with fluctuations approaching the quantum noise limit and which could be potentially used in a wide range of applications in high precision metrology , see .
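as a small companion to the implementation remarks above, here is a hedged sketch of the final discretization step at the 50 khz sample rate using a bilinear transform. the continuous - time controller matrices are placeholders standing in for the reduced - order integral lqg controller, and the update routine simply shows the per - sample computation a dsp would perform.

```python
import numpy as np
from scipy.signal import cont2discrete

# placeholder continuous-time controller (Ak, Bk, Ck, Dk); the real matrices
# would come from the reduced-order integral lqg design described above.
Ak = np.array([[-50.0, 0.0], [1.0, 0.0]])      # second state acts as an integrator
Bk = np.array([[1.0], [0.0]])
Ck = np.array([[-200.0, -500.0]])
Dk = np.zeros((1, 1))

fs = 50e3                                       # dsp sampling rate quoted in the text
Ad, Bd, Cd, Dd, dt = cont2discrete((Ak, Bk, Ck, Dk), 1.0 / fs, method='bilinear')

def controller_step(x, y):
    """one 20-microsecond update: measurement y in, control u out."""
    u = Cd @ x + Dd @ y
    return Ad @ x + Bd @ y, u

x = np.zeros((2, 1))
x, u = controller_step(x, np.array([[0.1]]))    # illustrative single update
```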
this paper considers the application of integral linear quadratic gaussian ( lqg ) optimal control theory to a problem of cavity locking in quantum optics . the cavity locking problem involves controlling the error between the laser frequency and the resonant frequency of the cavity . a model for the cavity system , which comprises a piezo - electric actuator and an optical cavity is experimentally determined using a subspace identification method . an lqg controller which includes integral action is synthesized to stabilize the frequency of the cavity to the laser frequency and to reject low frequency noise . the controller is successfully implemented in the laboratory using a dspace dsp board .
it is common in the physics literature to find more or less involved statements on the existence of the basic objects and of their representations .relevant examples today may be strings or branes , as atoms and quarks were in the past .as a practical or more conscious attitude , the positions range from two extremes . on one side ,existence is granted or denied to the items if they can be detected ( like atoms or quarks ) or disproved , like the ether or the planet vulcan , without further thoughts about the reality of their mathematical representative . in the other extreme existenceis assigned even to the representations themselves , with varying degrees of commitment to the philosophical concepts . in a way this parallels the extremes of the oldest philosophical debate , between rationalists and empiricists , the most radical forms of which are naive realism and constructivism .practitioners of physics are usually closer to the former view .nevertheless one can not always ignore the philosophical background , and certainly not concerning the foundations of quantum mechanics , where the interpretation , which is a basic ingredient , has always been a matter of debate , often called the problem of realism .this debate can have also practical implications e.g. for quantum information or quantum cosmology . in any case , thinking about these abstract matters certainly opens our vision and improves our questions and understanding .our purpose is to provide a clear and simple conceptual ( philosophical ) picture of these important questions , which could also be useful in practice , clean of technicalities .our view was mainly built thinking about the philosophical ideas of boltzmann .they are contained in his only specific technical article , which is both long and hard to read , in his notes for the philosophy lectures held by boltzmann in his last years , which are even harder , and also informally scattered in his popular writings and other popularizations , which are very rewarding . in the next section [ theproblem ]we address the main problem of the existence of the physical world and its description , beginning with a historical perspective to provide a selection of boltzmann s ideas , and we reassure the reader immediately with the basic conclusion , that one can safely maintain a realistic position on the objective existence of the external world , in permanent evolution , adjusted and regulated by experimental confrontation . in boltzmann`` we must adopt the objective point of view '' , as the phrase goes._ .this will lead us to our main proposal , to call existing or non existing only what is detectable or falsable respectively . as for the different concepts , at various levels of complexity and abstraction , for which neither option can be achieved at present, one should distinguish between the physics approach with mathematical analysis and measurements , and the philosophical reasoning , involving qualitative attributes which can be related to existence .interesting examples of the former are entanglement or virtualities , and of the later unavoidability , which is discussed in section [ framework ] .there we apply these ideas to specific concepts of quantum field theory , string theory and cosmology , like monopoles and branes and to the general problem of the interpretation of quantum mechanics .section [ conclusions ] summarizes the conclusions and outlook . 
in appendix 1we illustrate the mental representations with a model analogy from neural networks and programming languages.we include another appendix 2 to explain in a concise way the philosophical background , relevant for our arguments and beyond .to put the problem in a useful historical perspective , we remind that boltzmann , who had based the ( mechanical ) understanding of thermodynamics on atoms , had to discuss and defend them frequently against the radical philosophical positivism as well as against extreme phenomenological theorists .the present work is in part an attempt to make more accessible his ideas and to start applying them to present physics .the first point of boltzmann philosophy was the necessity to define clearly the concepts discussed and in fact this is how his purely philosophical work begins and ends , illustrated with examples and personal anecdotes .this claim for clarity is a constant in his writings , urging to prevent the perverse antinomies of philosophy.this will be present in the discussion on the _ existence _ now and we shall keep it in mind in the applications below , especially to the interpretation of quantum mechanics . in his articleboltzmann goes immediately after this introduction to describe the process of human perception of the external world , starting from the observation that _ the laws according to which our own perceptions run their course are familiar to us and lie ready in the memory . by attaching these same pictures also to the perceptual complexes that define the bodies of others , we obtain the simplest description of these complexes_.this is elaborated further arguing that in the extreme idealistic position _the sensations and volitions of all others could not be on the same level as the sensations of the observer , but would have to be taken as merely expressing the equations between his own sensations_. the idea is stated more clearly in the next page : _ therefore we designate these alien perceptions with analogous mental signs and words to those for our own , because this gives a good picture of the course of many complexes of sensations and this simplifies the world picture_. 
for clarity and economy , boltzmann claims after another couple of pages that _ we must adopt the `` objective point of view '' , as the phrase goes .it turns out that the concepts we linked with `` existing '' and `` non existing '' largely remain applicable unchanged .those people or inanimate things that i merely imagine without being forced to do so by the regularities in complexes of perceptions do not exist for others , they are `` objectively '' non - existing ._ the main conclusion of this line of thought , based largely on the _ common judgement of all _ , implies in ( our ) in simple terms , that one can maintain a realistic position , assuming the objective existence of the physical world , in a reasonable degree of agreement with our representations thereof , which are sufficiently universal and which may evolve as required by the experimental confrontation and of course by our own evolution , an essential ingredient of boltzmann philosophy .for instance he wrote : _ the brain we view as the apparatus or organ for producing world views , an organ which because of the great utility of these views for the preservation of the species has , conformably , with darwin s theory , developed in a man to a degree of particular perfection just as the neck in the giraffe and the bill in the stork have developed to an unusual length _ .let us explain briefly the argument , which being _ philosophical _ , has to use logic starting , of course , from our mental representations , the concepts .we recognize them and decide whether they are relevant or not from confrontation with the representations of others .but those are also external _ objects _ , so that any statement and mental construction is actually based on the external world , as represented with enough degree of fidelity and universality by our concepts .the adequacy or correspondence of the reasonably universal representations ( concepts ) is based on confrontation and guaranteed by evolution , which also renders the process dynamical ( over large time scales ). therefore it would be _ inconsistent _ to _ deny _ the reality of the external world .notice that one has not _ proved _ its existence , but the absurdity of the attempts to disprove it , thereby establishing the possibility and convenience of an objective world picture .boltzmann uses then an ingenious argument to maintain this universal realism for any kind of brain process and beyond , applying these reasoning successively to simpler and simpler organisms , reaching the virus and molecular levels , until confrontation or detection is generalized to interactions .our starting point , the mental processes , beginning with the primary physical inputs , which are later elaborated in different degrees of robustness and complexity , yielding the representations with varying degrees of fidelity and universality , should become ultimately also a question of bio - physical interactions . but for our philosophical arguments suffice to say in that respect that the agreement between nature and reason is because reason is natural and not the opposite . in appendix1we provide a cybernetic analogy of the cognitive process , which can be useful . in any case , from the arguments so far , i.e. 
from clear conceptions , rigour and logic , one has established the possibility of an `` objective '' world view , and the convenience of this representation , provided it is sufficiently contrasted and updated .this point of view can be seen a golden middle between the two extreme positions , as illustrated in appendix2 .this leads further to propose , that we call _ existent _ only those representations which , clearly defined , are physically realizable and detectable ( in principle with some energy transfer ) .a consequence is that there is no place for gradations in this clear , but restricted notion of existence : representations which fulfil it correspond to existing objects , like atoms or neutron stars , while those which do not , should not be called existing .this notion of existence has many advantages , like a highest degree of universality . _the assumption of different degrees of existence would be decidedly inappropriate _ as boltzmann says , and _ the denotation must always be so chosen that we can operate with the same concepts in the same way under all circumstances , just as a mathematician defines negative or fractional exponents in such a way that he can operate with them as with integral ones_. this avoids confusions , like most of the dreadful antinomies of philosophy , but it also rises the following problem .there are concepts which can be clearly disproved , like ether or the planet vulcan .but for many ideas one is not able , at least for different time periods , to detect or disprove them as defined .what can then be said about such useful representations , with respect to the external world ?we have to address the problem , because as we said , any statement of any kind involves the representations .this marks a line between a physics approach , where one has to look for a verification or falsification , and the philosophical , where one can envisage attributes , which can be related to existence more or less directly on the one hand , but which have the possibility of qualitative , more or less coarse gradations , on the other .they can be vague , like clarity , simplicity or beauty , or very sharp , like _( in)dispensability _ , which is discussed in the next section at length because of his potential relevance .a first consequence of this is the great convenience of distinguishing in physics between qualitative concepts , but which are ultimately philosophical interpretations , from genuine physical proposals , always falsable in a way or another .textbook examples are many of the different proposals to render quantum mechanics _ complete _ or more understandable , which will be discussed in [ qm ] .there is of course place in physics for useful qualitative discussions , even in our restrictive philosophical view , as we discuss next .one of the simplest and useful examples of such general predicates related to existence can be _ indispensability _ ( or the closely related concept of _ unavoidability _ ) , introduced by boltzmann less formally for the concept of the atom in his popular writings .they are not present in his technical technical publication , and so their analysis below , is ( needless to say ! 
) , essentially ours .boltzmann argued in his talks and popular writings , that his ideal atoms were , not only useful , but indispensable .they became of course properly existing after einstein computed ( following boltzmann s prescription for fluctuations ) the observed brownian motion of pollen and made predictions confirmed later by perrin .at this level the concept of atoms was defined simply as elementary grains of matter .of course concepts have to be defined precisely , and that of atom was finely sharpened later .more specifically , they are represented by complex functions , solutions of linear differential equation ( schrdinger s ) , which in turn can be combined , enlarging at will their potential manifestations , as discussed in [ qm ] . to illustrate this furtherlet us use the concept of gen , similar in some ways to the atom . with a very general definition , as units of transmittable information , one can of course call them existing , after their molecular structure was found in 1953 , but in hindsight one could have shown them to be unavoidable , at least since w. sutton named them in 1922 as the mendelian units of transmission .life is even more difficult to define than gen , but we think it should not be difficult to show it to be unavoidable under rather general conditions . as for conscious life , it seems to us almost hopeless at present to define and accordingly much more difficult to argue its unavoidability , as drake equation " shows . notice the subtle difference between indispensability ( unentbehrlichkeit ) , which refers to the object concept direction , the easy way " according to boltzmann , and avoidable ( vermeidlich in german or evitabile in latin languages ) more related to the concept object direction , which is harder the more complex the concept , as is clear and it is illustrated in appendix2 .this admittedly exaggerated subtlety , can nevertheless illustrate the impossibility of a perfect one to one correspondence between concepts and the external world , as we shall see is required by some attempts to make quantum mechanics `` complete '' . still in biologylet us remind of another example given by boltzmann for non existent concepts in the philosophical article : the unicorn .it turns out that today one can speak of the realizability of such a concept , and in fact it has been done already , e.g. with drosophila flies .it seems on the other hand unlikely to be neither unavoidable from evolution , which is difficult anyway , nor at least stable under it .this can illustrate the role of evolution in the notion of existence as stated above , and of the relevance of dispensability .another interesting example , back to physics , is the concept of the electromagnetic field , ( missed by boltzmann ) .of course the concept of electric _ and _ magnetic fields , became almost unavoidable after hertz discovery , and certainly with the disproof of ether .but after the success of quantum gauge theory the concept of electromagnetic field , _ the vector potential _ , is clearly unavoidable .this shows the new level one reaches when a concept is defined in terms of mathematically deeper theories , as the fundamental interactions and constituents .our next discussion involves in fact concepts , monopoles and branes , which are expressed mathematically , with higher degrees of abstraction and complexity .of course the first question in physics is whether the objects can be realized , or detected , i.e. 
registered in processes involving some energy transfer and which can be reproduced .the properly defined concept will correspond in that case , and only in that case , to an existing object , or , shortly , _ exists _ ( in the sense of the definition ) . until this can be achievedone can discuss questions as how fundamental or _ effective _ is the corresponding mathematical theory , but one can also make progress from a more philosophical point of view as the analysis of the attribute of indispensability , applied to the following relevant examples shows .the concept of monopoles corresponds to the sources of the magnetic field , i.e. magnetic charges .they were shown by dirac to be a way of implementing discreteness of the electric charge , which requires regularization of singularities and conservation of symmetries .its existence has not been proven so far , although there is a more recent claim in condensed matter physics experiments , with a special composite called _ spin ice _ .it is a matter of debate at present as to whether these objects fulfil the requirements of the general class required for discrete charges .on the other hand there are arguments in favour of the indispensability of monopoles : fundamental theories with compact ( gauge ) phase symmetries , the so called grand unified , imply trivial discreteness of charge , but at the same time , they also predict monopoles .grand unification of interactions could be established soon .in fact neutrino masses provide a good hint .this example also shows the importance to consider , as mentioned above , how fundamental is an object , distinguishing monopoles as extended solutions of a fundamental theory from those aggregates of particles combined in atoms , molecules , and further structures .as the name indicates , the concept of brane refers to extended objects in spatial and temporal directions , introduced or appearing in some string and gravity theories .they are useful for combining gauge fields and gravity at the quantum level , at least in some approximations , and in cosmology .they serve so far an auxiliary role . to decide about their existence one has to consider their energy ( or density ) and propagation .in fact they provide a way to implement energy conservation for the strings , which mediate their interactions , to make particles or even the universe .direct observable consequences , to decide if they exist ( in a specified class ) are very difficult ( for instance , there have been proposals for special gravitational waves ) .alternatively , one could consider if they are unavoidable from their ability to change the rate of expansion , at present and in the past ( inflation ) .but these are not so well understood .so , in contrast to the case of monopoles , we could have to wait very long to decide about the existence of branes and even about their indispensability .therefore , it would be convenient on occasion to keep this in mind speaking , or writing about these most interesting concepts .another conclusion of this section is that although indispensability or unavoidability are at another level ( philosophical ) than the physical existence , which requires experimental confirmation or falsification , they can be useful even in physics .besides , they are more flexible and admit with full right loose gradations as _ almost _ or the celebrated _ for practical purposes_. 
from a formal point of view , they could be seen as a much weaker form than the mathematical attribute _necessary _ , as is appropriate in physics where experiments ultimately decide . in quantum mechanicsone incorporates the representation from the beginning which may be one of the reasons for its astonishing performances .one works in fact at the level of representations without actual reference to the external world until measurement .these representations are complex `` wave '' functions , which can be superposed with the interference properties of waves .our proposal requires not to call these representations _ existing _ until they have been realized in a detectable way , with a probability given of course by the norm of the combined function .this way one keeps a universal meaning of the concept and avoids the potential confusion of many interpretations which have been proposed to cope with the apparent conceptual difficulties of quantum mechanics . these are mainly the essential probabilistic nature of the description ( `` stochastic unit samples '' ) , the mechanism and the nature of the transition from an uncertain or fluctuating state to the robust and certain measurement result ( `` collapse and decoherence '' ) .interpretations which in a more or less subtle way attribute existence to the representations , the wave functions and their combinations , or to additional auxiliary functions , are explained in many excellent textbooks .well known examples are the many worlds of everett , hartle s consistent histories and the pilot waves of bohm .it is important to remember there is no way to detect or falsify those interpretations , so we are in fact at the philosophical level , where the relevant question should be whether these auxiliary objects , worlds or paths , are unavoidable .the answer is clearly they are not , rather the opposite , a view shared by many of the active researchers using those fundamental aspects of quantum mechanics , like a. zeilinger .in fact , it is natural to accept limitations to common sense imposed by physics , as it happened before with simultaneity or with indistinguishability .this illustrates the usefulness of our simple proposal , but we do not claim that the situation is completely satisfactory .the question whether quantum mechanics is a complete theory is alive since einstein , podolsky and rosen posed it .one has to define completeness , and of course they did : _ every element of physical reality should have a counterpart in the physical theory_. this strong requirement is very difficult to meet , as we have seen from the discussion on the cognitive process .in fact , trying to complete the theory to meet the above difficulties one has to cope with very restrictive theorems about the implications of adding new `` hidden '' variables , under very reasonable general conditions , which have been always experimentally confirmed in favour of pure quantum mechanics .there are even stronger theorems limiting the possibility of such extension from internal consistency .there are partial but promising solutions to the last of the basic mentioned problems , decoherence , but it is clearly beyond the scope and space limits of this article to enter into details of these well defined physics . 
let us remind instead that there are in fact other reasons for insatisfaction beyond these conceptual problems , especially in the relativistic extensions , where infinities appear in the perturbative solutions .these divergences are under control , and some even well understood , which is not the case for the extension to gravitational interactions .also there are many basic problems related to strong interactions defying solution for decades , like the confinement of quarks.this is seen by some as a need to reformulate the foundations , with new physics , which would be interesting to study in our framework . from the simple concept of existence point of view , there are well defined physics notions to measure virtualities and quantum entanglements .let us comment briefly on a special one .it is based on the worldline or proper time formulation of quantum mechanics , due to fock , schwinger , and feynman . there , one parameterizes quantum amplitudes with an auxiliary parameter called proper time , which controls the fluctuations of the quantum state in spacetime , and are represented as a path integral in that auxiliary space .after some manipulations , the integrals can be computed by combination of numerical and analytical methods , and the fluctuations visualized .the virtuality of the process is related to different random walks , more or less directed , which in turn can be put in correspondence with a hausdorff dimension , 2 in the extreme quantum or 1 in the the classical regime .this is to illustrate that there are concepts , like virtualities , which can be further analysed with our philosophy proposals and which could help understanding and even visualizing the physics problem .building on boltzmann philosophical ideas , briefly recalled , we have given arguments based on general concepts and logic , strongly supporting the possibility and the convenience of maintaining the objective existence of physics processes .they imply in turn to _ restrict _ that _ existence _ to concepts which can have a clear manifestation . for concepts which oneis not able for the moment to submit to such a requirement , the philosophical analysis can make useful contributions , with attributes related to existence .we elaborated the simple examples of unavoidability and its reverse .they were first illustrated with historical interesting examples of atoms , fields , and even gens and life and then applied in some detail to relevant concepts at present like monopoles and branes . in quantum mechanics ,they provide a clarification and criticism of popular interpretations .more involved analysis have been suggested for future research , in a fruitful interplay of physics and philosophy .we hope that our presentation is not perceived as an over - simplification , a possibility we assumed in the spirit of the practical and direct form boltzmann declared indispensable for philosophical argumentations .jsg has benefitted from many conversations with d. flamm , philosopher , physicist and grandson of boltzmann and with j. 
ordoez , who provided many of the original references .we were encouraged by the late julio abad in previous stages of this work , which will appear in a special volume in his memory .this was supported in part by ministerio de ciencia e innovacion and conselleria de educacion under grants fpa2008 - 01177 and 2006/51 .our arguments started from the basics of the cognitive process , where a physical input triggers a primary signal , which is later processed into a memory or commemorative recordings of various degrees of complexity , the concepts , involving neural networks in specific areas in the brain .they are decanted trough permanent confrontation with the external world and those of others , as explained , and become ideas at various degrees of abstraction and distance from the primary trigger .this basic conception ( and logic ) is what has been used for the argument supporting an objective world picture , in which those representations , the concepts , can be appropriately said to correspond to existing objects , provided they are detected , or not , if experimentally disproved .the human brain produces general frames abstracting relevant properties of objects , the languages , consisting of words , syntax , and meaning , of different levels of abstraction and , correspondingly , of retroprojectivity to the external objects .the main areas and processes involved are becoming known , but of course there is a long way to go to master the codes , and it is even an open question whether the whole process can be controlled in a relevant way . besides , we are aware of the key role of chemistry in the brains functions ! but for our philosophical reasoning here we only need the general concept , which can illustrated with a neural network analogy . in neural networks ,one has a first layer of devices ( neurons ) receiving an input .this in turn triggers an output from different numbers of neurons to the next layers , with different weights .combining outputs , simple functions ( say tanh ) connecting two values ( on - off ) one can produce complicated functions , which will implement effective operations , like recognising voices or pictures . in terms of computersthese simple functions could correspond to _machine languages_. but one has also _ object oriented languages _ , like c++ , where one works with abstract functions ( for instance , templates ) , and can operate with them . in our analogy , these correspond to more abstract and general languages , the most universal of which is mathematics .let us look at the concept of _ pair_. in the first case , in machine related language , one has pairs of concrete objects ( integers , for example ) , which allow in turn for operations like permutations or orderings . in the second more advanced case , templates provide abstract pairs which can in turn be paired or otherwise operated successively , including combinations with other objects . 
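To make the computing analogy above concrete, here is a small sketch in Python (our own illustration, not code from the original article): a "machine-level" layer of simple tanh units operating on concrete numbers, next to an abstract, template-like Pair type whose instances can themselves be paired and combined with other objects.

```python
# Illustrative sketch of the two levels of abstraction discussed above.
# All names are hypothetical; this is not code from the original article.
import math
from dataclasses import dataclass
from typing import Generic, TypeVar

# "Machine level": a layer of simple units, each applying tanh to a
# weighted sum of concrete numeric inputs (the neural-network analogy).
def tanh_layer(inputs, weights):
    """One layer of simple on/off-like units: tanh of weighted sums."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# "Abstract level": a generic Pair, analogous to a C++ template, which can be
# instantiated with integers, with other Pairs, or with anything else.
A = TypeVar("A")
B = TypeVar("B")

@dataclass(frozen=True)
class Pair(Generic[A, B]):
    first: A
    second: B

    def swap(self) -> "Pair[B, A]":
        """An operation defined once, at the abstract level, for all pairs."""
        return Pair(self.second, self.first)

if __name__ == "__main__":
    # Concrete pairs of integers (machine-level objects) ...
    p = Pair(1, 2)
    # ... and pairs of pairs (abstract objects operated on successively).
    q = Pair(p, Pair("spin up", "spin down"))
    print(p.swap(), q.swap())
    print(tanh_layer([1.0, -1.0], [[0.5, 0.5], [2.0, -2.0]]))
```

The point of the sketch is only the contrast between the two levels: the layer manipulates concrete values, while the generic Pair is defined once and reused over arbitrary contents, much like the abstract languages discussed above.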
in the neural networkthis could correspond to nonplanar and transverse connections , which , require a much larger size and plasticity , like in parallel vector networks , as indicated schematically in fig.1as our main conclusion can be seen as a compromise between empiricism and idealism , we explain these basic philosophical ideas and terminology , condensed and simplified , using physical examples .kant is a standard reference and for good reason , as mathematics and physics were his starting point , and it is the first manifestly critical approach to the theory of knowledge .his position was the first reasonably , partially realistic one . in the old debate of empiricism , denying reality of ideas and the external world and rationalism , assigning it to both of them , kant s proposal is a middle solution : he did not grant general existence to ideas but he defended some existence of the sensorial world .boltzmann s position can be seen as a big step forward in this direction , with a sound scientific basis , including evolution and opening it to future progress .back in 1781 , kant noticed that statements can be _ analytical _( e.g. the electrons with up or down spin directions in a factorized tensor product state ) or _ synthetical _( e.g. the electrons in a symmetrized coupled ( entangled ) state with a given total spin , or ) .more schematically:`` is in '' is analytical while `` is in ''is synthetical .all empirical ( called a posteriori ) statements are synthetical ( experience always teaches ) , but the opposite is not true , not all synthetic statements are empirical : there are some statements with new properties in addition to the premises ( synthetic ) which are true independent of empirical experience ( a priori ) .this simple scheme was thought to apply to mathematics and even some concepts of physics but it has been generalized ( and relativized ) by modern philosophy of science .a proper analysis is also the task of neuroscience and the theory of knowledge as discussed , but as a philosopher kant argued that there are preconditions in humans , such as time and space , which universally allow such processes .of course kant , a devoted newtonian who had himself worked out the notion of galaxies , took time and space as universal and absolute , as well as other necessary ingredients of thought called _categories_. they are fundamental ideas like _ causality _ , which had been around from the beginning of philosophy and ordered by aristotle .needless to say that those absolute notions were naive , and wrong in strict terms , but the framework was adequate for scientific discussions and the seed for the later developments .it is worth reminding ourselves that boltzmann who proposed a big step forward , facing the ontological problem , warned against absolute use of `` laws of thought '' like causality , _ which we may denote either a precondition of all experience or as itself an experience we have in conjunction with every other experience_ .he also warned frequently that in the realm of explanations , models and theories could be also useful , even if apparently wrong .999 m. tegmark , the mathematical universe , foundations of physics 38 ( 2008 ) 101 .l. boltzmann , sb wien ber . 106( 1897 ) part iia , 83 - 109 .english version in brian mcguiness ( ed . ) , theoretical physics and philosophical problems , d. reidel , dordrecht , boston 1974 , p. 57 .i. fasol - boltzmann , ( ed . ) principien der naturfilosofi , springer verlag , berlin ( 1990 ) .l. 
boltzmann , populre schriften , leipzig ( 1905 ) joh .encyclopaedia britannica .10th and 11th editions ( 1902 ) .it is remarkable that when the universal character of energy conservation by helmholtz was still a matter of intense debate , p.g .tait clearly stressed that `` energy conservation merely asserts its objective reality '' in his `` lectures on some recent advances in physical science '' , p.19 , macmillan , 1876 , london .d. lindley , boltzmann s atoms , the free press , new york 2001 .albeit a popular book , it gives an excellent short account of this debate . c. castelnovo , r. moessner , and s.l .sondhi , nature 451 ( 2008 ) 42 .j. polchinski , string theory , cambridge university press , 1998 .s. sarkar , gen .40 ( 2008 ) 269 .adler , quantum theory as an emergent phenomenon , cambridge u.p.(2004 ) .a. zeilinger , einsteins spuk .goldmann , munich ( 2007 ) .a. einstein , d. podolsky , and n. rosen , phys .47 ( 1935 ) 777 . j. s. bell , phys.rep . 137 ( 1986 )j. conway and s. kochen , foundations of physics 36 ( 2006 ) 1441 .wheeler and w. zurek , quantum theory and measurement , princeton university press , 1983 .g. t hooft , arxiv : hep - th/0707.4568 h. gies , j. sanchez - guillen , and r.a .vazquez , jhep 0508 ( 2005 ) 067 .these words are almost literal from s. ramn y cajal in `` textura del sistema nervioso del hombre y los vertebrados '' ( re - edited in 1992 by the universidad de alicante , spain ) .we have found that boltzmann attended his lectures about it in worcester , mass . in 1899 .j. sanchez - guillen , l.e .boltzmann , puz , zaragoza ( 2009 ) .r. omnes , understanding quantum mechanics , princeton university press , 1999 .i. kant , kritik der reinen vernunft ( 1781 ) .
inspired by philosophical ideas of boltzmann , which are briefly recalled , we provide strong support for the possibility and convenience of a realistic world picture , properly nuanced . the arguments have consequences for the interpretation of quantum mechanics , and for relevant concepts of quantum field and string theory , like monopoles and branes . our view is illustrated with a cybernetic analogy and complemented with a summary of the basic philosophical concepts .
in this paper , we seek to apply stein s method a technique for obtaining convergence ( often clt - type ) results for random variables on a vertex - count in the neighborhood attack voter - type model .voter models are interacting - particle - system models on finite graphs . the original voter model ( introduced independently in the 1970s by clifford and sudbury in 1973 , and by holley and liggett in 1975 , as mentioned in ) can be formulated as follows : take a connected , -regular ( each vertex has edges ) graph of size .assign s and s to the nodes of the graph . run a markov chain on the graph with the following transition procedure : each turn , pick a node at random ( under some distribution ; usually we take the uniform ) , pick one of its neighbors at random ( usually uniformly ) , and switch the value of the selected neighbor - node to the value of the originally selected node . under uniformity of node and neighbor selection, this chain converges to one of two absorbing states , in which all nodes have the same values .the `` anti - voter '' model , introduced in , has the selected neighbour node adopt a value opposite to that of the originally selected node . under uniformity ( again , of node and neighbor selection )the resulting chain has a stationary distribution .persi diaconis and christos athanasiadis in proposed the following variation of the voter model : upon selecting a node , instead of picking one of its neighbors , flip a coin ( with weight , perhaps taken to be a half ) , and , according to the result of the cointoss , assign either or to the selected nodes and all its neighbors .the model has been labeled the `` neighborhood attack '' model .stein s method ( first introduced in ) provides an infrastructure for the estimation of the distances between certain classes of random variables and certain ( usually classical ) distributions , most notably the gaussian and the poisson distributions . for practical purposes, we can break stein s method into three key steps : first , one has to use stein s identities to establish a bound on the distance between a class of random variables and a specific distribution expected to be close to the given class ; second , one has to satisfy the conditions generated in the preceding step ; and third , one has to evaluate the acquired bound .the last step typically involves something along the lines of reducing an expression involving a function of the variance of the given random variable . in ,yosef rinott and vladimir rotar show , using a stein s method argument , that the sum of the values of the nodes in the anti - voter model at stationarity is asymptotically normally distributed .the problem rinnott and rotar tackled was posed by aldous and fill in a book that touches on voter models , .our goal in the present article is to show that the sum of the values of the nodes in the neighborhood attack model is asymptotically normally distributed , using stein s method techniques different from the ones employed by rinott and rotar .for an application of the stein technique in a different context , see the paper , in which jason fulman shows that the number of descents or inversions in permutations complies to a central limit theorem .both the current problem and the one examined in can be viewed as random walks on hyperplanes ; and hence there is a structural similarity between the approach adopted here , and the one in . 
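Before turning to related results, the transition procedure just described is easy to simulate. The following self-contained Python sketch (our own illustration; the circle graph and all parameter values are arbitrary choices) runs the neighborhood-attack chain and records the node-value sum.

```python
# Hypothetical simulation sketch of the neighborhood-attack model on a
# circle graph; names and parameters are illustrative, not from the paper.
import random

def neighbors_circle(i, n):
    """Left and right neighbours of node i on a cycle of n nodes."""
    return [(i - 1) % n, (i + 1) % n]

def attack_step(values, p=0.5):
    """One turn: pick a node uniformly, then set it and its neighbours
    to +1 with probability p and to -1 otherwise."""
    n = len(values)
    i = random.randrange(n)
    new_value = 1 if random.random() < p else -1
    for j in [i] + neighbors_circle(i, n):
        values[j] = new_value

def simulate(n=200, burn_in=20_000, steps=20_000, p=0.5):
    """Run the chain and record the node-value sum S after a burn-in period."""
    values = [random.choice([1, -1]) for _ in range(n)]
    for _ in range(burn_in):
        attack_step(values, p)
    sums = []
    for _ in range(steps):
        attack_step(values, p)
        sums.append(sum(values))
    return sums

if __name__ == "__main__":
    sums = simulate()
    mean = sum(sums) / len(sums)
    var = sum((s - mean) ** 2 for s in sums) / len(sums)
    print(f"empirical mean of S: {mean:.2f}, empirical variance: {var:.1f}")
```

With a fair coin the time-averaged mean of the sum stays near 0, and its empirical variance is of order (r+1)n, in line with the variance bounds derived later in the paper.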
for more results on the neighbourhood attack model , see , .the former paper introduces the model and presents some results on random walks on hyperplane arrangements .the latter paper studies some properties of the distributions of the implicit markov chains in models similar to the neighbourhood attack model . for more on stein s method , see , , .the first two books provide a comprehensive overview of stein s method in regard to its applications to normal and poisson approximations reflexively .the monograph is an up - to - date survey of stein s method literature and a useful entry - level source on the subject . in section [ sectionproblemapproach ] ,we pose our problem . in section [ sectionsm ], we conduct a brief overview of our main technique : stein s method . in section [ sectionresults ], we introduce a few definitions and assumptions , and then list the main result of the paper . in the section [ sectionproof ] , we provide calculations and proofs for the result .section [ sectionconsequences ] interprets the result with some examples of its applicability .we draw conclusions in section [ sectionconclusions ] .we apply the neighbourhood attack model ( introduced in ) on a given family of ( finite ) graphs .randomly assign either or to each node of the graph . as mentioned above, the model does the following each turn : * selects a node uniformly at random .* turns the node and all its immediate neighbours into s or s according to a bernoulli( ) distribution with ; we want for the sake of symmetry .given : 1 ) a connected graph ; 2 ) positive probability of selection for all nodes ; and 3 ) positive probabilities of turning into or for the selected node and its neighbours , the underlying markov chain , the states of which are the possible permutations of s and s , is irreducible and everywhere recurrent on an essential class of its state space , and therefore possesses a stationary distribution . assume the considered markov chain begins at this stationary distribution .let be the number of s at stationarity .then equals the number of s , where is the number of nodes .we want to use stein s method to show that { } \mathcal{z}\ ] ] where is of the standard normal distribution , and and are the expectation and standard deviation of .we derive our result under an assumption of -regularity for the underlying graphs . we seek to apply stein s method , and in particular we want to use a result along the lines of theorem 1.2 in : [ theorem1 ] let be exchangeable with and .define the r.v . by where . then , if there is some for which , we have \ } } + 37 \frac{\sqrt{\mathbb{e}r^{2}}}{\lambda } + 48\frac{aa^{3}}{\lambda } + 8\frac{aa^{2}}{\sqrt{\lambda}},\end{aligned}\ ] ] where is such that all functions in it are uniformly bounded in absolute value by 1 , for any real numbers and and any , the function is in , and for any and any , the functions are also in , and for some constant which depends only on the class . our would be some normalization of a vertex - count on the voter - type model graphs we deal with .stein s technique goes as follows : for a given probability distribution , one can come up with an appropriate operator which implicitly defines the distribution . 
for example , the operator in implicitly defines the gaussian distribution , in the sense that 1 ) for all absolutely continuous with , where is a variable with the standard normal distribution ; and 2 ) if for some random variable we have for all absolutely continuous functions with , then has the standard normal distribution .next , for an appropriately chosen , one can solve the differential equation given by where is the c.d.f .of the target distribution .but now , armed with the solution to equation ( [ eq : steindiffeq ] ) , and within the context of an appropriate metric ( above we used the kolomogorov metric ) , we can produce a bound on the distance between a given distribution we want to analyze , and the target distribution with c.d.f . .for example , is the unique bounded solution to where is the c.d.f . of the standard normal . and next , under the wasserstein metric given by , one can show that ( for example , see ) }}{\sqrt{\pi}},\ ] ] where is a normalized sum of i.i.d .standard normal variables endowed with a fourth moment , and stands for the wasserstein distance between and the standard normal distribution .the potential utility of stein s technique in producing powerful bounds and obtaining convergence results is clear ; and , indeed , stein s method has been instrumental in the proofs of a variety of interesting convergence and bounding results . in general , there are two standard avenues of research focusing on stein s method one can try to obtain formulas for bounds on the distances between various target distributions and various random variables ( or rather , their distributions ) examples of recent results in this direction include ( exponential distribution ) , ( laplace ) , and ( zero - bias couplings and concentration inequalities ) ; and one can use these formulas and techniques to obtain results pertaining to specific problems , including many classic problems such as the birthday problem or the coupon collector problem for examples , refer to ( comprehensive survey ) and ( lightbulb process ) .we first seek to show that ( [ steinpaireq ] ) holds . to that end , let be the number of 1 s at stationarity . let here is the total number of nodes and is the value of node ( under an arbitrary indexing ) .examining is equivalent to examining .next , define note is a constant dependent on : now , is mean-0 variance-1 . to get the condition for theorem [ theorem1 ], we first need to define a as the equivalent of after one further turn of the neighborhood attack model .that is to say , if is the normalized node count of the model at some turn of its evolution in stationarity , then is the same normalized node count in the next turn .note , once again , that we assume -regularity for the graph ( i.e. 
every node has exactly neighbours ) .we want -regularity for the sake of symmetry , because without symmetry , the problem under consideration is far less tractable .[ maintheorem ] under the assumptions where is the index - set of the nodes , is the number of first or second order neighbors node has , and is some constant dependent on the graph ; and where is the count of pairs of neighbors or near - neighbors with values both equal to 1 , and is the count of pairs of s , we derive the bound on the distance between ( the distributions of ) and the standard normal , we first establish bounds on in section [ sectionvariancey ] , and then complete the proof in section [ sectionbigvar1 ] .given the -regularity assumption , the sum changes each turn by between and .a basic example of a graph of this type is the circle ( 2-regular ) graph , in which we have a set of nodes arranged in a circle , each node with two neighbours .we also assumed uniformity in choosing nodes and in flipping s or s . under such conditions ,i.e. at stationarity each node is or with equal probability .the sum of the node values will tend toward 0 ( under certain conditions ; one of which , clearly , has to do with the number of neighbors each node has , since our model takes only the extreme values over the complete graph ) , since if nodes of a certain value ( or ) dominate the graph , we are less likely to see an increase in the number of the nodes of that value .we show in section [ sectionvariancey ] that , as desired , which complies with the stein linearity condition ( [ steinpaireq ] ) where and is a random variable . in our caseconveniently .as for lambda , in general , the next step is to show that and are exchangeable , i.e. , as was done in .exchangeability clearly holds when the markov chain underlying and is stationary and reversible .reversibility is not always available or easily proved .for example , our chain is clearly not necessarily reversible . consider the circle graph .it is easy to see that for large , can take the value i.e. there is an attainable at stationarity arrangement of values for the nodes in which all nodes but one have the value of 1 .now , the probability of going from that arrangement to the all 1 s arrangement for which is positive ; but the probability of going from to is zero , and hence our chain fails to satisfy the detailed balance equations . however , a recent result by adrian rllin removes the necessity for exchangeability .rllin s theorem ( see ( * ? ? ? * theorem 2.1 ) ) states : [ rollin ] assume are r.v.s on the same probability space , s.t . ( for law ) , , .given , for ( here is the standard normal distribution , and is the family of functions associated with the wasserstein distance ) , we have if also there exists a constant s.t . a.s . , we have see ( * ? ? ?* theorem 2.1 ) . 
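Before continuing with the bound, the linearity condition discussed above can be verified numerically. The sketch below (our own check, with λ = (r+1)/n) computes the exact one-step conditional expectation of the change in the node sum on a circle graph by enumerating every possible move, and compares it with -((r+1)/n) times the current sum.

```python
# Hypothetical numerical check of the Stein linearity condition for the
# neighborhood-attack model on a circle graph (r = 2); our own sketch.
import random

def neighborhood(i, n):
    """Node i together with its two neighbours on a cycle of n nodes."""
    return [(i - 1) % n, i, (i + 1) % n]

def conditional_drift(values):
    """Exact E[S' - S | configuration]: average the change in the node sum
    over the uniform choice of node and the fair +1/-1 coin."""
    n = len(values)
    total = 0.0
    for i in range(n):
        q = sum(values[j] for j in neighborhood(i, n))  # current neighborhood sum
        # the attacked neighborhood becomes +3 or -3, each with probability 1/2
        total += 0.5 * ((3 - q) + (-3 - q))
    return total / n

if __name__ == "__main__":
    n, r = 50, 2
    values = [random.choice([1, -1]) for _ in range(n)]
    s = sum(values)
    lam = (r + 1) / n
    print("exact one-step drift:", conditional_drift(values))
    print("-lambda * S:         ", -lam * s)   # the two numbers should coincide
```

The two printed numbers agree up to floating-point rounding for every configuration, i.e. the drift is exactly linear with λ = (r+1)/n and no remainder term, consistent with the convenient form of the linearity condition noted above.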
in our case ; so the bound is } + 32\frac{a^{3}}{\lambda } + 6\frac{a^{2}}{\sqrt{\lambda}}.\ ] ] the next step is to bound .note .so thus , the bound becomes } + 32\frac{8(r+1)^{2}n}{\sigma_{y}^{3 } } + 6\frac{4(r+1)^{3/2}\sqrt{n}}{\sigma_{y}^{2}}.\ ] ] in effect , the next goal is to bound the two terms ] term : \frac{q_{i}}{2n } | \{q_{i}\ } \right ) = \\ % & = \mathbb{e } \left ( \sum_{in i } \left [ ( r+1)^{2}-2(r+1)i + i^2 + ( r+1)^{2 } + 2(r+1)i + i^{2 } \right ] \frac{q_{i}}{2n } | \{q_{i}\ } \right ) = \\ % & = \frac{1}{2n } \mathbb{e } \left ( 2\sum_{i\in i } \left [ ( r+1)^{2 } + i^2 \right ] q_{i}| \{q_{i}\ } \right ) = \\ % & = \mathbb{e } \left [ \frac{(r+1)^{2}}{n}\sum_{i\in i } q_{i}| y \right ] + \frac{1}{n}\mathbb{e } \left [ \sum_{i\in i}i^{2}q_{i } | \{q_{i}\ } \right ] = \\ & = ( r+1)^{2 } + \frac{1}{n } \left [ \sum_{i\in i}i^{2}q_{i } \right ] \leq \label{deltasquare } \\\nonumber & \leq ( r+1)^{2 } + \frac{1}{n } \left [ \sum_{i\in i}(r+1)^{2}q_{i } \right ] = ( r+1)^{2 } + ( r+1)^{2 } = 2(r+1)^{2}\end{aligned}\ ] ] and therefore , continuing from ( [ vary2 ] ) , \frac{q_{i}}{2n } | \{q_{i}\ } \right ) \leq \\ & \leq -\frac{2(r+1)}{n}\mathbb{e } ( y^{2 } ) + 2(r+1)^{2},\end{aligned}\ ] ] meaning however , since the terms appear in the denominators of the terms in ( [ bigbound ] ) , we need either a lower bound of or the exact variance of .observe that we have : \frac{q_{i}}{2n } | \{q_{i}\ } \right ) = \\ & = \frac{n}{2(r+1 ) } \left [ ( r+1)^{2 } + \frac{1}{n}\mathbb{e } \left ( \sum_{i\in i } i^{2}q_{i } \right ) \right ] \geq \frac{(r+1)n}{2}.\end{aligned}\ ] ] thus for the neighborhood attack model on an -regular graph , for the sum of the values of the nodes of the graph , now we have to evaluate or bound = \var \mathbb{e}^{y}[(y'-y)^{2}]$ ] , as in and .let us consider the following : = \var \left [ \mathbb{e}(\delta y)^{2}| y \right ] \leq \var \left [ \mathbb{e}(\delta y)^{2}| \{q_{i}\ } \right ] = \\ & = \var \left ( ( r+1)^{2 } + \frac{1}{n } \left [ \sum_{i\in i}i^{2}q_{i } \right ] \right ) = \frac{1}{n^{2}}\var \left ( \sum_{i\in i}i^{2}q_{i } \right)\end{aligned}\ ] ] the transition between the lines follows from ( [ deltasquare ] ) and ( [ vary2 ] ) . also , = \frac{1}{n^{2}}\var \left ( \mathbb{e } \left ( \sum_{i\in i}i^{2}q_{i } \right ) | \ { \xi_{i } \ }\right ) = \frac{1}{n^{2 } } \var\left ( \sum_{k=1}^{n } \left ( \sum_{j\in \mathcal{n}_{k}}\xi_{j } \right)^{2 } \right),\ ] ] where is the set of node and all its neighbors .next , here is the distance between nodes and . for the last line ,observe that is invariant , and that the sum can be interpreted as a sort of an edge count over our graph , with each pair of neighbors or near neighbors of the same sign participating as a , and each pair of opposite values participating as a .each such pair gets counted twice .thus let be the number of neighbors or near - neighbors ( meaning nodes at distances one or two ) each node has ; be the number of pairs of neighbors or near - neighbors with equal node - values ; and be the number of pairs with opposite node - values .moreover , corresponds to the count of pairs of neighbors or near - neighbors with values both equal to 1 , and is the count of pairs of s . now ,suppose , with the indexed set of nodes on our graph , that that is , we assume is some fixed quantity : i.e. 
the underlying graph possesses sufficient symmetry so that each node has the same number of neighbors or near - neighbors .for example each node in the circle - graph has 4 neighbors/ near - neighbors .the assumption that is fixed for all is not particularly gratuitous , since , either way , , and under the present assumptions , is fixed . next , since , we have .therefore , , where is the number of pairs of neighbors and near neighbors with node - values 1 , and is the count of pairs with values . on the other hand , we have , and therefore .one naturally wonders if we can use the bound we established for to bound . from the definition, it suffices to show that to obtain . for and to be negatively correlated , an increase in one would have to imply a decrease in the other meaning , in our setting , that an increase in the number of edges ( i.e. pairs of nodes at distance 1 ) with ones at both ends would have to imply a decrease in the edges with negative ones at both ends and vice versa .one is tempted to try to use the fkg inequality to prove .specifically , we know that the lattice ( where is our graph ) is a poset ; and that is an increasing function of that lattice , while is a decreasing function on the same lattice . moreover , if we take an element from , and suppose that is the element we obtain by switching all s in to s , and all s to s , then , and the stationary probability of state occurring in our markov chain equals the corresponding probability for state that is , .now , the fkg theorem ( after fortuin , kasteleyn , and ginibre ) states that for a finite distributive lattice , and a non - negative function ( really a measure ) on it , satisfying the `` log - supermodularity condition '' yields for any two monotonically increasing ( or decreasing ) on functions and ; with the inequality reversed if one of and is monotonically increasing and other one monotonically decreasing . in our particular case , for the considered lattice , stationary distribution over the lattice , and functions and , having would do the job , since and and implies exactly . unfortunately , our example fails to necessarily satisfy the log - supermodularity condition ( [ logsupermod ] ) .for example , take a circle graph of odd length .the state in which s and s alternate along the entire graph can not occur at stationarity , and therefore has measure zero in the stationary distribution of our chain .but the state in which we have one at some node and everything else is ; and the state in which node and its two neighbors are s , and the rest of the graph consists of alternating s and s , can both occur .but then , while , meaning that the log - supermodularity condition fails . 
one can come up with similar examples for other standard families of graphs .so we fail to have the log - supermodularity condition .still , the log - supermodularity condition is only sufficient rather than necessary for our desired result .hence our results might be obtainable via different means .for now , suppose given ( [ covcondition ] ) , it follows that \leq \frac{4 ( r^{*})^{2}(r+1)}{n}\ ] ] we thus arrive at the overall bound ( [ bigbound ] ) : } + 32\frac{8(r+1)^{2}n}{\sigma_{y}^{3 } } + 6\frac{4(r+1)^{3/2}\sqrt{n}}{\sigma_{y}^{2 } } \leq \\ % & \leq \frac{24n}{(r+1)n}\sqrt{\frac{4 ( r^{*})^{2}(r+1)}{n } } + 32\frac{16\sqrt{2}(r+1)^{2}n}{((r+1)n)^{3/2 } } + 6\frac{8(r+1)^{3/2 } \sqrt{n}}{(r+1)n } = \\ & \leq 48\frac{r^{*}}{\sqrt{r+1}\sqrt{n } } + 2^{19/2}\frac{\sqrt{r+1}}{\sqrt{n } } + 48\frac{\sqrt{r+1}}{\sqrt{n}}\end{aligned}\ ] ] thus our overall bound is : where is a constant dependent on the underlying family of graphs and satisfying .this completes the proof of theorem [ maintheorem ] , and derives ( [ boundfinal ] ) .the final bound is of .the bound in ( [ finalbound ] ) implies that ( under stationarity ) the normalized sum of values of the nodes of the graph , , goes in law to the standard normal distribution as the size of the graph rises given . note that .let us consider four specific families of graphs .first , the complete graph , in which . on the complete graph, clearly has the uniform binary distribution taking values .thus it is to no surprise that our bound on the distance to the normal distribution rises to infinity with . from the other side of the spectrum of regular graphs, we can take the circuit ( or circle or simple cycle ) graph , in which we have ordered nodes , each connected to its predecessor and its successor , with node connected to nodes and .here , and hence goes to 0 as increases to infinity .the argument can be extended to circulant , in such a way that if the nodes corresponding to two indices and are adjacent , then any two nodes indexed by and are adjacent .here is the number of nodes and adjacency of two nodes means they are connected by an undirected edge . ]graphs : as long as stays constant as rises , would converge to the normal in distribution . for a slightly more complicated example , consider the hypercube graph .one can index the nodes of the -dimensional hypercube graph with a string of zeros and ones , with nodes differing in exactly one digit being neighbors .it is easy to see that for an -dimensional hypercube , , and . since { } 0,\ ] ]we can conclude that goes in law to the standard normal distribution for the hypercube family of graphs . 
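For the hypercube family just discussed, both the degree and the neighbour/near-neighbour count can be checked by brute force, and the leading term of the bound can be tabulated as the dimension grows. The sketch below is our own consistency check; the closed form r* = d(d+1)/2 is our computation and is verified against the brute-force count.

```python
# Hypothetical illustration for the hypercube family: count neighbours and
# near-neighbours by brute force and evaluate the leading term of the bound.
from itertools import product
from math import sqrt

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def near_neighbour_count(d):
    """r* for the d-dimensional hypercube: nodes at Hamming distance 1 or 2
    from a fixed node (the hypercube is vertex-transitive, so one node suffices)."""
    nodes = list(product([0, 1], repeat=d))
    origin = nodes[0]
    return sum(1 for v in nodes if 1 <= hamming(origin, v) <= 2)

if __name__ == "__main__":
    for d in range(3, 13):
        n = 2 ** d
        r = d                                   # degree of the d-dimensional hypercube
        r_star = near_neighbour_count(d)
        assert r_star == d * (d + 1) // 2       # closed form: d + C(d, 2)
        leading = r_star / (sqrt(r + 1) * sqrt(n))
        print(f"d={d:2d}  n={n:5d}  r*={r_star:3d}  r*/sqrt((r+1)n)={leading:.4f}")
```

The tabulated values of r*/(√(r+1)·√n) shrink toward zero, consistent with the conclusion above that the normalized node sum on hypercubes converges in law to the standard normal.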
finally , consider the ( complete ) bipartite graph of size , with a natural number .for this family , , and .on such a graph , would frequently take values near and , and hence can not be expected to go to the normal in distribution .indeed , we have { } \infty.\ ] ] the argument can clearly extend to multipartite graphs of a fixed number of partitions .to sum up , we have shown that , subject to some symmetry assumptions , the normalized sum of the values of the nodes in the neighborhood attack model is at a distance of to the standard normal distribution in the wasserstein metric .hence the sum of the nodes is asymptotically normally distributed as the sizes of the underlying graphs increase , provided that goes to zero as rises to infinity .along the way to the result , we also showed that the node - sum in the neighborhood attack model on an -regular graph satisfies stein s linearity condition with and ; and that satisfies this work is based on the author s 2008 - 2013 graduate research at the university of southern california under the advisorship of prof .jason fulman .the author would like to express his gratitude to prof .fulman and the usc department of mathematics .
the neighborhood attack model is a voter type model , which takes a finite graph , assigns +1 s and -1 s to its nodes ( vertices ) , and then runs a markov chain on the graph by uniformly at random picking a node at every turn , and then switching the values of the node and its neighbors to +1 s or -1 s according to a ( not necessarily fair ) coin toss . we show , via a stein s method argument , that for certain ( highly symmetric ) families of graphs the number of 1 s in the neighbourhood attack voter - type model is asymptotically normally distributed as the number of nodes tends to infinity .
state - machine replication is an established way to enhance the resilience of a client - server application .it works by executing the service on multiple independent components that will not exhibit correlated failures .we consider the approach of _ byzantine fault - tolerance ( bft ) _ , where a group of _ processes _ connected only by an unreliable network executes an application .the processes use a protocol for _ consensus _ or _ atomic broadcast _ to agree on a common sequence of operations to execute .if all processes start from the same initial state , if all operations that modify the state are _ deterministic _ , and if all processes execute the same sequence of operations , then the states of the correct processes will remain the same .( this is also called _ active _ replication . ) a client executes an operation on the service by sending the operation to all processes ; it obtains the correct outcome based on comparing the responses that it receives , for example , by a relative majority among the answers or from a sufficiently large set of equal responses . tolerating _byzantine faults _ means that the clients obtain correct outputs as long as a qualified majority of the processes is correct , even if the faulty processes behave in arbitrary and adversarial ways .traditionally state - machine replication requires the application to be deterministic .but many applications contain implicit or explicit non - determinism : in multi - threaded applications , the scheduler may influence the execution , input / output operations might yield different results across the processes , probabilistic algorithms may access a random - number generator , and some cryptographic operations are inherently not deterministic .recently bft replication has gained prominence because it may implement distributed consensus for building _ blockchains _a blockchain provides a distributed , append - only ledger with cryptographic verifiability and is governed by decentralized control .it can be used to record events , trades , or transactions immutably and permanently and forms the basis for cryptocurrencies , such as bitcoin or ripple , or for running `` smart contracts , '' as in ethereum . with the focus on active replication, this work aims at _ permissioned _ blockchains , which run among known entities .in contrast , _ permissionless _ blockchains ( including ethereum ) do not rely on identities and use other approaches for reaching consensus , such as proof - of - work protocols . 
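The client-side comparison of replies mentioned earlier in this section — accepting an outcome once sufficiently many equal responses have arrived — can be sketched in a few lines. The threshold of t+1 matching replies (so that at least one comes from a correct process) is a standard choice and is our illustrative assumption here, not a parameter fixed by this paper.

```python
# Hypothetical sketch of a BFT client's reply-matching rule: accept a result
# once it has been reported by at least t+1 distinct processes, so that at
# least one of the reporters is guaranteed to be correct.
from collections import Counter
from typing import Optional

def decide_response(replies: dict, t: int) -> Optional[str]:
    """replies maps process id -> response value; t is the fault bound."""
    if not replies:
        return None
    value, freq = Counter(replies.values()).most_common(1)[0]
    return value if freq >= t + 1 else None   # None: keep waiting for more replies

if __name__ == "__main__":
    replies = {1: "ok:42", 2: "ok:42", 3: "ok:41", 4: "ok:42"}
    print(decide_response(replies, t=1))   # "ok:42" -- matched by 3 >= t+1 processes
```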
forpractical use of blockchains , ensuring deterministic operations is crucial since even the smallest divergence among the outputs of different participants lets the blockchain diverge ( or `` fork '' ) .this work presents a _ general treatment _ of non - determinism in the context of bft replication and introduces a distinction among different models to tackle the problem of non - determinism .for example , applications involving cryptography and secret encryption keys should be treated differently from those that access randomness for other goals .we also distinguish whether the replication mechanism has access to the application s source code and may modify it .we also introduce two novel protocols .the first , called _ sieve _ , replicates non - deterministicprograms using in a _way , where we treat the application as a black box and can not change it .we target workloads that are usually deterministic , but which may occasionally yield diverging outputs .the protocol initially executes all operations speculatively and then compares the outputs across the processes .if the protocol detects a minor divergence among a small number of processes , then we _ sieve out the diverging values _ ; if a divergence among too many processes occurs , we _ sieve out the operation _ from sequence .furthermore , the protocol can use _ any _ underlying consensus primitive to agree on an ordering .the second new protocol , _ mastercrypt _ , provides master - slave replication with cryptographic security from verifiable random functions .it addresses situations that require strong , cryptographically secure randomness , but where the faulty processes may leak their secrets .we introduce three different models and discuss corresponding protocols for replicating non - deterministicapplications .modular : : : when the application itself is fixed and can not be changed , then we need _ modular _ replicated execution . in practice this is often the case .we distinguish two approaches for integrating a consensus protocol for ordering operations with the replicated execution of operations .one can either use _ order - then - execute _ , where the operations are ordered first , executed independently , and the results are communicated to the other processes through atomic broadcast .this involves only deterministic steps and can be viewed as `` agreement on the input . ''alternatively , with _execute - then - order _ , the processes execute all operations speculatively first and then `` agree on the output '' ( of the operation ) . in this case operations with diverging results may have to be rolled back .+ we introduce protocol _ sieve _ that uses speculative execution and follows the _ execute - then - order _ approach .as described before , _ sieve _ is intended for applications with occasional non - determinism .it represents the first modular solution to replicating non - deterministicapplications in a bft system .master - slave : : : in the _ master - slave _ model , one process is designated as the master or `` leader , '' makes all non - deterministic choices that come up , and imposes these on the others which act as slaves or `` followers . 
''because a faulty ( byzantine ) master may misbehave , the slaves must be able to validate the selections of the master before the operation can be executed as determined by the master .the master - slave model is related to passive replication ; it works for most applications including probabilistic algorithms , but can not be applied directly for cryptographic operations . as a further complication, this model requires that the developer has access to the internals of the application and can modify it .+ for the master - slave model we give a detailed description of the well - known replication protocol , which has been used in earlier systems .cryptographically secure : : : traditionally , randomized applications can be made deterministic by deriving pseudorandom bits from a secret seed , which is initially chosen truly randomly .outsiders , such as clients of the application , can not distinguish this from an application that uses true randomness .this approach does not work for bft replication , where faulty processes might expose and leak the seed . to solve this problem ,we introduce a novel protocol for master - slave replication with cryptographic randomness , abbreviated _ mastercrypt_. it lets the master select random bits with a _verifiable random function_. the protocol is aimed at applications that need strong , cryptographically secure randomness ; however it does not protect against a faulty master that leaks the secret .we also review the established approach of threshold ( public - key ) cryptography , where private keys are secret - shared among the processes and cryptographic operations are distributed in a fault - tolerant way over the whole group .the modular protocol _ sieve _ has been developed for running potentially non - deterministicsmart contracts as applications on top of a permissioned blockchain platform , built using bft replication .an implementation has been made available as open source in `` hyperledger fabric , '' which is part of the linux foundation s hyperledger project .as of november 2016 , the project has decided to adopt a different architecture ; the platform has been redesigned to use a master - slave approach for addressing non - deterministicexecution .the problem of ensuring deterministic operations for replicated services is well - known .when considering only crash faults , many authors have investigated methods for making services deterministic , especially for multi - threaded , high - performance services .practical systems routinely solve this problem today using master - slave replication , where the master removes the ambiguity and sends deterministic updates to the slaves . in recent research on this topic , for instance , kapitza et al . present an optimistic solution for making multithreaded applications deterministic .their solution requires a predictor for non - deterministicchoices and may invoke additional communication via the consensus module . 
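Returning to the seed-derivation idea mentioned above — making a randomized operation repeatable by deriving its "random" bits pseudorandomly from a secret seed — the following sketch derives per-operation bits with HMAC-SHA-256. The construction and all names are our own illustration; as argued above, this approach alone is insufficient in the BFT setting because a single faulty process can leak the seed.

```python
# Hypothetical sketch: derive per-operation pseudorandom bits from a shared
# secret seed, so that every replica obtains the same "random" choices.
import hashlib
import hmac

def operation_randomness(seed: bytes, operation_id: bytes, nbytes: int = 32) -> bytes:
    """Pseudorandom bytes bound to one operation, identical on all replicas."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(seed, operation_id + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:nbytes]

if __name__ == "__main__":
    seed = b"shared-secret-seed"   # a single faulty replica can leak this seed
    print(operation_randomness(seed, b"op-17", 16).hex())
```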
in the bft model ,most works consider only sequential execution of deterministic commands , including pbft and upright .base and cbase address byzantine faults and adopt the master - slave model for handling non - determinism , focusing on being generic ( base ) and on achieving high throughput ( cbase ) , respectively .these systems involve changes to the application code and sometimes also need preprocessing steps for operations .fault - tolerant execution on multi - core servers poses a new challenge , even for deterministic applications , because thread - level parallelism may introduce unpredictable differences between processes .eve heuristically identifies groups of non - interfering operations and executes each group in parallel .afterwards it compares the outputs , may roll back operations that lead to diverging states , or could transfer an agreed - on result state to diverging processes .eve resembles protocol _ sieve _ in this sense , but lacks modularity . for the same domain of scalable services running on multi - cores , rex uses the master - slave model , where the master executes the operations first and records its non - deterministicchoices .the slaves replay these operations and use a consensus primitive to agree on a consistent outcome .rex only tolerates crashes , but does not address the bft model .fault - tolerant replication involving cryptographic secrets and distributed cryptography has been pioneered by reiter and birman . many other works followed , especially protocols using threshold cryptography ; an early overview of solutions in this space was given by cachin . in current work duan and zhang discuss how the master - slave approach can handle randomized operations in bft replication , where execution is separated from agreement in order to protect the privacy of the data and computation .the remainder of this paper starts with section [ sec : def ] , containing background information and formal definitions of broadcast , replication , and atomic broadcast ( i.e. , consensus ) .the following sections contain the discussion and protocols for the three models : the modular solution ( section [ sec : modular ] ) , the master - slave protocol ( section [ sec : master ] ) , and replication methods for applications demanding cryptographic security ( section [ sec : secure ] ) .we consider a distributed system of _ processes _ that communicate with each other and provide a common _ service _ in a fault - tolerant way . using the paradigm of service replication , requests to the service are broadcast among the processes , such that the processes execute all requests in the same order .the clients accessing the service are not modeled here .we denote the set of processes by let . a process may be _ faulty _ , by crashing or by exhibiting _byzantine faults _ ; the latter means they may deviate arbitrarily from their specification .non - faulty processes are called _ correct_. up to processes may be faulty and we assume that .the setup is also called a _ byzantine fault - tolerant ( bft ) service replication system _ or simply a _ bft system_. 
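The resilience arithmetic used throughout such BFT systems can be made explicit with a small helper. The n > 3t bound and the derived thresholds below reflect the standard assumptions for systems of this kind and are stated here as our own illustration.

```python
# Hypothetical helper for the standard BFT resilience arithmetic (n > 3t).
def bft_parameters(n: int, t: int) -> dict:
    """Check the resilience bound and compute commonly used thresholds."""
    if n <= 3 * t:
        raise ValueError("need n > 3t to tolerate t Byzantine processes")
    return {
        "processes": n,
        "faulty_tolerated": t,
        "quorum": n - t,            # the most a process can wait for, since t may never
                                    # answer; two such sets share a correct process (n > 3t)
        "reply_threshold": t + 1,   # any t+1 processes include at least one correct one
    }

if __name__ == "__main__":
    print(bft_parameters(n=4, t=1))
```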
we present protocols in a modular way using an event - based notation .a process is specified through its _ interface _ , containing the events that it exposes to other processes , and through a set of _ properties _ , which define its behavior .a process may react to a received event by doing computation and triggering further events .the events of a process interface consist of _ input events _ , which the process receives from other processes , typically to invoke its services , and _ output events _ , through which the process delivers information or signals a condition to another process .every two processes can _ send _ messages to each other using an authenticated point - to - point communication primitive .when a message arrives , the receiver learns also which process has sent the message .the primitive guarantees _ message integrity _, i.e. , when a message is received by a correct process with indicated sender , and is correct , then previously sent .authenticated communication can be implemented easily from an insecure communication channel by using a message - authentication code ( mac ) , a symmetric cryptographic primitive that relies on a secret key shared by every pair of processes .these keys have been distributed by a trusted entity beforehand .the system is _ partially synchronous _ in the sense that there is no a priori bound on message delays and the processes have no synchronized clocks , as in an asynchronous system .however , there is a time ( not known to the processes ) after which the system is _ stable _ in the sense that message delays and processing times are bounded . in other words ,the system is _ eventually synchronous_. this model represents a broadly accepted network model and covers a wide range of real - world situations .suppose processes participate in a broadcast primitive .every process may _ broadcast _ a request or message to the others .the implementation generates events to output the requests when they have been agreed ; we say that a request is _ delivered _ through this .atomic broadcast also solves the _ consensus _problem .we use a variant that delivers only messages satisfying a given _ external validity _condition .[ def : abv ] a _ byzantine atomic broadcast with external validity _ ( _ abv _ ) is defined with the help of a validation predicate and in terms of these events : input event : : : : broadcasts a message to all processes .output event : : : : delivers a message broadcast by process . the deterministic predicate validates messages .it can be computed locally by every process .it ensures that a correct process only delivers messages that satisfy .more precisely , must guarantee that when two correct processes and have both delivered the same sequence of messages up to some point , then obtains for any message if and only if also determines that . with this validity mechanism , the broadcast satisfies : validity : : : if a correct process broadcasts a message , then eventually delivers . external validity: : : when a correct process delivers some message , then .no duplication : : : no correct process delivers the same message more than once . integrity : : : if some correct process delivers a message with sender and process is correct , then was previously broadcast by .agreement : : : if a message is delivered by some correct process , then is eventually delivered by every correct process .total order : : : let and be any two messages and suppose and are any two correct processes that deliver and . 
if delivers before , then delivers before . in practiceit may occur that not all processes agree in the above sense on the validity of a message .for instance , some correct process may conclude while others find that .for this case it is useful to reason with the following relaxation : weak external validity : : : when a correct process delivers some message , then at least one correct process has determined that at some time between when was broadcast and when it was delivered .every protocol for byzantine atomic broadcast with external validity of which we are aware either ensures this weaker notion or can easily be changed to satisfy it .atomic broadcast is the main tool to implement state - machine replication ( smr ) , which executes a service on multiple processes for tolerating process faults . throughout this workwe assume that many operation requests are generated concurrently by all processes ; in other words , there is request contention . a _ state machine _ consists of variables and operations that transform itsstate and may produce some output .traditionally , operations are _ deterministic _ and the outputs of the state machine are solely determined by the initial state and by the sequence of operations that it has executed .the state machine _functionality _ is defined by , a function that takes a _ state _ , initially , and operation as input , and outputs a successor state and a _ response _ or _ output value _ : a _ replicated state machine _ can be characterized as in definition [ def : rsm ] .basically , its interface presents two events : first , an input event that a process uses to invoke the execution of an operation of the state machine ; and second , an output event , which is produced by the state machine . the output indicates the operation has been executed and carries the resulting state and response .we assume here that an operation includes both the name of the operation to be executed and any relevant parameters .[ def : rsm ] a _ replicated state machine ( rsm ) _ for a functionality and initial state is defined by these events : input event : : : : requests that the state machine executes the operation .output event : : : : indicates that the state machine has executed an operation , resulting in new state , and producing response . 
it also satisfies these properties : agreement : : : the sequences of executed operations and corresponding outputs are the same for all correct processes .correctness : : : when a correct process has executed a sequence of operations , then the sequences of output states and responses satisfies for , termination : : : if a correct process executes a operation , then the operation eventually generates an output .the standard implementation of a replicated state machine relies on an atomic broadcast protocol to disseminate the requests to all processes .every process starts from the same initial state and executes all operations in the order in which they are delivered .if all operations are _ deterministic _ the states of the correct processes never diverge .implementations of atomic broadcast need to make some synchrony assumptions or employ randomization .a very weak timing assumption that is also available in many practical implementations is an _ eventual leader - detector oracle _ .we define an eventual leader - detector primitive , denoted , for a system with byzantine processes .it informs the processes about one correct process that can serve as a leader , so that the protocol can progress .when faults are limited to crashes , such a leader detector can be implemented from a failure detector , a primitive that , in practice , exploits timeouts and low - level point - to - point messages to determine whether a remote process is alive or has crashed . with processes acting in arbitrary ways , though , one can not rely on the timeliness of simple responses for detecting byzantine faults .one needs another way to determine remotely whether a process is faulty or performs correctly as a leader .detecting misbehavior in this model depends inherently on the specific protocol being executed .we use the approach of `` trust , but verify , '' where the processes monitor the leader for correct behavior . more precisely ,a leader is chosen arbitrarily , but ensuring a fair distribution among all processes ( in fact , it is only needed that a correct process is chosen at least with constant probability on average , over all leader changes ) .once elected , the chosen leader process gets a chance to perform well .the other processes monitor its actions .should the leader not have achieved the desired goal after some time , they complain against it , and initiate a switch to a new leader . hence we assume that the leader should act according to the application and within some time bounds . if the leader performs wrongly or exceeds the allocated time before reaching this goal , then other processes detect this and report it as a failure to the leader detector by filing a complaint . 
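The standard implementation just described — atomically broadcast every request and execute deliveries in order — can be sketched as follows. The interface names (rsm-execute, abv-deliver, execute) mirror the events defined in this section, but the code itself is our own illustrative skeleton.

```python
# Hypothetical sketch of state-machine replication on top of atomic broadcast:
# operations are ordered by the broadcast and executed deterministically in
# delivery order, so correct processes never diverge.
from typing import Callable, Tuple

class ReplicatedStateMachine:
    def __init__(self, execute: Callable[[object, object], Tuple[object, object]],
                 initial_state, broadcast: Callable[[object], None]):
        self.execute = execute        # deterministic: (state, operation) -> (state', response)
        self.state = initial_state
        self.broadcast = broadcast    # abv-broadcast(m), provided by the ordering layer

    def rsm_execute(self, operation):
        """Input event rsm-execute(o): hand the operation to atomic broadcast."""
        self.broadcast(operation)

    def on_abv_deliver(self, sender, operation):
        """Output event abv-deliver(p, m): apply the operation in delivery order."""
        self.state, response = self.execute(self.state, operation)
        return self.state, response   # corresponds to the rsm output event

# Example functionality: a deterministic counter.
def counter(state, operation):
    if operation == "inc":
        return state + 1, state + 1
    return state, state               # any other operation is treated as a read

if __name__ == "__main__":
    log = []                                  # stand-in for a real atomic broadcast layer
    rsm = ReplicatedStateMachine(counter, 0, log.append)
    rsm.rsm_execute("inc")
    rsm.rsm_execute("read")
    for op in log:                            # simulate delivery in the agreed total order
        print(rsm.on_abv_deliver("p1", op))
```

Because execute is deterministic and all correct processes apply the same delivery order, they step through identical sequences of states and responses.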
in an asynchronous system with eventual synchrony as considered here , every process always behaves according to the specification and eventually all remote processes also observe this ; if such correct behavior can not be observed from a process , then the process must be faulty .this notion of `` performance '' depends on the specific algorithm executed by the processes , which relies on the output from the leader - detection module .therefore , eventual leader election with byzantine processes is not an isolated low - level abstraction , as with crash - stop processes , but requires some input from the higher - level algorithm .the event allows to express this .every process may _ complain _ against the current leader by triggering this event .[ def : bld ] a _ byzantine leader detector _ ( ) is defined with these events : output event : : : : indicates that process is trusted to be leader .input event : : : : expresses a complaint about the performance of leader process .the primitive satisfies the following properties : eventual accuracy : : : there is a time after which every correct process trusts some correct process .eventual succession : : : if more than correct processes that trust some process complain about , then every correct process eventually trusts a different process than .coup resistance : : : a correct process does not trust a new leader unless at least one correct process has complained against the leader which trusted before .eventual agreement : : : there is a time after which no two correct processes trust different processes .it is possible to lift the output from the byzantine leader detector to an _ epoch - change _ primitive , which outputs not only the identity of a leader but also an increasing _epoch number_. this abstraction divides time into a series of epochs at every participating process , where epochs are identified by numbers .the numbers of the epochs started by one particular process increase monotonically ( but they do not have to form a complete sequence ) .moreover , the primitive also assigns a _ leader _ to every epoch , such that any two correct processes in the same epoch receive the same leader .the mechanism for processes to complain about the leader is the same as for .more precisely , epoch change is defined as follows : [ def : bec ] a _ byzantine epoch - change _ ( ) primitive is defined with these events : output event : : : : indicates that the epoch with number and leader starts .input event : : : : expresses a complaint about the performance of leader process .the primitive satisfies the following properties : monotonicity : : : if a correct process starts an epoch and later starts an epoch , then .consistency : : : if a correct process starts an epoch and another correct process starts an epoch with , then .eventual succession : : : suppose more than correct processes have started an epoch as their last epoch ; when these processes all complain about , then every correct process eventually starts an epoch with a number higher than .coup resistance : : : when a correct process that has most recently started some epoch starts a new epoch , then at least one correct process has complained about leader in epoch .eventual leadership : : : there is a time after which every correct process has started some epoch and starts no further epoch , such that the last epoch started at every correct process is epoch and process is correct .when an epoch - change abstraction is initialized , it is assumed that a default epoch with number 0 and a 
leader has been started at all correct processes .the value of is made available to all processes implicitly .all `` practical '' bft systems in the eventual - synchrony model starting from pbft implicitly contain an implementation of byzantine epoch - change ; this notion was described explicitly by cachin et al .* chap . 5 ) .we model cryptographic _ hash functions _ and _ digital signature schemes _ as ideal , deterministic functionalities implemented by a distributed oracle . a cryptographic _ hash function _ maps a bit string of arbitrary length to a short , unique representation .the functionality provides only a single operation _ hash _ ; its invocation takes a bit string as parameter and returns an integer with the response .the implementation maintains a list of all that have been queried so far .when the invocation contains , then _ hash_ responds with the index of in ; otherwise , _ hash _appends to and returns its index .this ideal implementation models only collision resistance but no other properties of real hash functions .the functionality of the _ digital signature scheme _ provides two operations , and .the invocation of specifies a process , takes a bit string as input , and returns a signature with the response .only may invoke .the operation takes a putative signature and a bit string as parameters and returns a boolean value with the response .its implementation satisfies that returns truefor any process and if and only if has executed and obtained before ; otherwise , returns false. every process may invoke _verify_. the signature scheme may be implemented analogously to the hash function .in this section we discuss the _ modular _ execution of replicated non - deterministicprograms . herethe program is given as a black box , it can not be changed , and the bft system can not access its internal data structures . very informally speaking ,if some processes arrive at a different output during execution than `` most '' others , then the output of the disagreeing processes is discarded .instead they should `` adopt '' the output of the others , e.g. , by asking them for the agreed - on state and response .when the outputs of `` too many '' processes disagree , the correct output may not be clear ; the operation is then ignored ( or , as an optimization , quarantined as non - deterministic ) and the state rolled back . 
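before turning to the replication protocols , note that the idealized hash and signature functionalities introduced above are simple enough to be stated directly in code . the sketch below ( python ) mirrors the list - based hash oracle and the process - bound sign / verify operations ; it is a modelling device only — the class and method names are made up here , and a real system would use a concrete hash function and signature scheme instead .

```python
# Sketch of the ideal, deterministic hash and signature functionalities
# described above (a modelling device, not a real cryptographic library).

class HashOracle:
    def __init__(self):
        self._queries = []          # list of all bit strings queried so far

    def hash(self, x: bytes) -> int:
        if x in self._queries:      # respond with the index of x in the list
            return self._queries.index(x)
        self._queries.append(x)     # otherwise append x and return its index
        return len(self._queries) - 1

class SignatureOracle:
    def __init__(self):
        self._signed = set()        # pairs (process, bit string) signed so far

    def sign(self, p: str, x: bytes):
        # only process p may invoke sign(p, .); the "signature" is symbolic
        self._signed.add((p, x))
        return ("sig", p, x)

    def verify(self, sigma, p: str, x: bytes) -> bool:
        # true iff p has executed sign(p, x) before
        return sigma == ("sig", p, x) and (p, x) in self._signed
```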
in this modular solutionany application can be replicated without change ; the application developers may not even be aware of potential non - determinism .on the other hand , the modular protocol requires that most operations are deterministic and produce almost always the same outputs at all processes ; it would not work for replicating probabilistic functions .more precisely , a _non - deterministicstate machine _ may output different states and responses for the same operation , which are due to probabilistic choices or other non - repeatable effects .hence we assume that _ execute _ is a relation and not a deterministic function , that is , repeated invocations of the same operation with the same input may yield different outputs and responses .this means that the standard approach of state - machine replication based directly on atomic broadcast fails .there are two ways for modular black - box replication of non - deterministicapplications in a bft system : order - then - execute : : : applying the smr principle directly , the operations are first ordered by atomic broadcast .whenever a process delivers an operation according to the total order , it executes the operation .it does not output the response , however , before checking with enough others that they all arrive at the same outputs . to this end , every process atomically broadcasts its outputs ( or a hash of the outputs ) and waits for receiving a given number ( up to ) of outputs from distinct processes. then the process applies a fixed decision function to the atomically delivered outputs , and it determines the successor state and the response .+ this approach ensures consistency due to its conceptual simplicity but is not very efficient in typical situations , where atomic broadcast forms the bottleneck .in particular , in atomic broadcast with external validity , a process can only participate in the ordering of the next operation when it has determined the outputs of the previous one .this eliminates potential gains from pipelining and increases the overall latency .execute - then - order : : : here the steps are inverted and the operations are executed _ speculatively _ before the system commits their order . as in other practical protocols , this solution uses the heuristic assumption that there is a designated _ leader _ which is usually correct .thus , every process sends its operations to the leader and the leader orders them .it asks all processes to execute the operations speculatively in this order , the processes send ( a hash of ) their outputs to the leader , and the leader determines a unique output .note that this value is still speculative because the leader might fail or there might be multiple leaders acting concurrently .the leader then tries to obtain a confirmation of its speculative order by atomically broadcasting the chosen output .once every process obtains this output from atomic broadcast , it commits the speculative state and outputs the response .+ in rare cases when a leader is replaced , some processes may have speculated wrongly and executed other operations than those determined through atomic broadcast . 
due to non - determinism in the executiona process may also have obtained a different speculative state and response than what the leader has obtained and broadcast .this implies that the leader must either send the state ( or state delta ) and the response resulting from the operation though atomic broadcast , or that a process has a different way to recover the decided state from other processes . in the following we describe protocol _ sieve _ , which adopts the approach of _ execute - then - order _ with speculative execution .protocol _ sieve _ runs a byzantine atomic broadcast with weak external validity ( abv ) and uses a _sieve - leader _ to coordinate the execution of non - deterministicoperations .the leader is elected through a byzantine epoch - change abstraction , as defined in section [ subsec : leaderelection ] , which outputs epoch / leader tuples with monotonically increasing epoch numbers .for the _ sieve _ protocol these epochs are called _ configurations _ , and _ sieve _ progresses through a series of them , each with its own sieve - leader .the processes send all operations to the service through the leader of the current configuration , using an invoke message .the current leader then initiates that all processes execute the operation speculatively ; subsequently the processes agree on an output from the operation and thereby _ commit _ the operation . as described here , _ sieve _executes one operation at a time , although it is possible to greatly increase the throughput using the standard method of _ batching _ multiple operations together .the leader sends an execute message to all processes with the operation . in turn, every process executes _ speculatively _ on its current state , obtains the speculative next state and the speculative response , signs those values , and sends a hash and the signature back to the leader in an approve message .the leader receives approve messages from distinct processes .if the leader observes at least approvals for the _ same _ speculative output , then it _ confirms _ the operation and proceeds to committing and executing it .otherwise , the leader concludes that the operation is _ aborted _ because of diverging outputs .there must be equal outputs for confirming , in order to ensure that every process will eventually learn the correct output , see below .the leader then _ abv - broadcasts _ an order message , containing the operation , the speculative output for a confirmed operation or an indication that it aborted , and for validation the set of approve messages that justify the decision whether to confirm or abort . during atomic broadcast , the external validity check by the processes will verify this justification .as soon as an order message with operation is _ abv - delivered _ to a process in _ sieve _ , is committed .if is confirmed , the process adopts the output decided by the leader .note this may differ from the speculative output computed by the process .protocol _ sieve _ therefore includes the next state and the response in the order message . in practice , however , one might not send , but state deltas , or even only the hash value of while relying on a different way to recover the confirmed state .indeed , since processes have approved any confirmed output , a process with a wrong speculative output is sure to reach at least one of them for obtaining the confirmed output later . 
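the sieve - leader 's confirm - or - abort decision described above can be sketched as follows ( python ) . the message format and the two quorum parameters are kept abstract : the exact bounds are fixed by the protocol and not repeated in this sketch , so `needed_msgs` and `needed_equal` are placeholders rather than prescribed values .

```python
from collections import Counter

# Sketch of the sieve-leader's decision after collecting APPROVE messages.
# Each approve message carries (process_id, output_hash, signature); the
# required number of messages and of equal hashes are protocol parameters.

def decide(approvals, needed_msgs, needed_equal):
    """Return ('confirm', hash, justification), ('abort', justification),
    or None while still waiting for more approve messages."""
    if len(approvals) < needed_msgs:
        return None
    counts = Counter(h for (_pid, h, _sig) in approvals)
    best_hash, best_count = counts.most_common(1)[0]
    if best_count >= needed_equal:
        # enough processes computed the same speculative output: confirm
        return ("confirm", best_hash, list(approvals))
    return ("abort", list(approvals))     # diverging outputs: abort/quarantine
```

the returned justification ( the set of approve messages ) is what the leader places into the order message , so that the external validity check of atomic broadcast can verify whether the confirm - or - abort decision was taken correctly .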
in case the leader _ abv - broadcasted _ an order message with the decision to abort the current operation because of the diverging outputs ( i.e. , no identical hashes in approve messages ) , the process simply ignores the current request and speculative state . as an optimization, processes may _ quarantine _ the current request and flag it as non - deterministic .as described so far , the protocol is open to a denial - of - service attack by multiple faulty processes disguising as sieve - leaders and executing different operations .note that the epoch - change abstraction , in periods of asynchrony , will not ensure that any two correct processes agree on the leader , as some processes might skip configurations .therefore _ sieve _ also orders the configuration and leader changes using consensus ( with the _ abv _ primitive ) . to this effect , whenever a process receives a _ start - epoch _ event with itself as leader , the process _abv - broadcasts _ a new - sieve - config message , announcing itself as the leader .the validation predicate for broadcast verifies that the leader announcement concerns a configuration that is not newer than the most recently started epoch at the validating process , and that the process itself endorses the same next leader .every process then starts the new configuration when the new - sieve - config message is _ abv - delivered_. if there was a speculatively executed operation , it is aborted and its output discarded .the design of _ sieve _ prevents uncoordinated speculative request execution , which may cause contention among requests from different self - proclaimed leaders and can prevent liveness easily .naturally , a faulty leader may also violate liveness , but this is not different from other leader - based bft protocols .the details of protocol _ sieve _ are shown in algorithms [ alg : sieve1][alg : sieve2 ] .the pseudocode assumes that all point - to - point messages among correct processes are authenticated , can not be forged or altered , and respect fifo order .the invoked operations are unique across all processes and _ self _ denotes the identifier of the executing process .[ thm : sieve ] protocol _ sieve _ implements a replicated state machine allowing a non - deterministicfunctionality , except that demonstrably non - deterministic operations may be filtered out and not executed .the _ agreement _ condition of definition [ def : rsm ] follows directly from the protocol and from the _ abv _ primitive . every event is immediately preceded by an _abv - delivered _ order message , which is the same for all correct processes due to _ agreement _ of _ abv_. since all correct processes react to it deterministically , their outputs are the same .for the _ correctness _ property , note that the outputs ( state and response ) resulting from an operation must have been confirmed by the protocol and therefore the values were included in an approve message from at least one correct process .this process computed the values such that they satisfy according to the protocol for handling an execute message . on the other hand , no correct process outputs anything for committed operations that were aborted , this is permitted by the exception in the theorem statement .moreover , only operations are filtered out for which distinct correct processes computed diverging outputs , as ensured by the sieve - leader when it determines whether the operation is confirmed or aborted . 
in order to abort , no set of processesmust have computed the same outputs among the processes sending the approve messages .hence , at least two among every set of correct processes arrived at diverging outputs ._ termination _ is only required for deterministic operations , they must terminate despite faulty processes that approve wrong outputs .the protocol ensures this through the condition that at least among the approve messages received by the sieve - leader are equal .the faulty processes , of which there are at most , can not cause an abort through this . but every order message is eventually _ abv - delivered _ and every confirmed operation is eventually executed and generates an output . [[ rollback - and - state - transfer . ] ] rollback and state transfer .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + in protocol _ sieve _ every process maintains a copy of the application state resulting from an operation until the operation is committed .moreover , the confirmed state and the response of an operation are included in order messages that are _ abv - broadcast_. for practical applications though , this is often too expensive , and another way for recovering the application state is needed .the solution is to roll back an operation and to transfer the correct state from other processes .we assume that there exists a _primitive , ensuring that for , the output of is always . in the order messages of the protocol , the resulting output state and response are replaced by their hashes for checking the consistency among the processesthus , when a process receives an order message with a confirmed operation and hashes and of the output state and response , respectively , it checks whether the speculative state and response satisfy and .if so , it proceeds as in the protocol . if the committed operation was aborted , or the values do not match in a confirmed operation , the process rolls back the operation . for a confirmed operation ,the process then invokes _ state transfer _ and retrieves the correct state and response that match the hashes from other clients .it will then set the state variable to and output the response .rollback helps to implement transfer state efficiently , by sending incremental updates only . for transferring the state , the process sends a state - request message to all those processes who produced the speculate - signatures contained in the approve messages , which the process receives together with a committed and confirmed operation . since at most of them may fail to respond , the process is guaranteed to receive the correct state .state transfer is also initiated when a new configuration starts through an _ abv - delivered _ new - sieve - config message , but the process has already speculatively executed an operation in the last configuration without committing it ( this can be recognized by ) .as in the above use of state transfer , the operation must terminate before the process becomes ready to execute further operations from execute messages .[ [ synchronization - with - pbft - based - atomic - broadcast . 
] ] synchronization with pbft - based atomic broadcast .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when the well - known _ pbft protocol _ is used to implement _ abv - broadcast _ , two further optimizations are possible , mainly because pbft also relies on a leader and already includes byzantine epoch - change .hence assume that every process runs pbft according to castro and liskov ( * ? ? ?* sec . 4 ) .first , let an epoch - change event of occur at every view change of pbft .more precisely , whenever a process has received a correct pbft - new - view message for pbft - view number and new primary process , and when the matching view - change messages have arrived , then the process triggers a event at _ sieve_. the process subsequently starts executing requests in the new view .moreover , complaints from _ sieve _ are handled in the same way as when a backup in pbft _ suspects _ the current primary to be faulty , namely , it initiates a view change by sending a view - change message .the view change mechanism of pbft ensures all properties expected from , as follows .the _ monotonicity _ and _ consistency _ properties of byzantine epoch - change follow directly from the calculation of strictly increasing view numbers in pbft and the deterministic derivation of the pbft - primary from the view .the _ eventual leadership _condition follows from the underlying timing assumption , which essentially means that timeouts are increased until , eventually , every correct process is able to communicate with the leader , the leader is correct , and no further epoch - changes occur .the second optimization concerns the new - sieve - config message . according to _ sieve _it is _ abv - broadcast _ whenever a new sieve - leader is elected , by that leader .as the leader is directly mapped to pbft s primary here , it is now the primary who sends this message as a pbft request .note that this request might not be delivered when the primary fails , but it will be delivered by the other processes according to the properties of _ abv - broadcast _ , as required by _sieve_. hence , the new sieve - configuration and sieve - leader are assigned either by all correct processes or by none of them . with these two specializations for pbft ,_ sieve _ incurs the additional cost of the execute / approve messages in the request flow , and one new - sieve - config following every view - change of pbft . but determining the sieve - leader and implementing do not lead to any additional messages .non - deterministic operations have not often been discussed in the context of bft systems .the literature commonly assumes that deterministic behavior can be imposed on an application or postulates to change the application code for isolating non - determinism . in practice , however , it is often not possible .liskov sketches an approach to deal with non - determinism in pbft which is similar to _ sieve _ in the sense that it treats the application code modularly and uses execute - then - order .this proposal is restricted to the particular structure of pbft , however , and does not consider the notion of external validity for _ abv _ broadcast . for applications on multi - core servers , the _ eve _system also executes operation groups speculatively across processes and detects diverging states during a subsequent verification stage . 
in case of divergence, the processes must roll back the operations .the approach taken in eve resembles that of _ sieve _ , but there are notable differences .specifically , the primary application of eve continues to assume deterministic operations , and non - determinism may only result from concurrency during parallel execution of requests .furthermore , this work uses a particular agreement protocol based on pbft and not a generic _ abv _ broadcast primitive .it should be noted that _sieve _ not only works with byzantine atomic broadcast in the model of eventual synchrony , but can equally well be run over randomized byzantine consensus .by adopting the _ master - slave _ model one can support a broader range of non - deterministicapplication behavior compared to the modular protocol .this design generally requires source - code access and modifications to the program implementing the functionality . in a master - slave protocol for non - deterministicexecution, one process is designated as _master_. the master executes every operation first and records all non - deterministicchoices .all other processes act as _ slaves _ and follow the same choices . to cope with a potentially byzantine master , the slavesmust be given means to verify that the choices made by the master are plausible .the master - slave solution presented here follows _ primary - backup replication _ , which is well - known to handle non - deterministicoperations .for instance , if the application accesses a pseudorandom number generator , only the master obtains the random bits from the generator and the slaves adopt the bits chosen by the master .this protocol does not work for functionalities involving cryptography , however , where master - slave replication typically falls short of achieving the desired goals .instead a cryptographically secure protocol should be used ; they are the subject of section [ sec : secure ] .as introduced in section [ sec : modular ] , the _ execute _ operation of a non - deterministicstate machine is a relation .different output values are possible and represent acceptable outcomes .we augment the output of an operation execution by adding _ evidence _ for justifying the resulting state and response .the slave processes may then _ replay _ the choices of the master and accept its output .more formally , we now extend _ execute _ to _ nondet - execute _ as follows : its parameters , , , and are the same as for _ execute _ ; additionally , the function also outputs _ evidence _evidence enables the slave processes to execute the operation by themselves and obtain the same output as the master , or perhaps only to validate the output generated by another execution . for this taskthere is a function that outputs trueif and only if the set of possible outputs from contains . for completenesswe require that for every and , when , it always holds . as a basic verification method, a slave could rerun the computation of the master .extensions to use cryptographic verifiable computation are possible .note that we consider randomized algorithms to be a special case of non - deterministicones .the evidence for executing a randomized algorithm might simply consist of the random coin flips made during the execution . implementing a replicated state machine with non - deterministicoperations using master - slave replication does not require an extra round of messages to be exchanged , as in protocol _sieve_. 
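the nondet - execute / verify interface just introduced can be illustrated with a toy functionality whose only non - determinism is a fresh random nonce ; recording the nonce as evidence lets any process replay the master 's execution and check the output . the python sketch below is purely illustrative — the concrete application and the evidence encoding are invented for this example .

```python
import random

# Toy non-deterministic functionality: append a random nonce to the state.
# nondet_execute returns (next_state, response, evidence); the evidence is
# simply the recorded coin flips, so verify can replay the operation
# deterministically and compare the outputs.

def nondet_execute(state, op):
    flips = [random.randrange(2 ** 32)]          # non-repeatable choice
    next_state = state + [(op, flips[0])]
    response = ("ok", flips[0])
    return next_state, response, flips

def replay(state, op, flips):
    next_state = state + [(op, flips[0])]
    return next_state, ("ok", flips[0])

def verify(state, op, next_state, response, evidence):
    # true iff (next_state, response) is a possible output of the operation
    # under the recorded choices
    return replay(state, op, evidence) == (next_state, response)
```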
it suffices that the master is chosen by a byzantine epoch - change abstraction and that the master broadcasts every operation together with the corresponding evidence .more precisely , the processes operate on top of an underlying broadcast primitive _ abv _ and a byzantine epoch - change abstraction . whenever a process receives a _ start - epoch _ event with itself as leader from , the process considers itself to be the master for the epoch and _ abv - broadcasts _ a message that announces itself as the master for the epoch .the epochs evolve analogously to the configurations in _ sieve _ , with the same mechanism to approve changes of the master in the validation predicate of atomic broadcast .similarly , non - master processes send their operations to the master of the current epoch for ordering and execution . for every invoked operation , the master computes and _ abv - broadcasts _ an order message containing the current epoch and parameters , , , and .the validation predicate of atomic broadcast for order messages verifies that the message concerns the current epoch and that using the current state of the process .once an order message is _ abv - delivered _, a process adopts the response and output state from the message as its own .as discussed in the first optimization for _ sieve _ ( section [ subsec : sieveopt ] ) , the output state and response do not always have to be included in the order messages .in the master - slave model , they can be replaced by hashes only for those operations where the evidence contains sufficient data for a process to compute the same and values as the master .this holds , for example , when all non - deterministicchoices of an operation are contained in .should the master _ abv - broadcast _ an operation with evidence that does not execute properly , i.e. , , the atomic broadcast primitive ensures that it is not _ abv - delivered _ through the external validity property . as in _ sieve _, every process periodically checks if the operations that it has invoked have been executed and complains against the current master using .this ensures that misbehaving masters are eventually replaced .the master - slave protocol is inspired by primary - backup replication , and for the concrete scenario of a bft system , it was first described by castro , rodrigues , and liskov in base .the protocol of base addresses only the particular context of pbft , however , and not a generic atomic broadcast primitive .as mentioned before , the master - slave protocol requires changes to the application for extracting the evidence that will convince the slave processes that choices made by the master are valid .this works well in practice for applications in which only a few , known steps can lead to divergence .for example , operations reading inputs from the local system , accessing platform - specific environment data , or generating randomness can be replicated whenever those functions are provided by programming libraries .master - slave replication may only be employed when the application developer is aware of the causes of non - determinism ; for example , a multi - threaded application influenced by a non - deterministicscheduler could not be replicated unless the developer can also control the scheduling ( e.g. , ) .security functions implemented with cryptography are more important today than ever . 
replicating an application that involves a cryptographic secret ,however , requires a careful consideration of the attack model .if the bft system should tolerate that processes become faulty in arbitrary ways , it must be assumed that their secrets leak to the adversary against whom the cryptographic scheme is employed .service - level secret keys must be protected and should never leak to an individual process .two solutions have been explored to address this issue .one could delegate this responsibility to a third party , such as a centralized service or a secure hardware module at every process .however , this contradicts the main motivation behind replication : to eliminate central control points .alternatively one may use _ distributed cryptography _ , share the keys among the processes so that no coalition of up to among them learns anything , and perform the cryptographic operations under distributed control .this model was pioneered by reiter and birman and exploited , for instance , by sintra or coca . in this sectionwe discuss two methods for integrating non - deterministic cryptographic operations in a bft system .the first scheme is a novel protocol in the context of bft systems , called _ mastercrypt _ , and uses verifiable random functions to generate pseudorandom bits .this randomness is unpredictable and can not be biased by a byzantine process .the second scheme is the well - known technique of distributed cryptography , as discussed above , which addresses a broad range of cryptographic applications .both schemes adopt the master - slave replication protocol from the previous section .a _ verifiable random function ( vrf ) _ resembles a pseudorandom function but additionally permits anyone to verify non - interactively that the choice of random bits occurred correctly .the function therefore guarantees correctness for its output without disclosing anything about the secret seed , in a way similar to non - interactive zero - knowledge proofs of correctness .more precisely , the process owning the vrf chooses a secret seed and publishes a public verification key .then the function family and algorithms and are a vrf whenever three properties hold : correctness : : : can be computed efficiently from and for every one can also ( with the help of ) efficiently generate a proof such that . uniqueness : : : for every input there is a unique that satisfies , i.e. , it is impossible to find and and and such that .pseudorandomness : : : from knowing alone and sampling values from and , no polynomial - time adversary can distinguish the output of from a uniformly random -bit string , unless the adversary calls the owner to evaluate or on .thus , a vrf generates a value for every input which is unpredictable and pseudorandom to anyone _ not _ knowing . as is unique for a given ,even an adversarially chosen key preserves the pseudorandomness of s outputs towards other processes .efficient implementations of vrfs have not been easy to find , but the literature nowadays contains a number of reasonable constructions under broadly accepted hardness assumptions . in practice , when adopting the random - oracle model , vrfs can immediately be obtained from unique signatures such as ordinary rsa signatures . [ [ replication - with - cryptographic - randomness - from - a - vrf . 
] ] replication with cryptographic randomness from a vrf .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + with master - slave replication , cryptographically strong randomness secure against faulty non - leader processes can be obtained from a vrf as follows .initially every process generates a vrf - seed and a verification key . then it passes the verification key to a trusted entity , which distributes the verification keys to all processes consistently , ensuring that all correct processes use the same list of verification keys . at every place where the application needs to generate ( pseudo-)randomness , the vrf is used by the master to produce the random bits and all processesverify that the bits are unique . in more detail ,_ mastercrypt _ works as follows .the master computes all random choices while executing an operation as , where _ tag _ denotes a unique identifier for the instance and operation .this tag must never reused by the protocol and should not be under the control of the master . the master supplies to the other processes as evidence for the choice of . during the verification step in process now validates that .when executing the operation , every process uses the same randomness .the pseudorandomness property of the vrf ensures that no process apart from the master ( or anyone knowing its secret seed ) can distinguish from truly random bits .this depends crucially on the condition that _ tag _ is used only once as input to the vrf .hence this solution yields a deterministic pseudorandom output that achieves the desired unpredictability and randomness in many cases , especially against entities that are not part of the bft system .note that simply handing over the seed of a cryptographic pseudorandom generator to all processes and treating the generator as part of a deterministic application would be predictable for the slave processes and not pseudorandom .of course , if the master is faulty then it can predict the value of , leak it to other processes , and influence the protocol accordingly .the protocol should leave as little as possible choice to the master for influencing the value of _ tag_. it could be derived from an identifier of the protocol or bft system `` instance , '' perhaps including the identities of all processes , followed by a uniquely determined binary representation of the operation s sequence number .if one assumes all operations are represented by unique bit strings , a hash of the operation itself could also serve as identifier . _ distributed cryptography _ or , more precisely , _ threshold cryptography _ distributes the power of a cryptosystem among a group of processes such that it tolerates faulty ones , which may leak their secrets , fail to participate in protocols , or even act adversarially against the other processes . threshold cryptosystems extend cryptographic secret sharing schemes , which permit the process group to maintain a secret such that or fewer of them have no information about it , but any set of _ more _ than can reconstruct it . a _ threshold public - key cryptosystem ( t - pkcs ) _, for example , is a public - key cryptosystem with distributed control over the decryption operation .there is a single public key for encryption , but each process holds a _ key share _ for decryption . when a ciphertext is to be decrypted , every process computes a decryption share from the ciphertext and its key share . 
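before continuing with how the decryption shares are combined , here is a minimal sketch of the secret - sharing primitive that underlies all of these threshold schemes ( python , textbook shamir sharing over a prime field ) . it only illustrates the threshold property — any $t+1$ shares reconstruct the secret , while $t$ shares reveal nothing — and is not how a production t - pkcs is implemented ; real schemes work in large cryptographic groups , use a secure randomness source , and add verifiability .

```python
import random

P = 2 ** 127 - 1   # a prime modulus (illustrative field size only)

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 shares reconstruct it.
    NOTE: `random` is not a cryptographically secure source of randomness."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# example: 3-out-of-5 reconstruction (t = 2)
shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789
```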
from any of these decryption shares , the plaintext can be recovered .usually the decryption shares are accompanied by zero - knowledge proofs to make the scheme robust .this models a non - interactive t - pkcs , which is practical because it only needs one round of point - to - point messages for exchanging the decryption shares ; other t - pkcss require interaction among the processes for computing shares .the public key and the key shares are generated either by a trusted entity before the protocol starts or again in a distributed way , tolerating faulty processes that may try to disrupt the key - generation protocol .a state - of - the - art t - pkcs is secure against _ adaptive chosen - ciphertext attacks _ , ensuring that an adversary can not obtain any meaningful information from a ciphertext unless at least one correct process has computed a decryption share . with a t - pkcsthe bft system can receive operations in encrypted form , order them first without knowing their content , and only decrypt and execute them after they have been ordered .this approach defends against violations of the causal order among operations .a _ threshold signature scheme_ works analogously and can be used , for instance , to implement a secure name service or a certification authority .practical non - interactive threshold signature schemes are well - known . to generate cryptographically strong and unpredictable pseudorandom bits , _ threshold coin - tossing schemes _have also been constructed .they do not suffer from the limitation of the vrf construction in the previous section and ensure that no single process can predict the randomness until at least one correct process has agreed to start the protocol .[ [ replication - with - threshold - cryptosystems . ] ] replication with threshold cryptosystems .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + threshold cryptosystems have been used in bft replication starting with the work reiter and birman .subsequently sintra and other systems exploited it as well with robust , asynchronous protocols . for integrating a threshold cryptosystems with a bft system ,no particular assumptions are needed about the structure of the atomic broadcast or even the existence of a leader .the distributed scheme can simply be inserted into the code that executes operations and directly replaces the calls to the cryptosystems . to be more precise ,suppose that invokes a call to one of the three cryptosystem functions discussed before ( that is , public - key decryption , issuing a digital signature , or generating random bits ) . the process now invokes the threshold algorithm to generate a corresponding share .if the threshold cryptosystem is non - interactive , the process sends this share to all others over the point - to - point links. then the process waits for receiving shares and assembles them to the result of the cryptographic operation . with interactive threshold schemes ,the processes invoke the corresponding protocol as a subroutine .ideally the threshold cryptosystem supports the same cryptographic signature structure or ciphertext format as the standardized schemes ; then the rest of the service ( i.e. 
, code of the clients ) can remain the same as with a centralized service .this holds for rsa signatures , for instance .this paper has introduced a distinction between three models for dealing with non - deterministicoperations in bft replication : _ modular _ where the application is a black box ; _ master - slave _ that needs internal access to the application ; and _ cryptographically secure _ handling of non - deterministicrandomness generation . in the past ,dedicated bft replication systems have often argued for using the master - slave model , but we have learned in the context of blockchain applications that changes of the code and understanding an application s logic can be difficult .hence , our novel protocol _ sieve _ provides a modular solution that does not require any manual intervention . for a bft - based blockchain platform , _ sieve _can simply be run without incurring large overhead as a defense against non - determinism , which may be hidden in smart contracts .we thank our colleagues and the members of the ibm blockchain development team for interesting discussions and valuable comments , in particular elli androulaki , konstantinos christidis , angelo de caro , chet murthy , binh nguyen , and michael osborne .this work was supported in part by the european union s horizon 2020 framework programme under grant agreement number 643964 ( supercloud ) and in part by the swiss state secretariat for education , research and innovation ( seri ) under contract number 15.0091 .j. bonneau , a. miller , j. clark , a. narayanan , j. a. kroll , and e. w. felten . : research perspectives and challenges for bitcoin and cryptocurrencies . in _ proc .36th ieee symposium on security & privacy _ , pages 104121 , 2015 . c. cachin , k. kursawe , f. petzold , and v. shoup .secure and efficient asynchronous broadcast protocols ( extended abstract ) . in _ advances in cryptology : crypto 2001 _ , volume 2139 of _ lecture notes in computer science _ , pages 524541 .springer , 2001 .k. croman , c. decker , i. eyal , a. e. gencer , a. juels , a. kosba , a. miller , p. saxena , e. shi , e. g. sirer , d. song , and r. wattenhofer . on scaling decentralized blockchains .in _ proc .3rd workshop on bitcoin and blockchain research _ , 2016 .a. doudou , b. garbinato , r. guerraoui , and a. schiper .muteness failure detectors : specification and implementation . in _ proc .3rd european dependable computing conference ( edcc-3 ) _ , volume 1667 of _ lecture notes in computer science _ ,pages 7187 .springer , 1999 .t. jager .verifiable random functions from weaker assumptions . in _ proc .12th theory of cryptography conference ( tcc 2015 ) _ , volume 9015 of _ lecture notes in computer science _ , pages 121143 .springer , 2015 .m. kapritsos , y. wang , v. quema , a. clement , l. alvisi , and m. dahlin .all about eve : execute - verify replication for multi - core servers . in _ proc .10th symp .operating systems design and implementation ( osdi ) _ , 2012 .a. lysyanskaya .unique signatures and verifiable random functions from the dh - ddh separation . in _ advances in cryptology : crypto 2002 _ , volume 2442 of _ lecture notes in computer science _ , pages 597612 .springer , 2002 .v. shoup and r. gennaro .securing threshold cryptosystems against chosen ciphertext attack .in _ advances in cryptology : eurocrypt 98 _ , volume 1403 of _ lecture notes in computer science _ ,pages 116 .springer , 1998 .t. 
swanson . consensus - as - a - service : a brief report on the emergence of permissioned , distributed ledger systems . report , apr . 2015 . url : http://www.ofnumbers.com/wp-content/uploads/2015/04/permissioned-distributed-ledgers.pdf . m. vukolić . the quest for scalable blockchain fabric : proof - of - work vs. bft replication . in _ open problems in network security , proc . ifip wg 11.4 workshop ( inetsec 2015 ) _ , volume 9591 of _ lecture notes in computer science _ , pages 112 - 125 . springer , 2016 .
service replication distributes an application over many processes for tolerating faults , attacks , and misbehavior among a subset of the processes . with the recent interest in blockchain technologies , distributed execution of one logical application has become a prominent topic . the established state - machine replication paradigm inherently requires the application to be deterministic . this paper distinguishes three models for dealing with non - determinism in replicated services , where some processes are subject to faults and arbitrary behavior ( so - called byzantine faults ) : first , the modular case that does not require any changes to the potentially non - deterministicapplication ( and neither access to its internal data ) ; second , master - slave solutions , where ties are broken by a leader and the other processes validate the choices of the leader ; and finally , applications that use cryptography and secret keys . cryptographic operations and secrets must be treated specially because they require strong randomness to satisfy their goals . the paper also introduces two new protocols . first , protocol _ sieve _ uses the modular approach and filters out non - deterministicoperations in an application . it ensures that all correct processes produce the same outputs and that their internal states do not diverge . a second protocol , called _ mastercrypt _ , implements cryptographically secure randomness generation with a verifiable random function and is appropriate for most situations in which cryptographic secrets are involved . all protocols are described in a generic way and do not assume a particular implementation of the underlying consensus primitive .
what is the longest curve on the unit sphere ? the most probable answer of any mathematically inclined person to this naive question is : there is no such thing , since any spherical curve of finite length can be made arbitrarily long by replacing parts of it by more and more `` wavy '' arcs ; see figure [ fig1 ] . rephrasing the initial query as `` what is the longest _ rope _ on the unit sphere ? '' makes a big difference . a rope in contrast to a mathematical curve forms a solid body with positive thickness , so that now this question addresses a packing problem with obvious parallels in everyday life . is there an optimal way of winding electrical cable onto the reel ? similarly , and economically quite relevant , can one maximize the volume of yarn wound onto a given bobbin , or how should one store long textile fibre band most efficiently to save storage space ? common to all these packing problems , in contrast to the classic kepler problem of optimal sphere packing , is that long and slender deformable objects are to be placed into a fixed volume or onto a given surface . nature displays fascinating packing strategies on various scales . extremely long strands of viral dna are packed very efficiently into the tiny phage head of bacteriophages , and chromatin fibres are folded and organized in various aggregates within the chromatid . to model a rope as a mathematical curve with positive thickness we follow the approach of gonzalez and maddocks who considered all triples of distinct curve points $x , y , z$ , and their respective circumcircle radius $r ( x , y , z )$ . the smallest of these radii determines the curve 's thickness \[ \delta[\gamma ] : = \inf_{x\not= y\not = z\not = x\atop x , y , z\in\gamma } r ( x , y , z ) .\] a positive lower bound on this quantity controls local curvature but also prevents the curve from self - intersections ; see figure [ fig2 ] . in fact , it equips the curve with a tubular neighbourhood of uniform radius $\delta[\gamma ]$ . before discussing the solvability of this maximization problem for various thickness values we would like to point out that every loop of positive thickness enjoys a strong geometric property , the presence of _ forbidden balls _ : _ any open ball of radius $\delta[\gamma ]$ _ for sufficiently large thickness . a direct consequence of the presence of forbidden balls is that problem ( p ) is not solvable at all if the prescribed thickness is strictly greater than $1$ : there are simply no spherical curves whose thickness exceeds the value $1$ . indeed , for any point $x$ on a spherical curve with thickness $>1$ there is a forbidden ball touching the unit sphere ( and therefore the curve as well ) in $x$ and containing all of the unit sphere but $x$ . however , this ball is forbidden , hence contains no curve point , so that $x$ is the only curve point on the sphere ; this settles problem ( p ) for $\theta > 1$ . if we intersect the union of all forbidden touching balls with the sphere , we arrive at the following property ( fgb ) : the spherical tubular neighbourhood does not intersect any open geodesic ball on the sphere whose boundary is tangent to the curve in at least one curve point . here the spherical radius is measured in the intrinsic distance on the sphere . one can imagine a bow tie consisting of two open geodesic balls of spherical radius attached to the curve at their common boundary point . this bow tie can be moved freely along the curve without ever hitting any part of the curve . ( figure caption , right panel : the unit sphere cut along a normal plane that is orthogonal to the curve at the marked point ; the grey spatial forbidden ball rotated along the dashed circle generates a forbidden geodesic ball on the sphere . )
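as a concrete illustration of the circumcircle - based thickness just defined , the following python sketch estimates $\delta[\gamma ]$ for a densely sampled curve by minimizing $r ( x , y , z )$ over all triples of sample points ; this brute - force discretization is only a crude approximation of the true infimum and is meant purely as an illustration .

```python
import numpy as np
from itertools import combinations

def circumradius(x, y, z):
    """Circumcircle radius r(x, y, z) of three points in R^3."""
    a, b, c = np.linalg.norm(y - z), np.linalg.norm(x - z), np.linalg.norm(x - y)
    s = 0.5 * (a + b + c)
    area_sq = s * (s - a) * (s - b) * (s - c)       # Heron's formula
    if area_sq <= 0.0:
        return np.inf                               # (nearly) collinear points
    return a * b * c / (4.0 * np.sqrt(area_sq))

def thickness_estimate(points):
    """Minimise r over all triples of sample points (crude approximation)."""
    return min(circumradius(points[i], points[j], points[k])
               for i, j, k in combinations(range(len(points)), 3))

# example: samples of the equator of the unit sphere
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
equator = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(thickness_estimate(equator))   # ~1.0
```

for the equator every triple of distinct sample points has circumradius $1$ , so the estimate reproduces the thickness value $1$ that plays a special role below .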
the full strength of property ( fgb ) is frequently used later on to completely classify infinitely many explicit solutions of problem ( p ) . for the moment it helps us to quickly solve that problem for $\theta = 1$ . take any point $x$ on an arbitrary spherical curve $\gamma$ with thickness $1$ . the two forbidden open geodesic balls of spherical radius $\pi/2$ touching $\gamma$ in $x$ are two complementary hemispheres that according to ( fgb ) do not intersect $\gamma$ ; hence $\gamma$ must be the equator , as the only closed curve contained in the complement of these two hemispheres . thus the equator is the only spherical curve with thickness $1$ and hence , up to congruence , the unique solution to problem ( p ) for $\theta = 1$ . but what about other thickness values : is the variational problem ( p ) solvable at all ? the answer is yes , and once one has analyzed the continuity properties of the constraint $\delta[\gamma ] \ge \theta$ one obtains the following existence result . [ thm:1.1 ] for every $\theta \in ( 0 , 1 ]$ problem ( p ) possesses ( at least ) one solution . in addition , every such solution attains the minimal thickness , i.e. , $\delta[\gamma ] = \theta$ . a spherical curve $\gamma$ with thickness $\delta[\gamma ] \ge \theta = \sin\vartheta$ carries an open tubular neighbourhood which equals the union of subarcs of great circles of uniform length on the sphere . each of these great - arcs is centered at a curve point , and is orthogonal to the respective tangent vector of $\gamma$ at that point . if two such great - arcs centered at different points had a common point , then this point would be contained in one of the forbidden geodesic balls touching $\gamma$ , which is excluded by property ( fgb ) . therefore this union of great - arcs is disjoint , and we conclude that a curve with ( spatial ) thickness $\delta[\gamma ] = \theta = \sin\vartheta$ possesses an embedded spherical tubular neighbourhood of geodesic radius $\vartheta$ whose area is proportional to the length of the curve . consequently , any curve with thickness $\delta[\gamma ] \ge \theta$ obeys a corresponding upper bound on its length ; is there a curve for which this tubular neighbourhood covers all of the sphere , so that equality holds in this relation ? if we relax for a moment our assumption that we search for one connected closed curve then we easily find sphere - filling ensembles of curves . for each $n \in \mathbb{n}$ the stack of $n$ latitudinal circles with mutual spherical distance $2\vartheta_n$ , $\vartheta_n : = \pi/ ( 2n )$ , forms a set of spherical curves each with spatial thickness at least $\theta_n = \sin\vartheta_n$ . their mutually disjoint spherical tubular neighbourhoods completely cover the sphere , i.e. , the sum of their areas equals $4\pi$ . this collection of latitudinal circles can now be used to construct _ one _ closed sphere - filling curve . let us explain in detail how , for the case shown in figure [ fig5 ] . we cut the sphere with the latitudinal circles along a longitudinal into an eastern hemisphere and a western hemisphere . each hemisphere now contains a stack of latitudinal semicircles . keeping the western hemisphere fixed we rotate the eastern hemisphere by a suitable angle such that all the endpoints of the now turned semicircles on the eastern hemisphere meet endpoints of the semicircles on the western one ; see figure [ fig5 ] . ( caption of figure [ fig5 ] : the third and the fifth image depict the sphere - filling curves for two different turning angles ; the fourth image , in contrast , contains a disconnected sphere - filling ensemble with two components . )
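the covering statement for the stack of latitudinal circles can be checked by elementary spherical geometry : the geodesic tube of radius $\vartheta$ around a latitudinal circle at polar angle $a$ is the band between the polar angles $a-\vartheta$ and $a+\vartheta$ , with area $2\pi ( \cos ( a-\vartheta ) - \cos ( a+\vartheta ) )$ on the unit sphere . the python sketch below adds up these band areas ; the placement of the circles at polar angles $( 2j-1 ) \vartheta_n$ is the natural reading of the construction above , but it is stated here as an assumption of the sketch .

```python
import math

# Numerical check that the geodesic tubes of radius vartheta around a stack
# of equally spaced latitudinal circles tile the unit sphere.  The placement
# (polar angles (2j-1)*vartheta with 2*n*vartheta = pi) is an assumption made
# for this illustration.

def band_area(a, half_width):
    """Area of the band |polar angle - a| < half_width on the unit sphere."""
    lo, hi = max(a - half_width, 0.0), min(a + half_width, math.pi)
    return 2.0 * math.pi * (math.cos(lo) - math.cos(hi))

def total_tube_area(n):
    vartheta = math.pi / (2 * n)                   # geodesic tube radius
    centers = [(2 * j - 1) * vartheta for j in range(1, n + 1)]
    return sum(band_area(a, vartheta) for a in centers)

for n in (2, 3, 4, 10):
    print(n, total_tube_area(n), 4.0 * math.pi)    # band areas add up to 4*pi
```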
this modified collection of semicircles still has the same spherical thickness and is sphere - filling , since in the construction the sphere - filling stack of the original latitudinal circles was only cut orthogonally and reunited along one longitudinal , which does not change the thickness and sphere - filling property of the ensemble . we also observe that this new ensemble , which resembles to some extent the seamlines on a tennis ball , forms _ one _ closed curve , hence solves our problem at least for this particular given spatial thickness . are there other solutions for this thickness ? why not continue rotating the eastern hemisphere against the fixed hemisphere to obtain more solutions ? it turns out that a total rotation by twice the original angle yields two connected components , which is not what we are looking for . but turning by three times the original angle leads to another solution : a new single closed loop not congruent to the first one ; see figure [ fig5 ] . one can show that this procedure works well for arbitrary $n$ , and with a little elementary algebra we can determine the exact number of solutions : [ thm:1.2 ] for each $n$ and each turning parameter $k$ whose greatest common divisor with the relevant modulus equals $1$ , the construction described above starting with latitudinal circles with the spherical distance given above and rotating the eastern hemisphere against the fixed western hemisphere by the corresponding angle leads to a family of explicit piecewise circular solutions of the variational problem ( p ) for prescribed minimal thickness $\theta_n$ , whose exact number is given by the eulerian totient function $\varphi$ from number theory : $\varphi ( m )$ gives the number of integers $k \in \{ 1 , \dots , m \}$ so that the greatest common divisor of $k$ and $m$ equals $1$ . in our example above we indeed found two explicit solutions , obtained by rotating the eastern hemisphere by the two admissible turning angles . figure [ fig6 ] depicts such sphere - filling closed curves for various $n$ , and one notices a striking resemblance with certain so - called _ turing patterns _ observed and analyzed in chemistry and biology as characteristic concentration distributions of different substances ; see , e.g. , . in that context , the patterns are caused by diffusion - driven instabilities ; here , in contrast , the shape of solutions is a consequence of a simple variational principle . similar constructions for the thickness values $\theta_n$ , starting from an initial ensemble of semicircles together with one or two poles on the sphere , lead to two disjoint families of sphere - filling _ open curves _ distinguished by the relative position of the two endpoints on the sphere ; see figure [ fig7 ] . for all even $n$ the respective open sphere - filling curves have antipodal endpoints , which is not the case if $n$ is odd . let us point out that these open curves occur in a different context of statistical physics , namely as two of three possible configurations of ordered phases of long elastic rods densely stuffed into spherical shells ; see , in particular , their figures 4a and 4c . those studies aimed at explaining the possible nematic order of densely packed long dna in viral capsids . for each positive integer $n$ we have constructed explicitly longest closed ropes of thickness $\theta_n$ on the unit sphere .
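the counting in theorem [ thm:1.2 ] only needs the eulerian totient function ; a few lines suffice to compute $\varphi$ and to list the admissible turning parameters ( python ) . since the exact modulus to which the gcd condition refers is not spelled out in this excerpt , the functions below are kept generic in their argument $m$ .

```python
from math import gcd

def totient(m):
    """Euler's totient: the number of k in 1..m with gcd(k, m) == 1."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def admissible_turns(m):
    """Turning parameters k coprime to m, i.e. those that yield a single
    closed sphere-filling curve in the construction described above."""
    return [k for k in range(1, m + 1) if gcd(k, m) == 1]

print(totient(4), admissible_turns(4))    # -> 2 [1, 3]
print(totient(6), admissible_turns(6))    # -> 2 [1, 5]
```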
are there more ?we know there are , for intermediate values by theorem [ thm:1.1 ] , but even if we stick to these specific countably many values of given minimal thickness we might find more sphere - filling and thus length maximizing curves of considerably different shapes ?the answer may be surprising , but , no , up to congruence our solutions are the only ones , and this `` uniqueness '' result is actually a consequence of a complete classification of sphere - filling thick curves : [ thm:1.3 ] if the spherical tubular neighbourhood ] satisfies then there exist positive integers , and with greatest common divisor equal to , such that =\theta_n= \sin\vartheta_n, ] ; see part a below .this principle guarantees then that the number of possible local touching situations between the curve and geodesic balls with radius equal to ] touches the boundary of a geodesic ball in in at least two non - antipodal points with in this situation the boundary of the strictly smaller geodesic ball for which and are antipodal , is intersected transversally by in and , which means that the open geodesic ball contains curve points ; see figure [ fig8 ] . on the other hand, contains no further curve point different from and , since this would imply for the corresponding ( euclidean ) circumcircle contradicting our assumption \ge\theta ] ( so that , ) we use the same argument as the one that led to to show that there are no curve points in thus we have proven the a closed spherical curve with ( spatial ) thickness \ge\theta=\sin\vartheta ] in terms of their local behaviour when touching geodesic balls of spherical radius : for any open geodesic ball disjoint from and there are plenty of those , e.g. , all forbidden balls by ( fgb ) then one of the following three touching situations is guaranteed for the intersection : * b. possible local touching situations .* 1 . touches in exactly two antipodal points , i.e. , with , or 2 .the intersection is a relatively closed semicircle of spherical radius , or 3 .this intersection equals the full geodesic circle to see this we notice first that for a sphere - filling curve the relatively closed intersection is nonempty , since otherwise a slightly larger ball for some small positive would not contain any curve point , which leads to and hence contradicting .similarly , one can rule out that the set is contained in a relatively open semicircle on , since then two extremal points realizing the diameter of would have spherical distance this is one of the frequent occasions that the touching principle ( tp ) comes into play .it implies that the whole circular subarc of connecting and , is contained in and therefore equals in this situation .but the fact that lifts off the geodesic circle at and before completing a full semicircle allows us to move the closed ball slightly `` away '' from , that is , in the direction orthogonal to and away from the geodesic arc connecting and this way we obtain a slightly shifted closed ball of the same radius without any contact to , a situation that we have ruled out above .therefore , is not contained in any relatively open semicircle on if is contained in a relatively closed semicircle we may assume that it contains apart from the antipodal endpoints of that semicircle also at least one third point , otherwise we were in situation ( a ) and could stop here. 
therefore , by virtue of the touching principle ( tp ) coincides completely with that closed semicircle , which is option ( b ) .if , however , is not contained in any semicircle we can simply look at one point and its antipodal point .if happens to be also in , then both semicircles connecting and would contain further curve points and therefore again by the touching principle , and we end up with option ( c ) .if , then the largest open subarc of containing but no point of must be shorter than unless is contained in a semicircle , a situation we brought to a close before . applying the touching principle to the two endpoints of we find in fact that lies on the subarc of connecting these endpoints on , which exhausts the last possible situation to verify that our list of situations ( a)(c ) is complete .we are going to use the local structure established in parts a and b to prove geometric rigidity of sphere - filling curves with positive spatial thickness * c. global patterns of sphere - filling curves . *if intersects a normal plane orthogonally in distinct points whose mutual spherical distance equals , then is even and contains a semicircle of spherical radius in each of the two hemispheres bounded by . if contains one latitudinal semicircle , then for some and the portion of in the corresponding hemisphere consists of the whole stack of latitudinal semicircles ( including ) with mutual spherical distance before providing the proofs for these rigidity results let us explain how we can combine these to establish the * proof of theorem [ thm:1.3 ] . * the goal is to show the existence of a latitudinal semicircle of spherical radius contained in in order to apply the global pattern ( c2 ) , which then assures that consists of a stack of latitudinal semicircles with mutual spherical distance in one hemisphere , say in .this behaviour in leads to the characteristic intersection point pattern in the longitudinal circle needed in order to apply ( c1 ) , which in turn guarantees the existence of a semicircle of radius on whose endpoints meet orthogonally .again property ( c2 ) leads to a whole stack of equidistant semicircles now on .our construction described in section [ sec:2 ] finally reveals the only possible loops made of two such stacks of equidistant latitudinal semicircles meeting orthogonally , which completes the proof of the classification theorem .the logic of proof resembles a ride on the merry - go - round ; ( c1 ) produces the semicircle on necessary to use ( c2 ) to obtain the stack of semicircles on , which itself generates the point pattern needed to apply ( c1 ) on and finish the task via ( c2 ) on the only problem is : how do we enter the merry - go - round ?we have to show that one portion of is a semicircle of spherical radius without assuming the intersection point pattern needed in ( c1 ) .let be the integer such that for a fixed point we walk along a unit speed geodesic ray emanating from in a direction orthogonal to at in search of such a semicircle .the geodesic ball is a forbidden ball by means of ( fgb ) , i.e. 
, where denotes the point reached on the geodesic ray after a spherical distance according to the possible local touching situations we find the desired semicircle on ( option ( b ) or ( c ) in b ) , unless the antipodal point is contained in in that case we continue along the same geodesic ray passing through orthogonally to , until we either find a closed semicircle on one of the geodesic circles , or and all `` antipodal points '' are contained in , so that in other words , either we have found the desired semicircle during the walk along , or we have walked once around the whole longitudinal circle traced out by generating equidistant points where intersects orthogonally . but this is exactly what is needed to apply ( c1 ) to finally establish the existence of the semicircle we are looking for . one final comment on why this exact quantization takes place , i.e. , why we find so that the walk along pinpointing the centers of geodesic balls on the way , actually leads exactly back to the starting point the successive localization of forbidden balls according to ( fgb ) and the possible local touching situations yield the fact that all open geodesic balls for , are disjoint from if , for instance , the walk had stopped too late since the step size was too large , , then , that is , , a contradiction .a similar argument works if we had stopped our walk too early .let us establish the global patterns ( c1 ) and ( c2 ) in more detail since they served as the core tools in the proof of our classification theorem .we start with the * proof of ( c1)*. here it suffices to focus on one of the two hemispheres and determined by , say on .since is simple and closed the curve can leave merely as often as it enters , which immediately gives for some .moreover , is homeomorphic to a flat disk so that we can find nearest neighbouring exit and entrance points with minimal spherical distance such that the closed subarc connecting and satisfies we will show that contains the desired semicircle of spherical radius .since intersects orthogonally we infer from ( fgb ) that the open geodesic ball containing as antipodal boundary points , is disjoint from .if there were a third point distinct from and , then according to the touching principle ( tp ) the whole semicircle on with endpoints and would be contained in , and we were done .else we trace the open spherical region bounded by with geodesic rays emanating from arbitrary points orthogonally into the region notice that is disjoint from , and that and where the argument of indicates how long one has to travel along the geodesic ray to reach the destination point .in addition , the forbidden ball property ( fgb ) implies and therefore for all points see figure [ fig9 ] . emanating from points on the subarc to trace the enclosed spherical region .the depicted antipodal touching is excluded to hold throughout by virtue of brouwer s fixed point theorem ., width=151 ] according to part b either ( the antipodal situation ( a ) ) , or contains a semicircle containing itself the point for some this semicircle lies completely in the western hemisphere , which concludes the proof of ( c1 ) . 
To see the latter, assume contrariwise that there exists a point on . Then by connectivity also or would lie on the semicircle, too, which immediately implies that and hence also lies on the original geodesic circle, a situation that we had excluded already. It remains to exclude the antipodal touching throughout the subarc . We use Brouwer's fixed point theorem for the continuous map defined by , which actually maps into . Relation in fact guarantees that ; hence is either contained in , in which case we are done, or lies in , but then the antipodal partner of would also lie on , which was excluded earlier. Consequently, Brouwer's theorem is applicable and leads to a fixed point, which implies because parametrizes a unit speed great circle on . But this contradicts our assumption that .

[Figure caption fragment: ... of ... , since its curvature is too large; if the stack is not high enough ( ), then the last semicircle is still on the correct hemisphere but has a spherical radius which is again too small for the thick curve.]

For an infinite countable number of thickness values we have established a complete picture of the solution set for problem (P), using the sphere-filling property to a large extent. The general existence theorem, theorem [thm:1.1], however, guarantees the existence of longest ropes on the unit sphere also for all intermediate thickness values. What are their actual shapes? Theorem [thm:1.3] ascertains that those solutions cannot be sphere-filling. In earlier work we constructed a comparison curve that could serve as a promising candidate for prescribed minimal thickness, but this question remains to be investigated, as do the interesting connections to Turing patterns and to the statistical behaviour of long elastic rods under spherical confinement mentioned in Sections 1 and 2. In addition, if one substitutes the unit sphere by other supporting manifolds such as the standard torus, then the issue of analyzing the shapes of optimally packed ropes is wide open.

Katzav, E.; Adda-Bedia, M.; Boudaoud, A. A statistical approach to close packing of elastic rods and to DNA packaging in viral capsids. Proceedings of the National Academy of Sciences, USA, 103 (2006), 18900-18904.

Kyosev, Y. Numerical analysis for sliver winding process with additional can motion. In: 5th International Conference Textile Science 2003 (TEXSCI 2003), ISBN 80-7083-711-X, TU Liberec, Czech Republic (2003), pp. 330-334.
What is the longest rope on the unit sphere? Intuition tells us that the answer to this packing problem depends on the rope's thickness. For a countably infinite number of prescribed thickness values we construct and classify all solution curves. The simplest ones are similar to the seamlines of a tennis ball; others exhibit a striking resemblance to Turing patterns in chemistry, or to ordered phases of long elastic rods stuffed into spherical shells.

[Figure caption: inserting more and more oscillations into a given curve, its length can be made arbitrarily large.]
protein secondary structure elements ( psse ) are the basic building blocks of proteins and their form and arrangement is of fundamental importance for protein folding and function .they have been first predicted by pauling and corey on the basis of hydrogen bonding and were later confirmed by x - ray diffraction experiments .the localization of psses in protein structure databases is one of the most basic tasks in bioinformatics and various methods have been developed for this purpose .we mention here dssp ( define secondary structure of proteins) and stride ( structural identification ) , which assign psses on the basis of geometrical , energetic and statistical criteria and which are the most widely used approaches .the result are contiguous domains along the amino acid sequence of the protein , which are labeled as `` -helix '' , `` -strand '' , etc .there is no precise and universally accepted definition for psses , and therefore each method produces slightly different results .the geometrical variability of these psses , which depends on the global protein fold , is not explicitly considered by these approaches .the published screwfit method allows for both assignment and geometrical description of psses .it describes the geometry of the protein backbone by a succession of screw motions linking successive groups in the peptide bonds , from which psses can be assigned on the basis of statistically established thresholds for the local helix parameters .the latter have been derived by screening the astral database , which provides representative protein structure sets containing essentially one secondary structure motif .the screwfit description is intuitive and bears some ressemblances with the p - curve approach proposed by sklenar , etchebest and lavery , in the sense that both methods lead to a sequence of local helix axes , the ensemble of which defines an overall axis of the protein under consideration. uses , however , a set of parameters and was originally developed to pinpoint changes in protein structure due to external stress .the experimental basis for the automated assignment of psses in proteins is x - ray crystallography , which yields information about the positions of the heavy atoms in a protein . although the number of resolved protein structures increased almost exponentially during the last two decades , the fraction of proteins for which the atomic structure is known is still very small .low resolution techniques , like electron microscopy , are an additional source of information and in this context the description of psses must be correspondingly simplified , in order to be useful in structure refinement .a natural and commonly used coarse - grained description of proteins is the -model , where each residue is represented by its respective -atom on the protein backbone . 
to our knowledge ,_ were the first to publish a method of secondary structure assignment on the basis of the -positions , and different approaches for that purpose have been published since then .like dssp and stride , these methods aim at assigning psses on a true / false basis and the underlying models for this decision are not exploited or not exploitable for a more detailed description of protein folds .the motivation of this paper was to develop an extension of the screwfit method which works only with the -positions , maintaining the capability to describe the global fold of a protein by a minimalistic model and to assign psses .the method is described in section [ sec : method ] and two applications are presented and discussed in section [ sec : applications ] . a short rsum with an outlook concludes the paper .we consider the ensemble of the -positions , , as a discrete representation of a space curve , , where $ ] and ( ) are the basis vectors of a space - fixed euclidean coordinate system .imposing that at equidistantly sampled values of , we define a continuous space curve by a piecewise polynomial interpolation of the - .the values for and are arbitrary and one may in particular choose and , such that ., where are , respectively , the tangent vector , the normal vector , and the bi - normal vector to the curve .the dot denotes a derivative with respect to . interpolating the space curve around each -position with a second order polynomial involving the respective left and right neighbors , we obtain for . at the end points of the chainone can only use forward and backward differences , respectively , and a second - order interpolation of the -space to identical -planes at the first and last two -positions , which is not compatible with a helicoidal curve . in this casewe resort to third - order interpolation , such that we note here that the frenet frames constructed at the -positions 2 are identical with the so - called `` discrete frenet frames '' introduced in ref . .having constructed the frenet frames , the next step consists in constructing the screw motions which link consecutive frames along the protein main chain . for this purpose , the basis vectors must be referred to their respective anchor points , . defining the `` tips '' of the frenet basis vectorsare located at and the mathematical problem consists in finding the screw parameters for the mappings for .in general , a rigid body displacement can be expressed in the form where is the center of rotation , is a rotation matrix , and a translation vector . by construction , elements of the rotation matrix can be expressed in terms of three independent real parameters .one possible choice is to use the rotation angle , , and the unit vector , , pointing into the direction of the rotation axis . 
for this parametrization, has the form where ( ) is the projector on and is a skew - symmetric matrix which is defined by the relation for an arbitrary vector .the elements of are , where ( ) are the components of the totally antisymmetric levi - civita tensor .we recall that , and zero otherwise .the parameters of the rigid - body displacement ( [ eq : rbdisplacement ] ) depend on the choice of the rotation center , , and there is a special choice , , for which the translation vector points into the direction of the rotation axis , such that .this is known as chasles theorem and the corresponding rigid body displacement describes a screw motion , using that , one shows easily that is the projection , the position is not uniquely defined , but stands for all points on the screw axis . defining to be the point for which the distance is a minimum , the screw axis is defined through where and is the component of which is perpendicular to the rotation axis .we note that .the radius of the screw motion is defined through and it follows from ( [ eq : sperp ] ) that assuming that the frenet frames at the -positions have been constructed , the fold of a protein is defined by the sequence of screw motions , where for and .the corresponding parameters are computed as follows : 1 .[ eq : translations ] determine the translation vectors 2 .perform a rotational least squares fit by minimizing the target function with respect to four quaternion parameters , , which parametrize the rotation matrix according to the quaternion parameters are normalized such that , which leaves three free parameters describing the rotation .we note here only that the minimization of ( [ eq : leastsquaresquat ] ) leads to an eigenvector problem for the optimal quaternion , which can be efficiently solved by standard linear algebra routines , and that the corresponding eigenvalue is the squared superposition error .the latter is zero for superposition of frenet frames , since two orthonormal and equally oriented vector sets can be perfectly superposed .it is also worthwhile noting that the upper limit in the sum in ( [ eq : leastsquaresquat ] ) can be changed from 3 to 2 , since two linearly independent vectors with the same origin , here and , suffice to define a rigid body .3 . extract and from the quaternion parameters .this can be easily achieved by expoiting the relations here and in the following the index is dropped .several cases have to be considered .if , where depends on the machine precision of the computer being used , we compute a `` tentative rotation axis '' then we check if .if this is the case we set in case that we set this corresponds to replacing before evaluating and according to ( [ eq : n1 ] ) and ( [ eq : phi1 ] ) .such a replacement is possible since the elements of are homogeneous functions of order two in the quaternion parameters , such that .+ for the sake of completeness , we the case that , which corresponds to a pure translation and can not occur in our application to protein backbones . in this caseone would set and .4 . using the parameters and defining the positions to be the rotation centers , , compute for 1 .the positions on the local screw axes according to relation ( [ eq : sperp ] ) , 2 .the local helix radii according to relation ( [ eq : rho ] ) . 
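To make the procedure above concrete, the following Python fragment sketches both the discrete Frenet frames at the Cα positions and the screw parameters of the rigid motion linking consecutive frames. It is an illustration under our own naming conventions, not the authors' implementation: the chain ends, which the text treats by third-order interpolation, are simply dropped, and the quaternion least-squares fit of step 2 is replaced by the direct frame-to-frame rotation, which gives the same result here because two orthonormal, equally oriented triads can be superposed with zero residual, as noted above.

```python
# A minimal numerical sketch (not the authors' code) of the screwframe construction:
# discrete Frenet frames at the C-alpha positions and the screw parameters of the
# rigid motion linking consecutive frames.  All names are ours; the chain ends are
# dropped, and the quaternion fit is replaced by the direct frame-to-frame rotation.
import numpy as np

def frenet_frames(ca):                         # ca: (N, 3) array of C-alpha positions
    t = ca[2:] - ca[:-2]                       # central first differences (interior points)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    d2 = ca[2:] - 2.0 * ca[1:-1] + ca[:-2]     # central second differences
    n = d2 - (d2 * t).sum(axis=1, keepdims=True) * t
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    b = np.cross(t, n)                         # binormal completes the right-handed triad
    return ca[1:-1], np.stack([t, n, b], axis=2)   # anchor points and 3x3 frame matrices

def screw_parameters(p0, f0, p1, f1):
    """Screw motion mapping anchor/frame (p0, f0) onto (p1, f1)."""
    r = f1 @ f0.T                              # rotation carrying the columns of f0 into f1
    a = p1 - r @ p0                            # translation of the rigid motion x -> r x + a
    phi = np.arccos(np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0))
    w = np.array([r[2, 1] - r[1, 2], r[0, 2] - r[2, 0], r[1, 0] - r[0, 1]])
    u = w / np.linalg.norm(w) if np.linalg.norm(w) > 1e-12 else np.array([0.0, 0.0, 1.0])
    d_par = np.dot(a, u)                       # translation along the screw axis (Chasles)
    # any point q on the screw axis solves (I - r) q = a - d_par * u; the system is
    # singular along u, so a least-squares solution is used, and the local helix radius
    # is the distance of the anchor point from that axis
    q, *_ = np.linalg.lstsq(np.eye(3) - r, a - d_par * u, rcond=None)
    v = (p0 - q) - np.dot(p0 - q, u) * u
    return phi, u, d_par, q, np.linalg.norm(v)

# toy usage: for a perfect helix the screw angle and radius come out (nearly) constant
s = np.arange(30) * 0.6
helix = np.column_stack([2.3 * np.cos(s), 2.3 * np.sin(s), 0.9 * s])
anchors, frames = frenet_frames(helix)
params = [screw_parameters(anchors[i], frames[i], anchors[i + 1], frames[i + 1])
          for i in range(len(anchors) - 1)]
```

For a helicoidal test curve this sketch returns an essentially constant screw angle and radius, which is the behaviour exploited in the numerical test discussed further below.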
to quantify the regularity of psses ,we introduce the distance measure where .for an ideal psse , where all consecutive frenet frames are related by the same screw motion , is strictly zero .this measure of non - ideality deviates from the `` straightness '' parameter in the screwfit algorithm , which is defined as with , and which defines ideality of psses through the cosine of the angle between subsequent local screw axes . at one point of the helicoidal curve defined in eq .( [ eq : helicoidalcurve ] ) ( red solid line ) . setting and ,the latter is shown for one turn , together with equidistantly spaced sampling points ( red points ) .the blue line is the helix axis and the blue points correspond to the rotation centers ( ) .the figure has been produced with the mathematica software . ] to test the numerical construction of frenet frames , we consider a helicoidal curve and compare the exact frenet frames with the corresponding numerical approximations .the parametric representation of the curve is where is the radius of the helix and its pitch is .[ fig : frenetframe ] shows the form of the curve ( [ eq : helicoidalcurve ] ) for one complete turn ( red line ) , setting and in arbitrary length units . defining the matrix , it follows from ( [ eq : helicoidalcurve ] ) that using the method described in section [ sec : frenetframes ] , we construct numerical approximations of the frenet bases ( [ eq : frenetexact ] ) at equidistant sampling points , , which are shown as red dots in fig .[ fig : frenetframe ] . from these frenet baseswe construct the axis points ( blue dots ) , which are shown together with the exact screw axis ( blue line ) . for the first and the last axis point onenotices a visible offset from the latter . where for a perfect overlap of and one should have , we note that is the frobenius norm of .[ fig : frenetframeerror ] shows corresponding to the frenet basis in fig .[ fig : frenetframe ] and confirms the slight offset of the first and last axis point from the ideal screw axis . ) for the bases and at the red points in fig .[ fig : frenetframe ] . ]in the following we consider two applications of the coarse - grained model for protein secondary structure , which has been described in the previous section and which will be referred to as screwframe in the following . the first application concerns the construction of a tube model for a protein from the screwframe parameters and in the second application ,these parameters are used for a comparative study of screwframe and dssp for secondary structure assignment .-curve ( red ) of myoglobin ( pdb code 1a6 g ) and b - spline curve ( blue ) linking the screw motion centers . *bottom : * tube representation of the -curve .the local tube radii equal the respective helix radii of the screw motions linking the frenet frames and ( ) .the figure has been produced with the mathematica software . ] as a first application we consider the screwframe model for myoglobin , which is an oxygen - binding protein in muscular tissues .myoglobin is composed of 151 amino acids which fold into a globular form and the dominant psses are -helices . for our demonstrationwe use the crystallographic structure 1a6 g of the protein data bank . the red and blue line in the upper part of fig .[ fig : screwaxis ] display , respectively , the space curve defined by the positions of the -atoms and the space curve linking the corresponding screw motion centers . 
both space curves are constructed by a piecewise polynomial interpolation of second order .the blue line indicates the global fold of the protein , where ideal psses appear simply as straight segments . in the following we refer to this line as the protein screw axis .it plays the same role as the `` overall protein axis '' in the p - curve algorithm , although its construction is different .the lower part of the figure shows the corresponding `` tube model '' , where the axis of the tube equals the protein screw axis and the local tube radius corresponds to the radius of the local screw motion . as in the original screwfit algorithm , the screw radius allows for a discrimination of different types of psses ( see table [ tab : rhomodel ] ) ..[tab : rhomodel ] screw radii in nm for standard model structures generated chimera . since screwfit uses the -atoms in the peptide planes as reference points for the ( pure ) rotations , whereas as screwframe uses the -atoms , the radii determined by screwfit are systematically smaller than those obtained from screwframe .[ cols="<,<,<,<,<,<",options="header " , ] we define a -strand as a segment of consecutive c atoms whose screw transformations satisfy where and are the mean values and standard deviations of the gaussian distributions for the parallel and antiparallel peaks in fig .[ fig : rho - detail ] . the numerical parameters in these definitionswere chosen to make our definitions match the secondary structure assignments made by the dssp method .there is a fundamental difference between our approach and the dssp method for defining -strands .the screwframe approach looks for a regular structure along the peptide chain , whereas the dssp method identifies hydrogen bonds between the strands that make up a -sheet .screwframe thus finds individual strands , which can be paired up to identify sheets in a separate step .a strand must consist of at least three consecutive residues in order to be considered regular ; in fact , the regularity measure is defined in terms of the difference of two consecutive screw transformations , each of which connects two residues .dssp needs to look at two strands simultaneously in order to identify structures , but has no minimal length condition and in fact admits -sheets as mall as a single h - bonded residue pair .not important , but they must be understood for interpreting the following comparison between the two methods .-strands as identified by screwframe and dssp .the strong localization of the distribution around the diagonal shows the similarity between these two assignments .* bottom : * the distribution of the lengths of identified -helices , left for dssp , right for screwframe .the peak at very short strands in the dssp distribution is absent from the screwframe results because screwframe needs at least three consecutive residues to recognize a regular structure.,title="fig : " ] + -strands as identified by screwframe and dssp .the strong localization of the distribution around the diagonal shows the similarity between these two assignments .* bottom : * the distribution of the lengths of identified -helices , left for dssp , right for screwframe .the peak at very short strands in the dssp distribution is absent from the screwframe results because screwframe needs at least three consecutive residues to recognize a regular structure.,title="fig : " ] a one - to - one comparison of secondary structure elements from two different assignment methods is not of particular interest , because an exact match is 
the exception rather than the rule .the inherent fuziness of secondary structure definitions leads to arbitrary choices and thus inevitable differences .the most frequent deviation between two assignments is the end points of secondary structure elements , where a difference of one or two residues is common and acceptable .another frequent deviation concerns deformed secondary structure elements , which one method may identify as a single element whereas another one recognizes it as multiple distinct elements .we therefore chose a statistical comparison to compare the screwframe results to those of dssp , which is shown in figs .[ fig : alpha - comparison ] for -helices and [ fig : beta - comparison ] for -strands .we consider two quantities : ( 1 ) the total number of residues of a given structure which are inside a recognized secondary - structure element , and ( 2 ) the length of each individual secondary - structure element .we compute the first quantity for both methods and show their joint distribution ( upper plot in the two figures ) .for the vast majority of structures , the two residue counts are close to equal , which means that neither method yields systematically more or longer secondary - structure elements than the other .the lower plots show the distributions of the lengths of individual secondary - structure elements . for -helices, dssp has a fatter tail ( helices of length 20 or more ) , whereas screwframe identifies a larger number of short helices .the reason for these differences is that screwframe tends to split up kinked helices which dssp identifies as single units . for -strands ,we notice that dssp identifies many more very short elements .this is due to the different definitions : a single -type hydrogen bond is sufficient to define a -sheet in dssp , but screwframe requires at least three consecutive residues to identify any regular structure .we have presented a generalization of the screwfit method for protein structure assignment and description , which uses only the positions of the -atoms along the protein backbone . as in the screwfit approach ,the global protein fold is described as a succession of screw motions relating consecutive recurrent motifs along the protein backbone , but the `` motifs '' are here the tripods ( planes ) formed by the three ( two ) orthonormal vectors of the local frenet bases to the space curve . despite the fact that screwframe uses less information than screwfit , all standard psses are recognized on the basis of thresholds for the local screw radii and a suitably defined regularity measure .screwframe even permits to distinguish between parallel and antiparallel -strands , which the classical screwfit method fails to do . a thorough comparison with the commonly used dssp method on the assignment of psses in the astral database shows that both methods yield very similar results for the total amount of psses. screwframe tends , however , to break long helices into smaller pieces , such that the length distribution of psses is different . due to the minimalistic character of the geometrical model for protein folds ,the evaluation of the screwframe model parameters is very efficient .this allows for working with protein structure databases and for analyzing simulated molecular dynamics trajectories of proteins .screwframe may also be used a starting point for the development of minimalistic models for protein structure and dynamics , similar to the wormlike chain model , which has been successfully applied to dna . 
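To illustrate how such threshold-based assignments can be expressed in code, a minimal sketch is given below. The default numbers are placeholders and not the ASTRAL-derived thresholds of the paper; the arrays rho and delta stand for the per-step screw radius and the regularity measure introduced earlier (assumed here to be aligned arrays of equal length), and the minimal length of three consecutive residues reflects the condition discussed in the comparison with DSSP.

```python
# Hypothetical sketch of a threshold-based secondary-structure assignment in the
# spirit of screwframe.  The default numbers are placeholders, NOT the thresholds
# derived from the ASTRAL statistics; 'rho' and 'delta' stand for the per-step
# screw radius and the regularity measure of the preceding sections.
import numpy as np

def assign_strands(rho, delta, rho_mean=0.10, rho_std=0.03, delta_max=0.05, min_len=3):
    """Mark steps as beta-strand when the screw radius stays near the strand peak and
    the motion is regular over at least `min_len` consecutive steps."""
    rho, delta = np.asarray(rho), np.asarray(delta)
    candidate = (np.abs(rho - rho_mean) < 2.0 * rho_std) & (delta < delta_max)
    labels = np.zeros(candidate.size, dtype=bool)
    start = None
    for i, flag in enumerate(list(candidate) + [False]):     # sentinel closes the last run
        if flag and start is None:
            start = i                                        # a candidate run begins
        elif not flag and start is not None:
            if i - start >= min_len:                         # enforce the minimal strand length
                labels[start:i] = True
            start = None
    return labels
```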
As already mentioned, our method can also be used to analyze dynamical processes such as the folding and unfolding of peptides. An ActivePaper containing all the software, input datasets, and results from this study is available as supplementary material. The datasets can be inspected with any HDF5-compatible software, e.g. the free HDFView; running the programs on different input data requires the ActivePaper software.
The paper presents a geometrical model for protein secondary structure analysis which uses only the positions of the Cα atoms. We construct a space curve connecting these positions by piecewise polynomial interpolation and describe the folding of the protein backbone by a succession of screw motions linking the Frenet frames at consecutive Cα positions. Using the ASTRAL subset of the SCOPe database of protein structures, we derive thresholds for the screw parameters of secondary structure elements and demonstrate that the latter can be reliably assigned on the basis of a Cα model. For this purpose we perform a comparative study with the widely used DSSP (Define Secondary Structure of Proteins) algorithm.
understanding the ecological implications of infectious disease is one of a few long - lasting problems that still remains challenging due to their inherent complexities .pathogen - mediated invasion is one of such ecologically important processes , where one invasive species carrying and transmitting a pathogen invades into a more vulnerable species .apparent competition , the competitive advantage conferred by a pathogen to a less vulnerable species , is generally accepted as a major force influencing biodiversity . due to the complexities originating from dynamical interactions among multiple hosts and multiple pathogens ,it has been difficult to single out and to quantitatively measure the effect of pathogen - mediated competition in nature . for this reason ,pathogen - mediated competition and infectious disease dynamics in general have been actively studied with theoretical models .theoretical studies of ecological processes generally employ deterministic or stochastic modeling approaches . in the former case ,the evolution of a population is described by ( partial- ) differential or difference equations . in the latter case ,the population is modeled as consisting of discrete entities , and its evolution is represented by transition probabilities .the deterministic modeling approach has been favored and widely applied to ecological processes due to its simplicity and well - established analytic tools .the applicability of the deterministic approach is limited in principle to a system with no fluctuations and no ( spatial ) correlations , e.g. , a system composed of a large number of particles under rapid mixing .the stochastic modeling approach is more broadly applicable and more comprehensive , as the macroscopic equation naturally emerges from a stochastic description of the same process . 
while being a more realistic representation of noisy ecology , the stochastic approach has a downside : most stochastic models are analytically intractable and stochastic simulation , a popular alternative , is demanding in terms of computing time .nonetheless , the stochastic approach is indispensable when a more thorough understanding of an ecological process is pursued .the role of stochastic fluctuations has been increasingly appreciated in various studies of the spatio - temporal patterns of infectious diseases such as measles , pertussis and syphilis .there has been an escalating interest in elucidating the role of stochastic noise not only in the studies of infectious disease dynamics but also in other fields such as stochastic interacting particle systems as model systems for population biology , the stochastic lotka - volterra equation , inherent noise - induced cyclic pattern in a predator - prey model and stochastic gene regulatory networks .here we investigate the effects of noise on pathogen - mediated competition , previously only studied by deterministic approaches .in our previous work we developed an experimental system and a theoretical framework for studying bacteriophage - mediated competition in bacterial populations .the experimental system consisted of two genetically identical bacterial strains ; they differed in that one strain was a carrier of the bacteriophage and resistant to it while the other strain was susceptible to phage infection .based on the _ in vitro _experimental set - up , we constructed a differential equation model of phage - mediated competition between the two bacterial strains .most model parameters were measured experimentally , and a few unknown parameters were estimated by matching the time - series data of the two competing populations to the experiments ( see fig . [ fig1 ] ) .the model predicted , and experimental evidence confirmed , that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of other phage and host parameters such as the infection - causing contact rate , the spontaneous and infection - induced lysis rates , and the phage burst size .here we examine if intrinsic noise changes the dynamics of the bacterial populations interacting through phage - mediated competition , and more specifically if it changes the validity of the conclusions of the deterministic model .the phage - bacteria infection system is modeled and analyzed with two probabilistic methods : ( i ) a linear fokker - plank equation obtained by a systematic expansion of a full probabilistic model ( i.e. , a master equation ) , and ( ii ) stochastic simulations .both probabilistic methods are used to identify the source of noise and assess its magnitude , through determining the ratio of the standard deviation to the average population size of each bacterial strain during the infection process .finally stochastic simulations show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations , yet deviations become large when the phage are more pathological to the invading bacterial strain .illustrations of phage - mediated competition obtained from _ in vitro _ experiments ( symbols ) and a deterministic model ( lines ) .the phage infection system consists of two genetically identical bordetella bronchiseptica bacteria ( bb ) and the bacteriophage bpp-1 ( ) . 
a gentamicin marker ( gm ) is used to distinguish the susceptible bacterial strain ( bbgm ) from the phage - carrying bacterial strain ( bb:: ) . as time elapses ,a fraction of bbgm become lysogens ( bbgm:: ) due to the phage - infection process .bb:: are represented by open squares and a thick solid line , bbgm:: by open circles and a thin solid line , and the total bbgm ( bbgm+bbgm:: ) by filled circles and a long - dashed line , respectively .( a ) lysogens ( bb:: ) exogenously and endogenously carrying the prophage invade the bbgm strain susceptible to phage , and ( b ) lysogens ( bb:: ) are protected against the invading susceptible bacterial strain ( bbgm) .the differential equations were solved with biologically relevant parameter values .( see section 3.1 and table 1 for a detailed description.),width=377 ]we consider a generalized phage infection system where two bacterial strains are susceptible to phage infection , yet with different degrees of susceptibility and vulnerability to phage .the interactions involved in this phage - mediated competition between two bacterial strains are provided diagrammatically in fig .[ fig2 ] .diagrammatic representation of phage - mediated competition between two bacterial strains with differential susceptibilities and phage pathogenicities .the subscript denotes the type of bacterial strain .phage ( ) are represented by hexagons carrying a thick segment ( dna ) . a susceptible bacterium ( ) is represented by a rectangle containing an inner circle ( bacterial dna ) while a lysogen ( ) is represented by a rectangle containing dna integrated into its bacterial dna .all bacterial populations grow with an identical growth rate while a latent bacterium ( ) is assumed not to divide . and represent spontaneous and infection - induced lysis rates , respectively . , width=377 ] we describe this dynamically interacting system with seven homogeneously mixed subpopulations : each bacterial strain can be in one of susceptible ( ) , lysogenic ( ) , or latent ( ) states , and they are in direct contact with bacteriophage ( ) .all bacteria divide with a constant rate when they are in a log growth phase , while their growth is limited when in stationary phase .thus we assume that the bacterial population grows with a density - dependent rate where is the total bacterial population and is the maximum number of bacteria supported by the nutrient broth environment .susceptible bacteria ( ) become infected through contact with phage at rate . 
upon infectionthe phage can either take a lysogenic pathway or a lytic pathway , stochastically determining the fate of the infected bacterium .we assume that a fraction of infected bacteria enter a latent state ( ) .thereafter the phage replicate and then lyse the host bacteria after an incubation period , during which the bacteria do not divide .alternatively the phage lysogenize a fraction of their hosts , which enter a lysogenic state ( ) , and incorporate their genome into the dna of the host .thus the parameter characterizes the pathogenicity of the phage , incorporating multiple aspects of phage - host interactions resulting in damage to host fitness .the lysogens ( ) carrying the prophage grow , replicating prophage as part of the host chromosome , and are resistant to phage .even though these lysogens are very stable without external perturbations , spontaneous induction can occur at a low rate , consequently replicating the phage and lysing the host bacteria .in general , both the number of phage produced ( the phage burst size ) and the phage pathology depend on the culture conditions .the two bacterial strains differ in susceptibility ( ) and vulnerability ( ) to phage infection .when the initial population size of the invading bacterial strain is small , the stochastic fluctuations of the bacterial population size are expected to be large and likely to affect the outcome of the invasion process .a probabilistic model of the phage infection system is able to capture the effects of intrinsic noise on the population dynamics of bacteria .let us define the joint probability density denoting the probability of the system to be in a state at time where , and denote the number of bacteria in susceptible , latent or infected states , respectively .the time evolution of the joint probability is determined by the transition probability per unit time of going from a state to a state .we assume that the transition probabilities do not depend on the history of the previous states of the system but only on the immediately past state .there are only a few transitions that are allowed to take place .for instance , the number of susceptible bacteria increases from to through the division of a single susceptible bacterium and this process takes place with the transition rate = .the allowed transition rates are where .the second line represents the division of a lysogen ; the 3rd line describes an infection process by phage taking a lysogenic pathway while the fourth line denotes an infection process by phage taking a lytic pathway .the last two transitions are spontaneously - induced and infection - induced lysis processes , respectively . bacterial subpopulations that are unchanged during a particular transition are denoted by `` '' .the parameters , , and in the transition rates of eq .( [ transition ] ) represent the inverse of the expected waiting time between events in an exponential event distribution and they are equivalent to the reaction rates given in fig .[ fig2 ] . the stochastic process specified by the transition rates in eq .( [ transition ] ) is markovian , thus we can immediately write down a master equation governing the time evolution of the joint probability . the rate of change of the joint probability is the sum of transition rates from all other states to the state , minus the sum of transition rates from the state to all other states : \\ \nonumber & + & ( e^{+1}_{\phi}e^{+1}_{s_j}e^{-1}_{i_j}-1)[t(s_j-1,i_j+1, ... ,\phi-1|s_j , i_j, ... 
,\phi;t ) p_t(\underline{\eta } ) ] \\ \nonumber & + & ( e^{+1}_{\phi}e^{+1}_{s_j}e^{-1}_{l_j}-1)[t(s_j-1, ...,l_j+1,\phi-1|s_j, ... ,l_j,\phi;t ) p_t(\underline{\eta } ) ] \\ \nonumber & + & \delta(e^{+1}_{i_j}e^{-\chi}_{\phi}-1)[t( ...,i_j-1, ... ,\phi+\chi| ... ,i_j, ... ,\phi;t ) p_t(\underline{\eta } ) ] \\ \nonumber & + & \lambda ( e^{+1}_{l_j}e^{-\chi}_{\phi}-1)t( ... ,l_j-1, ... ,\phi+\chi| ... ,l_j, ... ,\phi;t ) p_t(\underline{\eta } ) ] \bigr \ } \nonumber\end{aligned}\ ] ] where is a step operator which acts on any function of according to . the master equation in eq .( [ master ] ) is nonlinear and analytically intractable .there are two alternative ways to seek a partial understanding of this stochastic system : a stochastic simulation and a linear fokker - plank equation obtained from a systematic approximation of the master equation .a stochastic simulation is one of the most accurate / exact methods to study the corresponding stochastic system .however , stochastic simulations of an infection process in a large system are very demanding in terms of computing time , even today .moreover , simulation studies can explore only a relatively small fraction of a multi - dimensional parameter space , thus provide neither a complete picture nor intuitive insight to the current infection process .the linear fokker - plank equation is only an approximation of the full stochastic process ; it describes the time - evolution of the probability density , whose peak is moving according to macroscopic equations . in cases where the macroscopic equations are nonlinear, one needs to go beyond a gaussian approximation of fluctuations , i.e. , the higher moments of the fluctuations should be considered . in cases when an analytic solution is possible, the linear fokker - plank equation method can overcome most disadvantages of the stochastic simulations .unfortunately such an analytic solution could not be obtained for the master equation in eq .( [ master ] ) . in the following sections we present a systematic expansion method of the master equation to obtain both the macroscopic equations and the linear fokker - plank equation , then an algorithm of stochastic simulations .in this section we will apply van kampen s elegant method to a nonlinear stochastic process , in a system whose size increases exponentially in time .this method not only allows us to obtain a deterministic version of the stochastic model in eq .( [ master ] ) but also gives a method of finding stochastic corrections to the deterministic result .we choose an initial system size and expand the master equation in order of .we do not attempt to prove the validity of our application of van kampen s -expansion method to this nonlinear stochastic system ; a required condition for valid use of -expansion scheme , namely the stability of fixed points , is not satisfied because the system size increases indefinitely and there is no stationary point .however , as shown in later sections , the linear fokker - plank equation obtained from this -expansion method does provide very reliable results , comparable to the results of stochastic simulations . in the limit of infinitely large , the variables ( , , , ) become deterministic and equal to ( ) , where ( ) are normalized quantities , e.g. , . 
in this infinitely large size limitthe joint probability will be a delta function with a peak at ( ) .for large but finite , we would expect to have a finite width of order .the variables ( , , , ) are once again stochastic and we introduce new stochastic variables ( ) : , , , .these new stochastic variables represent inherent noise and contribute to deviation of the system from the macroscopic dynamical behavior . the new joint probability density function is defined by where .let us define the step operators , which change into and therefore into , so that in new variables the time derivative of the joint probability in eq . ( [ master ] )is taken at a fixed state , which implies that the time - derivative taken on both sides of should lead to where can be either , , , , , , or .hence , we shall assume that the joint probability density is a delta function at the initial condition , i.e. , .the full expression of the master equation in the new variables is shown in appendix a. here we collect several powers of . in section 3.1 we show that macroscopic equations emerge from the terms of order and that a so - called invasion criterion , defined as the condition for which one bacterial population outcompetes the other , can be obtained from these macroscopic equations . in section 3.2we show that the terms of order give a linear fokker - plank equation whose time - dependent coefficients are determined by the macroscopic equations .there are a few terms of order in the master equation in the new variables as shown in appendix a , which appear to make a large -expansion of the master equation improper .however those terms in order of cancel if the following equations are satisfied eq . ( [ macroscopic ] ) are identical to the deterministic equations of the corresponding stochastic model in the limit of infinitely large , i.e. , in the limit of negligible fluctuations .these equations allow for the derivation of the invasion criterion , defined as the choice of the system parameters in table 1 that makes one invading bacterial strain dominant in number over the other strain .suppose that an initial condition of eq .( [ macroscopic ] ) is , , , , and .( a ) in the case of , there is no phage - mediated interaction between bacteria and the ratio of remains unchanged for .( b ) however when either ( and ) or , the above ratio changes in time due to phage - mediated interactions .even though in principle these nonlinear coupled equations are unsolvable , we managed to obtain an analytic solution of the macroscopic eq .( [ macroscopic ] ) in the limit of a fast infection process , i.e. , ( and ) , by means of choosing appropriate time - scales and using a regular perturbation theory .( see ref . for a detailed description in a simpler system . )we found a simple relationship between the ratios of the two total bacterial populations : where for a sufficiently long time .thus the final ratio is determined solely by three quantities , the initial ratio , , and the two phage pathologies , and is independent of other kinetic parameters such as the infection - causing contact rate , the spontaneous and infection - induced lysis rates , and the phage burst size .the invasion criterion , the condition for which bacterial strain 1 outnumbers bacterial strain 2 , is simply . 
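A numerical check of this statement can be sketched along the following lines. The Python fragment below integrates one plausible reading of the macroscopic equations (our own symbols and parameter values, with the density-dependent growth limitation omitted as in the log-phase assumption above) and compares the final ratio of the two total bacterial populations with the quantity (1 - delta_1)/(1 - delta_2) times the initial ratio; since the closed-form relation is not legible in this extract, that comparison formula is our assumption of its form rather than a quotation of it.

```python
# Sketch of the macroscopic (deterministic) equations for the generalized infection
# system and a numerical check of the invasion criterion.  Symbols, parameter values
# and the comparison formula are our assumptions; they are not taken verbatim from
# eq. ([macroscopic]) or eq. ([invasion]).
import numpy as np
from scipy.integrate import solve_ivp

a, lam, lam_s, chi = 1.0, 5.0, 1e-4, 80.0        # growth, induced lysis, induction, burst size
kappa = (5e-8, 5e-8)                             # infection-causing contact rates
delta = (0.2, 0.8)                               # phage pathologies on strains 1 and 2

def rhs(t, y):
    s1, l1, i1, s2, l2, i2, phi = y
    inf1, inf2 = kappa[0] * s1 * phi, kappa[1] * s2 * phi
    return [a * s1 - inf1,                                   # susceptible strain 1
            a * l1 + (1 - delta[0]) * inf1 - lam_s * l1,     # lysogens strain 1
            delta[0] * inf1 - lam * i1,                      # latent strain 1 (no division)
            a * s2 - inf2,                                   # susceptible strain 2
            a * l2 + (1 - delta[1]) * inf2 - lam_s * l2,     # lysogens strain 2
            delta[1] * inf2 - lam * i2,                      # latent strain 2 (no division)
            chi * (lam * (i1 + i2) + lam_s * (l1 + l2)) - inf1 - inf2]   # free phage

y0 = [1e5, 0.0, 0.0, 1e6, 0.0, 0.0, 1e7]         # strain 1, strain 2, phage at t = 0
sol = solve_ivp(rhs, (0.0, 12.0), y0, method="LSODA", rtol=1e-8, atol=1.0)
b1 = sol.y[0:3, -1].sum()
b2 = sol.y[3:6, -1].sum()
print("final ratio b1/b2 :", b1 / b2)
print("assumed criterion :", (y0[0] / y0[3]) * (1 - delta[0]) / (1 - delta[1]))
```

With equal contact rates for the two strains, the printed ratio stays close to the assumed criterion, consistent with the claim that the final ratio does not depend on the remaining kinetic parameters.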
numerical verification of the invasion criterion of eq .( [ invasion ] ) for a generalized deterministic infection system where both bacterial strains are susceptible to phage infection .the ratio was numerically evaluated by solving eq .( [ macroscopic ] ) with 2000 sets of parameters chosen uniformly in the intervals for phage pathologies , for the phage burst size , for the spontaneous induction rate , and for the initial concentrations of bacterial strains and phage .the time is chosen to be a sufficiently long time .filled circles represent the data from 1000 sets of parameters with relatively large and ( e.g. , ) .open circles are from another 1000 sets of parameters with small and ( e.g. , ) ., width=377 ] to validate the invasion criterion of eq .( [ invasion ] ) in the range of relatively small values of and , we solved eq .( [ macroscopic ] ) numerically with 2000 sets of parameters selected randomly from the biologically relevant intervals .[ fig3 ] shows that the simple relationship in eq .( [ invasion ] ) between and is robust against parameter variations .the results deviate from the linear relationship with increasing phage pathology on the invading bacterial strain 1 compared with that on bacterial strain 2 , i.e. , , or .for simplicity we will assume hereafter that all bacteria grow with a growth rate in a log phase , i.e. , there is no resource competition .identifying terms of in the power expansion of the master equation ( see appendix a ) we obtain a linear fokker - plank equation ( see appendix b ) .this approximation is called as linear noise approxiamtion and the solution of the linear fokker - plank equation in appendix a is a gaussian , which means that the probability distribution is completely specified by the first two moments , and , where . multiplying the fokker - plank equation by and and integrating over all find the time - evolution of the first and the second moments of noise , and ( see appendix c ) .the solutions of all first moments are simple : for all , provided that the initial condition is chosen such that initial fluctuations vanish , i.e. , .the differential equations governing the time evolution of the second moments are coupled , and their solutions can only be attained by means of numerical integrations .we use the time evolution of the second moments of noise to study the role of stochastic fluctuations on phage - mediated competition , and especially to investigate the effects of noise on the invasion criterion .let be the deviation of the total population size of the bacterial strain from its average value , i.e. , where and =+ .let us define the normalized variance of the total population size of the bacterial strain where is a statistical ensemble average .the square root of the normalized variance , , is the magnitude of noise of the bacterial strain at a given time t. 
another useful quantity is the normalized co - variance between the bacterial strain in a state and the bacterial strain in a state : we will present the results for these variances and co - variances in section 5 .in this section we briefly describe our application of the gillespie algorithm for simulation of the stochastic process captured in the master equation of eq .( [ master ] ) , where in total 12 biochemical reactions take place stochastically .the gillespie algorithm consists of the iteration of the following steps : ( i ) selection of a waiting time during which no reaction occurs , where is a random variable uniformly chosen from an interval and is the reaction rate for the biochemical reaction .( ii ) after such a waiting time , which biochemical reaction will take place is determined by the following algorithm .the occurrence of each event has a weight .thus the biochemical reaction is chosen if where is another random number selected from the interval and is the total number of biochemical reactions .( iii ) after execution of the reaction , all reaction rates that are affected by the reaction are updated .we measure the averages , the normalized variances and co - variances of bacterial populations at various time points , by taking an average over realizations of the infection process , starting with the same initial condition .because a normalized variance or covariance is a measure of deviations of a stochastic variable from a macroscopic value ( which is regarded as a true value ) , it is not divided by the sampling size .the computing time of the gillespie algorithm - based simulations increases exponentially with the system size . in the absence of resource competition ,the total bacterial population increases exponentially in time .because we need to know the stationary ratio of the two bacterial populations , the computing time should be long enough compared to typical time scales of the infection process .this condition imposes a limit on the range of parameters that we can explore to investigate the validity of the invasion criterion .we choose the values of parameters from the biologically relevant range given in table 1 and we , furthermore , set lower bounds on the rates of infection causing contact and infection - induced lysis , namely and .while the methodologies described in section 3 and 4 apply to the general case of two susceptible bacterial strains , in this section we limit our investigations to a particular infection system , called a `` complete infection system '' hereafter , in which bacterial strain 1 is completely lysogenic and only bacterial strain 2 is susceptible to phage infection .there are two advantages to studying the complete infection system : 1 ) this is equivalent to the infection system which we studied experimentally and thus the results are immediately applicable to at least one real biological system .2 ) the probabilistic description of bacterial strain 1 ( lysogens ) is analytically solvable as it corresponds to a stochastic birth - death process . in section 5.1 , studying a system consisting of only lysogens , we elucidate the different dynamic patterns of the normalized variance when the system size remains constant or when it increases .this finding provides us with the asymptotic behavior of the normalized variances of both bacterial strains because both strains become lysogens eventually after all susceptible bacteria are depleted from the system . 
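Before turning to the results for these cases, we note that the Gillespie iteration outlined above is compactly expressed in code. The following sketch uses a reduced reaction set for the complete infection system and our own parameter values; it is meant only to illustrate steps (i) through (iii), not to reproduce the authors' simulation code or their full set of 12 reactions.

```python
# Minimal Gillespie-type simulation sketch for the complete infection system, in our
# own notation; a reduced set of 8 reactions is used purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
a, kappa, delta, lam, lam_s, chi = 1.0, 5e-6, 0.8, 5.0, 1e-4, 80

# state vector: (L1, S2, L2, I2, Phi)
def propensities(x):
    l1, s2, l2, i2, phi = x
    return np.array([a * l1,                           # division of a strain-1 lysogen
                     a * s2,                           # division of a susceptible bacterium
                     a * l2,                           # division of a strain-2 lysogen
                     (1 - delta) * kappa * s2 * phi,   # infection, lysogenic pathway
                     delta * kappa * s2 * phi,         # infection, lytic (latent) pathway
                     lam * i2,                         # infection-induced lysis, burst of chi phage
                     lam_s * l1,                       # spontaneous induction of strain-1 lysogens
                     lam_s * l2])                      # spontaneous induction of strain-2 lysogens

stoich = np.array([[ 1,  0,  0,  0,  0],
                   [ 0,  1,  0,  0,  0],
                   [ 0,  0,  1,  0,  0],
                   [ 0, -1,  1,  0, -1],
                   [ 0, -1,  0,  1, -1],
                   [ 0,  0,  0, -1, chi],
                   [-1,  0,  0,  0, chi],
                   [ 0,  0, -1,  0, chi]])

def gillespie(x0, t_end):
    t, x, out = 0.0, np.array(x0, dtype=np.int64), []
    while t < t_end:
        rates = propensities(x)
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)              # step (i): waiting time
        j = rng.choice(len(rates), p=rates / total)    # step (ii): which reaction fires
        x = x + stoich[j]                              # step (iii): update the state
        out.append((t, x.copy()))
    return out

trajectory = gillespie([100, 1000, 0, 0, 1000], 5.0)
```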
in section 5.2, we investigate the role of stochastic noise on phage - mediated competition by identifying the source of noise and assessing its magnitude in the complete infection system . finally in section 5.3, we investigate the effect of noise on the invasion criterion by means of stochastic simulations .the dynamics of lysogens of bacterial strain 1 is completely decoupled from that of the rest of the complete infection system and can be studied independently .they grow at a rate and are lysed at a rate .there exists an exact solution for the master equation of this stochastic birth - death process .thus we can gauge the accuracy of an approximate method for the corresponding stochastic process by comparing it with the exact solution .( see appendix c for description of the birth - death process and its exact solution . )the master equation of the birth - death process is where represents the number of lysogens at time . is transformed into a new variable as discussed in section 3 , which results in , , and . then keeping terms of order from -expansion of eq .( [ master - bd ] ) , we obtain the linear fokker - plank equation , where is a normalized quantity that evolves according to and .multiplying by and both sides of eq .( [ bd - fp ] ) and integrating over , we obtain the equations for the first and the second moments of noise : * case 1 : * .when the growth rate is greater than the lysis rate , the system size is increasing in time and the second moment of evolves in time according to the solution of eq .( [ bd - moments ] ) : .then the normalized variance of lysogens reads asymptotically the normalized variance approaches a constant value , in good agreement with the results of stochastic simulations ( see fig .[ fig4](b ) ) .* case 2 : * .when the growth rate is the same as the lysis rate , the system size remains constant and the normalized variance increases linearly in time : , exactly reproduced by stochastic simulations as shown in fig .[ fig4](d ) .( color online ) time - evolution of the normalized variance of a birth - death process of lysogens when the system size increases ( a , b ) , or when it remains constant ( c , d ) .( a ) and ( b ) are time - evolutions of the mean and the normalized variance of lysogens when the system size increases exponentially in time , and .( c ) and ( d ) depict those of lysogens when their growth and lysis rates are the same , .solid lines represent the results of stochastic simulations while dotted lines are the results of the macroscopic equation ( a and c ) or the results of the linear fokker - plank equation ( b and d ) ., width=377 ] in this subsection , we discuss the effects of noise on phage - mediated competition .we explore the dynamical patterns of the normalized variances and covariances of the complete infection system , from which we identify the major source of stochastic fluctuations and assess their magnitude . in the complete infection system ,all bacteria in strain 1 are lysogens and all bacteria in strain 2 are susceptible to phage infection .bacterial strain 1 ( lysogens ) , while decoupled from the rest of the system , play a role as the source of the phage , triggering a massive infection process in the susceptible bacterial strain 2 . 
throughout this subsection, we make pair - wise comparisons between the results of the deterministic equations , stochastic simulations and of the linear fokker - plank equation .[ fig5](a ) shows the time evolution of bacterial populations in the susceptible , lysogenic and latent states .while bacteria of strain 1 ( lysogens ) grow exponentially unaffected by phage , the susceptible bacteria of strain 2 undergo a rapid infection process , being converted either into a latent state or into lysogens .the number of bacteria in the latent state increases , reaches a peak at a later stage of infection process , and then decays exponentially at a rate . as time elapses ,eventually all susceptible bacteria are depleted from the system and both bacterial strains become lysogens , which grow at a net growth rate . the ratio of the two bacterial strains ( lysogens ) remains unchanged asymptotically .note that although the initial population size of bacterial strain 1 is one - tenth of the initial population size of bacterial strain 2 , strain 1 will outnumber strain 2 at a later time due to phage - mediated competition .pair - wise comparisons between the results from stochastic simulation and those from deterministic equations are made in fig.[fig5](a ) .they agree nicely with each other except a noticeable discrepancy found in the population size of susceptible bacteria .( color online ) time - evolution of the mean values of bacterial subpopulations ( a ) and the normalized variance of total population of bacterial strain 1 and 2 ( b ) .( a ) each subpopulation is represented by two lines ; thick lines come from macroscopic equations in eq ( [ macroscopic ] ) and thin lines are obtained from stochastic simulations .the four bacterial subpopulations are represented by different line patterns : bacterial strain 1 in lysogenic state ( solid lines ) , bacterial strain 2 in susceptible ( dotted lines ) , lysogenic ( dashed lines ) and latent ( dot - dashed lines ) states .( b ) thick solid and dashed lines represent the normalized variances of the bacterial strain 1 and 2 from stochastic simulations , respectively , while thin solid and dashed lines denote those from the linear fokker - plank equation , respectively .the initial condition is , and the rest are zero .the parameter values are , , , .,width=377 ] the temporal patterns of the normalized variances of the two bacterial strains are illustrated in fig .[ fig5](b ) .the normalized variance of bacterial strain 1 ( lysogens ) increases logistically while that of bacterial strain 2 increases logistically for the first few hours and then rapidly rises to its peak upon the onset of a massive phage infection process taking place in the susceptible bacterial strain 2 .asymptotically , susceptible bacteria are depleted from the system and all remaining bacteria are lysogens , and their normalized variances converge to a constant as given by eq .( [ bd - const ] ) .the results from stochastic simulations indicate that the magnitude of noise , defined as the ratio of the standard deviation to the average value , of bacterial strain 2 reaches a maximum value , 80 , during the time interval while the number of susceptible bacteria dramatically drops and the number of latent bacteria begins to decay from its peak value .this suggests that the stochastic fluctuations in the phage - mediated competition mainly come from the stochastic dynamics of the susceptible bacteria undergoing infection process and death .note that the linear fokker - plank equation 
underestimates the peak value of the normalized variance , compared to the stochastic simulations , while the stationary values of the normalized variances of both bacterial strains from two methods agree nicely .[ fig6 ] shows the dynamical patterns of the normalized covariances of bacterial populations .we utilize the normalized covariances to identify the main source of stochastic fluctuations in phage - mediated competition .the normalized covariance of the total population of bacterial strain 2 , , is composed of the 6 normalized covariances of the subpopulations of bacterial strain 2 , , , , , and .the peak values of , and are much smaller ( ten times smaller for this particular choice of parameters ) than those of , and .the normalized covariance of reaches its peak value , the largest value among all normalized covariances , at the exact moment when the normalized variance of the total population of bacterial strain 2 , , hits its maximum value as shown in fig .[ fig5](b ) .( color online ) time evolution of the normalized ( co-)variances of bacterial populations in different states .see the main text for a formal definition of the normalized covariance .the time - evolution of each co - variance is plotted with two lines : solid lines are from stochastic simulations while dashed lines are from the linear fokker - plank equation eq .( [ second_moment_fp ] ) .`` f '' stands for .the same parameters and initial conditions are used as in fig .only 12 out of 15 co - variances are plotted.,width=566 ] this indicates that the stochastic fluctuations in the phage - mediated competition mainly come from the fluctuations of the bacterial population in the latent state .those fluctuations originate from two events : incoming population flow from the just infected susceptible bacteria into the latent bacterial population and outgoing population flow by infection - induced lysis of the bacteria in the latent state .this indicates that the magnitude of noise does depend on the values of the kinetic parameters ( an infection causing cintact rate and infection - induced lysis rate ) of the complete infection system , and this also suggests the possibility of large deviations from the deterministic invasion criterion due to stochastic noise .the time evolution of the normalized co - variances that are obtained from both a linear fokker - plank equation and stochastic simulations agrees to each other nicely .this agreement validates the applicability of van kampen s -expansion method to a nonlinear stochastic system which grows indefinitely . 
in this subsectionwe investigate the effects of noise on the validity of the invasion criterion and measure the deviations of the stochastic results from the simple relationship in eq .( [ invasion ] ) obtained from the deterministic model .for further analysis of the effect of noise on phage - mediated competition , we need to perform stochastic simulations with different values of kinetic parameters and to investigate the effect of noise on the invasion criterion .we consider both a complete infection system having only lysogens in bacterial strain 1 ( ) in fig.[fig7](a ) and a generalized infection system in fig.[fig7](b ) where both strains are susceptible to phage infection , yet with different degrees of susceptibility and vulnerability to phage .the invasion criterion obtained from the deterministic equations is expressed with a simple relationship between the initial and final ratios of population sizes of two strains and phage pathologies : .( color online ) verification of the invasion criterion by means of stochastic simulations : ( a ) a complete infection system case where bacterial strain 1 is lysogen and only bacterial strain 2 is susceptible , ( b ) a general infection system where both strains are susceptible to phage .thick red lines represent the invasion criterion obtained from deterministic equations , i.e. , where the time is chosen to be a sufficiently long time so that there are no more susceptible bacteria in the system .error bars are the standard deviations calculated from the stochastic simulation .filled circles are for a fast infection process ( , ) while open squares are for slow infection ( , ) .each one of about 500 data points in each figure represents the result of stochastic simulations , averaged over realizations. please see the main text for the choice of parameter values . , width=377 ] here is defined as a sufficiently long time such that there are no more susceptible bacteria left to undergo the infection process and only lysogens are in the system . to amplify the effect of noise on phage - mediated competition ,we set the initial sizes of bacterial populations to be small ; they are randomly chosen from an interval . to make sure that the complete infection system reaches a stationary state of having only lysogens within 24 hours , we limit the values of the infection - causing contact rate and of the infection - induced lysis rate : and where and .we distinguish infection processes based on their speed : a very fast infection process ( , ) and a slow infection process ( , ) .the values of all other kinetic parameters in fig .[ fig2 ] are randomly selected from the biologically relevant intervals ( see table 1 ) : , and .for about 500 sets of parameters for each figure in fig .[ fig7 ] , we measure the average and the standard deviation of the ratio after taking ensemble average over realizations .note that the standard deviation is measured as a deviation from the macroscopic ( true ) value and it is not normalized by the square root of the sampling size .we find that the average values of the ratios still fall onto the linear relationship with phage pathologies , independently of other kinetic parameters .however , the ratios of are broadly distributed around the mean value with large deviations , especially when the phage is more pathological on strain 2 , i.e. , as with a fixed for fig .[ fig7](a ) and when the phage is more pathological on strain 1 than on bacterial strain 2 , i.e. 
, or for fig .[ fig7](b ) .thus the probabilistic model of phage - mediated competition in bacteria confirms that the quantitative amount of phage - mediated competition can be still predictable despite inherent stochastic fluctuations , yet deviations can be also large , depending on the values of phage pathologies .we utilized a probabilistic model of a phage - mediated invasion process to investigate the conjecture that ( i ) a bacterial community structure is shaped by phage - mediated competition between bacteria , and to examine ( ii ) the effect of intrinsic noise on the conclusions obtained from a deterministic model of the equivalent system .the system under our consideration consists of two strains of bacteria : both bacterial strains are susceptible to phage infection and one invasive bacterial strain contains lysogens carrying the prophage .two bacterial strains are genetically identical except in their susceptibilities to phage and in phage pathologies on them .we restricted the infection system such that bacteria grow in a log phase , i.e. , there is no resource competition between them . despite the historical success of deterministic models of ecological processes ,they produce , at best , only partially correct pictures of stochastic processes in ecological systems .a good number of examples of the failures of deterministic models in ecology are presented in ref .the principal flaw of deterministic models is their reliance on many , sometimes unphysical , assumptions such as continuous variables , complete mixing and no rare events .thus , we used both fokker - plank equations and stochastic simulations in the study of stochastic phage - mediated invasion processes in bacteria .van kampen s system size expansion was used to obtain the linear fokker - plank equation while the gillespie algorithm was used for stochastic simulations .we found that the linear fokker - plank equation is a good approximation to the nonlinear dynamics of the stochastic phage - mediated invasion process ; the time evolutions of co - variances of bacterial populations from both fokker - plank equation and stochastic simulations agree well with each other . to investigate the role of noise during phage - mediated processes , we measured the magnitude of noise , defined as the ratio of the standard deviation of bacterial population to its mean as time elapses .after a sufficiently long time , compared to the typical time scale of infection processes , all surviving bacteria are lysogens , which undergo the process of growth and spontaneous lysis . as it is a simple birth - death process with a positive net growth rate , the magnitude of noise asymptotically converges to a rather small constant value .however , it was found from both the linear fokker - plank equation and stochastic simulations that the magnitude of noise of the bacterial subpopulations both in the susceptible and latent states rapidly increases and reaches a peak value in the middle of the massive phage - induced lysis event .thus the population size of the susceptible and latent bacteria is subject to large deviations from its mean .we investigated the effect of noise on the invasion criterion , which is defined as the condition of the system parameters for which the invading bacterial strain harboring and transmitting the phage takes over the ecological niches occupied by bacterial strains susceptible to the phage . 
in our previous work , we showed , by using _ in vitro _ experiments and deterministic models , that phage - conferred competitive advantage could be quantitatively measured and is predicted and that the final ratio of population sizes of two competing bacteria is determined by only two quantities , the initial ratio and the phage pathology ( phage - induced mortality ) , independently of other kinetic parameters such as the infection - causing contact rate , the spontaneous and infection - induced lysis rates , and the phage burst size . herewe found from stochastic simulations that the average values of the ratios still fall onto the deterministic linear relationship with phage pathologies , independently of other kinetic parameters .however , the ratios are broadly distributed around the mean value , with prominently large deviations when the phage is more pathological on the invading bacterial strain than the strain 2 , i.e. , .thus the probabilistic model of phage - mediated competition in bacteria confirms that the quantitative amount of phage - mediated competition can still be predictable despite inherent stochastic fluctuations , yet deviations can also be large , depending on the values of phage pathologies .here we assumed that the bacterial growth rates and lysis rates are identical in the two strains . relaxing this assumption has a drastic simplifying effect as the steady state is determined solely by the net growth rates of the two strains .regardless of initial conditions in the generalized infection system , all bacteria that survive after a massive phage infection process are lysogens , so long as the phage - infection is in action on both bacterial strains . if the net growth rates of two strains are such that , asymptotically bacterial strain 1 will outnumber strain 2 , regardless of phage pathologies and initial population sizes .if the net growth rate of any bacterial strain is negative , it will go extinct .thus the non - trivial case is only when the growth rates of two bacterial strains are identical .we significantly simplified many aspects of complex pathogen - mediated dynamical systems to obtain this stochastic model .the two most prominent yet neglected features are the spatial distribution and the connectivity pattern of the host population . as demonstrated by stochastic contact processes on complex networks ( e.g. , infinite scale - free networks ) or on d - dimensional hypercubic lattices, these two effects may dramatically change the dynamics and stationary states of the pathogen - mediated dynamical systems .while our experimental system does not necessitate incorporation of spatial effects , complete models of real pathogen - modulated ecological processes , e.g. , phage - mediated competition as a driving force of the oscillation of two v. cholera bacterial strains , one toxic ( phage - sensitive ) and the other non - toxic ( phage - carrying and resistant) , may need to take these effects into account .* appendix a : systematic expansion of the master equation * + in this appendix we provide the result of the systematic expansion of the master equation in eq .( [ master ] ) .the master equation in the new variables reads -1 \bigr \ } \\ \nonumber & & ( \omega_o \phi + \omega^{\frac{1}{2}}_o \xi_{\phi } ) ( \omega s_j + \omega^{\frac{1}{2}}_o \xi_{s_j } ) \pi + \delta \bigl \ { ( 1+\omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{i_j } } + \frac{\omega^{-1}_o}{2 } \frac{\partial^{2 } } { \partial \xi_{i_j}^{2}}+ ... 
) \\ \nonumber & & ( 1-\omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\phi } + \frac{\omega^{-1}}{2 } \frac{\partial^{2 } } { \partial \xi_\phi^{2}}+ ... )^{\chi } -1 \bigr \ } ( \omega_o i_j + \omega^{\frac{1}{2}}_o \xi_{i_j } ) \pi \\ \nonumber & & + \lambda \bigl \ { ( 1+\omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{l_j } } + \frac{\omega^{-1}_o}{2 } \frac{\partial^{2 } } { \partial \xi_{l_j}^{2}}+ ... ) ( 1-\omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\phi } + \frac{\omega^{-1}_o}{2 } \frac{\partial^{2 } } { \partial \xi_\phi^{2}}+ ... )^{\chi } -1 \bigr \ } \\ \nonumber & & ( \omega_o l_j + \omega^{\frac{1}{2}}_o \xi_{l_j } ) \pi \biggr \ } \nonumber\end{aligned}\ ] ] * appendox b : linear fokker - plank equation derived from systematic expansion of the master equation * + from eq .( [ expansion ] ) we can collect terms of order and obtain the linear fokker plank equation , * appendix c : stochastic birth - death processess * + in this section we present an exact solution of the master equation of a stochastic birth - death process , i.e. , a prototype of all birth - death systems which consists of a population of non - negative integer individuals that can occur with a population size .the concept of birth and death is usually that only a finite number of are born and die at a given time .the transition probabilities can be written thus there are two processes : birth , , with a transition probability , and death , , with a transition probability .the master equation then takes the form , where is a step operator , e.g. , .this expression remains valid at boundary points if we impose .in the case of the growth and spontaneous lysis process of bacteria carrying prophage , with and , the master equation takes the simple form to solve eq .( [ masterbd ] ) , we introduce the generating function so that where .we find a substitution that provides the desirable transformation of variable , , this substitution , , gives whose solution is an arbitrary function of .we write the solution of the above equation as $ ] , so \ ] ] normalization requires , and hence .the initial condition determines , which means \ ] ] so that eq .( [ gf ] ) can be expanded in a power series in to produce the conditional probability density , which is the complete solution of the master equation eq .( [ masterbd ] ) .because it is of little practical use and complicated , we do not present the conditional probability density here , but compute the moment equations from the generating function in eq .( [ gf ] ) {s=1}&=&\langle x(t ) \rangle\\ \nonumber \bigl [ \frac{\partial^{2 } log g(s , t)}{\partial s^{2 } } \bigr ] _{ s=1}&= & \langle x(t)^{2 } \rangle-\langle x(t ) \rangle^{2 } -\langle x(t ) \rangle,\end{aligned}\ ] ] obtaining which exactly corresponds to the the mean and the variance from the linear fokker - plank equation .u. dickmann , j. a. j. metz , m. w. sabelis , and k. sigmund ( eds . ) _ adaptive dynamics of infectious diseases : in pursuit of virulence management _ , ( cambridge university press , cambridge , united kingdom , 2002 ) .f. thomas , m. b. bonsall and a. p. dobson , parasitism , biodiversity and conservationin in f. thomas , f. renaud and j. f. guegan ( eds . ) _ parasitism and ecosystems _ , ( oxford university press , oxford , 2005 ) .s. m. faruque , i. b. naser , m. j. i. b. islam , a. s. g. faruque , a. n. ghosh , g. b. nair , d. a. sack , and j. j. 
mekalanos , seasonal epidemics of cholera inversely correlate with the prevalence of environmental cholera phages , _ proc . natl . acad . sci . usa _ , * 102*:1702 ( 2005 ) . [ table1 ] parameters used for the numerical simulation of the phage - mediated competition in _ b. bronchiseptica_. the two undetermined parameters were estimated by comparing the experimental results with those of the theoretical model and by minimizing discrepancies .
pathogen - mediated competition , through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species , is one of the principal driving forces influencing biodiversity in nature . using an experimental system of bacteriophage - mediated competition in bacterial populations and a deterministic model , we have shown in ref . that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and of other phage and host parameters such as the infection - causing contact rate , the spontaneous and infection - induced lysis rates , and the phage burst size . here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage , and examine the validity of the deterministic approach . we use both numerical and analytical methods of stochastic processes to identify the source of noise and to assess its magnitude . we show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations , yet deviations become prominently large when the phage is more pathological to the invading bacterial strain . + * key words : * phage - mediated competition ; invasion criterion ; fokker - planck equations ; stochastic simulations .
animal s respond to the environment using their sensory organs for collecting information that is passed on to the brain and analyzed for action . however, this would take a perceivable time .this time is called the reaction time . the definition of reaction time or latency as given in the wikipedia is `` _ the time from the onset of a stimulus until the organism responds _ '' . human reaction time is ultimately limited by how fast nerve cells conduct nerve impulses .although this speed is almost 250 miles per hour , messages still take a significant amount of time to travel from sensory organs to the brain and back to the appropriate muscle groups . a common `` experiment '' done asa game by children is for one boy to hold a scale about chest high and have someone place his thumb and index finger about an inch apart somewhere along the ( bottom ) length of the scale .now , he would have to catch the scale when the first boy allows it to fall .the scale wo nt be able be caught immediately and a length of the scale would pass through his finger before it is caught . from simple laws of mechanics , using the equation the reaction time of the child can be calculated .interest in the measurement of human reaction time apparently began as a result of the work of a dutch physiologist named f.c.donders . beginning in 1865 ,donders became interested in the question of whether the time taken to perform basic mental processes could be measured . until that time, mental processes had been thought to be too fast to be measurable . in his early experiments ,donders applied electric shocks to the right and left feet of his subjects .the subject s task was to respond by pressing a telegraph key with his right or left hand to indicate whether his right or left foot had received the shock .interest in measuring and minimizing the reaction time today is of interest in medicine , military , traffic control and sports .things can be put to better perspective by taking an example . in the game of cricket ,the average distance between the bowler and batsman is 20mtrs . with a spin bowlerdelivering the ball at around 80km / hr , the batsman has 0.9s ( 900ms ) to `` see '' the ball , decide the shot and implement it ! an analysis of high - speed film of international cricketers batting on a specially prepared pitch which produced unpredictable movement of the ball is reported , and it is shown that , when batting , highly skilled professional cricketers show reaction times of around 200ms .various methods have been used to measure the reaction time . essentially , measuring simple reaction time like in donders experiment or recognition reaction time or choice reaction time . in choice reaction timeexperiments the subject must give a response that corresponds to the stimulus , such as pressing a key corresponding to a letter as soon as the letter appears on a display amist random display of characters . in this articlewe are reporting the results of our experiment done using this method .the reaction time is known to be effected by factors such as age , gender , fatigue/ exercise , distractions and intelligence .our sample group were students of physics/ electronics in the age group of 18 - 21 , where studies have shown the reaction time to be the minimum in a human life span .these works report the reaction time of people in the age group of our study to be .we have sampled and recorded the reaction time of 137 students , however , here we discuss data of 44 students who were majoring in physics/ electronics . 
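The falling-scale estimate mentioned above follows from the free-fall relation d = g t² / 2 (the explicit equation was lost from the extracted text), so the reaction time is t = √(2d/g). A minimal sketch, with the drop distance measured in centimetres:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def reaction_time_from_drop(distance_cm):
    """Reaction time inferred from the length of scale that slips through
    the fingers before it is caught: d = g t^2 / 2  =>  t = sqrt(2 d / g)."""
    d = distance_cm / 100.0
    return math.sqrt(2.0 * d / G)

for d_cm in (5, 10, 15, 20, 30):
    print(f"{d_cm:3d} cm  ->  {1000.0 * reaction_time_from_drop(d_cm):5.0f} ms")
```

A drop of about 20 cm corresponds to roughly 200 ms, consistent with the figure quoted above for professional cricketers.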
usually such experiments sample 20 or more people and make them repeat the experiment over large time .another approach is where single reading is taken after allowing the test person a period of practice .we first tested the effect of practice on a group of students .fig 1 shows a the variation of reaction time of students with increasing practice .raw reaction time , i.e. the first attempt , of the students were poor .as they practiced , a recording was taken at every 15 minutes .practice however did not keep on improving the reaction time . only four out of ten students had better reaction time on their forth recording ( i.e. after 45 minutes of practice ) as compared to their third try ( 30 minutes into practice ) . while one might be tempted to conclude this as improvement with practice, it should be noted that three of these four students in their forth attempt performed worse then their second attempt .the spectrum of reaction time is within of the average values .this deviation is just when the first attempt is neglected .hence , in our experiment , our approach has been to allow a subject to familiarize the machine for 20 - 25 minutes before taking their reaction time . 0.5 cm .table compares the male - female distribution of the three classes and their preformances . [ cols="^,^,^,^,^",options="header " , ] [ tab ] fig 2 shows the performance of the students from the first , second and third years majoring in physics/ electronics . along with each histogram, a gaussian was fitted to estimate the mean reaction time ( b ) of the class and the deviation from the mean ( using c ) .table 1 details the results for all three classes . the lower mean reaction time and narrower deviation from the mean of the third yearstudents show a collective better performance . a boarder sampling of reaction time with larger age variationwas collected based on gender ( results not shown here ) .we found no variation in performance based on gender with the ratio of female to male reaction time being equal to unity , i.e. ( ) .bellis and engel reported , with males having a faster reaction time .however , recent studies by silverman reports the difference in male - female reaction time was narrowing .table 1 gives the number boys and girls in each class .eventhough ratio of boys and girls are not same in these classes , no correction is called for in fig 3 since . as stated earlier, the students appear to be a sharper lot and it was thought worthwhile to test if the reaction time had any co - relation with learning ability .a comprehensive test was designed to test all the students under study for their ability to comprehend , learn on their own , analyze and solve a given problem .the test was different from the ordinary annual examination these students face and also care was taken that the evaluator s are not prejudiced or influenced by the results of fig 2 .fig 3 shows how reaction time seem to co - relate strongly with the student s ability to learn .this result is consistent with the findings of deary et al .all these students were admitted to the college based on their performance in higher secondary ( hs ) examination conductedd by cbse ( india ) .all the students had marks between 78 - 84% in their hs examination .the resolving of their performance with respect to their reaction time hence was made possible because of the complex method adopted for evaluation .schweitzer in his paper reports that the speed advantage of more intelligent people is greatest in tests requiring complex responses . 
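Returning to the class histograms of fig 2, the fitting step can be sketched as follows. The Gaussian parametrization a·exp(−(x−b)²/(2c²)), the bin width and the synthetic data are assumptions made for illustration; only the roles of b (mean reaction time of the class) and c (deviation from the mean) are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, b, c):
    # a: peak height, b: mean reaction time, c: width (deviation from the mean)
    return a * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))

def fit_class(reaction_times_ms, bin_width=20.0):
    """Histogram one class's reaction times and fit a Gaussian to it."""
    rt = np.asarray(reaction_times_ms, dtype=float)
    bins = np.arange(rt.min() - bin_width, rt.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(rt, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), rt.mean(), rt.std())
    (a, b, c), _ = curve_fit(gaussian, centres, counts, p0=p0)
    return b, abs(c)

# illustrative synthetic data for a class of 15 students
rng = np.random.default_rng(0)
fake_rts = rng.normal(210.0, 25.0, size=15)
mean_rt, spread = fit_class(fake_rts)
print(f"mean reaction time ~ {mean_rt:.0f} ms, spread ~ {spread:.0f} ms")
```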
as a corollary of schweitzer s observation , fig 3 suggests that sharper ( faster ) students are stimulated by , and respond keenly to , tests having a degree of complexity . another interesting result is how the students reaction times vary after they attend an hour of tedious lecture . we took recordings of the first year students before they entered the lecture hall and again , an hour later , as they emerged from their mathematical physics class . this particular subject was selected after studying their responses to a questionnaire , in which the majority reported it as the most difficult subject ; the subject was also low on popularity , and the students performance in class tests on it was consistently poor . fig 4 shows the variation in performance . barring four students , all the students showed a deterioration in reaction time . _ `` stress '' _ , hence , makes the reaction time poorer . slower responses due to fatigue while performing complicated tasks were reported as far back as 1953 . in conclusion , the fatigue level seen in students after attending an hour of intense training in mathematical physics advocates reducing the duration of a lecture from 60 minutes to 40 - 45 minutes for students below 21 years . a slower response also indicates a lower concentration level , and hence suggests that much of the information imparted by the instructor ( fig 4 ) would in any case not have been absorbed . the methodology adopted in teaching difficult subjects should also be reviewed . b. t. engel , p. r. thorne , and r. e. quilter , `` _ on the relationship among sex , age , response mode , cardiac cycle phase , breathing cycle phase , and simple reaction time _ '' , * journal of gerontology * * 27 * , 456 - 460 ( 1972 ) .
the reaction time of a group of students majoring in physics is reported here . a strong correlation between fatigue , reaction time and performance has been observed , which may be useful for academicians and administrators responsible for working out time - tables , course structures , student counselling , etc .
optics would seem to be a strong contender for realizing quantum computation circuits .photons are easily manipulated and , as the electro - magnetic environment at optical frequencies can be regarded as vacuum , are relatively decoherence free .indeed one of the earliest proposals for implementing quantum computation was based on encoding each qubit in two optical modes , each containing exactly one photon .unfortunately , 2 qubit gates require strong interactions between single photons .such interactions would require massive , reversible non - linearities well beyond those presently available .recently knill , laflamme and milburn ( klm ) found a way to circumvent this problem and implement efficient quantum computation using only passive linear optics , photodetectors , and single photon sources .this efficient linear optical quantum computing ( eloqc ) is distinct from other linear optical schemes which are not efficiently scalable .although containing only linear elements , the optical networks described by klm are complex and would present major stability and mode matching problems in their construction .there is thus considerable interest in finding the simplest physical implementations of the klm techniques . in this manuscriptwe investigate this problem and find a major simplification of the original proposal .we begin by reviewing the technique via which non - deterministic gates can be used to implement an efficiently scalable system and in section 3 the physics of a basic non - deterministic gate , the ns gate , is discussed . in section 4we describe the construction of a non - deterministic quantum cnot gate using two ns gates .full scalability of this gate requires high efficiency , 0 , 1 , 2 photon discriminating photon counters .such detectors presently only exist in prototype form .however , in section 5 we show that the basic operation of this circuit can be tested with current detector technology .we then describe the simplified gate .a non - deterministic cnot gate with a simple linear architecture , but requiring triggered entangled sources as a resource , has been suggested recently .in contrast our scheme requires only separable input states .also recently proposed is a linear optical scheme for the probabilistic purification of non - maximal polarization entangled states . 
although the linear elements play the role of cnot gates in this protocol , they do not exhibit the full cnot logic of the gates described here .arbitrary quantum gate operations can be implemented if one has the ability to implement arbitrary single qubit rotations and two qubit cnot gates .single qubit operations can easily be implemented with single photons and a non - deterministic cnot gate is described in this manuscript .however a cascaded sequence of such non - deterministic gates would be useless for quantum computation because the probability of many gates working in sequence decreases exponentially .this problem may be avoided by using a teleportation protocol to implement quantum gates .the idea that teleportation can be used for universal quantum computation was first proposed by gottesman and chuang .a teleportation circuit is represented in fig.1(a ) .a qubit in an unknown state is teleported by making a joint bell measurement ( ) of it and one half of a bell pair .depending on the result of the measurements , and manipulations are made on the other half of the bell pair resulting in accurate reconstruction of the unknown state .a key issue is that the bell pair plays the role of a resource in the protocol .that is , it can be prepared `` off - line '' and then used when necessary to teleport the qubit . now consider the quantum circuit shown in fig.1(b ) .two unknown qubits are individually teleported and then a cnot gate is implemented . obviously , but not very usefully , the result is cnot operation between the input and output qubits .however , as shown in ref. , the commutation relations between cnot and and are quite simple , such that the circuits of fig.1(b ) and 1(c ) are in fact equivalent .but in the circuit of fig.1(c ) the problem of implementing a cnot gate has been reduced to that of producing the required entanglement resource .the entanglement resource required could be produced from separable input states using three cnot gates : one each to produce the bell pairs plus the one shown in fig.1(c ) .but the point is that these need not be deterministic gates .non - deterministic cnot gates could be used in a trial and error manner to build up the necessary resource off - line .it could then be used when required to implement the gate .a remaining issue is the performance of the bell measurements required in the teleportation protocol .these can not be performed exactly with linear optics .klm showed that by using the appropriate entangled resource the teleportation step can be made near deterministic .the near deterministic teleportation protocol requires only linear optics , photon counting and fast feedforward , albeit with a significant resource overhead .alternatively , progress has recently been made towards implementing bell measurements using non - linear optics .the basic element in the construction of our non - deterministic cnot gate is the nonlinear sign - shift ( ns ) gate .this is a non - deterministic gate the operation of which is conditioned on the detection of an auxiliary photon . when successful the gate implements the following transformation on signal state where the lack of normalization of the transformed state reflects the fact that the gate has a probability of success of .fig.2 shows a realization of this gate .two ancilla modes are required .a single photon is injected into one of the ancilla and the other is unoccupied . the first , second and third beamsplitters have intensity reflectivities , and respectively. 
the beamsplitters are phase asymmetric : transmission from either side and reflection off the `` black '' surface of these beamsplitters results in no phase change , whilst reflection off the `` grey '' surface results in a sign change .when a single photon is counted at the `` 1 '' ancilla output and no photon is counted at the `` 0 '' ancilla output ( as indicated in the figure ) the transformation of eq.[ns ] is implemented if a suitable choice of beamsplitter reflectivities is made .let us see how this works .suppose first that the signal mode is in the vacuum state , i.e. . the probability amplitude , , for the ancilla photon to appear at the `` 1 '' output port is given by now suppose the input is a single photon state , i.e. . if a photon arrives at the `` 1 '' output port and no photon arrives at the `` 0 '' port then a single photon must have exited the signal output .we wish the probability amplitude for this event to also be .this means and thus we consider the situation of a two photon input , i.e. . if a single photon arrives at the `` 1 '' port and no photon arrives at the `` 0 '' port then two photons must have exited at the signal output . to obtain the sign change of eq.[ns ]we require the probability amplitude for this event to be .this means substituting eq.[ns2 ] into eq.[ns3 ] gives the result substituting back into eq.[ns2 ] and eq.[ns1 ] we can solve for , and .the maximum value for is achieved when and is thus the transformation of eq.[ns ] is implemented whenever a single photon is recorded at port `` 1 '' and no photon is found at port `` 0 '' . on averagethis will occur 25% of the time since .a conditional cnot gate can now be implemented using two ns gates .the layout for doing this is shown schematically in fig .3 . we employ dual rail logic such that the `` control in '' qubit is represented by the two bosonic mode operators and .a single photon occupation of with in a vacuum state will be our logical 0 , which we will write ( to avoid confusion with the vacuum state ) . whilst a single photon occupation of with in a vacuum state will be our logical 1 , which we will write .of course superposition states can also be formed .similarly the `` target in '' is represented by the bosonic mode operators and with the same interpretations as for the control .the beamsplitters , , , and are all 50:50 .the four modes , , and are all the same polarization .the use of the `` h '' , `` v '' nomenclature alludes to the standard situation in which the two modes of the dual rail logic are orthogonal polarization modes .conversion of a polarization qubit into the spatial encoding used to implement the cnot gate can be achieved experimentally by passing the the photon through a polarizing beamsplitter , to spatially separate the modes , and then using a half - wave plate to rotate one of the modes into the same polarization as the other . after the gate, the reverse process can be used to return the encoding to polarization . 
the layout of fig .4 contains two nested , balanced mach - zehnder interferometers .the target modes are combined and then re - separated forming the `` t''interferometer .one arm of the t interferometer and the mode of the control are combined to form another interferometer , the `` c '' interferometer .ns gates are placed in both arms of the c interferometer .the essential feature of the system is that if the control photon is in the mode then there is never more that one photon in the c interferometer , so the ns gates do not produce a change , the t interferometer remains balanced and the target qubits exit in the same spatial modes in which they entered . on the other hand if the control photon is in mode then there is a two photon component in the c interferometer which suffers a sign change due to the ns gates .this leads to a sign change in one arm of the t interferometer and the target qubit exits from the opposite mode from which it entered .let us consider the systems operation in more detail : if the control is in a logical 0 then the mode will be in a vacuum state .consider the line labeled in fig.3 lying just before the ns gates .the state of the system at this point is given by where the left to right ordering is equivalent to the top to bottom ordering in fig.3 .the occurs when the target input state is , the occurs when the target input state is .now consider the state of the system directly after the ns gates operate on the middle two modes ( indicated by the line in fig.3 ) .substituting from eq.[ns ] we find .that is the gates do not effect the states in the arms of the c interferometer ( conditional on the detection of photons at the `` 1 '' ports of the ns gates ) . as both interferometers are balanced they will just return the same outputs as they had inputs .thus will be a vacuum mode , and if the target input photon was in , it will emerge in ; or if it was in , it will emerge in . in other wordsthe control and target qubits will remain in the same states . on the other handif the control is in a logical 1 , then the mode will contain one photon .the state at is now the two photon amplitudes suffer sign changes ( conditional on the detection of photons at the `` 1 '' ports of the ns gates ) such that the state at , after the ns gates , is now this leads to a sign change in the returning beam of the t interferometer which in turn results in a swap between the inputs and outputs of the t interferometer .thus if the target input photon was in it will emerge in or if it was in it will emerge in . the control output , also suffers a sign change , but this does not change its logical status .in other words the control is unchanged but the target qubit will change states .the truth table of the device is thus which is cnot logic .it is useful to also look at this arrangement in the heisenberg picture . 
referring again to fig.3 our input modes are and for the control , and for the target , and the ancilla modes , , and .the initial state of , , , is where or .the other modes are initially in the vacuum state .we propagate these modes through the system and obtain the following expressions for the output modes where the logical statements of eq.[logic ] can then be realized through measurements of 4-fold coincidences .thus if the initial state is then we find and similarly for the initial state we find with all other moments zero .however for initial state the we find with the other moments zero and for the initial state we find with the other moments zero .as expected the factor appears as we have employed two ns gates each of which works on average 25% of the time .it can also be verified that injection of the control qubit in the superposition states with the target in or produces correlations corresponding to the 4 entangled bell states , as expected from quantum cnot operation .a major experimental advantage to this set - up , as compared to the test circuit suggested in ref . , is that we can work in the coincidence basis .this allows low efficiency detectors and spontaneous single photon sources to be used to demonstrate the basic operation of the gate .of course incorporating these gates in a scalable system as discussed in section ii requires one to know that the gate has successfully operated without destroying the output .it is straightforward to show from eqs.[cnot ] that detection of one and only one photon in modes and and no photons in modes and is sufficient to ensure successful operation of the gate without disturbing the control and target outputs .however low - loss , 0 , 1 , 2-photon discriminating detection would be needed to operate in this way . even in the coincidence basis the above implementation represents a major technological challenge .four nested interferometers must simultaneously be mode matched and locked to sub - wavelength accuracy over the operation time of the gate .a major simplification is achieved by operating the ns gates in a biased mode .the idea is to set the reflectivities and in the ns gates to one , i.e. totally reflective .this removes the interferometers from both the ns gates , greatly reducing the complexity of the gate . 
summing over the paths as before we find that the ns operation becomes when is no solution such that the `` 0 '' , `` 1 '' and `` 2 '' components scale equally , so the gate is biased .however this problem can be solved by placing an additional beamsplitter in the beam path with a vacuum input and conditioning on no photons appearing at its output .now we find where is the reflectivity of the additional beamsplitter .remarkably the additional degree of freedom allows the gate to be rebalanced such that exact ns operation is achieved without an interferometric element .the trade - off is a small reduction in the probability of success .solving we find and gives ns operation with a success probability of .there is considerable flexibility in how the simplified gate is employed in the cnot .one of a number of possible scenarios is shown in fig.4 .the ns gates of fig.3 have been replaced by the beamsplitters b5 and b6 which have reflectivities .additional beamsplitters , b7 and b8 , of reflectivities have been inserted in beams and respectively .the state of the system at point in fig.4 ( conditional on a single photon being detected at outputs and _ and _ no photons appearing at outputs and ) is given by if the control is initially in and if the control is initially in .choosing as before and we obtain cnot operation with a probability .the operation of the gate can also still be described by eq.[cnot ] but with and the substitutions where now is the initial state of the control s vertical polarization mode .all the conditional moments of eq.[cond1]-[cond4 ] are reproduced but with the probabilities of the non - zero moments reduced from to approximately .all other properties of the original gate are retained .the efficient linear optics computation scheme of ref. appears exciting in principle but daunting in practice .however we have shown that by adopting a cnot test architecture the basic principles of the scheme can be tested with present technology .four - photon experiments with spontaneous sources are difficult , but have been achieved . basically such experiments utilize events where by chance two down converters simultaneously produce pairs .the use of our simplified scheme would reduce the stability issues in such an experiment significantly with only a small decrease in probability of success .calculations using eq.[cnot ] show that operation is not critically dependent on experimental parameters .for example 2% errors in beamsplitter ratios only lead to fractions of a percent errors in gate operation .
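Referring back to the simplified (biased) NS gate: once the outer beamsplitters of the NS gate are turned into mirrors, what remains (in our reading of the description) is a single beamsplitter coupling the signal to the one-photon ancilla, plus the extra attenuating beamsplitter with a vacuum input. The conditional amplitudes used below are reconstructed under one particular real, phase-asymmetric beamsplitter convention, since the corresponding equations did not survive extraction; `t` (the amplitude for a photon to remain in its input mode at the remaining beamsplitter) and `g` (the amplitude for a signal photon to survive the attenuator) are parameters introduced here only for illustration.

```python
import numpy as np

# Conditioning on one photon at the ancilla detector and none at the open
# port of the attenuator, the signal Fock components pick up the amplitudes
#   |0> : t
#   |1> : (t^2 - r^2) * g
#   |2> : t (t^2 - 2 r^2) * g^2
# with r^2 = 1 - t^2.  Demanding the NS pattern (1, 1, -1) x const gives
# (2 t^2 - 1) g = t  and  (3 t^2 - 2) g^2 = -1, i.e. 7 t^4 - 6 t^2 + 1 = 0.

def ns_amplitudes(t, g):
    r2 = 1.0 - t * t
    return t, (t * t - r2) * g, t * (t * t - 2.0 * r2) * g * g

roots = np.roots([7.0, 0.0, -6.0, 0.0, 1.0])
for t in sorted(root.real for root in roots
                if abs(root.imag) < 1e-12 and root.real > 0):
    g = t / (2.0 * t * t - 1.0)
    if abs(g) <= 1.0:                      # physically realizable attenuator
        a0, a1, a2 = ns_amplitudes(t, g)
        print(f"beamsplitter parameter t^2 = {t*t:.4f}, |g|^2 = {g*g:.4f}")
        print(f"amplitudes |0>,|1>,|2>     : {a0:+.4f}, {a1:+.4f}, {a2:+.4f}")
        print(f"success probability        : {a0*a0:.4f}")
```

Under this convention the physical root gives a success probability of (3 − √2)/7 ≈ 0.227, slightly below the 1/4 of the interferometric NS gate, in line with the small trade-off described above.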
we describe the construction of a conditional quantum controlled - not ( cnot ) gate from linear optical elements following the program of knill , laflamme and milburn [ nature * 409 * , 46 ( 2001 ) ] . we show that the basic operation of this gate can be tested using current technology . we then simplify the scheme significantly .
nowadays , evolutionary game theory ( hereinafter egt ) represents an emerging field , whose interdisciplinarity makes it of interest for scientists coming from different communities , from biology to social sciences .the emergence of cooperation in structured populations , in particular in games characterized by a nash equilibrium of defection , is one of the main topics studied in this field .for instance , different mechanisms and strategies have been proposed to foster cooperation in paradigmatic games representing pairwise interactions , such as the prisoner s dilemma , and group interactions , namely the public goods game ( hereinafter pgg ) . since these models are mainly studied by means of numerical simulations , they often lack of an analytical description .however , the latter can be provided once some specific assumptions are introduced , as for instance considering memory - aware agents ( i.e. able to save their payoff over time ) . in the modern area of complex systems , investigations driven by statistical physics , trying to relate the macroscopic emergent behavior to the microscopic dynamics of the agents , are useful to get insights on a wide range of topics , from socio - economic systems to biological phenomena .therefore , in this work we aim to provide a description of the pgg by the lens of statistical physics , focusing in particular on the impact of noise in the population dynamics .notably , the noise is controlled by a parameter adopted in the strategy revision phase ( srp ) , i.e. , the process that allows agents to revise their strategy .the srp can be implemented in several ways , e.g. considering rational agents that aim to increase their payoff .usually , rational agents tend to imitate their richer neighbors , while irrational agents are those that randomly change their strategy .remarkably , the level of noise in the system strongly affects the macroscopic behavior of a population .although previous works ( e.g. ) focused on this topic ( i.e. the role of noise ) in this game , a complete analysis is still missing .in particular , we study the effect of noise in two different scenarios .we first consider the case of a homogeneous population , where the intensity of noise in the system is controlled by tuning the level of stochasticity of all agents during the srp , by means of a global parameter .the latter is usually indicated by , and defined as temperature or as an inverse degree of rationality .then , we consider a heterogeneous population , characterized by the two species : rational and irrational agents .while the former take their decision considering the payoff of their neighbors , the latter take decisions randomly . 
in both cases , we study the macroscopic dynamics of the population and the related equilibria , achieved for different amount of noise and values of the synergy factor .it is worth to emphasize that the synergy factor , before mentioned , is a parameter of absolute relevance in the pgg , as it supports cooperative behaviors by enhancing the value of the total contributions provided by cooperators .eventually , we recall that the influence of rationality in the pgg has been studied , by a probabilist approach , in where authors implemented agents able to select ( with a given probability ) between a rational and an irrational behavior .the remainder of the paper is organized as follows : section [ sec : model ] introduces the dynamics of the pgg , and the setup of our model .section [ sec : results ] shows results of numerical simulations .eventually , section [ sec : conclusions ] ends the paper .the pgg is a simple game involving agents that can adopt one of the following strategies : cooperation and defection .those playing as cooperators contribute with a coin ( representing the individual effort in favor of the collectivity ) to a common pool , while those playing as defectors do not contribute . then , the amount of coins in the pool is enhanced by a synergy factor , andeventually equally divided among all agents . the cooperators payoff ( i.e. ) and that of defectors ( i.e. ) read where is the number of cooperators among the agents involved in the game , synergy factor , and agents contribution . without loss of generality , we set for all agents .let us proceed focusing the attention on the synergy factor .in well - mixed populations of infinite size , where agents play in group of players , the two absorbing states appear separated at a critical point .notably , these populations fall into full defection for and into full cooperation for . when agents are placed in the nodes of a network , surprisingly, some cooperators can survive for values of lower than .this effect is known as _ network reciprocity _ , since a cooperative behavior emerges as a result of the same mutualistic interactions taking place repeatedly over time .at the same time , the network structure allows a limited number of defectors to survive also beyond .we refer to the two critical values of at which cooperators first appear , and defectors eventually disappear from the population , respectively as and . in a networked population , depending on the values of and on how agents are allowed to update their strategy , it is possible to observe different phases : two ordered equilibrium absorbing phases , where only one strategy survives ( either cooperation or defection ) , and an active but macroscopically stable disordered phase corresponding to the coexistence between the two species ( i.e. cooperators and defectors ) .now , it is worth to introduce the mechanism that allows the population to evolve , i.e. the process previously mentioned and defined as strategy revision phase . in principle , the srp can be implemented in several ways , as agents may follow different rules or may be provided with particular behaviors .usually this process is payoff - driven , i.e. agents tend to imitate richer neighbors . however , as shown in previous works social behaviors like conformity can have a relevant impact on the way agents choose their next strategy . 
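Before turning to the strategy revision phase, the payoff rule of the two expressions above can be made concrete. The sketch below assumes the standard PGG normalization, in which the multiplied pot r·c·N_c is shared equally among all G members of the group, and sets c = 1 as in the text.

```python
def pgg_payoffs(n_cooperators, group_size, r, c=1.0):
    """Payoffs in a single public goods game group: each of the
    n_cooperators contributes c, the pot is multiplied by the synergy
    factor r and shared equally among all group_size players."""
    share = r * c * n_cooperators / group_size
    return share - c, share          # (cooperator, defector)

# example: a group of 5 players with 3 cooperators and synergy factor r = 4
print(pgg_payoffs(3, 5, 4.0))        # -> (1.4, 2.4)
```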
in this workwe consider a payoff - driven srp , computing the probability that one agent imitates a neighbor according to the following fermi - like equation \right)^{-1}\ ] ] where and correspond to the payoffs of two linked agents , and and indicate their strategy , i.e. , that of and of , respectively .notably , equation [ eq : fermi_function ] is related to a srp performed by the -th agent that evaluates whether to imitate the -th one .a crucial parameter appearing in equation [ eq : fermi_function ] is , which plays the role of noise and then parametrizes the uncertainty in adopting a strategy .notably , a low noise entails agents strongly consider the difference in payoff while deciding their next strategy , while increasing the noise the payoff difference plays a more marginal role . in the case of a homogeneous population equal for all individuals , so by tuning its value we are able to control the level of noise in the system . in the limit of ,the -th agent will imitate the strategy of the -th agent with probability if , and otherwise .conversely , in the limit the srp becomes a coin flip , and the imitation occurs with probability no matter the value of the synergy factor . in the latter case the behavior of the pgg is analogous to that of a classical voter model , where imitation between a pair of selected agents takes place with probability .our study aims to confirm computationally results reported in , and to evaluate the relation between and , in order to provide a complete description of the pgg , from the microscopic dynamics to the global behavior of the population . according to previous investigations , setting often considered as a good choice to describe a rational population , where a limited number of irrational updates may still occur . in the case of bidimensional lattices , it has been shown in that for such the values of , at which cooperators emerge , and , where defectors completely disappears from the population , are respectively equal to and .it is also possible to consider the case of heterogenous populations where the value of is an agent - dependent parameter . in such scenario ,the simplest way to control the noise is considering two different species of agents : those with and those with .thus , by varying the density of one species , with , it is possible to study the outcome of the model in different conditions .such second version is useful to analyze the behavior of a population whose agents have a different sensibility to their payoff and , from a social point of view , it allows to study the influence of rationality in driving the population towards an equilibrium ( or steady state ) .for instance , setting and , it is possible to evaluate the influence of a density of rational agents in driving the population towards a particular state . in the sociophysics literature , studying the effects of social behaviors in the dynamics of a population constitutes one of the major aims .just to cite few , investigations on nonconformity , extremisms and stubbornness showed their relevant role in processes of opinion dynamics .eventually , in a heterogeneous population , it would be interesting to consider more complicated cases where agents are characterized by a broad distribution of values of , and can possibly change their own degree of rationality , for instance by thermalization - like processes ( i.e. when two agents play , they modify their degree of rationality taking the average value of their current ) . 
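A one-line implementation of the Fermi-like rule introduced at the beginning of this subsection is given below. The sign inside the exponent is chosen so that the limits described above are reproduced (deterministic imitation of the richer neighbour for K → 0, a coin flip for K → ∞); this is an assumption only insofar as the explicit payoff difference was stripped from the extracted equation.

```python
import math

def imitation_probability(payoff_neighbour, payoff_focal, K):
    """Probability that the focal (revising) agent copies its neighbour's
    strategy; K parametrizes the noise in the decision."""
    return 1.0 / (1.0 + math.exp((payoff_focal - payoff_neighbour) / K))

# payoffs from the single-group example above: defector 2.4, cooperator 1.4
for K in (0.05, 0.5, 5.0, 50.0):
    print(K, round(imitation_probability(2.4, 1.4, K), 3))
```

As K grows the probability approaches 1/2 regardless of the payoffs, which is the voter-model limit discussed above.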
finally , we consider an asynchronous dynamics , where the agents sequentially update their strategy . indoing so , the population dynamics can be summarized as follows : 1 . at a population with the same amount of cooperators and defectors ; 2 .select randomly two connected agents : and ; 3 .selected agents play the pgg with their communities of belonging ( accumulating a payoff ) ; 4 . performs the srp using as reference ( by equation [ eq : fermi_function ] ) ; 5 .repeat from until the population reaches an ordered phase ( or up to a limited number of time steps elapsed ) .we indicate with the average number of updates per agents at which the dynamics has been stopped , either because the system reached an absorbing phase or because no significant macroscopic changes were observed in the system over time .the considered maximum length of the simulation is . since in the pgg, the strategy of the -th agent can be described by a binary variable , with representing cooperation and defection the average magnetization reads where corresponds to full cooperation , while to full defection .since we are not interested in the sign of the prevalent strategy , but only to which extent the system is ordered , we consider the absolute value of the magnetization . from equation [ eq : magnetization ]it is straightforward to derive the density of cooperators in the population so that we can identify the two ( ordered ) absorbing states corresponding to ( i.e. full cooperation ) and ( i.e. full defection ) . in the following section ,the system is analyzed by reporting the average value of , and over 100 simulations .at last , another interesting order parameter useful to detect fluctuations in the system s behavior is the standard deviation of the fraction of cooperator obtained over the different runs .we performed several numerical simulations of the pgg , for different values of the synergy factor and the noise ( measured either in terms of or density of irrational agents ) , in a population of agents distributed on a bidimensional lattice with periodic boundary conditions . * homogeneous populations .* let us here show results for the homogeneous case , where the level of noise in the system is controlled by the global variable used in the srp .we first analyze the phase diagram of the average density of cooperators as a function of and figure [ fig : figure_1 ] . here , we observe that the pgg has a very rich behavior .for instance , plot * a * of figure [ fig : figure_1 ] shows different regions ( below described ) of interest when studying the density of cooperators at equilibrium .notably , low values of ( i.e. , ) let emerge three phases as a function of in the considered range ( i.e. , from to ) : two ordered phases ( i.e. , full defection and full cooperation ) for low and high values of , and a mixed phase ( i.e. , coexistence ) for intermediate values of . therefore , at a first glance , an order - disorder phase transition of second kind emerges crossing the region labeled in the first phase diagram ( i.e. , * a * of figure [ fig : figure_1 ] ) . for higher values of , next to , the active phase vanishes and the population always reaches an ordered phase .a more abrupt phase transition between the two ordered phases , separating region ( full defection ) and ( full cooperation ) , appears , resembling analytical results obtained for the well - mixed approximation , even if fluctuations are possible near the critical point . 
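Putting the two snippets above together, steps 1-5 can be sketched as an asynchronous Monte Carlo loop on the L × L lattice. The overlapping-group structure (each agent plays in the group centred on itself and in those centred on its four von Neumann neighbours, so the group size is G = 5) is the standard lattice PGG setup and is assumed here; the parameter values are illustrative only.

```python
import numpy as np

L, r, K, c = 30, 3.8, 0.5, 1.0        # lattice side, synergy, noise, contribution
G = 5                                  # focal site + von Neumann neighbours
rng = np.random.default_rng(0)

# step 1: exactly half cooperators (+1) and half defectors (-1)
s = np.ones(L * L, dtype=int)
s[: L * L // 2] = -1
rng.shuffle(s)
s = s.reshape(L, L)

def neighbours(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def group_payoff(centre, member):
    """Payoff collected by `member` from the group centred on `centre`."""
    members = [centre] + neighbours(*centre)
    n_c = sum(s[m] == 1 for m in members)
    return r * c * n_c / G - (c if s[member] == 1 else 0.0)

def total_payoff(site):
    """Steps 2-3: an agent accumulates payoff from every group it belongs to."""
    return sum(group_payoff(centre, site) for centre in [site] + neighbours(*site))

def mc_step():
    """Step 4: a random agent y revises its strategy against a random neighbour x."""
    y = (int(rng.integers(L)), int(rng.integers(L)))
    x = neighbours(*y)[rng.integers(4)]
    p = 1.0 / (1.0 + np.exp((total_payoff(y) - total_payoff(x)) / K))
    if rng.random() < p:
        s[y] = s[x]

for sweep in range(200):               # step 5: iterate until ordered or t_max
    for _ in range(L * L):
        mc_step()
    if abs(s.mean()) == 1.0:            # absorbing (ordered) phase reached
        break

print("rho_c =", 0.5 * (1.0 + s.mean()), " |<m>| =", abs(s.mean()), " sweeps =", sweep + 1)
```

Sweeping r and K (or the density of rational agents) with this loop, and recording the average cooperator density, |⟨m⟩| and the number of sweeps to absorption over many runs, yields the kind of phase diagrams discussed next.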
for greater values of , the region of around such that both ordered states are attainable increases .in such range of values the system behaves as a biased voter model , where the absorbing states of cooperation ( defection ) is favored for ( ) . in the limit , the behavior of the system approaches that of a classical unbiased voter model , no matter the value of the adopted synergy factor .plots * b * and * c * of figure [ fig : figure_1 ] confirm the main differences among the five regions of plot * a*. the former shows that the number of time steps reaches the maximum value of for intermediate entered around with low , while is reduced when the population reaches an absorbing state ( i.e. , full defection or cooperation ) .instead , plot * c * shows that the variance reaches a maximum value ( as expected ) , , when the pgg behaves like a voter model , while smaller non - null values are also obtained for the active phase , due to the existence of fluctuations . in order to obtain a deeper characterization of the phase transitions occurring in the pgg, we study the average absolute value of the magnetization , as a function of the synergy factor for different values . as shown in plot * a * of figure [ fig : magnetization ] , only for values of there are values of the synergy factors such that , since at a more abrupt phase transition between full defection and full cooperation emerges , resembling the first - order first transition predicted analytically in the case of well - mixed population of infinite size .then , we note that for all values in the range $ ] , it is possible to find a synergy factor such that .higher values of strongly affect the pgg .notably , the distance between the two thresholds of ( i.e. , and ) reduces by increasing see plot * b * of figure [ fig : magnetization ] .it is interesting to note that these two values converge to the same point as for .furthermore , we also observe that the value of , for which cooperators and defectors coexist in equal number in the active phase , is always smaller than see plot * b * of figure [ fig : magnetization ] .eventually , both plots * c * and * d * of figure [ fig : magnetization ] clearly confirms the previous investigations . for instance , for the density of cooperators becomes almost flat as in a voter model ( see plot * c * of figure [ fig : magnetization ] ) . * heterogeneous populations .* we now consider the case of a heterogenous population , where a density of agents ( ) with is inserted in a population of irrational individuals , performing coin flip to decide their strategy . in this configuration , the level of noise is controlled by the variable , and the lower its value the higher the stochasticity in the population .as shown in figure [ fig : transition ] , the phase diagram obtained as a function of the different values of noise is qualitatively comparable to the previously considered case .as goes to the pgg turns its behavior to the expected one for a population composed of only rational individuals ( i.e. , and .remarkably , the outcomes shown in figure [ fig : transition ] suggest that for values as small as , the pgg shows an active phase ( i.e. , the network reciprocity still holds ) .thus , very few rational agents are able to provide the population an overall rational behavior at equilibrium. 
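before turning to the scaling of the critical synergy values shown in plot * b *, the heterogeneous setting just described can be sketched as a per-agent noise assignment; modelling the two species by K = 0.5 for the rational agents and a very large K for the coin-flipping ones is an assumption made here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def assign_noise(L, rho, K_rational=0.5, K_irrational=1e9):
    """per-agent noise levels for the heterogeneous population: a fraction rho of
    the agents updates with K = K_rational, the rest behaves as coin flippers
    (very large K).  returns an L x L array of K values."""
    K = np.full((L, L), K_irrational)
    n_rational = int(round(rho * L * L))
    idx = rng.choice(L * L, size=n_rational, replace=False)
    K.flat[idx] = K_rational
    return K

# in the asynchronous loop sketched earlier, the revising agent (i, j) would then
# use its own noise level:  rng.random() < fermi(p_y - p_x, K[i, j])
K = assign_noise(L=20, rho=0.05)
print("fraction of rational agents:", float((K == 0.5).mean()))
```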
see plot * b * of figure [ fig : transition ] to observe the scaling for the critical values of the synergy factor : ( at which cooperators first appear ) , ( at which defectors disappears ) , and ( where cooperators and defectors coexist in equal proportion ) .the aim of this work is to provide a detailed study of the role of noise in the pgg by the lens of statistical physics .notably , the proposed model allows to define a clear relation between the noise introduced at the microscopic level in the process named strategy revision phase , and the macroscopic behavior of a population . to achieve this goal, we start from the theoretical considerations presented in , then considering a richer scenario and controlling the noise in two different cases : a homogeneous population ( i.e. all agents have the same degree of rationality ) and a heterogeneous one ( i.e. more degrees of rationality are considered ) .the phase diagram resulting from numerical simulations shows the influence of the synergy factor and of the noise in the macroscopic behavior of the population .so , beyond confirming results reported in works as , our investigation extends to further insights .notably , the phase diagram ( see figure [ fig : figure_1 ] ) shows many interesting regions . for a finite range of values of low noise , there exists a second order phase transition between two absorbing states as a function of , with the presence of a metastable regime between them ( region see plot * a * of figure [ fig : figure_1 ] ) . for higher values of noisethe active phase and network reciprocity disappears ( regions and see plot * a * of figure [ fig : figure_1 ] ) and the system always reaches an ordered state . in particular , cooperation ( defection )is usually reached if is greater ( smaller ) than the group size , even if fluctuations are possible next to the critical point due to the finite size of the system . as the level of noise increases , the system approaches the behavior of a classical voter model ( region see plot * a * of figure [ fig : figure_1 ] ) , where either one of the two ordered phase is reachedno matter the value of the synergy factor . from the analysis of the heterogenous population case ,we note that even a very small density of rational agents , , allows to observe a network reciprocity effect . in such sense , beyond the physical interpretation of our results ,we deem important to highlight that , from the perspective of egt and from that of sociophysics , the pgg is a system that correctly works even in the presence of few rational players . here , saying that the system correctly works means that the equilibrium predicted for given a by the analysis of the nash equilibria of the system in the well - mixed approximation is achieved .finally , it might be interesting to investigate the behavior of the pgg for populations with greater heterogeneity in the rationality of the agents , and possibly change it over time . as a possible interesting case, we suggest that where agents modify their due to a contact process , in a thermalization - like process .even if the average value of in the population is constant , we reckon that differences might emerge with regard to the final steady state of the corresponding homogeneous population , due to the existence of a different initial transient where the distribution of is non - trivial . 
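the thermalization-like contact process suggested above is simple to state in code; the sketch below is only an illustration of the averaging step (agents are labelled by a flat index rather than lattice coordinates, and pairs are drawn uniformly rather than among lattice neighbours).

```python
import numpy as np

rng = np.random.default_rng(5)

def thermalize_noise(K, a, b):
    """after agents a and b play, both take the average of their current noise
    levels: the total amount of K in the population is conserved while its
    distribution evolves."""
    mean_K = 0.5 * (K[a] + K[b])
    K[a] = K[b] = mean_K

# tiny demonstration: a broad initial distribution of K quickly narrows
K = rng.uniform(0.05, 5.0, size=100)
for _ in range(10_000):
    a, b = rng.choice(100, size=2, replace=False)
    thermalize_noise(K, a, b)
print("mean K:", K.mean(), " std of K:", K.std())
```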
to conclude, our work extends results reported in previous analyses and, moreover, aims to define a statistical physics interpretation of the spatial public goods game. in particular, as shown in , an analytical description that treats the population as if it were a classical spin system may shed new light on, and even explain, the behavior of those egt models studied so far only by a computational approach. the authors wish to thank matjaz perc for his helpful comments and suggestions.
in this work we analyze the role of noise in the spatial public goods game, one of the most studied games in evolutionary game theory. the dynamics of this game depends on several parameters and processes, namely the topology of interactions among the agents, the synergy factor, and the strategy revision phase. the latter is the process by which agents change their strategy; notably, rational agents tend to imitate richer neighbors in order to increase their expected payoff. by implementing a stochastic revision process, it is possible to control the level of noise in the system, so that even irrational updates may occur. in particular, we study the effect of noise on the macroscopic behavior of a finite structured population playing the public goods game. we consider both a homogeneous population, where the noise is controlled by tuning a single parameter representing the level of stochasticity in the strategy revision phase, and a heterogeneous population composed of a variable proportion of rational and irrational agents. in both cases, numerical investigations show that the public goods game has a very rich behavior that strongly depends on the amount of noise in the system and on the value of the synergy factor. to conclude, our study sheds new light on the relation between the microscopic dynamics of the public goods game and its macroscopic behavior, strengthening the link between evolutionary game theory and statistical physics.
we study point - to - point block fading channels , depicted in figure [ fig : sysmdl ] , in the presence of a hybrid adversary .the hybrid half - duplex adversary can choose to either eavesdrop or jam the transmitter - receiver channel , but not both at a given block .the goal of the transmitter is to communicate a message reliably to the receiver while keeping it asymptotically secret from the hybrid adversary . during the communication ,the state of the adversary ( jamming or eavesdropping ) changes in an _ arbitrary _ manner from one block to the next and is _ unknown _ to the transmitter .we further assume that the transmitter has _ no channel state information _( csi ) of the transmitter - to - receiver channel ( main channel ) , the transmitter - to - adversary channel ( eavesdropper channel ) and the adversary - to - receiver channel ( jamming channel ) .the receiver has perfect causal csi of the main and jamming channels .we study the secrecy capacity of this setting when ( i ) there is no receiver - to - transmitter feedback , and ( ii ) there is -bit of receiver - to - transmitter feedback sent at the _ end of each block_. the main challenge in our problem stems from the fact that simultaneously maintaining reliability and secrecy is difficult because of the adversary s arbitrary strategy in choosing its state , i.e. , jamming or eavesdropping , at each block . if we design a scheme focusing on a particular adversary strategy , with a slight change in that particular strategy , the adversary can cause a decoding error or a secrecy leakage .for instance , if our scheme assumes a fully eavesdropping adversary , then jamming even in a small fraction of the time will lead to a decoding error . likewise ,if the scheme is designed against a full jammer , then the adversary will lead to a secrecy leakage even it eavesdrops for a small fraction of time . a robust schemeshould take into account the entire set of adversary strategies to maintain reliability and secrecy .our technical contributions are summarized as follows : * we show that the secrecy capacity is zero when the receiver feedback is not available and the eavesdropper channel stochastically dominates the _ effective _ main channel gain .however , we also show that even one bit of receiver feedback at the end of each block is sufficient to make the secrecy capacity positive for almost all possible channel distributions .* under an arbitrary adversarial strategy , the receiver can not employ a well known typical set decoder since it can not assume a certain distribution for the received signal . to that end, we propose a receiver strategy in which the receiver generates artificial noise and adds it to the received signal ( i.e. , jams itself to involve typical set decoding ) .we show special cases in which artificial noise generation at the receiver is an optimal way to achieve the secrecy capacity .* for the 1-bit receiver feedback case , we propose a proof technique for the equivocation analysis , that is based on renewal theory . by this technique, we can improve the existing achievable secrecy rates in , which focus on passive eavesdropping attacks only . 
note that our adversary model covers the possibility of a full eavesdropping attack as well since it allows for the adversary to eavesdrop ( or jam ) for an arbitrary fraction of the time .* we bound the secrecy capacity when there are multiple hybrid adversaries .the challenge in bounding the secrecy capacity for multiple adversaries scenario stems from the fact that , when an adversary jams the legitimate receiver , it also interferes to the other adversaries as well .however , we show that the impact of the interference of one adversary to another adversary does not appear in the bounds , which results in a tighter upper bound .furthermore , the bounds we provide are valid for the cases in which the adversaries collude or do not collude . in the non - colluding case ,we show that the secrecy capacity bounds are determined by the adversary that has the strongest eavesdropper channel .in addition to the aforementioned set - up , we also consider a delay limited communication in which a message of fixed size arrives at the encoder at the beginning of each block , and it needs to be transmitted reliably and securely by the end of that particular block .otherwise , _ secrecy outage _ occurs at that block .we analyze delay limited capacity subject to a secrecy outage constraint .we employ a time sharing strategy in which we utilize a portion of each block to generate secret key bits and use these key bits as a supplement to secure the delay sensitive messages that are transmitted in the other portion of each block .our scheme achieves positive delay limited secrecy rates whenever the secrecy capacity without any delay constraint is positive .the wiretap channel , introduced by wyner , models information theoretically secure message transmission in a point - to - point setting , where a passive adversary eavesdrops the communication between two legitimate nodes by wiretapping the legitimate receiver .while attempting to decipher the message , no limit is imposed on the computational resources available to the eavesdropper .this assumption led to defining ( weak ) secrecy capacity as the maximum achievable rate subject to zero mutual information rate between the transmitted message and the signal received by the adversary .this work was later generalized to the non - degraded scenario and the gaussian channel . by exploiting the stochasticity and the asymmetry of wireless channels , the recent works extended the results in to a variety of scenarios involving fading channels .however , all of the mentioned works consider a passive adversary that can only eavesdrop .there is a recent research interest on hybrid adversaries that can either jam or eavesdrop . in , the authors formulate the wiretap channel as a two player zero - sum game in which the payoff function is an achievable ergodic secrecy rate .the strategy of the transmitter is to send the message in a full power or to utilize some of the available power to produce artificial noise .the conditions under which pure nash equilibrium exists are studied . in , the authors consider fast fading main and eavesdropper channels and a static jammer channel , where the adversary follows an ergodic strategy such that it jams or eavesdrop with a certain probability in each channel use . under this configuration, they propose a novel encoding scheme , called block - markov wyner secrecy encoding . 
in , the authors introduce a pilot contamination attack in which the adversary jams during the reverse training phase to prevent the transmitter from estimating the channel state correctly .the authors show the impact of the pilot contamination attack on the secrecy performance .note that , neither of these works consider an adversary that has an _ arbitrary strategy _ to either jam or eavesdrop , which is the focus of this paper .channels under arbitrary jamming ( but no eavesdropping ) strategies have been studied in the context of arbitrary varying channel ( avc ) .avc , the concept of which is introduced in , is defined to be the communication channel the statistics of which change in an arbitrary and unknown manner during the transmission of information . in , the authors derive the capacity for gaussian avcs , memoryless gaussian channels disrupted by a jamming signal that changes arbitrarily with unknown statistics . an extensive treatment of avcs , outlining the challenges and existing approaches can be found in .recently , discrete memoryless avcs with a secrecy constraint and no receiver feedback have been studied in , where the states of the channels to the both receiver and the eavesdropper remain unknown to the legitimate pair and change in an arbitrary manner under the control of the adversary .the achievable secrecy rates they propose are zero when the worst possible transmitter - to - receiver channel is a degraded version of the best possible transmitter - to - adversary channel . on the other hand , in addition to the jamming signal of the adversary , we consider the fading channels whose states can not be completely controlled by the adversary .we show the _ secrecy capacity _ is zero when the main channel gain is stochastically dominated by the eavesdropper channel gain .furthermore , under arbitrarily small receiver feedback rate ( 1-bit at the end of each block ) , we show that the secrecy capacity is _ non - zero_. the rest of this paper is organized as follows . in section [ chap :system ] , we explain the system model . in section [ chap : result ] , we present the secrecy capacity bounds for the no feedback case , and in section [ chap:1bit ] , we consider the -bit feedback case . in section [ sec : multi_adv ] , we study the multiple adversaries case . in section [ delay_limited_sec ], we present our results related to the strict delay setting . in section [ chap :numeric ] , we present our numerical results and conclude the paper in section [ chap : conc ] .we study the communication system illustrated in figure [ fig : sysmdl ] . in our systema transmitter has a message to transmit to the receiver over the main channel .the adversary chooses to either jam the receiver over the jammer channel or eavesdrop it over the eavesdropping channel .the actions of the adversary is parametrized by the state , of a switch , shown in figure [ fig : sysmdl ] .thus , our system consists of three channels : main , eavesdropper and jammer channels , all of which are block fading . in the block fading channel model , time is divided into discrete blocks each of which contains channel uses .the channel states are assumed to be constant within a block and vary independently from one block to the next .we assume the adversary is half duplex , i.e. 
, the adversary can not jam and eavesdrop simultaneously .the observed signals at the legitimate and the adversary in -th block are as follows : where is the transmitted signal , is the signal received by the legitimate receiver , is the signal received by the adversary , , , and are noise vectors distributed as complex gaussian , , , and , respectively , and is the jamming power .indicator function if the adversary is in a jamming state in -th block ; otherwise , .channel gains , , , and are defined to be the complex gains of the main channel , eavesdropper channel , and jammer channel , respectively ( as illustrated in figure [ fig : sysmdl ] ) .associated power gains are denoted with , , and . for any integer ,the joint probability density function ( pdf ) of is here , , , and are the realizations of , , and , respectively .we assume that the joint pdf of instantaneous channel gains , is known by all entities .the transmitter does not know the states of any channel , and also can not observe the strategy of the adversary in any given block .the adversary and the receiver know and , , respectively at the end of block .the receiver can observe the instantaneous strategy of the adversary , in block ( e.g. , via obtaining the presence of jamming ) only at the end of block .we generalize some of our results to the case in which the receiver can not observe .we consider two cases in which feedback from the receiver to the transmitter is not available or some limited feedback is available .in particular , in the latter case , we consider a 1-bit feedback over an error - free public channel at _ the end of each block_. we denote the feedback sent at -th time instant as .for the 1-bit feedback case , is an element of and is a function of if time instant corresponds to the end of a block , i.e. , for any block index . for other time instants, the receiver does not send feedback : if for all .for the no feedback case , for all .the transmitter encodes message over blocks . the transmitted signal at -th instant, can be written as where is the encoding function used at time .we assume the input signals satisfy an average power constraint such that \leq p_t \label{pw_cst}\ ] ] for all , where is the message set . here , the expectation is taken over ] such that \geq 1-\epsilon ] ] ^{+} ] and ] , and and follow from the continuity of the cdf of .hence , satisfy the constraint given in the upper bound . when , the expectation term in is zero .remark [ dom_remark ] is easy to state for the fading scenario in which and are exponentially distributed random variables .\leq \mathbb e[h_e] ] . in ,the authors show that gaussian noise that has the same covariance matrix with the original additive noise component minimizes when is gaussian distributed .hence , we replace with {n\times n}) ] , < \infty ] also positive since > 0 ] to bit sequence }} ] with a stochastic mapping as described in , where }}] ] each of which has size of bits such that }\rceil)] ] are decoded successfully at each renewal point . here , is the random variable given in theorem [ 1bitrate_accum ] , and it represents the number of transmissions for a bit group .thus , can be considered as a random amount of accumulated mutual information at the adversary corresponding to the transmissions of a bit group .theorem [ 1bitrate ] follows when we apply the renewal reward theorem , where the rewards are the successfully decoded secure bits at each renewal instants , i.e. 
, } \mathbb e\left[r- \log\left(1+p_t\sum_{i=1}^t \tilde{h}_{e}(i)\right ) \right]^{+}\ ] ] with probability 1 , where is defined to be the amount of secure bits ( explained above ) accumulated at the receiver up to block . instead of employing mrc strategy, the receiver can employ a plain automatic repeat request ( arq ) strategy in which the receiver discards the received sequence when the decoding error occurs on -th block .impact of plain arq on the lower bound is captured with the following corollary .( * secrecy capacity lower bound with plain arq*)[1bitrate ] the secrecy capacity , is bounded by where ^{+}\label{1bit}\ ] ] where is provided in . in , , is a random variable with probability mass function ( pmf ) , , , and the proof of corollary [ 1bitrate ] can be found at the end of achievability proof of theorem [ 1bitrate_accum ] .it can be observed that the lower bound in corollary [ 1bitrate ] is not larger than the lower bound in theorem [ 1bitrate_accum ] . in ,the authors consider a scenario in which the adversary is a fully eavesdropper , and the transmitter has no information of the states of main and eavesdropper channels , which change from one block to the next randomly as described in our scenario . for the casein which 1-bit feedback is available at the end of each block , the authors employ the plain arq strategy mentioned above to achieve the secrecy rate in theorem 2 of .however , in the secrecy analysis , the authors consider the impact of the bit groups , that are successfully decoded only in a single transmission on the equivocating rate . in this paper , regardless of the number of the required transmissions for the bit groups , we consider the impact of the each bit group on the equivocation rate with the strategy mentioned in the proof sketch of theorem [ 1bitrate_accum ] .thus , we improve the achievable secrecy rate in by employing a renewal based analysis and mtc .in theorem [ 1bitrate_accum ] and corollary [ 1bitrate ] , we observe that the information corresponding to the retransmissions of a bit group is accumulated at the adversary , which reduces the lower bound .as we will show , we can avoid this situation if the main csi is available at the beginning of each block at the transmitter in addition to the 1-bit feedback at the end of each block . 
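before turning to that rate adaptation strategy, the renewal-reward accounting used in theorem [ 1bitrate_accum ] and corollary [ 1bitrate ] can be illustrated with a rough monte carlo sketch. everything in it is an assumption made only to have something concrete to simulate: unit-mean rayleigh block fading on all links, an adversary that jams i.i.d. with probability q_jam (the actual bounds hold for arbitrary strategies), a fixed codebook rate R, an ack whenever the instantaneous main-channel rate exceeds R, and a per-sample positive part instead of the exact expressions of the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

def arq_secure_rate(R=2.0, Pt=10.0, Pj=10.0, q_jam=0.3, n_groups=50_000):
    """rough monte carlo sketch of the renewal-reward accounting with plain arq.
    each renewal ends when a bit group is acked; the adversary combines the
    observations of all eavesdropped retransmissions of that group."""
    rewards, lengths = [], []
    for _ in range(n_groups):
        he_sum, T = 0.0, 0
        while True:
            T += 1
            jam = rng.random() < q_jam                     # illustrative i.i.d. adversary strategy
            h_m, h_e, h_z = rng.exponential(size=3)        # unit-mean rayleigh power gains
            if not jam:                                    # half duplex: it listens only while eavesdropping
                he_sum += h_e
            denom = (1.0 + Pj * h_z) if jam else 1.0
            if np.log2(1.0 + Pt * h_m / denom) >= R:       # ack: bit group delivered this block
                break
        leakage = np.log2(1.0 + Pt * he_sum)               # accumulated eavesdropper information
        rewards.append(max(R - leakage, 0.0))              # secure bits banked at this renewal
        lengths.append(T)
    return np.mean(rewards) / np.mean(lengths)             # renewal reward: secure bits per block

print("estimated secure bits per block:", arq_secure_rate())
```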
by using the rate adaptation strategy that we will introduce, the legitimate pair can ensure that information corresponding to the retransmissions of a bit group is not accumulated at the adversary .( * achievable secrecy rate with main csi*)[1bitrate_main csi ] if main csi is available at the transmitter and the adversary , the secrecy capacity with 1-bit feedback at the end of each block is lower bounded by ^{+}\leq c_s^{\text{1-bit+csi}},\label{1bit_maincsi}\ ] ] where .we omit the proof since it follows from an identical line of argument as the proof of theorem [ 1bitrate_accum ] .the only difference is that the legitimate pair employs a plain arq strategy as in corollary [ 1bitrate ] , and the transmitter employs a rate adaptation strategy to utilize the main csi such that if ; otherwise , , where is the rate of the gaussian codebook used in the achievability proof of theorem [ 1bitrate_accum ] .since the transmitter keeps silent on the blocks in which condition is satisfied , the decoding error event occurs only when the adversary is in the jamming state .hence , the adversary can not hear the retransmissions because of the half duplex constraint , and information that corresponds to the retransmissions of a bit group is not accumulated as seen in .note that main csi combined with 1 bit feedback provides the transmitter _perfect knowledge _ of the adversary jamming state ( but with one block delay ) since an ack indicates that the adversary is in the eavesdropping state , and a nak indicates that the adversary is in the jamming state in the previous block .therefore , we do not need to employ a conservative secrecy encoder to account for the adversary that eavesdrops at all times .in this section , we study the multiple adversary scenario in which there are half duplex adversaries each of which has an arbitrary strategy from one block to the next .we focus on the no feedback case .the results given in this section can be extended to the 1-bit feedback case straightforwardly .since there are multiple adversaries , the message has to be kept secret from each adversary . moreover ,when an adversary jams the receiver , it also jams the other adversaries .consequently , the observed signals at the legitimate receiver and adversary in -th block can be written as follows : where is the jamming signal of adversary , and is distributed with . as depicted in figure [ fig : sysmdl2 ] , , , and are defined to be the independent complex gains of transmitter - to - adversary channel , adversary -to - receiver channel , and adversary -to - adversary channel , respectively . associated power gains are denoted with , , and .indicator function , if the adversary is in a jamming state in -th block ; otherwise , . for the multi adversary scenario , in is replaced with , and the constraints - have to be satisfied for all .we study two types of multi - adversary scenarios : colluding and non - colluding . in the colluding scenario ,the adversaries share their observations , error free whereas in the non - colluding scenario , the adversaries are not aware of the observations of each other .hence , for the non - colluding scenario , constraint needs to be satisfied for each adversary and for the colluding scenario , equivocation is conditioned on the adversaries joint knowledge , i.e. , in is replaced with .we use notations and to denote the secrecy capacities for the colluding case and the non - colluding case , respectively .we first analyze the non - colluding scenario . 
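before stating the bounds of the next theorem, their overall structure can be illustrated with a small monte carlo sketch: one representative term per adversary, minimised over the adversaries, which makes visible that the value is governed by the strongest eavesdropper channel. the rayleigh fading model, the mean eavesdropper gains and the power values are illustrative assumptions, and the sketch deliberately ignores the minimisation over joint channel distributions that appears in the exact bounds.

```python
import numpy as np

rng = np.random.default_rng(6)

def noncolluding_terms(Pt=10.0, Pj=10.0, eve_means=(0.5, 1.0, 2.0), n_samples=200_000):
    """per-adversary terms of the form E[(log(1 + Pt*h_m/(1 + Pj*h_z)) - log(1 + Pt*h_e))^+],
    evaluated by monte carlo under an assumed rayleigh model; the governing value is the
    minimum over the adversaries, i.e. the one with the strongest eavesdropper channel."""
    h_m = rng.exponential(size=n_samples)
    h_z = rng.exponential(size=n_samples)
    terms = []
    for mean_e in eve_means:
        h_e = rng.exponential(scale=mean_e, size=n_samples)
        diff = np.log2(1.0 + Pt * h_m / (1.0 + Pj * h_z)) - np.log2(1.0 + Pt * h_e)
        terms.append(float(np.maximum(diff, 0.0).mean()))
    return dict(zip(eve_means, terms)), min(terms)

per_adversary, governing = noncolluding_terms()
print("per-adversary terms:", per_adversary, " -> governing value:", governing)
```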
*( secrecy capacity bounds for non - colluding adversaries ) * [ multi_no_csi ] the secrecy capacity of the non - colluding multiple adversary scenario , under the no feedback case is bounded by where \right]^{+}\\&c_s^{nc+}= \min_{1\leq s \leq s}\quad\min_{p_{\tilde{h}_{e_1},\dots,\tilde{h}_{e_s},\tilde{h}_m,\tilde{h}_{z_1},\dots,\tilde{h}_{z_s } } } \mathbb e\left[\left(\log\left(1+\frac{p_t\tilde{h}_m}{1+p_j\tilde{h}_z}\right)-\log\left(1+p_t\tilde{h}_{e_s}\right)\right)^{+}\right]\\ & \qquad \qquad\qquad\qquad\text{subject to : } p_{\tilde{h}_{e_1},\dots,\tilde{h}_{e_s}}=p_{h_{e_1},\dots , h_{e_s}},\;\ ; p_{\tilde{h}_m,\tilde{h}_{z_1},\dots,\tilde{h}_{z_s } } = p_{h_m , h_{z_1},\dots , h_{z_s } } \nonumber \end{aligned}\ ] ] where is the number of the adversaries , , and .the proofs of the lower and upper bounds can be found in appendix [ multi_no_csi_proof ] . *( secrecy capacity bounds for colluding adversaries)*[colluding ] the secrecy capacity of the colluding multiple adversary scenario , under the no feedback case is bounded by where ^{+}\nonumber\\ & c_s^{c+}=\min_{p_{\tilde{h}_{e_1},\dots,\tilde{h}_{e_s},\tilde{h}_m,\tilde{h}_{z_1},\dots,\tilde{h}_{z_s } } } \mathbb e\left[\left(\log\left(1+\frac{p_t\tilde{h}_m}{1+p_j\tilde{h}_z}\right)-\log\left(1+p_t\sum_{k=1}^s \tilde{h}_{e_s}\right)\right)^{+}\right]\\ & \qquad \qquad\text{subject to : } p_{\tilde{h}_{e_1},\dots,\tilde{h}_{e_s}}=p_{h_{e_1},\dots , h_{e_s}},\;\ ; p_{\tilde{h}_m,\tilde{h}_{z_1},\dots,\tilde{h}_{z_s } } = p_{h_m , h_{z_1},\dots , h_{z_s } } \nonumber\end{aligned}\ ] ] where , , and are defined in theorem [ multi_no_csi ] .the proof of theorem [ colluding ] is similar to the proof theorem [ thm : nocsi ] since the colluding scenario can be considered as a single adversary scenario , in which the adversary observes instead of . as seen in theorems [ multi_no_csi ] and[ colluding ] , colluding strategy severely affects the achievable secrecy rate . *( independence of upper bound from cross - interference ) * in , we observe that the received signal at -th adversary includes the jamming signals of the other adversaries , i.e. , .we expect that these cross interference terms at the adversaries help the legitimate pair to communicate at high secrecy rates . however , as seen in theorem [ multi_no_csi ] and [ colluding ] , the upper bounds ( and also lower bounds )are independent of these jamming terms .note that the secrecy constraint in the proof of upper bounds makes the minimization of the equivocation rate over the adversary strategies arbitrarily close to the message rate .the strategies that minimize the equivocation rate in the proofs are the ones in which all adversaries eavesdrop the main channel .hence , the upper bound derivation becomes independent of the cross interference across the adversaries .the detailed information can be found in appendix [ multi_no_csi_proof ] .in this section , we address the problem with -block delay constraint : at the beginning of each block , , message becomes available at the encoder , and needs to be securely communicated to the receiver by the end of block .note that , the definition of secrecy capacity needs to be restated with the delay requirement .we consider a set of codes of rate where the transmitter maps message , and the previously transmitted signals also depends on the previously transmitted signals .it is required to utilize secrecy banking argument , in which shared secrets are stored to be utilized in later blocks . 
] to , and the decoder maps the received sequence to .the error event is defined as when can not be communicated reliably or securely at block , secrecy outage event occurs .the secrecy outage event ( with parameter ) at block is defined as where information outage occurs if accumulated mutual information on the message remains below its entropy rate and the equivocation outage occurs if the equivocation rate are mutually independent , they may be dependent conditioned on eavesdroppers received signal , therefore equivocation expression includes conditioning on .] of message is less than [ secrecyoutageconstraint] rate is achievable securely with at most probability of secrecy outage if , for any fixed , there exists a sequence of codes of rate no less than such that , for all large enough , and such that , the conditions are satisfied for all such that , and for all possible adversary strategies . the secrecy capacity with outage is the supremum of such achievable secrecy rates .we use to denote -outage secrecy capacity under no feedback , and use to denote -outage secrecy capacity under 1-bit feedback at the end of each block .note that we do not impose a secrecy outage constraint on the first blocks , which is referred to as an initialization phase , used to generate initial common randomness between the legitimate nodes .note that this phase only needs to appear _ once _ in the communication lifetime of that link .in other words , when a session ( which consists of blocks ) between the associated nodes is over , they would have sufficient number of common key bits for the subsequent session , and would not need to initiate the initialization step again .( * time sharing lower bound for -outage secrecy capacity*)[t : delaynofeedback ] for no feedback , , where ^{+}\geq r_s - r_{r0 } \right\}\bigg{)}\geq 1-\alpha \label{delay_nofb_outage}\\ & r_s\leq \tilde{r}_s , r_{r0}=\gamma c_s^{-},\gamma \in [ 0,1],\label{delay_nofb_keyrate}\end{aligned}\ ] ] where is provided in .similarly , for 1-bit feedback , -outage secrecy capacity is lower bounded by , where is in the form ( [ obj]-[delay_nofb_keyrate ] ) , except is replaced with . here, we provide a sketch of achievability .the complete proof is in appendix [ app : delay_nofb ] . in theorem [ t : delaynofeedback ] , ] and , then .we can observe this fact by setting in theorem [ t : delaynofeedback ] .furthermore , note that if ( remark [ nonzero ] ) .hence , by setting , we can get for any ] , =2 ] .a notable observation is that the lower bound for the no feedback case in theorem [ thm : nocsi ] decreases with beyond a certain point .the reason is that the lower bound , given in theorem [ thm : nocsi ] is not always an increasing function of since the positive operator is outside of the expectation term .the lower bound to the -outage capacity without feedback , given in theorem [ t : delaynofeedback ] also decreases with since the achievabilitiy strategy employs a key generation step in which keys are generated with the strategy used in the achievability proof of theorem [ thm : nocsi ] .let us replace in the lower bounds with dummy variable .we conclude that the lower bounds in theorems [ thm : nocsi ] and [ t : delaynofeedback ] can be further tightened by maximizing them over ] , =2 ] , i.e. 
, the eavesdropper channel _ stochastically dominates _ the effective main channel .as seen in figure [ fig:2 ] , we observe that 1-bit feedback sent at the end of each block is sufficient to make the secrecy capacity non - zero .furthermore , we observe that the secrecy capacity of the no feedback case is zero ( remark [ dom_remark ] ) . the importance of the feedback can also be seen in the delay limited set - up , where no feedback strategy results in a zero achievable rate as opposed to the strategy employing 1-bit feedback .we illustrate corollary [ asym_power ] in figure [ fig:3 ] . for each plot in figure [ fig:3 ], we keep the ratio of transmission power constraint and adversary power same , and we increase the jamming power .as mentioned in corollary [ asym_power ] , in figure [ fig:3 ] , we observe that the secrecy capacity with no feedback goes to zero , when the transmission power constraint and adversary power increase in the same order .= 5 ] , and =2 ] , =2 ] . , scaledwidth=50.0% ] = 1 ] , and =1 ] for some .generate codebook containing independently and identically generated codewords ] , the secrecy encoder draws index from the uniform distribution whose sample space is ] such that , where is the set of jointly typical sequences with _ analysis of the probability error and secrecy : _ random coding argument is used to show that there exists sequences of codebooks that satisfy the constraint ( [ cond1 ] ) and ( [ cond2 ] ) simultaneously . since ] , where =h(w|z^{nm } , g^m , \mathcal{c}).\ ] ] hence , there exists a sequences of codebooks that satisfy both ( [ cond1 ] ) and ( [ cond2 ] ) since we have \to r_s ] and for sufficiently large since \ ] ] , and follows from the fano s inequality .let s define and ] , ( [ fano ] ) is satisfied for sufficiently large .the reason is that since , as from the random coding argument .we now provide the proof of the upper bound in theorem [ thm : nocsi ] .suppose that is achievable rate . from definition( [ cond1])-([cond2 ] ) and fano s inequality , we have for any with .here , , , and go to zero as and .adversary strategy , solves lhs of ( [ convmain1 ] ) and strategy , solves lhs of ( [ conveave1 ] ) .hence , we have where here , the lhs of equals to that of since conditioning reduces the entropy and the lhs of equals to that of since forms a markov chain .we now show that if is achievable , we have where ] .note that here , the message is conditioned on random vector , instead of in . with the similar steps , we can show that where as and .the upper bound , follows when we combine and with the following steps : + \gamma_{nm}\\ & \stackrel{(f)}{\leq } \mathbb e\left[\left(\log\left(1+\frac{\frac{1}{nm}\sum_{i=1}^{m}\sum_{j=1}^n p_{t_{ij}}\tilde{h}_m}{1+p_j\tilde{h}_z}\right)-\log\left(1+\frac{1}{nm}\sum_{i=1}^{m}\sum_{j=1}^n p_{t_{ij}}\tilde{h}_e\right)\right)^{+}\right]+ \gamma_{nm}\label{jensen_1}\\ & \stackrel{(g)}{\leq } \mathbb e\left[\left(\log\left(1+\frac{p_t\tilde{h}_m}{1+p_j\tilde{h}_z}\right)-\log\left(1+p_t\tilde{h}_e\right)\right)^{+}\right]+\gamma_{nm},\end{aligned}\ ] ] where the notation indicates the -th channel use of -th block and .note that as and . in, we define new random variables , i.e. , and , , and . here , are i.i.d random variables with , and is independent from . in a similar way , are i.i.d random vectors with , and are independent from . + for the derivation above , follows from the fact and have the same joint pdf with and , respectively .furthermore , note that and form markov chain . 
in a similar way, and form markov chain . follows from the fact that forms a markov chain . and follows from the memoryless property of the channel and from the fact conditioning reduces the entropy .the power constraint in implies that \leq p_t ] and are independent random variables .define = \mathbb e\left[|x(i , j)|^2|\tilde{g}(i)=g(i)\right] ] has a finite expectation for all with the following analysis : \leq \mathbb e\left[\log\left(1 + \frac{b h_m}{h_z}\right)\right]\\ & = \mathbb e[\log(h_z+bh_m ) ] -\mathbb e[\log(h_z ) ] \nonumber \\ & \leq \log\left(b\mathbbe[h_m]+\mathbb e[h_z]\right)-\mathbb e[\log(h_z ) ] \label{jensen}\\ & \leq \log\left(b\mathbb e[h_m]+\mathbb e[h_z]\right ) -\int_0 ^ 1 \log(h_z ) f_{h_z}(h_z ) \;d h_z\\ & \leq \log\left(b\mathbb e[h_m]+\mathbbe[h_z]\right ) -a\int_0 ^ 1 \log(h_z ) \;d h_z\\ & = \log\left(b\mathbb e[h_m]+\mathbb e[h_z]\right ) + a\log(e ) \label{log}\\ & < \infty , \label{bound}\end{aligned}\ ] ] for all , where .here , ( [ jensen ] ) follows from the jensen s inequality , ( [ log ] ) follows from the fact that , and ( [ bound ] ) follows from the fact that , \mathbb e[h_z ] < \infty ] with probability 1 .hence , <\infty ] for some . for the secrecy analysis ,let s define , . with the same steps used in the secrecy analysis of the proof of theorem [ thm : nocsi ] , we can get for any and sufficiently large , where .\ ] ] we now show that , for any , for sufficiently large . to prove ( [ m12 ] ) , suppose that codewords correspond to message is partitioned into groups .let s define random variable that represents the group index of .then , we have for any and sufficiently large . here , ( [ m13 ] ) follows from the random coding argument as in ( [ fano1])-([fano ] ) of the proof of theorem [ thm : nocsi ] .the proof follows when we combine ( [ m1234 ] ) and ( [ m12 ] ) .we now provide the upper bound .suppose that is achievable rate . from definition( [ cond1])-([cond2 ] ) and fano s inequality , we have for any where ] , where is defined in theorem [ 1bitrate_accum ] .generate codebook containing independently and identically generated codewords ] with probability 1 . here, is the accumulated reward at the receiver up to -th block for the renewal process explained in the proof sketch of theorem [ 1bitrate_accum ] , where the reward at each renewal is bits . to send a message ] .then , the secrecy encoder maps into bits and decompose bits into groups of bits . to send the index , the channel encoder transmits in each block by using codebook .when nak is received , the channel encoder sends the same bit group transmitted at the previous block .detailed information about the encoding can be found in the proof sketch of theorem [ 1bitrate_accum ] .+ _ decoding : _ let be the received sequence . if the adversary is in the eavesdropping state , i.e. 
, , the channel decoder draws from and a noise sequence from to obtain the channel decoder collects s that correspond to the same bit group and apply mrc to these observations as explained in the proof sketch .then , the channel decoder employs joint typicality decoding as in the no feedback case ( mentioned in the appendix [ section : nocsi ] ) ._ secrecy analysis : _ for the secrecy analysis , let s define , .the equivocation analysis averaged over codebooks is as follows : =\frac{1}{mn}h(w|\{z^{n}(i)\}_{i:\phi(i)=0 } , g^m , \mathcal{c } ) \\ &\stackrel{(a)}{\geq}\frac{1}{mn } h(w|\hat{z}^{nm } , g^m,\mathcal{c } ) \nonumber \\ & = \frac{1}{mn}h\left(w , \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm},g^m , \mathcal{c}\right)\nonumber\\ & \qquad-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\\ & \geq \frac{1}{mn}h\left(\{x^{n}(i)\}_{i : i\in a}|\mathcal{c}\right)\nonumber\\ & \qquad-\frac{1}{mn}i\left(\{x^{n}(i)\}_{i : i\in a};\hat{z}^{nm}|g^m , \mathcal{c}\right)\nonumber\\ & \qquad\qquad-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right ) \nonumber\\ & = \frac{1}{mn}\sum_{i\in a } h\left(x^n(i)|\hat{z}^{nm},\mathcal{c},g^m\right)\nonumber\\ & \qquad -\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right ) \nonumber\\ & = \frac{1}{mn}\sum_{i\in a } \left[h(x^n(i))- i\left(x^n(i);\hat{z}^{nm}|\mathcal{c},g^m\right)\right]^{+}\nonumber\\ & \qquad-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\nonumber\\ & \hspace{-0.5cm}\stackrel{(b)}{=}\frac{1}{mn } \sum_{i\in a}\left[nr - i\left(x^n(i ) ; \hat{z}^n(i - r(i)+1:i)|\mathcal{c},g^m\right)\right]^{+ } \nonumber\\ & \;-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\label{l7}\\ & \hspace{-0.5cm}\geq\frac{1}{mn } \sum_{i\in a}\left[nr - i\left(x^n(i),\mathcal{c } ; \hat{z}^n(i - r(i)+1:i)|g^m\right)\right]^{+}\nonumber\\ & \qquad-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\nonumber\\ & \stackrel{(c)}{=}\frac{1}{mn } \sum_{i\in a}\left[nr - i\left(x^n ; \hat{z}^n(i - r(i)+1:i)|g^m\right)\right]^{+}\nonumber\\ & \;-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\label{l8}\\ & \stackrel{(d)}{= } \frac{1}{m}\sum_{i\in a}\left[r - i\left(x ; \hat{z}_{(i - r(i)+1)\dots , \hat{z}_i } |g^m\right)\right]^{+}\nonumber\\ & \;-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right ) \label{l9}\\ & = \frac{1}{m}\sum_{i\in a } \left[r- \log\left(1+p_t\sum_{j=1}^{r(i ) } h_{e}\left(i - j+1\right)\right ) \right]^{+}\nonumber\\ & -\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)\label{l10}\\ & \stackrel{(e)}{\geq } c_s^{-\text{1-bit } } \nonumber\\ & \quad-\frac{1}{mn}h\left ( \{x^{n}(i)\}_{i : i\in a}|\hat{z}^{nm } , w , g^m,\mathcal{c}\right)-\epsilon\label{l11}\\ & \geq c_s^{-\text{1-bit}}-2\epsilon \label{l12}\end{aligned}\ ] ] for any and for sufficiently large , where is the required number of transmissions for the bit group that is successfully decoded on -th block and is the set of blocks on which decoding occurs successfully , i.e. , .here , follows from the fact that conditioning reduces the entropy . in , ] . in the encoding step ,we select such that }\rvert \leq \epsilon ] . here, and and go to zero as and .the upper bound follows with following steps . where and as and . 
by using the following lemmas, we can reduce the mutual information term in to a simplier form .since lemma [ ashish_lem1 ] and lemma [ ashish_lem2 ] are similar to lemma 1 and lemma 2 of , respectively , we omit the proofs .[ ashish_lem1 ] for each block , we have that [ ashish_lem2 ] for each block , we have that as in , by successively applying lemma 1 and lemma 2 , we can show the following inequality . hence , we have \mathbb p\left(f(g(i))=1\right)\;\right.\nonumber\\ & \qquad\qquad\qquad\qquad\qquad \qquad + \left. \mathbb e\left[\log\left(1+\frac{p_{t_{ij}}h_m(i)}{1+p_{t_{ij}}h_e(i)}\right)\bigg\vert f(g(i))=0\right ] \mathbb p(f(g(i))=0)\right)\\ & \stackrel{(b)}{\leq } \min_f \bigg{(}\mathbb e\left[\log\left(1+\frac{\frac{1}{mn } \sum_{i=1}^m\sum_{j=1}^np_{t_{ij}}h_m}{1+p_jh_z}\right)\bigg \vert f(g)=1\right ] \mathbb p\left(f(g)=1\right)\;\nonumber\\ & \qquad\qquad\qquad\qquad \qquad + \mathbb e\left[\log\left(1+\frac{\frac{1}{mn}\sum_{i=1}^m\sum_{j=1}^n p_{t_{ij}}h_m}{1+\frac{1}{mn } \sum_{i=1}^m\sum_{j=1}^np_{t_{ij}}h_e}\right)\bigg\vert f(g)=0\right ] \mathbb p(f(g)=0)\bigg{)}\label{jensen_2}\\ & \stackrel{(c)}{\leq } \min_f\bigg{(}\mathbb e\left[\log\left(1+\frac{p_th_m}{1+p_jh_z}\right)\bigg \vert f(g)=1\right ] \mathbb p\left(f(g)=1\right)\;\nonumber\\ & \qquad\qquad\qquad\qquad \qquad + \mathbb e\left[\log\left(1+\frac{p_th_m}{1+ p_th_e}\right)\bigg\vert f(g)=0\right ] \mathbb p(f(g)=0)\bigg{)}\label{inc_fun}\\ & = \min_f\mathbb e\left[\log\left(1+\frac{p_th_m}{1+p_jh_zf(g)+p_th_e ( 1-f(g))}\right)\right]\label{max_bef}\\ & \stackrel{(d)}{=}\mathbb e\left[\log\left(1+\frac{p_th_m}{1+\max\left(p_th_e , p_jh_z\right)}\right)\right ] , \label{max_pro}\end{aligned}\ ] ] where the notation indicates the -th channel use of -th block , ] , and ] , where the expectation is taken over and . also , note that ] .then , follows from the fact that gaussian distribution maximizes the conditional mutual information for both values of . in, follows from jensen s inequality and from the fact that and are concave functions of for any and . in, follows from the fact that and are non - decreasing functions in for any and . in , follows from the fact that minimizes the expectation in , where if ; otherwise , .fix ] , and are defined in a similar way . through this appendix, indicates -th block of -th superblock .encoding and decoding strategies are summarized in figure [ fig : delay1 ] and figure [ fig : delay2 ] .we begin with key generation .let . at the beginning of superblock , the transmitter picks key from random variable which is uniformly distributed in . by using the encoding strategy in the proof of theorem 1 , the transmitter maps to codeword process is repeated for every superblock .next lemma provides a lower bound to achievable key rates .[ mod_nofb ] for any , there exit , and a sequence of length channel codes for which the following are satisfied under any strategy of the adversary , : for any superblock , for any , and for any where .the proof follows from theorem 1 .now , we describe the transmission of delay limited message , we skip the message transmission at first blocks , and declare _ secrecy outage_. 
] , illustrated in figure [ fig : delay1 ] .let and .message of size bits is divided to two messages and , of size and , respectively .we also divide key , generated in previous superblock , into equivalent size chunks such that ] .notice that the second event in the probability term is equal to ^{+}\geq r_s - r_{r0 } \right\}$ ] .let s define set containing s that satisfy these four conditions .the lower bound to outage secrecy capacity can be written as .it is easy to observe that if , the corresponding has to be equal to .then , the lower bound can be written as which concludes the proof . 1 a. d.wyner , `` the wire - tap channel '' ._ bell syst ., 54(8):13551387 , october 1975 .i. csiszar and j. korner , `` broadcast channels with confidential messages '' _ ieee trans .inf . theory _ ,3 , pp . 339348 , may 1978 .s. k. leung - yan - cheong and m. e. hellman , `` the gaussian wire - tap channel , '' _ ieee trans .inf . theory _ , vol .it-24 , no .4 , pp . 451456 , jul . 1978 .y. liang , h. poor , and s. shamai , `` secure communication over fading channels , '' _ ieee trans .on inf . theory _54 , no . 6 , pp . 24702492 , june 2008 .p. gopala , l. lai , and h. el gamal , `` on the secrecy capacity of fading channels , '' _ ieee trans .inf . theory _ ,54 , no . 11 , pp .50595067 , nov .d. blackwell , l. breiman , and a. j. thomasian , `` the capacities of certain channel classes under random coding , '' _ ann .31 , pp . 558567 , 1960 .a. lapidoth and p. narayan , `` reliable communication under channel uncertainty , '' _ ieee trans .inf . theory _44 , no . 6 , pp . 21482177 , oct .i. csiszar and p. narayan , `` capacity of the gaussian arbitrarily varying channel , '' _ ieee trans .inf . theory _1 , pp . 1826 , jan . 1991 .e. molavianjazi , m. bloch , and j. laneman , `` arbitrary jamming can preclude secure communications , '' _ in proc .of 47th annual allerton conference on communication , control , and computing _ , oct .2009 , pp .i. bjelakovic , h. boche , and j. sommerfeld , `` strong secrecy in arbitrarily varying wiretap channels '' _ in proc . of the ieee information theory workshop _ , sept .2012 , pp . 617621 .x. tang , r. liu , p. spasojevic , and h. v. poor ,`` on the throughput of secure hybrid - arq protocols for gaussian block - fading channels , '' _ ieee trans . inf . theory_,vol .55 , no . 4 , pp . 15751590 , apr . 2009 .g. amariucai and s. wei , `` half - duplex active eavesdropping in fast fading channels : a block - markov wyner secrecy encoding scheme , '' _ ieee trans . on inf .58 , no . 7 , pp .46604677 , july 2012 .a. mukherjee and a. l. swindlehurst , `` jamming games in the mimo wiretap channel with an active eavesdropper , '' _ ieee trans .signal process .1 , pp . 8291 jan .x. zhou , b. maham , and a. hjrungnes , pilot contamination for active eavesdropping , " _ ieee trans .wireless commun .3 , pp . 903907 , mar .z. rezki , a. khisti , and m. alouini , `` on the ergodic secret message capacity of the wiretap channel with finite - rate feedback , '' in _ proc .inf . theory _, july 2012 , pp . 239243 . t. m. cover and j. a. thomas , elements of information theory .new york : wiley , 1991 s. ross , stochastic processes .john wiley & sons , 1995 .s. n. diggavi and t. m. cover , `` the worst additive noise under a covariance constraint , '' _ ieee trans .inf . theory _ , vol .47 , no . 7 , pp . 30723081 , nov .2001 g. gaire and d. tuninetti , the throughput of hybrid - arq protocols for the gaussian collision channel , " _ ieee trans .inf . theory _ , vol .5 , pp . 
1971-1988, jul. 2001. o. gungor, j. tan, c. e. koksal, h. e. gamal, and n. b. shroff, `` secrecy outage capacity of fading channels, '' _ ieee trans. inf. theory _, vol. , no. 9, pp. 5379-5397, sept. 2013. a. khisti, `` secret key agreement over non-coherent block fading channels with public discussion '', http://www.comm.utoronto.ca/~akhisti/main.pdf, 2013. y. o. basciftci, c. e. koksal, and f. ozguner, `` to obtain or not to obtain csi in the presence of hybrid adversary, '' in _ proc. int. symp. inf. theory _, july 2013, pp. 2865-2869.
we consider a block fading wiretap channel , where a transmitter attempts to send messages securely to a receiver in the presence of a hybrid half - duplex adversary , which arbitrarily decides to either jam or eavesdrop the transmitter - to - receiver channel . we provide bounds to the secrecy capacity for various possibilities on receiver feedback and show special cases where the bounds are tight . we show that , without any feedback from the receiver , the secrecy capacity is zero if the transmitter - to - adversary channel stochastically dominates the _ effective _ transmitter - to - receiver channel . however , the secrecy capacity is non - zero even when the receiver is allowed to feed back only one bit at the end of each block . our novel achievable strategy improves the rates proposed in the literature for the non - hybrid adversarial model . we also analyze the effect of multiple adversaries and delay constraints on the secrecy capacity . we show that our novel time sharing approach leads to positive secrecy rates even under strict delay constraints .
a lack of consistent definition of chiral fermions on the lattice has hampered definitive and convincing investigations of chiral aspects of quantum chromodynamics ( qcd ) until now .thus important physics issues , such as the spontaneous breaking of the chiral symmetry at low temperatures and its restoration at finite temperature , have remained hostages to technical questions such as the fine - tuning of the bare quark mass ( wilson fermions ) or the precise number of massless flavours ( staggered fermions ) .recent developments in defining exact chiral symmetry on the lattice have therefore created exciting prospects of studying an enormous amount of physics in a cleaner manner from first principles . however , the corresponding dirac operators are much more complicated . without good control of the algorithms needed to deal with them ,one is unlikely to derive the full benefit of their better chiral properties .our goal in this paper is to evaluate the efficiency of the most widely used , or most promising , algorithms . by efficiency we mean both the speed of the algorithm , which is measured by the computer time required to achieve a certain accuracy in the solution , and the adaptability , which is measured by how the speed scales as the problem becomes harder .this study is made for various values of the required accuracy along with the corresponding analysis on the accuracy obtained for the expected properties of the resulting dirac operator such as the ginsparg - wilson relation , the central circle relation , hermiticity or normality .in particular , we have observed that these properties can be satisfied accurately only if the sign computation of the wilson dirac operator has high enough precision .one version of chiral fermions on the lattice is the overlap formalism .the overlap dirac operator ( ) is defined in terms of the wilson - dirac operator ( ) by the relation d = 1 + d_w ( d_w^d_w)^-1/2 .[ overlap]in this paper we shall use the shorthand notation m = d_w^d_w .[ defm]the wilson - dirac operator ( for lattice spacing ) is given by d_w = _ + m , [ wilsond]where and are ( gauge covariant ) forward and backward difference operators respectively .it has been shown that as long as the mass is in the range , the above overlap dirac operator is well - defined , and corresponds to a single massless fermion .furthermore , it satisfies the ginsparg - wilson relation _ 5 d + d_5 = d _ 5 d , [ gwrel]which leads to a good definition of chirality on the lattice and has been shown to correspond to an exact chiral symmetry on the lattice .the overlap dirac operator , , enjoys many nice properties in addition to the ginsparg - wilson relation in eq .( [ gwrel ] ) .in particular , it satisfies -hermiticity d^= _ 5d_5 .[ g5her]together with the ginsparg - wilson relation , this implies that is normal , _i.e. 
_ , [ d , d^]=0 .[ normal]normality clearly means that and have the same eigenvectors .( [ gwrel],[g5her ] ) also imply d+d^= d^d , [ circle]and hence the eigenvalues of and lie on the unit circle centered at unity on the real line , implying that is unitary .we define measures of numerical errors on each of these quantities , and relations between them in section [ sc.errors ] .all computations of hadronic correlators involve the determination of the fermion propagator , and need a nested series of two matrix iterations for their evaluation , since each step in the numerical inversion of involves the evaluation of .this squaring of effort makes a study of qcd with overlap quarks very expensive .this problem defines for us the properties that an efficient algorithm to deal with must have .first , it should achieve a given desired accuracy as quickly as possible .the need for accuracy is clear : the accuracy to which the ginsparg - wilson relation in eq .( [ gwrel ] ) is satisfied depends on the accuracy achieved in the computation of . the second , and equally important , requirement is that the method should adapt itself easily to matrices with widely different condition numbers = , [ cond]where and are , respectively , the minimum and the maximum eigenvalues of .adaptability is needed because in qcd applications the eigenvalue spectrum of can fluctuate over many orders of magnitude from one configuration to the next .since the condition number on a configuration is _ a priori _ unknown , a method with low adaptability will end up either being inefficient or inaccurate or even both .algorithms , which are adaptable in principle , may require tuning of parameters by hand , or they may incorporate a procedure for self - tuning . clearly , self - tuning algorithms are the ones that can best deal with fluctuating condition numbers in any real situation . in this paperwe examine the speed and adaptability of several different algorithms to compute the inverse square root , , of hermitean matrices ( in our applications the eigenvalues of are non - negative ) acting on a vector . 
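as a concrete baseline for the accuracy measures introduced above, the defining relations of the overlap operator can be checked numerically on a toy example. the sketch below is not a lattice computation: gamma_5 is taken to be a diagonal matrix of plus and minus ones, a gamma_5-hermitian "wilson-like" operator is built from a random hermitian matrix, and the inverse square root is computed exactly by eigendecomposition, so all four defect norms should vanish to machine precision; an approximate algorithm is judged by how far it moves these numbers away from zero.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                               # toy dimension, not a lattice

# gamma_5 taken as a diagonal +/-1 matrix; any hermitian h then makes dw = g5 @ h gamma_5-hermitian
g5 = np.diag([1, 1, 1, 1, -1, -1, -1, -1]).astype(complex)
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2                            # random hermitian stand-in for gamma_5 d_w
dw = g5 @ h                                         # "wilson-like" operator, gamma_5-hermitian by construction
m = dw.conj().T @ dw                                # m = d_w^dagger d_w

# exact inverse square root of m via eigendecomposition (the reference answer)
w, v = np.linalg.eigh(m)
m_inv_sqrt = (v * w ** -0.5) @ v.conj().T

d = np.eye(n) + dw @ m_inv_sqrt                     # overlap operator (lattice spacing set to 1)
dh = d.conj().T

def opnorm(x):
    return np.linalg.norm(x, 2)

print("ginsparg-wilson defect :", opnorm(g5 @ d + d @ g5 - d @ g5 @ d))
print("hermiticity defect     :", opnorm(dh - g5 @ d @ g5))
print("normality defect       :", opnorm(d @ dh - dh @ d))
print("circle defect          :", opnorm(d + dh - dh @ d))
```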
several algorithms for thishave been proposed in the literature .we do not consider the first algorithm to be proposed , since this requires a matrix inversion to be performed at each step of an iteration to determine .later algorithms are more efficient .these fall into two classes expansions of in appropriate classes of functions ( rational functions or orthogonal polynomials ) , and iterative methods .we have analyzed four derived algorithms , namely the optimized rational approximation ( ora ) , the zolotarev approximation ( za , which is also a rational expansion ) , the chebychev approximation ( ca , a polynomial expansion ) and the conjugate gradient approximation ( cga , an iterative method ) .we find that an expansion in chebychev polynomials is the fastest when is moderately large , but it suffers from various instabilities including a lack of adaptability .rational expansions cure many of the instabilities of polynomial expansions ; indeed the za is the fastest and is adaptable but not self - tuning .an iterative method is the only fully self - tuned algorithm , and it turns out to be reasonable also from the point of view of speed .we make two different estimates of the cost of each algorithm .the complexity , , counts the number of arithmetic operations required to achieve the solution to the problem and is a measure of speed independent of the specific machine on which the algorithm is implemented .the spatial complexity , , is the memory requirement for the problem . while timing runs on particular machines on chosen test configurations are instructive , the scaling of speed for each algorithm with physical and algorithmic parameters is provided by our estimates of .our estimate of the adaptability , , of each algorithm is the following .if the scalar algorithm for is tuned to have maximum relative error in the range ] where the error is at most .note that the second interval can not be smaller than the first . in terms of these quantitieswe define = -1 .[ adapt]the least adaptable algorithms have small values of ( ) . is a measure of the relative accuracy achieved in a fixed cpu time for the same algorithm running on two different configurations with condition numbers and . in conjunction with estimates of , it also contains information about the scaling of cpu time required to achieve the same accuracy on the two configurations .our numerical tests were performed with three typical gauge configurations on a lattice at ( _ i.e. _ , ) . the configuration a had eigenvalues of in the range ] , giving .finally , configuration c had eigenvalues in the range ] on the computation of leads to the violation of the properties in eqs .( [ gwrel]-[circle ] ) . in our numerical testswe have investigated five measures of the accuracy of the algorithms through norms of the following operators = ml^2 - 1 , & = d_5+_5d - d_5d , & = dd^-d^d , + = d^-_5 d_5 , & = d+d^-d^d , & [ analyse]where and .each of these operators is zero when ] for varying on the configuration a. the last column gives the cpu seconds used on a . [ cols=">,^,^,^,^",options="header " , ] the first adaptive method used to compute was based on lanczs algorithm . in the original suggestion , the number of lanczs steps to be taken in order to reach a given precision was investigated in terms of the variation of the eigenvalues of with the number of lanczs steps . 
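the five accuracy measures defined above are partially garbled in this extraction ; the following python / numpy sketch encodes our reading of them ( deviation from M L^2 = 1 , from the ginsparg - wilson relation , from normality , from \gamma_5 - hermiticity and from the circle relation ) as matrix norms , evaluated for a given approximate inverse square root L .

```python
import numpy as np

def overlap_error_norms(Dw, gamma5, L):
    """norms of the deviation operators for D = 1 + Dw @ L, where L approximates
    (Dw^dagger Dw)^{-1/2}; this is our reading of the five measures in the text."""
    n = Dw.shape[0]
    M = Dw.conj().T @ Dw
    D = np.eye(n) + Dw @ L
    Dd = D.conj().T
    return {
        "inv_sqrt":  np.linalg.norm(M @ L @ L - np.eye(n)),        # M L^2 - 1
        "gw":        np.linalg.norm(D @ gamma5 + gamma5 @ D - D @ gamma5 @ D),
        "normality": np.linalg.norm(D @ Dd - Dd @ D),
        "g5_herm":   np.linalg.norm(Dd - gamma5 @ D @ gamma5),
        "circle":    np.linalg.norm(D + Dd - Dd @ D),
    }
```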
a stopping criterion _ la _conjugate gradient was proposed but its relation to the precision was not direct .a related adaptive method based on the conjugate gradient algorithm was used in . herethe stopping criterion is put on the residual vector in the inversion of .this enables a direct control over the precision .the cga starts with an iteration which is almost the same as the usual cg algorithm for the inversion of 1 .start from , and , 2 .iterate as in regular cg , , , , and .3 . stop when .note that the the only difference from the usual cg is that the vector which is does not need to be obtained during the iteration . in the orthonormal basis of ,the matrix is the composition of the matrix whose -th column is , and a symmetric tridiagonal matrix , , m = q^tq , t_ii=1_i+ , t_i , i+1=- , [ diag]where and are defined in the iteration above .then compute the eigenvalues and eigenvectors of this truncated tridiagonal matrix t , t = uu^[trid]where is the diagonal matrix of the eigenvalues and the matrix of the eigenvectors in the basis .the cga solution is l [ ] = q^t u ^-1/2u^q /|| .[ solve]the adaptability of the algorithm arises from the fact that we retain only the vectors which contribute significantly to the inverse of and we stop the iterations for when .the contribution to of the smallest eigenvalue of will be only .since the stopping criterion is meant to compute it is more stringent than required .one can be more generous for , and use instead the stopping criterion |r_i+1|</ [ stop]where is an upper bound of . fortunately a reasonable estimate can be obtained at each iteration without large overheads .for any tridiagonal matrix of order , the number of eigenvalues greater than a fixed number is the number of positive values of , where this set of numbers is defined by and d^(j)=t_jj-_0^(i)-(t_j-1,j)^2/d^(j-1 ) , [ next]for .an upper bound for can always be fixed by searching for a number for which at least one of is non - positive .this can be done by bisection , starting from the initial estimate at the first step , . 
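the bookkeeping above is phrased in terms of the cg coefficients ; an equivalent and perhaps more familiar formulation uses the standard lanczos recursion directly . the sketch below is ours : it is not the paper's exact recursion , it uses a fixed number of steps instead of the residual - based stopping criterion , and it assumes a positive definite M with no lanczos breakdown .

```python
import numpy as np

def lanczos_inv_sqrt(M, b, k):
    """approximate M^{-1/2} b with k lanczos steps: build the tridiagonal matrix T,
    diagonalize it, and map the result back through the orthonormal basis Q."""
    n = b.size
    Q = np.zeros((n, k), dtype=complex)
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q, q_prev = b / np.linalg.norm(b), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = M @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = np.real(np.vdot(q, w))
        w = w - alpha[j] * q
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    theta, U = np.linalg.eigh(T)                 # small k x k eigenproblem
    fT_e1 = U @ (theta**-0.5 * U[0, :])          # first column of T^{-1/2}
    return np.linalg.norm(b) * (Q @ fT_e1)

# usage on a hermitian positive definite test matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 60))
M = A @ A.T + np.eye(60)
b = rng.normal(size=60)
w_M, V_M = np.linalg.eigh(M)
exact = (V_M * w_M**-0.5) @ V_M.T @ b
print(np.linalg.norm(lanczos_inv_sqrt(M, b, 40) - exact))   # shrinks rapidly as k grows
```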
while this procedure increases the complexity by order , the new stopping criterion in eq .( [ stop ] ) has two advantages over the usual cg stopping criterion first , is reduced and , second , the method becomes better adaptable since the observed for a given becomes independent of .practically , to do the computation without storing the orthonormal basis , one makes iterations to get the truncated matrix , computes the matrix and the diagonal using standard methods , and then repeats the iterations to compute the solution $ ] .the most stringent restriction on the algorithm seems to be that one can not use any pre - conditioning and must always start the iterations from .this algorithm has only one parameter , .the algorithm automatically adjusts the number of iterations to achieve the specified precision irrespective of the condition number .thus , no configuration dependent tuning of algorithmic parameters is necessary when employing the cga for qcd applications .the complexity of the cga is _cga 2wv+^2 2wv(1)[compcga]where is a number independent of .the term comes from the handling of the tridiagonal matrix , and can be neglected since .the space complexity is the same as that of a standard cg _cga 8n_c(n_c+3)v .[ spaccga]since the method is adaptive , no deflation is necessary .however , deflation reduces the condition number of the matrix , and hence could improve the complexity by reducing .nevertheless , for reasons that we have already discussed in connection with ca and ora , deflation is unlikely to improve the performance at fixed physics when taking the limit of large .the results of our numerical tests for this algorithm are collected in tables [ tb.cga1][tb.cga3 ] .note that for all three test configurations there is a threshold in above which and below which is roughly constant .the threshold value of is somewhat larger than for the configuration .similar thresholds are also seen for the ora and za .this behaviour possibly reflects the existence of a large unconverged subspace in the cg iterations .with and . these are results of computations with configurations a ( dashed lines ) , b ( dotted lines ) and c ( full lines ) . ] in figure [ fg.comp ] we have collected different measures of performance of the four algorithms we investigated in this paper , namely , the optimized rational approximation ( ora ) , the zolotarev approximation ( za , which is also a rational expansion ) , the chebychev approximation ( ca , a polynomial expansion ) and the conjugate gradient approximation ( cga , an iterative method ) .further details can be found in tables [ tb.cheby1][tb.cga3 ] .it is clear that for modest values of the condition number of , , the ca is the preferred algorithm .this is clear from the figure , as well as our results for the algorithmic complexities in eqs .( [ compca ] ) , ( [ compora ] ) , ( [ compza ] ) and ( [ compcga ] ). however , with increasing condition numbers the performance of ca rapidly degrades .this is visible in the figure as well as in our analysis of the adaptability in eq .( [ adaptca ] ) .we have argued earlier that these drawbacks of the ca are generic to all polynomial expansions .the ora , in its present form with fixed , also suffers from a lack of adaptability . in principle , this can be alleviated if the order of the approximation can be chosen adaptively .we have implemented the za , which is another rational approximation , for several different choices of order . 
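the structural point of the rational expansions ( ora and za ) is that applying an order - n approximation to M requires only n shifted inversions , which in practice are obtained from a single multi - shift cg . the sketch below illustrates that structure with a crude partial - fraction fit of our own : the poles are placed logarithmically and the residues are obtained by least squares , which is emphatically not the optimized or zolotarev construction ( those coefficients come from elliptic integrals ) , but it has the same form and the same cost structure .

```python
import numpy as np

def rational_inv_sqrt_coeffs(order, eps):
    """crude fit of 1/sqrt(x) on [eps, 1] by sum_l b_l / (x + c_l); pole placement and
    least-squares residues are our own assumption, not the ora/za coefficients."""
    c = np.geomspace(eps / 10, 10.0, order)          # assumed pole placement
    x = np.geomspace(eps, 1.0, 50 * order)
    A = np.sqrt(x)[:, None] / (x[:, None] + c[None, :])   # rows weighted: fit the relative error
    b, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return b, c

def apply_rational(M, v, b, c):
    """apply the rational approximation to a vector: only shifted solves are needed;
    production codes would do all shifts at once with a multi-shift cg."""
    n = M.shape[0]
    return sum(bl * np.linalg.solve(M + cl * np.eye(n), v) for bl, cl in zip(b, c))

b, c = rational_inv_sqrt_coeffs(12, 1e-4)
x = np.geomspace(1e-4, 1.0, 2000)
approx = (1.0 / (x[:, None] + c[None, :])) @ b
print(np.max(np.abs(approx * np.sqrt(x) - 1.0)))   # small, though za does much better at equal order
```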
as can be seen from tables [ tb.zolo1]- [ tb.zolo3 ] , and from figure [ fg.comp ] ,this improves the performance tremendously . for comparable cpu times , corresponding to low order za , the performance is at least one order better than that of ora on all configurations .the key to improving the performance of rational approximations is the automatic variation of the order with the condition numbers . in our testswe have simulated adaptability by working with several different orders and retained the one corresponding only to a small fraction of the inversion error .the so - tuned order is only slightly higher than that obtained in an automatic procedure defined at the end of section [ sc.za ] .the cga depends on only one parameter . for a given value of , the corresponding errors and are almost independent of the condition number of the matrix , thanks to the relaxed stopping criterion .the price for such a good adaptability is a computing time which is 50% higher than za for a given accuracy ( 70% excess if is small , 20% for large ) .the price , however , ensures that for all the configurations one guarantees the same order of accuracy from a given value of and with a predicted value of .the variation in the number of conjugate gradient iterations , , as the stopping criterion , , is changed for the three configurations is shown in figure [ fg.complex ] .the data for the ora are not shown in the figure because they are very similar to those of the za .note that the curves for the cga lie below that for the za ( despite the shift in za as compared to cga ) , which is the influence of the relaxed stopping criterion discussed in section [ sc.cga ] . has devised a similar modification for za which can reduce in that case .note that our above results for za did not use any such modifications ; using it will further enhance the performance of za reported above .we have noted in section [ sc.errors ] that the relations ( [ rel1 ] ) between the errors are valid for those approximations to which commute with . in particular, we noted that for the iterative algorithms these relations become valid , provided that is sufficiently small . in figure [ fg.nonlin ]we demonstrate this for , which is expected to be zero when is small enough . for the cga and za( data for the ora are not shown because they almost coincide with that for za ) , decreases with .the slopes in this plot correspond to linear decrease when is sufficiently small .clear non - linearities are present for larger when the condition number is large .we believe that these non - linearities are due to large non - converged subspaces , implying a need for high accuracy .for fixed order algorithms the adaptability , , quantifies the configuration dependence of speed .the numerical study can be used more directly to illustrate the adaptability by studying the slowdown in going from configuration a to b ( , from to ) or from a to c ( changes from to ) . 
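the lack of adaptability of the polynomial expansion is already visible at the level of the scalar function 1 / \sqrt{x} : a chebyshev fit tuned for a given condition number degrades quickly once it is applied outside its window . the scalar sketch below is our own illustration ( the matrix version would apply the same polynomial to M through a three - term recursion ) ; the orders and windows are arbitrary choices .

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_inv_sqrt(order, eps):
    """chebyshev interpolant of 1/sqrt(x) on [eps, 1], returned as a callable."""
    nodes = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))   # nodes in [-1, 1]
    x = 0.5 * (1 - eps) * nodes + 0.5 * (1 + eps)                        # mapped to [eps, 1]
    coef = C.chebfit(nodes, 1.0 / np.sqrt(x), order)
    return lambda y: C.chebval((2 * y - (1 + eps)) / (1 - eps), coef)

eps_fit = 1e-2                        # approximation tuned for condition number ~ 1/eps_fit
approx = cheb_inv_sqrt(40, eps_fit)

for eps_test in (1e-2, 1e-3, 1e-4):   # spectra extending below the fitted window
    x = np.linspace(eps_test, 1.0, 20000)
    err = np.max(np.abs(approx(x) * np.sqrt(x) - 1.0))
    print(eps_test, err)              # the relative error blows up once x < eps_fit
```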
as shown in figure [ fg.adapt ] ,both za and cga are adaptable algorithms over a wide range of .since za is faster , as seen in figure [ fg.comp ] , it is thus the method of choice .note , however , that cga is very comparable to it , and may be preferred for its self - tuning ability .we emphasise that a fair test of relative performance of algorithms is to work without deflation .first , deflation improves the performance of each of the algorithms we have investigated .details are given in the sections on each algorithm .nevertheless the algorithms based on rational approximation seems to be less sensitive to deflation than other ones because of the positive shifts introduced in the matrix .second , since the computation of the eigensystem of , necessary to deflation , is done at finite accuracy , it introduces extra errors .if the error in the computation of the eigenvalue is , then the contribution to is .thus , if we want to achieve a given , then we must keep .when the condition number increases , this criterion becomes impossible to satisfy , leading to catastrophic loss of accuracy .we feel it worth pointing out that deflation is only one of many possible methods to decrease the effective condition number of the problem .other preconditioning methods have not been seriously explored for overlap fermions .the cost of accurate numerical methods seems to suggest that numerically stable preconditioning methods will pay a big dividend in this problem .99 h. neuberger , _ phys ._ , b 417 ( 1998 ) 141 [ hep - lat/9707022 ] .p. h. ginsparg and k. g. wilson , _ phys ._ , d 25 ( 1982 ) 2649. t. w. chiu , _ phys ._ , d 58 ( 1998 ) 074511 [ hep - lat/9804016 ] .h. neuberger , _ phys ._ , 81 ( 1998 ) 4060 [ hep - lat/9806025 ] .p. hernndez , k. jansen , m. lscher , _ nucl ._ , b 552 ( 1999 ) 363 [ hep - lat/9808010 ] .a. borii , _ j. comput_ , 162 ( 2000 ) 123 [ hep - lat/9910045 ] .r. g. edwards , u. heller , r. narayanan , _ nucl . phys ._ , b 540 ( 1999 ) 457 [ hep - lat/9905028 ]. j. van den eshof __ , _ computer phys .comm . _ 146 ( 2002 ) 203 [ hep - lat/0202025 ] .w . chiu and t .- h .hsieh , hep - lat/0204009 ; s. j. dong _ et al ._ , hep - lat/0304005 .p. hernndez , k. jansen , l. lellouch , _ phys ._ , b 469 ( 1999 ) 198 [ hep - lat/9907022 ] .r. v. gavai , s. gupta , r. lacaze , _ phys ._ , d 65 ( 2002 ) 094504 [ hep - lat/0107022 ]. for a general analysis of error propagation in gram - schmidt orthogonalization , see g. h. golub and c. f. van loan , _ matrix computations _ , john hopkins university press , 1996 .r. g. edwards , u. heller , r. narayanan , _ phys . rev ._ , d 61 ( 2000 ) 074504 [ hep - lat/9910041 ]. s. j. dong _ et al ._ , _ phys ._ , 85 ( 2000 ) 5051 [ hep - lat/0006004 ] . t. degrand , _ phys . rev . _ ,d 63 ( 2001 ) 034503 [ hep - lat/0007046 ] .t. w. chiu _ et al ._ , hep - lat/0206007 .see , section 11.4 in p. lascaux and r. thodor , _ analyse numrique matricielle applique lart de lingnieur , ( 2 ) _ , masson , paris , 1994 ; see also section 8.5.2 in .see , for example , the section on matrix square roots in r. a. horn and c. r. johnson , _ topics in matrix analysis _, cambridge university press , 1991 .
we compare the efficiency of four different algorithms to compute the overlap dirac operator , both for the speed , _ i.e. _ , time required to reach a desired numerical accuracy , and for the adaptability , _ i.e. _ , the scaling of speed with the condition number of the ( square of the ) wilson dirac operator . although orthogonal polynomial expansions give good speeds at moderate condition number , they are highly non - adaptable . one of the rational function expansions , the zolotarev approximation , is the fastest and is adaptable . the conjugate gradient approximation is adaptable , self - tuning , and nearly as fast as the za .
living organisms can be modeled as an ensemble of complex physiological systems , each with its own regulatory mechanism and all continuously interacting between them . therefore inferring the interactions within and between these modulesis a crucial issue . over the last yearsthe interaction structure of many complex systems has been mapped in terms of networks , which have been successfully studied using tools from statistical physics .dynamical networks have modeled physiological behavior in many applications ; examples range from networks of neurons , genetic networks , protein interaction nets and metabolic networks .the inference of dynamical networks from time series data is related to the estimation of the information flow between variables ; see also .granger causality ( gc ) has emerged as a major tool to address this issue .this approach is based on prediction : if the prediction error of the first time series is reduced by including measurements from the second one in the linear regression model , then the second time series is said to have a granger causal influence on the first one .it has been shown that gc is equivalent to transfer entropy in the gaussian approximation and for other distributions .see for a discussion about applicability of this notion in neuroscience , and for a discussion on the reliability of gc for continuous dynamical processes .it is worth stressing that several forms of coupling may mediate information flow in the brain , see .the combination of gc and complex networks theory is also a promising line of research .the pairwise granger analysis ( pwgc ) consists in assessing gc between each pair of variables , independently of the rest of the system .it is well known that the pairwise analysis can not disambiguate direct and indirect interactions among variables .the most straightforward extension , the conditioning approach , removes indirect influences by evaluating to which extent the predictive power of the driver on the target decreases when the conditioning variable is removed .it has to be noted however that , even though its limitations are well known , the pairwise gc approach is still used in situations where the number of samples is limited and a fully conditioned approach is unfeasible . as a convenient alternative to this suboptimal solution ,a partially conditioned approach , consisting in conditioning on a small number of variables , chosen as the most informative ones for the driver node , has been proposed ; this approach leads to results very close to those obtained with a fully conditioned analysis and even more accurate in the presence of a small number of samples .we remark that the use of partially conditioned granger causality ( pcgc ) may be useful also in non - stationary conditions , where the gc pattern has to be estimated on short time windows . sometimes though a fully conditioned ( cgc ) approach can encounter conceptual limitations , on top of the practical and computational ones : in the presence of redundant variables the application of the standard analysis leads to underestimation of influences .redundancy and synergy are intuitive yet elusive concepts , which have been investigated in different fields , from pure information theory , to machine learning and neural systems , with definitions that range from the purely operative to the most conceptual ones . 
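before turning to redundancy and synergy , the prediction - based definition of gc recalled above can be made concrete in a few lines . the sketch below is a minimal linear implementation of ours ( python / numpy , arbitrary model order , toy data ) : the pairwise index is the normalized reduction of the residual variance when the past of the putative driver is added to the autoregression of the target .

```python
import numpy as np

def _ar_residual_var(target, regressors, m):
    """residual variance of the least-squares fit of target(t) on m lagged values
    of each regressor (plain linear regression)."""
    T = len(target)
    X = np.column_stack([np.ones(T - m)] +
                        [r[m - k - 1:T - k - 1] for r in regressors for k in range(m)])
    y = target[m:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def pairwise_gc(x, y, m=2):
    """linear pairwise granger causality x -> y with model order m."""
    e_restricted = _ar_residual_var(y, [y], m)
    e_full = _ar_residual_var(y, [y, x], m)
    return (e_restricted - e_full) / e_restricted

# toy example: y is driven by the past of x, but not vice versa
rng = np.random.default_rng(1)
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(pairwise_gc(x, y), pairwise_gc(y, x))   # large for x -> y, near zero for y -> x
```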
when analyzing interactions in multivariate time series , redundancy may arise if some channels are all influenced by another signal that is not included in the regression ; another source of redundancy may be the emergence of synchronization in a subgroup of variables , without the need of an external influence .redundancy manifests itself through a high degree of correlation in a group of variables , both for instantaneous and lagged influences .several approaches have been proposed in order to reduce dimensionality in multivariate sets eliminating redundancy , relying on generalized variance , principal components analysis , or granger causality itself .a complementary concept to redundancy is synergy .the synergetic effects that we address here , related to the analysis of dynamical influences in multivariate time series , are similar to those encountered in sociological and psychological modeling , where _ suppressors _ is the name given to variables that increase the predictive validity of another variable after its inclusion into a linear regression equation ; see for examples of easily explainable suppressor variables in multiple regression research . redundancy and synergy have been further connected to information transfer in , where an expansion of the information flow has been proposed to put in evidence redundant and synergetic multiplets of variables .other information - based approaches have also addressed the issue of collective influence .the purpose of this paper is to provide evidence that in addition to the problem related to indirect influence , pwgc shows another relevant pitfall : it fails to detect synergetic effects in the information flow , in other words it does not account for the presence of subsets of variables that provide some information about the future of a given target only when all the variables are used in the regression model .we remark that since it processes the system as a whole , cgc evidences synergetic effects ; when the number of samples is low , pcgc can detect synergetic effects too , after an adequate selection of the conditioning variables .the paper is organized as follows . in the next sectionwe briefly recall the concepts of gc and pcgc .in section iii we describe some toy systems illustrating how redundancy can affect the results of cgc , whilst indirect interactions and synergy are the main problems inherent to pwgc . in sectioniv we provide evidence of synergetic effects in epilepsy : we analyze electroencephalographic recordings from an epileptic patient corresponding to ten seconds before the seizure onset ; we show that the two contacts which constitute the putative seizure onset act as synergetic variables driving the rest of the system . the pattern of information transfer evidences the actual seizure onset only whensynergy is correctly considered . in sectionv we propose an approach that combines pwgc and cgc to evidence the pairwise influences due only to redundancy and not recognized by cgc .the conditioned gc pattern is used to partition the pairwise links in two sets : those which are indirect influences between the measured variables , according to cgc , and those which are not explained as indirect relationships . the unexplained pairwise links , presumably due to redundancy , are thus retained to complement the information transfer pattern discovered by cgc . 
in cases where the number of samples is so low that a fully multivariate approach is unfeasible , pcgc may be applied instead of cgc .we also address here the issue of variables selection for pcgc , and consider a novel strategy for the selection of variables : for each target variable , one selects the variables sending the highest amount of information to that node as indicated by a pairwise analysis . by construction , this new selection strategy works more efficiently when the interaction graph has a tree structure : indeed in this case conditioning on the parents of the target node ensures that indirect influences will be removed .in the epilepsy example the selection based on the mutual information with the candidate driver provides results closer to those obtained by cgc .we finally apply the proposed approach on time series of gene expressions , extracted from a data - set from the hela culture .section vi summarizes our conclusions .granger causality is a powerful and widespread data - driven approach to determine whether and how two time series exert direct dynamical influences on each other .a convenient nonlinear generalization of gc has been implemented in , exploiting the kernel trick , which makes computation of dot products in high - dimensional feature spaces possible using simple functions ( kernels ) defined on pairs of input patterns .this trick allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products , for example support vector machines .hence in the idea is still to perform linear granger causality , but in a space defined by the nonlinear features of the data .this projection is conveniently and implicitly performed through kernel functions and a statistical procedure is used to avoid over - fitting .quantitatively , let us consider time series ; the lagged state vectors are denoted being the order of the model ( window length ) . let be the mean squared error prediction of on the basis of all the vectors ( corresponding to the kernel approach described in ) .the multivariate granger causality index is defined as follows : consider the prediction of on the basis of all the variables but and the prediction of using all the variables , then the gc is the ( normalized ) variation of the error in the two conditions , i.e. in it has been shown that not all the kernels are suitable to estimate gc .two important classes of kernels which can be used to construct nonlinear gc measures are the _ inhomogeneous polynomial kernel _ ( whose features are all the monomials in the input variables up to the -th degree ; corresponds to linear granger causality ) and the _ gaussian kernel_. the pairwise granger causality is given by : the partially conditioned granger causality is defined as follows .let be the variables in , excluding and , then ( [ mv ] ) can be written as : when is only a subset of the total number of variables in not containing and , then is called the partially conditioned granger causality ( pcgc ) . in set is chosen as the most informative for . herewe will also consider an alternative strategy : fixing a small number , we select as the variables with the maximal pairwise gc w.r.t . that target node , excluding .in this section we provide some typical examples to remark possible problems that pairwise and fully conditioned analysis may encounter . 
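the same regression machinery gives the conditioned and partially conditioned indices . the sketch below ( ours ) implements the fully / partially conditioned index and the second selection strategy described above , in which the conditioning set for a target is formed by the variables with the largest pairwise influence on it ; the information - based strategy , which ranks candidates by their informational content for the driver , is not implemented here .

```python
import numpy as np

def _resid_var(y, regressors, m):
    T = len(y)
    X = np.column_stack([np.ones(T - m)] +
                        [r[m - k - 1:T - k - 1] for r in regressors for k in range(m)])
    beta, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return np.var(y[m:] - X @ beta)

def conditioned_gc(data, i, j, cond, m=2):
    """gc from series i to series j conditioned on the series indexed by `cond`
    (fully conditioned if cond holds all remaining variables, partial otherwise)."""
    others = [data[c] for c in cond]
    e_without = _resid_var(data[j], [data[j]] + others, m)
    e_with = _resid_var(data[j], [data[j]] + others + [data[i]], m)
    return (e_without - e_with) / e_without

def pcgc_pb(data, i, j, nd=1, m=2):
    """partially conditioned gc with the 'pb' selection: condition on the nd variables
    with the largest pairwise influence on the target j (excluding i and j)."""
    scores = {k: conditioned_gc(data, k, j, [], m)
              for k in range(len(data)) if k not in (i, j)}
    cond = sorted(scores, key=scores.get, reverse=True)[:nd]
    return conditioned_gc(data, i, j, cond, m)
```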
we consider the following lattice of ten unidirectionally coupled noisy logistic maps , with and with .variables are unit variance gaussian noise terms .the transfer function is given by .in this system the first map is evolving autonomously , whilst the other maps are influenced by the previous ones with coupling , thus forming a cascade of interactions . in figure [ fig1]a we plot as a function of the number of gc interactions found by pwgc and cgc , using the method described in with the inhomogeneous polynomial kernel of degree two .the cgc output is the correct one ( nine links ) whilst the pwgc output also accounts for indirect influences and therefore fails to provide the underlying network of interactions . on this examplewe have also tested pcgc , see figure [ fig1]b .we considered just one conditioning variable , chosen according to the two strategies described above .firstly we consider the most informative w.r.t .the candidate driver , as described in ; we call this strategy _ information based _ ( ib ) . secondly, we choose the variable characterized by the maximal pairwise influence to the target node , a _ pairwise based _ ( pb ) rule .the pb strategy yields the correct result in this example , whilst the ib one fails when only one conditioning variable is used and requires more than one conditioning variables to provide the correct output .this occurrence is due to the tree topology of the interactions in this example , which favors pb selecting by construction the parents of each node .we show here how redundancy constitutes a problem for cgc .let be a zero mean and unit variance hidden gaussian variable , influencing variables , and let be another variable who is influenced by but with a larger delay .the variables are unit variance gaussian noise and s controls the noise level . in figure[ fig1]c we depict both the linear pwgc and the linear cgc from one of the x s to w ( note that h is not used in the regression model ) . as increases, the conditioned gc vanishes as a consequence of redundancy .the gc relation which is found in the pairwise analysis is not revealed by cgc because variables are maximally correlated and thus drives only in the absence of any other variables . the correct way to describe the information flow pattern in this example , where the true underlying source is unknown , is that all the variables are sending the same information to w , i.e. that variables constitute a redundant multiplet w.r.t .the causal influence to w. this pattern follows from observing that for all x s cgc vanishes whilst pwgc does not vanish .this example shows that , in presence of redundancy , the cgc pattern alone is not sufficient to describe the information flow pattern of the system , and also pwgc should be taken into account .let us consider three unit variance iid gaussian noise terms , and .let considering the influence , the cgc reveals that is influencing , whilst pwgc fails to detect this causal relationship , see figure [ fig1]d , where we use the method described in with the inhomogeneous polynomial kernel of degree two ; is a suppressor variable for w.r.t .the influence on .this example shows that pwgc fails to detect synergetic contributions .we remark that use of nonlinear gc is mandatory in this case to put in evidence the synergy between and . 
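the redundancy example above can be reproduced in a few lines : several noisy copies of a hidden driver all " cause " a delayed target in the pairwise analysis , while the conditioned index from any single copy is strongly suppressed . the noise levels , delays and number of copies below are our own choices , since the exact values are elided in this text .

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_x = 4000, 8
h = rng.normal(size=T)                                    # hidden common driver (not observed)
xs = [np.r_[0.0, h[:-1]] + 0.1 * rng.normal(size=T) for _ in range(n_x)]
w = np.r_[0.0, 0.0, h[:-2]] + 0.1 * rng.normal(size=T)    # driven by h with a larger delay

def gc(extra, target, cond, m=2):
    """linear gc of `extra` on `target` given the conditioning series in `cond`."""
    T = len(target)
    def rv(regs):
        X = np.column_stack([np.ones(T - m)] +
                            [r[m - k - 1:T - k - 1] for r in regs for k in range(m)])
        b, *_ = np.linalg.lstsq(X, target[m:], rcond=None)
        return np.var(target[m:] - X @ b)
    e0 = rv([target] + cond)
    e1 = rv([target] + cond + [extra])
    return (e0 - e1) / e0

print("pairwise    x1 -> w :", gc(xs[0], w, []))       # close to 1
print("conditioned x1 -> w :", gc(xs[0], w, xs[1:]))   # much smaller: the other copies are redundant
```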
as another example, we consider a toy system made of five variables .the first four constitute a multiplet made of a fully coupled lattice of noisy logistic maps with coupling , evolving independently of the fifth .the fifth variable is influenced by the mean field from the coupled map lattice .the equations are , for , : and where are unit variance gaussian noise terms . increasing the coupling among the variables in the multiplet , the degree of synchronization among these variables ( measured e.g. by pearson correlations ) increases and they become almost synchronized for greater than 0.1 ( complete synchronization can not be achieved due to the noise terms ) ; redundancy , in this example , arises due to complex inherent dynamics of the units . in figure [ fig2 ]we depict both the causality from one variable in the multiplet ( ; the same results hold fo r , and ) to , and the causality between pairs of variables in the multiplet : both linear and nonlinear pwgc and cgc are shown for the two quantities .concerning the causality , we note that , for low coupling , both pwgc and cgc , with linear or nonlinear kernel , correctly detect the causal interaction . around the transition to synchronization , in a window centered at , all the algorithms fail to detect the causality . in the _ almost _ synchronized regime , , the fully conditioned approach continues to fail due to redundancy , whilst the pwgc provides correctly the causal influence , both using the linear and the nonlinear algorithm .as far as the causal interactions within the multiplet are concerned , we note that using the linear approach we get small values of causality just at the transition , whilst we get zero values far from the transition . using the nonlinear algorithm , which is the correct one in this example as the system is nonlinear, we obtain nonzero causality among the variables in the multiplet , using both pwgc or cgc : the resulting curves are non - monotonous as one may expect due to the inherent nonlinear dynamics . for nonzero gcis observed because of the noise which prevents the system to go in the complete synchronized state .this example again shows that in presence of redundancy one should take into account both cgc and pwgc results .moreover it also shows how nonlinearity may render extremely difficult the inference of interactions : in this system there is a range of values , corresponding to the onset of synchronization , in which all methods fail to provide the correct causal interaction .as a real example we consider intracranial eeg recordings from a patient with drug - resistant epilepsy with an implanted array of cortical electrodes ( ce ) and two depth electrodes ( de ) with six contacts each .the data are available at and further details on the dataset are given in .data were sampled at 400 hz .we consider here a portion of data recorded in the preictal period , 10 seconds preceding the seizure onset . to handle this data , we use linear granger causality with equal to five . in figure [ fig3 ]we depict the pwgc between des ( panel a ) , from des to ces ( panel b ) , between ces ( panel c ) and from ces to des ( panel d ) .we note a complex pattern of bivariate interactions among ces , whilst the first de seems to be the subcortical drive to the cortex .we remark that there is no pwgc from the last two contacts of the second de ( channels 11 and 12 ) to ces and neither to the contacts of the first de . 
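before turning to the conditioned analysis of these recordings , it is useful to see how a purely synergetic influence is invisible to the pairwise analysis and appears only with the proper conditioning and a nonlinear model . the following minimal example is ours : the target is built from the product of the lagged drivers , and a regression augmented with degree - 2 products of the lagged regressors stands in for the polynomial kernel of degree two used in the text .

```python
import numpy as np

rng = np.random.default_rng(3)
T = 4000
x = rng.normal(size=T)
y = rng.normal(size=T)
w = np.r_[0.0, x[:-1] * y[:-1]] + 0.1 * rng.normal(size=T)   # w depends on x and y only jointly

def gc_poly2(extra, target, cond, m=1):
    """gc of `extra` on `target` given `cond`, with all degree-2 products of the
    lagged regressors included in the (linear-in-parameters) model."""
    T = len(target)
    def design(regs):
        lags = [r[m - k - 1:T - k - 1] for r in regs for k in range(m)]
        quad = [a * b for i, a in enumerate(lags) for b in lags[i:]]
        return np.column_stack([np.ones(T - m)] + lags + quad)
    def rv(regs):
        X = design(regs)
        beta, *_ = np.linalg.lstsq(X, target[m:], rcond=None)
        return np.var(target[m:] - X @ beta)
    e0, e1 = rv([target] + cond), rv([target] + cond + [extra])
    return (e0 - e1) / e0

print("pairwise    x -> w :", gc_poly2(x, w, []))    # ~0 : x alone tells nothing about w
print("conditioned x -> w :", gc_poly2(x, w, [y]))   # ~1 : y acts as a suppressor for x
```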
in figure [ fig4 ]we depict the cgc among des ( panel a ) , from des to ces ( panel b ) , among ces ( panel c ) and from ces to des ( panel d ) .the scenario in the conditioned case is clear : the contacts 11 and 12 , from the second de , are the drivers both for the cortex and for the subcortical region associated to the first de .these two contacts can be then associated to the seizure onset zone ( soz ) .the high pairwise gc strength among ces is due to redundancy , as these latter are all driven by the same subcortical source .since the contact 12 is also driving the contact 11 , see figure [ fig4]a , we conclude that the contact 12 is the closest to the soz , and that the contact 11 is a _ suppressor _ variable for it , because it is necessary to include it in the regression model to put in evidence the influence of 12 on the rest of the system .conversely , the contact 12 acts as a suppressor for contact 11 .we stress that the influence from contacts 11 and 12 to the rest of the system emerges only in the cgc and it is neglected by pwgc : these variables are synergetically influencing the dynamics of the system . to our knowledgethis is the first time that synergetic effects are found in relation with epilepsy . on this datawe also apply pcgc using one conditioning variable .the results are depicted in figure [ fig5 ] : using the ib strategy we obtain a pattern very close to the one from cgc , while this is not the case of pb . these results seem to suggest that ib works better in presence of redundancy , however we have not arguments to claim that this a general rule .it is worth mentioning that in presence of synergy the selection of variables for partial conditioning is equivalent to the search of suppressor variables .in the last sections we have shown that cgc encounters issues resulting in poor performance in presence of redundancy , and that information about redundancy may be obtained from the pwgc pattern . we develop here a strategy to combine the two approaches : some links inferred from pwgc are retained and added to those obtained from cgc .the pwgc links that are discarded are those which can be derived as indirect links from the cgc pattern . in the followingwe describe the proposed approach in detail .let be the matrix of influences from cgc ( or pcgc ) .let be the matrix from pwgc .non - zero elements of and correspond to the estimated influences .let these matrices be evaluated using a model of order .the matrix contains paths of length with delays in the range ] , indeed : any nonzero element of the matrix describes an indirect causal interaction between nodes and where sends information to through a cascade of links : , , , .the circuit corresponding to is depicted in figure [ fig6]b . the indirect causal interaction , corresponding to the non - zero element might be detected by pwgc if .let us now consider the matrix where the first sum is over pairs satisfying if is non vanishing , then according to cgc there is an indirect causal interaction between and : therefore pwgc might misleadingly reveal such interaction considering it a direct one . in the approachjust described we discard ( as indirect ) the links found by pwgc for which . therefore in the pairwise matrix we set to zero all the elements such that ( pruning ) . 
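a simplified version of this pruning can be written directly on the binary adjacency matrices . the sketch below ( ours ) ignores the lag bookkeeping of the matrices defined above and simply discards a pairwise link when the conditioned graph already contains a directed path of length two or three between the same nodes ; the surviving pairwise links are ascribed to redundancy and merged with the conditioned pattern .

```python
import numpy as np

def combine_pwgc_cgc(B, C, max_len=3):
    """merge the pairwise (B) and conditioned (C) binary adjacency matrices: prune
    pairwise links explained as indirect paths of length 2..max_len in C, keep the
    rest as redundancy-driven, and add them to C.  lags are not tracked here."""
    C = (np.asarray(C) > 0).astype(int)
    B = (np.asarray(B) > 0).astype(int)
    reach = np.zeros_like(C)
    P = C.copy()
    for _ in range(2, max_len + 1):
        P = (P @ C > 0).astype(int)        # paths one step longer
        reach |= P
    pruned = B & (1 - reach) & (1 - C)     # pairwise links neither direct nor indirect in C
    return (C | pruned), pruned            # combined pattern, and the redundancy-ascribed links

C_cond = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # conditioned pattern: 1 -> 2 -> 3
B_pair = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])   # pairwise analysis also reports 1 -> 3
combined, redundant = combine_pwgc_cgc(B_pair, C_cond)
print(combined)   # the 1 -> 3 link is recognized as indirect and dropped
```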
the resulting matrix contains links which can not be interpreted as indirect links of the multivariate pattern , and will be retained and ascribed to redundancy effects .for we have that the only terms in the first sum are those with , so the first non trivial terms are for , the simplest terms are : since , due to the finite number of samples , a mediated interaction is more unlikely to be detected ( by the pairwise analysis ) if it corresponds to a long path , we limit the sum in the matrix to the simplest terms . as a toy example to illustrate an application of the proposed approach, we consider a system made of five variables .the first four constitute a multiplet made of an unidirectionally coupled logistic maps , eqs.([catena1]-[catena2 ] ) with ranging in , coupling and interactions , and .the fifth variable is influenced by the mean field from the coupled map lattice , see equation ( [ synch2 ] ) .the four variables in the multiplet become _ almost _ synchronized for . in figure [ fig7 ] we depict both the average influence from the variables in the multiplet to , and the average influence between pairs of variables in the multiplet : both linear and nonlinear pwgc and cgc are shown for the two quantities .note that only the nonlinear algorithm correctly evidences the causal interactions within the multiplet of four variables , whilst the linear algorithm detects a very low causal interdependency among them .the driving influence from the multiplet to detected by cgc vanishes at high coupling redundancy , both in the linear and nonlinear approach , due to the redundancy induced by synchronization . to explain how the proposed approach works we describe two situations , corresponding to low and high coupling . at low coupling, the cgc approach estimates the correct causal pattern in the system , and the nonzero elements of are , , , , , and .the nonzero elements of the matrix , from pwgc analysis , are the same as plus , , , corresponding to indirect causalities ; however these three interactions lead to non - zero elements of ( and , therefore , of ) , hence they must be discarded .it follows that does not provide further information than at low coupling . on the contrary at high coupling , due to synchronization, the cgc approach does not reveal the causal interactions , , and , whilst still they are recognized by pwgc ; is still nonzero in correspondence of , , and , while the corresponding elements of are vanishing . 
according to our previous discussion, the interactions , , and , detected by pwgc , should not be discarded : combining the results by cgc and pwgc we obtain the correct causal pattern even in presence of strong synchronization .hela is a famous cell culture , isolated from a human uterine cervical carcinoma in 1951 .hela cells have acquired cellular immortality , in that the normal mechanisms of programmed cell death after a certain number of divisions have somehow been switched off .we consider the hela cell gene expression data of .data corresponds to genes and time points , with an hour interval separating two successive readings ( the hela cell cycle lasts 16 hours ) .the 94 genes were selected from the full data set described in , on the basis of the association with cell cycle regulation and tumor development .we apply linear pwgc and linear cgc ( using just another conditioning variable , and using both the selection strategies ib and pb described in section iii ) .we remark that the cgc approach is unfeasible in this case due to the limited number of samples . due to the limited number of samples , in this case we do not use statistical testing for assessing the significance of the retrieved links ,rather we introduce a threshold for the influence and analyze the pattern as the threshold is varied . in figure [ fig8 ] resultsare reported as a function of the number of links found by pwgc ( which increases as the threshold is decreased ) ; we plot ( 1 ) the number of links found by pgc , ( 2 ) the number of links found by pgc and not by pwgc , which are thus a signature of synergy , ( 3 ) the percentage of pairwise links which can be explained as direct or indirect causalities of the pgc pattern ( thus being consistent with the partial causality pattern ) , found using the matrix to detect the indirect links , which correspond to circuits like the one described in figure [ fig6]a , ( 4 ) the number of causality links found by pwgc and not consistent with pwgc , corresponding to redundancy .the two curves refers to the two selection strategies for partial conditioning .the low number of samples here allowed us just to use one conditioning variable , and therefore to analyze only circuits of three variables ; a closely related analysis , see , has been proposed to study how a gene modulates the interaction between two other genes . on the other hand , the true underlying gene regulatory network being unknown, we can not assess the performances of the algorithms in terms of correctly detected links .we note that both and assume relatively large values , hence both redundancy and synergy characterize this data - set . the selection strategy pb yields slightly higher values of and , emerging then as a better discriminator of synergy and redundancy than ib .a comparison iwth the fully conditioned approach is not possible in this case . 
on the other hand , as far as the search for synergetic effects is concerned , we find that the synergetic interactions found by pcgc with the two strategies are not coinciding , indeed only of all the synergetic interactions are found by both strategies .this suggests that when searching for suppressors , several sets of conditioning variables should be used in cgc in order to explore more possible alternative pathways , especially when there is not a priori information on the network structure .in this paper we have considered the inference , from time series data , of the information flow between subsystems of a complex network , an important problem in medicine and biology . in particularwe have analyzed the effects that synergy and redundancy induce on the granger causal analysis of time series .concerning synergy , we have shown that the search for synergetic contributions in the information flow is equivalent to the search for suppressors , i.e. variables that improve the predictive validity of another variable .pairwise analysis fails to put in evidence this kind of variables ; fully multivariate granger causality solves this problem : conditioning on suppressors variables leads to nonzero granger causality . in cases when the number of samples is low ,we have shown that partially conditioned granger causality is a valuable option , provided that the selection strategy , to choose the conditioning variables , succeeds in picking the suppressors . in this paperwe have considered two different strategies : choosing the most informative variables for the candidate driver node , or choosing the nodes with the highest pairwise influence to the target . from the several examples analyzed here we have shown that the first strategy is viable in presence of redundancy , whilst when the interaction pattern has a tree - like structure , the latter is preferable ; however the issue of selecting variables for partially conditioned granger causality deserves further attention as it corresponds to the search for suppressor variables and correspondingly of synergetic effects .we have also provided evidence , for the first time , that synergetic effects are present in an epileptic brain in the preictal condition ( just before the seizure ) .we have then shown that fully conditioned granger approaches do not work well in presence of redundancy . to handle redundancy , we propose to split the pairwise links in two subsets : those which correspond to indirect connections of the multivariate granger causality , and those which are not .the links that are not explained as indirect connections are ascribed to redundancy effects and they are merged to those from cgc to provide the full causality pattern in the system .we have applied this approach to a genetic data - set from the hela culture , and found that the underlying gene regulatory networks are characterized by both redundancy and synergy , hence these approaches are promising also w.r.t .the reverse engineering of gene regulatory networks . in conclusion , we observe that the problem of inferring reliable estimates of the causal interactions in real dynamical complex systems , when limited a priori information is available , remains a major theoretical challenge . in the last years the most important results in this direction are related to the use of data - driven approaches like granger causality and transfer entropy . 
in this workwe have shown that in presence of redundancy and synergy , combining the results from the pairwise and conditioned approaches may lead to more effective analyses .99 a. bashan , r.p .bartsch , j. w. kantelhardt , s. havlin , p. ch .ivanov , nature communications 3 , 702 , ( 2012 ) a.l .barabasi , _ linked : the new science of networks_. ( perseus publishing , cambridge mass . , 2002 ) ; s. boccaletti , v. latora , y. moreno , m. chavez and d .- u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .abbott , c. van vreeswijk , phys .* e 48 * , 1483 ( 1993 ) .d. bernardo , d. lorenz , j.j .collins , science * 301 * , 102 ( 2003 ) . c.l .tucker , j.f .gera , p. uetz , trends cell biol . *11 * , 102 ( 2001 ) .h. jeong , b. tombor , r. albert , z.n .oltvai , a.l .barabasi , nature * 407 * , 651 ( 2000 ) .r. guimer , l.a .nunes amaral , nature * 433 * , 895 ( 2005 ) i m de la fuente , jm cortes , mb perez - pinilla , v ruiz - rodriguez and j veguillas , plos one * 6 * , e27224 ( 2011 ) e. pereda , r. quiroga , j. bhattacharya , progress in neurobiology * 77 * , 1 ( 2005 ) m. wibral , r. vicente and j.t .lizier ( eds . ) _ directed information measures in neuroscience _( springer , berlin , 2014 ) ; k. sameshima , l.a .baccala ( eds . ) _ methods in brain connectivity inference through multivariate time series analysis _( crc press , 2014 ) ; c.w.j .granger , econometrica * 37 * , 424 ( 1969 ) .bressler , a.k .seth , neuroimage * 58 * , 323 ( 2011 ) t. schreiber , phys .. lett . * 85 * , 461 ( 2000 ) .l. barnett , a. b. barrett , and a. k. seth , phys .103 , no . 23 , 2009 .k. hlavackova - schindler , applied mathematical sciences * 5 * , 3637 ( 2011 ) r. vicente , m. wibral , m. lindner , g. pipa , j comput neurosci * 30 * , 45 ( 2011 ) d. zhou , y. zhang , y. xiao and d. cai , new j. phys . * 16 * , 043016 ( 2014 ) r. p. bartsch , p. ch .ivanov , communications in computer and information science , 438 , pp 270 - 287 ( 2014 ) r. p. bartsch , a. y. schumann , j. w. kantelhardtd , t. penzele , and p. ch .ivanov , proceedings of the national academy of sciences 109 ( 26 ) , 10181 - 10186 ( 2012 ) t. ge , y. cui , w. lin , j. kurths and c. liu , new j. phys .* 14 * 083028 ( 2012 ) j. f. geweke , journal of the american statistical association , vol .907 - 915 , 1984 .d. marinazzo , m. pellicoro , and s. stramaglia , computational and mathematical methods in medicine , volume 2012 ( 2012 ) , article i d 303601 .g. wu , w. liao , h. chen , s. stramaglia , d. marinazzo brain connectivity , 3(3 ) : 294 - 301 ( 2013 ) l. angelini et al ., phys . rev . *e 81 * , 037201 ( 2010 ) . v. griffith and c. koch , ( 2014 ) .quantifying synergistic mutual information, in guided self - organization : inception , vol .9 , ed . m. prokopenko ( berlin : springer ) , 159190 . m. harder , c. salge and d. polani , phys .e * 87 * , 012130 ( 2013 ) j.t .lizier , b. flecker , p.l .williams artificial life ( alife ) , 2013 ieee symposium on , pp.43,51 , doi : 10.1109/alife.2013.6602430 ( 2013 ) s. yang , j. gu , jour of zhejiang university science * 5 * , 1382 ( 2004 ) e. schneideman , w. bialek , m.j .berry , j neurosci * 23 * , 11539 ( 2003 ) i. gat and n. tishby , nips page 111 - 117 , the mit press ( 1998 ) .a. b. barrett , l. barnett , and a. k. seth , phys.rev .4 , article i d 041907 , 14 pages , 2010 .z. zhou , y. chen , m. ding , p. wright , z. lu , and y. liu , human brain mapping , vol .7 , pp . 2197 ( 2009 ) d. marinazzo , w. liao , m. pellicoro , and s. 
stramaglia , physics letters , section a , vol .39 , pp . 4040 ( 2010 ) aj conger , educational and psychological measurement april 1974 vol .34 no . 1 35 - 46 ft thompson , du levine , multiple linear regression viewpoints , 24 : 11 - 13 ( 1997 ) s. stramaglia , g. wu , m. pellicoro , d. marinazzo physical review e , 86 , 066211 ( 2012 ) d. chicharro , a. ledberg physical review e , 86 , 041901 ( 2012 ) k. hlavackova - schindler , m. palus , m. vejmelka , j. bhattacharya , physics reports * 441 * , 1 ( 2007 ) .d. marinazzo , m. pellicoro and s. stramaglia , phys .e * 77 * , 056215 ( 2008 ) .v. vapnik .the nature of statistical learning theory .springer , n.y . , 1995 . j. shawe - taylor and n. cristianini , _ kernel methods for pattern analysis_. ( cambridge university press , london , 2004 ) d. marinazzo , m. pellicoro , s. stramaglia , phys .lett . * 100 * , 144103 ( 2008 ) .n. ancona and s. stramaglia , neural comput . *18 * , 749 ( 2006 ) .http://math.bu.edu/people/kolaczyk/datasets.html , accessed may 2012 m.a .kramer , e.d .kolaczyk , h.e .kirsch , epilepsy research * 79 * , 173 , 2008 j.r .masters , nature reviews cancer * 2 * , 315 ( 2002 ) .a. fujita et al . ,bmc system biology * 1:39 * , 1 ( 2007 ) .whitfield et al .cell * 13 * , 1977 ( 2002 ) . , over 100 runs of 2000time points , retrieved by pwgc and cgc on the coupled map lattice described in the text , eqs .( [ catena1]-[catena2 ] ) .( b ) on the coupled map lattice , the average error ( sum of type i errors and type ii errors in the recovery of causal interactions ) by pcgc , obtained by the ib strategy and by pb , is plotted versus the coupling .errors are averaged over 100 runs of 2000 time points .( c ) for the example dealing with redundancy , cgc and pwgc are plotted versus the number of variables .results are averaged over 100 runs of 1000 time points ( d ) for the example dealing with synergy , cgc and pwgc are plotted versus the coupling .results are averaged over 100 runs of 1000 time points [ fig1],width=453 ] is plotted as a function of the coupling for the example dealing with redundancy due to synchronization , eqs .( [ synch1]-[synch2 ] ) .the linear algorithms are used here and results are averaged over 100 runs of 2000 time points ( b ) the cgc and pwgc of the causal interaction between two variables in the multiplet is plotted as a function of the coupling for the example dealing with redundancy due to synchronization . the linear algorithms are used here and results are averaged over 100 runs of 2000 time points ( c ) the nonlinear cgc and pwgc of the causal interaction is plotted as a function of the coupling for the example dealing with redundancy due to synchronization . the algorithm with the polynomial kernel of order 2is used here and results are averaged over 100 runs of 2000 time points ( d ) the nonlinear cgc and pwgc of the causal interaction between two variables in the multiplet is plotted as a function of the coupling for the example dealing with redundancy due to synchronization .the algorithm with the polynomial kernel of order 2 is used here and results are averaged over 100 runs of 2000 time points .[ fig2],width=491 ] corresponding to a nonzero element of the matrix .if , then a common source influences and but with different lags .( b ) the indirect causality corresponding to nonzero elements of the matrix . 
( residual figure captions : fig . 6(b) shows the circuit in which a third node acts as a mediator of an indirect causal interaction ; fig . 7 shows linear and nonlinear pwgc and cgc , plotted versus the coupling , for the toy model used to illustrate the proposed approach combining pwgc and cgc , averaged over 100 runs of 2000 time points ; fig . 8 reports , for the hela data , ( a ) the links retrieved by pcgc with the ib and pb strategies , ( b ) the pcgc links not present in the bivariate pattern , ( c ) the percentage of bivariate links consistent with pcgc and ( d ) the bivariate links not consistent with pcgc . )
we analyze , by means of granger causality , the effect of synergy and redundancy in the inference ( from time series data ) of the information flow between subsystems of a complex network . whilst fully conditioned granger causality is not affected by synergy , the pairwise analysis fails to put synergetic effects in evidence . in cases when the number of samples is low , thus making the fully conditioned approach unfeasible , we show that partially conditioned granger causality is an effective alternative , provided that the set of conditioning variables is properly chosen . we consider here two different selection strategies for partially conditioned granger causality ( based either on the informational content for the candidate driver or on selecting the variables with the highest pairwise influence on the target ) and show that , depending on the structure of the data , either one or the other may be preferable . on the other hand , we observe that fully conditioned approaches do not work well in the presence of redundancy ; this suggests separating the pairwise links into two subsets , those corresponding to indirect connections of the fully conditioned granger causality ( which should thus be excluded ) and those which can be ascribed to redundancy effects : the latter , together with the results from the fully conditioned approach , provide a better description of the causality pattern in the presence of redundancy . we finally apply these methods to two different real datasets . first , analyzing electrophysiological data from an epileptic brain , we show that synergetic effects are dominant just before seizure occurrences . second , our analysis applied to gene expression time series from the hela culture shows that the underlying regulatory networks are characterized by both redundancy and synergy .
the problem of enclosed pedestrian evacuation has been studied from various points of view . on the one hand it presents academic interest because of its similarity with makeshift linked to granular media . on the other hand ,understanding the dynamics of the movement of pedestrians and anticipating the problems that may arise in an emergency situation is critical in the design of large spaces that will be occupied by many people .the efficient evacuation of the occupants of such places under a state of emergency is fundamental when trying to minimize the negative effects of panic and confusion , clogging and avalanches .predicting evacuation patterns is one of the first steps towards this goal .evacuation is only a particular aspect of a broader problem : the pedestrian movement .pedestrian dynamics has been extensively studied from a theoretical and experimental point of view .the associated pedestrian flow is usually modeled as a many - body system of interacting individuals .the literature on this subject is rather extensive , exposing several different approaches to the problem . in ,the authors introduce the active walker model to describe human trail formation and they show that the pedestrian flow system exhibits various collective phenomena interpreted as self - organized effects . in author suggest that the behaviors of pedestrian crowds are similar to gases or fluids .other authors prefer the formalism of cellular automata to frame their models .this is the approach we are going to adopt in the present work . in the present lattice gas model , are set on the sites of a lattice , with the restriction that each site can not hold more than one walker .the pedestrians move to empty sites according to a preferential direction dictated by the need of escaping from the room .conflicts between agents that want to get to the same position are solved using specific game rules and taking into account previously defined strategies ( of cooperation or defection ) which represent the characters of the agents .hence , the considered pedestrian dynamics include games between agents which affect their possibilities of leaving the room .real pedestrians are entities much more complex than automata with the sole idea of escaping from a room .one of the most difficult aspects of modeling pedestrian flow is to simulate the effects of subjective aspects that affect the interaction among the pedestrians .most of the models have focused on modeling the flow of individuals in pure mechanistic approaches but the behavioral reaction of the evacuees during their movement has not been so far investigated in detail . by including a game dynamics in the resolution of conflicts between pedestrians, the present work aims at opening a door to the analysis of this important and rather complex feature .first , we want to introduce some basic concepts of game theory and 2 x 2 symmetric games that are relevant for the pedestrian dynamics that we will consider . a two players game can be characterized by the set of the strategies that the players can adopt and by the payoff received by each strategy when confronting any other .if the game is symmetric , i.e. both players have access to the same set of strategies and payoffs , the information can be uploaded in an matrix , with the number of strategies . in 1966 , m. guyer and a. 
Rapoport catalogued all the 2 x 2 games. There are twelve symmetric games; eight of them are trivial, in the sense that there is no conflict of interest, as both players prefer the same outcome. The remaining four games represent four distinct social dilemmas. In such a game there are only two different strategies, which can be defined as cooperative (C) and defective (D). We can consider only relative payoff values, and this leaves us with four generic payoffs arranged in the usual 2 x 2 payoff matrix. The game rules associated with this matrix can be summarized as follows. In a game between cooperators, with no defectors, there is always a winner: one of the players is randomly chosen to move, each with the same probability. In a game with at least one defector, the existence of a winner is not ensured. In this case the cooperators always lose, and each defector has a probability of winning that depends on the number of cooperators in the game, so the probability that the conflict produces a winner is in general smaller than one. As stated before, once all the conflicts have been solved, all the winners, as well as all the agents that can move without conflict, are finally moved to their selected sites. When an agent reaches the position of the exit, it is taken out of the system and the number of escaped agents is increased by one. We characterize the evacuation dynamics of the system by computing the mean exit time, defined as the average over realizations of the number of time steps required to completely evacuate the room. We also study the time dependence of the mean number of escaped agents, i.e. the average over realizations of the number of agents that have reached the exit at a given time. Before considering the general case of a population with both cooperators and defectors, we study the extreme cases in which all the agents share the same strategy. First we consider a population in which all the agents are cooperators. In this case, any conflict leads to a random game in which there is always a winner: each of the cooperators involved has the same probability of becoming the winner and, thus, of moving to the desired site. Note that the parameter governing the conflicts is irrelevant here. In figure 2 we show the positions of all the agents at six instants of the evolution for a single realization of the dynamics. Figure ([figrandom]) shows results for the mean exit times and the number of escaped agents as a function of time for the evacuation dynamics from a square room, considering different system parameters. In fig.([figrandom]).a we see that the mean exit time increases more rapidly than exponentially with the parameter that controls the randomness of the motion. This means that the randomness of the dynamics strongly slows the evacuation process; note that this randomness may be associated with an uncertainty of the agents in their knowledge of the position of the door. We can also see that the mean exit time increases with the initial density, as could be expected. Figure ([figrandom]).b shows that such growth is linear in the initial density at fixed randomness. In fig.([figrandom]).c we study the dependence of the mean exit time on the system size; the growth is slower than exponential and faster than linear. The small inset shows the same curves in logarithmic scale. Finally, fig.([figrandom]).d shows the evolution of the mean number of escaped agents for different parameter values.
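A minimal sketch of the conflict-resolution rules just described is given below; the win probabilities assigned to defectors are placeholders for whatever payoff-dependent values the model uses, so the function should be read as an illustration of the rule structure rather than as the authors' implementation.

import random

def resolve_conflict(strategies, punishment=0.5):
    # strategies: list of 'C' (cooperator) / 'D' (defector) competing for one site.
    # Returns the index of the winner, or None if the conflict produces no winner.
    n = len(strategies)
    if n == 1:
        return 0                                  # no conflict at all
    if all(s == 'C' for s in strategies):
        return random.randrange(n)                # cooperators: always a winner
    # with at least one defector the cooperators always lose and the conflict
    # may end with nobody moving; each defector wins with a reduced probability
    defectors = [i for i, s in enumerate(strategies) if s == 'D']
    random.shuffle(defectors)
    win_probability = (1.0 - punishment) / n      # placeholder value
    for i in defectors:
        if random.random() < win_probability:
            return i
    return None

print(resolve_conflict(['C', 'C', 'C']))          # always returns some index
print(resolve_conflict(['C', 'D', 'D']))          # returns 1, 2 or None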
here, we see that the growth is linear along most of the evolution .small deviations from the linear regime occur for very short times and very long times .this is due to the fact that the flux of pedestrian out of the room is controlled mainly by the local density at the exit ( and by the parameter which is constant ) .the local density at the exit at small times coincides with , then it increases until it reaches a quasi - stationary value .such quasi - stationary density determines the slope of the linear growth of the evacuation profile .finally , at large times , when the evacuation process is about to finish , the density at the exit decreases and the flux at the door is reduced .fig.([figrandom]).d also shows that the slope of the linear growth is essentially independent of the initial density .this indicate that the quasi - stationary value of the density at the door depends only on , as could be expected .results for a population of only cooperators in a square room ( ) .a ) exit time as a function of for different values of and .b ) exit time as a function of for different values of and .c ) exit time as a function of for different values of .the small inset shows the same curves in logarithmic scale .d ) evolution of the number of escaped agents for different values of and for .,scaledwidth=80.0% ] now we consider a population in which all the agents are defectors .this is an extreme case opposite to the one considered in the previous subsection .we focus on the dependence of the results on the parameter which now becomes relevant as it rules the probabilities of motion resulting from all the conflicts .figure ( [ figalld]).a shows the mean exit time as a function of the initial density for different values of , while figure ( [ figalld]).b shows the mean exit time as a function of for different values of . for the sake of comparison , in both insets we also include the results for the only - cooperators case studied in the previous subsection .it can be seen that the mean exit time grows linearly both with ( fig.([figalld]).a ) and with ( fig.([figalld]).b ) .moreover the mean exit time for the only - collaborators case is always smaller than that for the only - defector case at any value of .this last fact can be understood as a consequence of the delays for stepping after the conflicts , which are present only for games with defectors .evacuation dynamics for systems with only defectors .a ) mean exit time as a function of the initial density of agents for different values of . for the sake of comparisonwe also indicate the results for a system with only cooperators ( squares ) .b ) mean exit time as a function of for different values of .the segments on the left indicate the values for systems with only cooperators.,scaledwidth=80.0% ] now we study the evacuation problem for heterogeneous populations with both collaborators and defectors . in figure5.a we show the mean exit time as a function of the initial fraction of defectors for fixed values of and considering different values of .it can be seen that for the exit time is almost independently of .this is so because in this case the penalization for defectors is null excepting for games with more than two defectors , which are likely only at large . at larger values of exit time grows approximately linearly with . 
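The mean exit time and the mean number of escaped agents used throughout this section are ensemble averages over independent realizations; the sketch below shows one way such averages can be accumulated, with run_single_evacuation acting only as a stand-in for the full lattice-gas dynamics (its constant per-step escape probability is a placeholder, not the model rule).

import random

def run_single_evacuation(n_agents, escape_probability=0.01, max_steps=100000):
    # placeholder for one realization of the dynamics: returns, for each time
    # step, the cumulative number of agents that have reached the exit
    escaped, profile = 0, []
    while escaped < n_agents and len(profile) < max_steps:
        for _ in range(n_agents - escaped):
            if random.random() < escape_probability:
                escaped += 1
        profile.append(escaped)
    return profile

def ensemble_statistics(n_agents, realizations=100):
    # mean exit time and mean evacuation profile over independent realizations
    profiles = [run_single_evacuation(n_agents) for _ in range(realizations)]
    mean_exit_time = sum(len(p) for p in profiles) / realizations
    longest = max(len(p) for p in profiles)
    padded = [p + [p[-1]] * (longest - len(p)) for p in profiles]
    mean_profile = [sum(column) / realizations for column in zip(*padded)]
    return mean_exit_time, mean_profile

mean_time, mean_curve = ensemble_statistics(n_agents=50, realizations=20)
print('mean exit time:', mean_time)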
in figure5.b we show the normalized exit time , defined as , where is the mean exit time obtained for an initial fraction of defectors .the plot shows us that the curves for large enough are close to collapse , indicating thus the approximately linear behavior with .however a close look to the figure reveals that the dependence on may be exactly linear only for a value of close to ( i.e. the limit between pd and sh games ) , while it is superlinear for smaller values of ( pd game ) and slightly sublinear for larger values of ( sh game ) .evacuation dynamics for heterogeneous populations .a ) exit time as a function of the initial fraction of defectors for different values of .b ) normalized exit time as a function of the fraction of defectors for the same systems as in ( a ) .all the calculations are for and .,scaledwidth=80.0% ] one important questions we pose is whether cooperators can or can not take advantages from mutual cooperation as it was verified in other systems where cooperation arise as an emerging phenomena . with this goal in mindwe characterize the dynamics of the system by three quantities , all of them aiming at revealing the relative success of the cooperators in finding the exit .first , we sample the composition of the population inside the room , comparing the instantaneous fraction of cooperators with the initial one .we define a normalized instantaneous fraction of cooperators in the room as where is the fraction of cooperators in the room at time .a departure of the derivative of from zero indicates that the leaving individuals do not represent a random sample of the population in the room .a positive ( negative ) value of indicates that the population in the room has a greater ( lower ) fraction of cooperators than the initial .meanwhile , a positive ( negative ) derivative of indicates that defectors ( cooperators ) are being more successful in finding the exit .figure [ room ] show the time behavior of this quantity for two values of the initial density of defectors and several values of for two different values of .we observe that for small values of we have throughout the evolution meaning that the defectors clearly surpass the cooperators in finding the exit .note that , not only is positive but its derivative increases with time , indicating the continuous increment of the fraction of cooperators in the room .but as increases and even still in the pd regime ( in the figure ) , the values of become negative meaning that the cooperators start to profit from mutual cooperation . while defectors loose time in futile arguments , the cooperators leave the room . and its derivative are negative , reflecting the ability of cooperators to reach the exit .normalized instantaneous fraction of cooperators within the room for different values of .results for , and considering an initial fraction of cooperators equal to ( a ) and ( b).,scaledwidth=80.0% ] we have studied several initial configurations with different initial conditions including varying values for the initial density of individuals , the initial fraction of cooperators and defectors and several sizes . 
with some small variations ,we have found qualitatively the same results indicating the prevalence of defectors at small and the prevalence of collaborators at large .to complement this measure , we analyze the strategy of the individuals exiting the room at each time step and calculate the fraction of cooperators among them .we compare this fraction with the corresponding to the cooperators remaining in the room and define the normalized fraction of exiting cooperators as here is the ensemble averaged fraction of cooperators among exiting individuals at time .figures [ exit ] show the time behavior of for the same conditions situations analyzed in figure [ room ] .again , the fraction of exiting defectors is higher than the expected one for small values of but the situation is reversed as increases . normalized fraction of cooperators exiting the room for different values of .results for , and considering an initial fraction of cooperators equal to ( a ) and ( b).,scaledwidth=80.0% ] as cooperators can only advantage defectors when interacting only among them , any evidence of cooperators performing better than defectors at leaving the room must be reflected in the formation of clusters of cooperators where mutual cooperation occurs . if cooperators are not clustered , the defectors will outstrip them. the effect of clustering of cooperators can be visually appreciated , but the assertion could be subjective .so we define a quantity that let us measure the instantaneous degree of clustering of cooperators by counting the fraction cooperating neighbors of each cooperating individual in the room and then we define where and are the number and the fraction of cooperators in the room at time .we exclude isolated individuals from this count .clustering of cooperators in the room for different values of .results for , and considering an initial fraction of cooperators equal to ( a ) and ( b).,scaledwidth=80.0% ] the results in figure [ clust ] confirm the occurrence of the expected behavior . when the quantities and indicate that cooperators are performing better than defectors in reaching the exit , the results for indicate an increase of the clustering of cooperators .the dynamics of the system leads to a partial segregation into cluster of cooperators and defectors , that leave the former in a situation of taking advantage from the benefits of mutual cooperation .figure [ clust ] shows also an interesting behavior for low values of .the curves show an increase of the clustering of cooperator , more apparent for in fig 1.a .this effect is different in nature and shape from those observed for higher values of .the origin of this increment correspond to the fact that defectors , freely to move due to low values of punishment manage to approach the exit , displacing the cooperators and producing a segregation that isolates the last from the door .this effect is verified in figs .[ room ] and [ exit ] , where we can observe that cooperator have difficulties in reaching the exit for values of close to 1 .this effect is also reflected in the fact that when the fraction of defectors is higher ( 0.6 in fig .[ clust].a vs. 
0.2 in fig [ clust].b ) , the displacement of cooperators from the door and their corresponding segregation is enhanced .the results thus indicate that cooperators can take advantage from mutual cooperation for large enough values of , typically , well within the pd regime where still the dc payoff is larger than the cc one .the benefits from mutual cooperation are further enhanced in the sh regime ( ) .one of the shortcomings of the lattice gas models used in pedestrian dynamics is the lack of inclusion of behavioral aspects .considering this situation , the present work has a clear goal : to develop a simple model able to consider some individual attitudes that may affect the movement of interacting pedestrians . motivated by this objective , we have amalgamated models for pedestrian evacuation with game theory concepts and analyzed the emergence of non trivial effects that may manifest .the results show that the emerging phenomena observed in an evolutionary prisoner s dilemma are also present here . in these works the systems are spatially extended and allow for the cooperators to form clusters capable to resist the invasion of defectors and even to expand . in the present work the evolutionary aspects has not been so far included but the growth of clusters is promoted by the mobility of the individuals . as a result of this clustering , the cooperators can profit from the only advantage they have over the defectors : the mutual cooperation .when this happens , cooperators have more success in reaching the exit than defectors , as reflected in the plots .it is important to understand the limitation and scope of the present model .we are not intending to reproduce a real situation but to point out the effects that behavioral aspects may have on the denouement of an emergency evacuation scenario .future work envisions the inclusion of evolutionary strategies , differentiated social roles , off lattice dynamics and different geometries and obstacles .a. rapoport and m. guyer , general systems * 11 * , 203 ( 1966 ) .m. doebeli and c. hauert , ecol . lett .* 8 * , 748 ( 2005 ) .v. m. eguiluz , m. g. zimmermann , c. j. cela - conde , m. san miguel , am .j. sociol .* 110 * , 977 ( 2005 ) .f. fu , x. chen , l. liu and l wang , phys lett a * 371 * , 58 ( 2007 ) .j. gmez - gardees , m. campillo , l. m. flora , y. moreno , phys .98 * , 108103 ( 2007 ) .p. langer , m.a .nowak , c. hauert , j. theor .biol * 250 * , 634 ( 2008 ) . c. lei , j. jia , x. chen , r. cong and l. wang , chin .* 26 * , 080202 , ( 2009 ) .szab , g. and g. fth .phys . rep . *446 * , 97 - 216 ( 2007 ) .m. n. kuperman , s. risau - gusman , phys .e * 86 * , 016104 ( 2012 ) g. abramson , m. kuperman , phys .e * 63 * , 030901r ( 2001 ) . c. p. roca , j. a. cuesta , a. snchez , phys .e * 80 * , 046106 ( 2009 ) .
we analyze the pedestrian evacuation of a rectangular room with a single door considering a lattice gas scheme with the addition of behavioral aspects of the pedestrians . the movement of the individuals is based on random and rational choices and is affected by conflicts between two or more agents that want to advance to the same position . such conflicts are solved according to certain rules closely related to the concept of strategies in game theory , cooperation and defection . we consider game rules analogous to those from the prisoner s dilemma and stag hunt games , with payoffs associated to the probabilities of the individuals to advance to the selected site . we find that , even when defecting is the rational choice for any agent , under certain conditions , cooperators can take advantage from mutual cooperation and leave the room more rapidly than defectors .
thermal convection has been studied intensely in the laboratory for years .a relatively simple arrangement which is commonly used to study thermal convection is a thin horizontal layer of fluid confined between flat horizontal plates subjected to a vertical temperature gradient . in this arrangement ,when the temperature of the bottom plate exceeds that of the top plate by a critical value , the fluid layer becomes mechanically unstable .this , the so called rayleigh bnard instability , is identified by a marked increase in the thermal conductivity of the fluid layer due to the onset of rayleigh bnard convection .shadowgraph visualization of the rayleigh - bnard instability in optically transparent fluids has enabled comparison between theoretical and experimental work on a well defined nonlinear system . despite the simplicity of this arrangement and knowledge of the underlying dynamical equations of fluid motion , the nature of the sequence of transitions from diffusive to time dependent and finally to turbulent heat transport in this system is not understood .the weakly non linear theory of convection is able to predict the critical rayleigh number , the critical wavenumber , and the pattern selected at the onset of convection . when the critical temperature difference across the fluid layer is relatively small , such that the fluid properties vary little within the fluid layer , one expects to observe a continuous transition from a uniform base state to a pattern consisting of stationary two dimensional convection rolls .such patterns have been observed in simple fluids at onset in both rectangular and axisymmetric convection cells .the weakly non linear theory is incapable , however , of describing moderately supercritical convection . the non - linear properties of moderately supercritical convection have been studied in detail by busse and collaborators . by numerically solving the non linear oberbeck boussinesq equations of fluid motion for a laterally unbounded fluid layer confined between horizontal plates in the presence of a vertical thermal gradient , they have been able to construct a stability diagram , known as the `` busse balloon , '' for convective flow covering a wide range of rayleigh ( ) and prandtl ( ) numbers . rayleigh number serves as a control parameter for the onset of convection .the prandtl number dictates the nature of the secondary instabilities to which convection rolls are subject above onset . in these equations, is the horizontal plate separation , is the temperature difference between the horizontal plates , is the gravitational field strength , and , and are the isobaric thermal expansion coefficient , the kinematic viscosity and the thermal diffusivity of the fluid .early studies using helium revealed a time dependent nusselt number just above onset .the nusselt number , , is just the dimensionless ratio of the observed thermal conductivity to that of the quiescent fluid .such a time dependent response to a constant driving force suggests a time dependent flow pattern . unfortunately , flow visualization was not available for these experiments , and measurements of the nusselt number alone only provide information on the global , i.e. 
the spatially averaged , character of heat transport .direct visualization of the fluid , on the other hand , allows features in the global heat transport to be identified with specific dynamic events and spatial structures .in particular , shadowgraph visualization from above the convection cell has been used extensively to obtain flow images which have been indispensable for comparisons of theoretical calculations and experimental measurements in optically transparent fluids . a similar comparison between theory and experiment in many opaque fluids has not been made , owing primarily to the difficulty of imaging flow patterns in optically opaque fluids .for example , of the extant studies of rayleigh - bnard convection in liquid metals , only fauve , laroche , and libchaber have visualized convective flow from above .they used a temperature - sensitive liquid crystal sheet beneath a transparent ( sapphire ) top plate to visualize localized temperature differences associated with convective structures in liquid mercury confined to small aspect ratio cells .these studies , however , lacked detailed images of convective flow patterns .there has been no systematic experimental study of the convective planform for rayleigh - bnard convection in a liquid metal .this is troubling , since the prandtl numbers of liquid metals are typically an order of magnitude ( or two , or three ) lower than that of other fluids , and since the prandtl number is expected to play an important role in determining the nature of the instabilities to which straight convective rolls are subject above the onset of convection .the liquid crystal method has proved rather more fruitful in the study of ferrofluid convection .bozhko and putin , for instance , have used this method to visualize convection patterns in a kerosene - based magnetic fluid confined to a cylindrical cell . in the absence of an applied magnetic field , they observed oscillatory convection for raleigh numbers up to . on the other hand , huke and lckehave calculated the convection patterns for typical ferrofluid parameters , and found that squares should be the first stable pattern to appear above onset . for ferrofluids , like other binary mixtures ,the nature of the instabilities to which the squares are subject depends upon other fluid parameters such as the separation ratio and the lewis number . generally speaking, the liquid crystal method is limited by the thermal response of the liquid crystal . in the experiments of bozhko and putin , for instance, the entire color change of the liquid crystal occurs between 24 and 27 degrees celsius .moreover , the liquid crystal must be thermally isolated from the sample fluid in order to protect the liquid crystal from the chemical action of the ferrofluid . 
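For orientation, the dimensionless groups mentioned above can be evaluated directly from the fluid properties and the cell geometry. The sketch below uses the standard definitions Ra = g alpha DeltaT d^3 / (nu kappa) and Pr = nu / kappa and inverts the first at the critical value Ra_c = 1708; the property values are rough illustrative numbers for a low-viscosity silicone oil, not figures quoted in the text.

G = 9.81  # gravitational acceleration, m/s^2

def rayleigh(alpha, nu, kappa, d, delta_t, g=G):
    # Ra = g * alpha * delta_t * d**3 / (nu * kappa)
    return g * alpha * delta_t * d**3 / (nu * kappa)

def prandtl(nu, kappa):
    # Pr = nu / kappa
    return nu / kappa

def critical_delta_t(alpha, nu, kappa, d, ra_c=1708.0, g=G):
    # temperature difference at which Ra reaches the critical value ra_c
    return ra_c * nu * kappa / (g * alpha * d**3)

# rough illustrative property values for a 5 cSt silicone oil
alpha = 1.1e-3   # 1/K,   isobaric thermal expansion coefficient
nu    = 5.0e-6   # m^2/s, kinematic viscosity
kappa = 7.0e-8   # m^2/s, thermal diffusivity
d     = 3.0e-3   # m,     plate separation

print('Pr              :', prandtl(nu, kappa))
print('critical DeltaT :', critical_delta_t(alpha, nu, kappa, d), 'K')
print('Ra at DeltaT=2 K:', rayleigh(alpha, nu, kappa, d, 2.0))

Because the critical temperature difference scales as 1/d^3, halving the plate separation raises it by a factor of eight, which is the trade-off that appears later when the heights of the convection cells are chosen.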
as an alternative to the liquid crystal method ,the use of ultrasound has been explored for use in acoustic imaging of rayleigh - bnard convection .most recently , xu and andereck used an array of 11 contact transducers fixed to the side of a small aspect ratio convection cell to obtain a two dimensional map of the thermal field in liquid mercury undergoing convection .their method , however , does not allow one to study the convective planform from above , as one can with standard shadowgraph and liquid crystal techniques .in the present paper , we describe an apparatus which allows real - time acoustic imaging of thermal convection in a rayleigh - bnard cell .this apparatus can be used to visualize thermal convection , from above the convection cell , in optically opaque fluids such as liquid metals , ferrofluids and opaque gels .the paper is organized as follows . in sec .[ sec : design ] , we describe the design and construction of the apparatus . here, we provide an overview of the apparatus , followed by detailed descriptions of the ultrasound imaging system , the convection cell , thermometer construction and temperature regulation . in sec .[ sec : experiments ] , we describe experimental procedures and results . in particular , we provide images of convective structures observed in silicone oil and ferrofluid .we also describe prospects for continuing studies of ferrofluids and for planned work using liquid mercury .a sectional view of the apparatus design is shown in fig . [ fig01 ] .the sample fluid is introduced _ via _ one of two entrance ports ( ) to a sample space located between two sapphire windows ( ) .these windows form the top and bottom boundaries of the convection cell and are separated by a thin spacing ring ( ) .a detailed description of the convection cell is provided in sec .[ subsec : cell ] .a vertical temperature gradient is maintained across the convection cell by thermally regulating the water circulating through the spaces above the top , and below the bottom , sapphire windows .first , we describe the thermal regulation of the top sapphire window .cool water from a refrigerated bath circulator enters the apparatus through a port ( ) and flows through a channel which runs around the circumference of the apparatus ( ) .the chilled water is forced downward through a series of six vertical holes ( not shown ) in the floor of this channel and into a second flow distribution channel ( ) . from the flow distribution channel ,the water is injected into the top bath space through six jets ( ) spaced evenly about the perimeter of the bath space .these jets direct the chilled water onto the top surface of the upper sapphire window .heat which has propagated vertically through the cell is absorbed by the chilled water .the chilled water leaves the bath space through six exit ports ( ) , which are spaced evenly about the top of the bath space . after flowing through another circumferential channel ( ) , the fluid returns to the bath circulator _ via _ the exit port ( ) .thermal regulation of the bottom sapphire window is accomplished in a similar fashion by circulating warm water through the lower bath space .the temperature of the upper and lower bath spaces are monitored using thermometers inserted through water - tight radial ports ( ) in the wall of the upper and lower bath spaces . 
upto four thermometers may be inserted into each bath space .a detailed description of the thermometer construction is provided in sec .[ subsec : thermometry ] .the primary difference between the upper and the lower bath space construction is that the ultrasound transducer ( ) is situated just below the bottom sapphire window in the lower bath space .it is kinematically mounted atop three stainless steel threaded posts ( ) which are inserted through bushings in the bottom bath flange ( ) .the transducer is pulled downward against the top of the three mounting posts using a small spring ( not shown ) which is stretched between the bottom of the transducer and the bottom bath flange .there are 80 threads per inch on the mounting posts , and by adjusting them appropriately , the transducer can be placed very near to the bottom sapphire window and leveled optimally . an exploded view of the lower bath space and the transducer is shown in fig . [ fig02 ] .an rg-174 coaxial cable ( not shown ) which carries the ultrasound signal to the transducer is fed through a water - tight compression seal in a small hole near one of the mounting posts in the bottom bath flange . during an experiment, the transducer generates collimated ultrasonic plane waves which propagate vertically through the sample cell and the upper bath space .these ultrasound waves are detected by a modified version of a commercially available ultrasound camera ( acoustocam model i400 , imperium inc . , silver spring , md ) mounted atop an acrylic window ( labeled in fig .[ fig01 ] ) which seals the upper bath space .images of convection patterns within the convection cell are generated by observing the lateral variation of the temperature dependent speed of sound _ via _ refraction of acoustic plane waves passing vertically through the sample fluid layer . a photograph of the ultrasound camera ( turned lens - side up ) is shown in fig . [ fig03 ] .labeled are the acoustic lenses ( ) which focus the ultrasound waves onto a small piezoelectric transducer array inside the body of the camera .the array measures 12 mm on a side and consists of individual electrodes with 100micron center to center spacing .voltage variations in each pixel induced by incident ultrasound waves are read out using a multiplexer and frame grabber electronics .the image capture rate is 30 frames per second , similar to that of a standard ccd camera .the camera may be focused and the field of view may be varied by moving the acoustic lens using a knurled knob ( ) on the side of the camera .a field of view as large as 1 - 2 inches has been achieved .the acoustic lenses are enclosed in a cylindrical shield ( not shown ) which is filled with an impedance matching fluid _ via _ an external port ( ) .once the apparatus is assembled , it is placed on a tripod mount atop an aluminum platform , as shown in the photograph of fig .[ fig04 ] .a shallow tray placed under the platform catches any spillage in the event of a leak .the black tubes in the figure are thermally insulated tygon tubes which carry cooling and heating water to and from the bath spaces .the white wires in the figure are electrically insulated leads for the thermometers . also shownis a thin tygon tube which is attached to a syringe and which is used to fill the cell with a sample fluid . 
a vertical instrumentation panel on the front of the aluminum platformis fitted with feed - throughs to facilitate electrical and fluid connections between the apparatus and the external devices such as the sample fluid syringe , the bath circulators , the data acquisition / switch unit and the ultrasound controller .the external connections to the apparatus are depicted schematically in fig .[ fig05 ] .they can be divided into three separate systems : bath control , temperature monitoring , and ultrasound imaging .these systems are controlled using software running on a pc , as described in the next paragraph .the top and bottom bath space temperatures are regulated using separate refrigerated bath circulators ( neslab rte-7 digital plus refrigerated bath , thermo fisher scientific , waltham , ma ) .set - point temperatures are assigned from software by sending rs-232 commands _ via _ a cable connection between the pc and the bath circulators .the temperature of the upper and lower bath spaces inside the apparatus are monitored using six thermometers .temperature values of the six thermometers are scanned twice per second using a digital data acquisiton unit ( model 2700 dmm , keithley instruments , cleveland , oh ) connected to a usb port on the pc using a usb - to - gpib interface adapter ( model kusb-488a , keithley instruments , cleveland , oh ) .the ultrasound imaging system is controlled using a commercial software package ( acoustovision400 software , imperium , inc . , silver springs , md ) .the software allows the user to communicate with an ultrasound controller box which in turn attaches to both the transducer and the camera .the ultrasound images were calibrated by placing an object of known dimensions in the sample space and recording the number of image pixels corresponding to its size .for this purpose we fabricated an array of small holes drilled at precisely measured intervals into a thin sheet of aluminum .we placed this phantom " between the sapphire windows of the convection cell in order to calibrate our images .we also placed some more mundane objects into the cell so as to illustrate the resolution limit which is imposed by diffraction effects . shown in fig .[ fig06 ] are acoustic images of an ordinary paper clip and a small stainless steel washer which were placed on top of the convection cell .the successive images , ( a ) through ( d ) , illustrate the observations as we progressively moved the acoustic lenses so as to focus on the objects .the small ripples which can be observed are artifacts caused by diffraction .the resolution of the camera is diffraction limited by the wavelength of the ultrasound , which in turn depends upon the speed of sound in the medium .the wavelength of 5 mhz ultrasound in water is about 0.3 mm .another imaging difficulty which was encountered involved the accumulation of bubbles on the top bath window ( in fig .[ fig01 ] ) due to outgassing of the circulating bath water .these bubbles seriously degraded the ultrasound image quality during the course of consecutive experiments . 
to resolve this issue , the bath water was typically degassed before being added to the circulatorthis , however , did not solve the problem , since over time the circulating water would eventually reabsorb air .the final solution was to design a small wiper which could be swept across the surface of the bath window in - situ so as to clear the bubbles from the ultrasound path during the course of an experiment , if necessary .the bubble wiper consisted of a plastic rectangular prism 1/8 inch 1/8 inch 2.75 inches .a strip of doll eyelashes ( michaels , west allis , wi ) was attached to the long edge of the prism with water - resistant glue .a thin stainless steel guide wire was slid through a small hole drilled in the long edge of the prism and secured in place with a set screw . in this way, the bubble wiper resembled a tiny broom or rake , with the guide wire the handle .the guide wire was fed through a stainless steel capillary tube which was itself slid through a water - tight seal in the upper edge of the bath space just below the top bath window . pushing on the guide wire caused the bubble wiper to slide along a set of tracks so that the doll hairs brushed along the top bath window . in this way, the bubbles were pushed to one edge of the window . after pushing the bubbles aside ,they were removed from the bath space through a tiny capillary tube which was inserted through another tiny hole in the wall of the bath .the tip of the capillary tube resided near the top bath window , and by using a syringe , the accumulated air bubbles could be easily removed from the bath space .this contraption proved very effective in clearing bubbles from the field of view and permitting clear ultrasound images to be obtained .the windows which form the horizontal boundaries of the convection cell are four inch diameter and inch thick optically flat single - crystal sapphire disks ( hemlite sapphire windows , crystal systems inc . ,salem , ma ) .sapphire was chosen because of its high thermal conductivity and its small ultrasonic attenuation coefficient .the thermal conductivity of sapphire is approximately 50 times greater than that of quiescent liquid mercury , the sample fluid with the highest thermal conductivity which we plan to study .a high relative thermal conductivity minimizes undesirable horizontal thermal gradients in the windows .a thin spacing ring establishes the height , , of the convection cell .recall that the critical temperature difference , , for the rayleigh bnard instability is inversely proportional to . , in turn , establishes the minimum heating and cooling power of the bath circulators : a shallow fluid layer requires a large temperature difference ; a deep fluid layer requires a conveniently small temperature difference , but increases the aspect ratio of the cell , which is undesirable .the aspect ratio of the convection cell , , is defined as the dimensionless ratio of the cell width to its depth .both the critical rayleigh number and the convective planform are generally influenced by the existence of lateral sidewalls . if one wishes to study the stability of the fluid layer itself , sidewall effects are considered a spurious complication . in order to minimize the effects of the experimentally unavoidable lateral sidewalls , it is desirable to employ a large aspect ratio cylindrical cell , since this has the smallest circumference of any shape of a given cross sectional area . 
moreover, the thermal conductivity of the spacing ring should ideally match that of the fluid so as to prevent lateral thermal gradients near the sidewall .hence the material and dimensions of the spacing ring depends upon the choice of fluid being studied .we constructed cells with five different spacing ring materials and dimensions .the properties of these cells are summarized in tab .[ tab : cells ] .the first , and tallest , cell is designed specifically to study liquid mercury .stainless steel was chosen as the spacing ring material because its thermal conductivity ( erg / s ) is only slightly greater than that of liquid mercury ( erg / s ) . since above onset the nusselt number increases with rayleigh number , the thermal conductivity of convecting mercury increasingly matches that of stainless steel until it reaches approximately 30,000 , which lies significantly above the expected stability region for straight rolls .the remaining four cells were constructed with acetal sidewalls and have different heights so as to allow the onset of convection to occur at different values of . after assembling the cell and before assembling the rests of the apparatus ,it is important to verify that the sapphire windows are parallel .this was done using two techniques .the first was by using a micrometer .the spacing from the top of the top sapphire window to the bottom of the bottom sapphire window was measured at several locations across the cell .the second method was by using an interferometer setup , as shown in fig .[ fig07 ] .a he - ne laser beam was expanded and collimated to a beam width of approximately one inch . a pellicle beam splitter ( melles griot , rochester , ny )was used to direct the collimated beam at the cell and also at a screen , on which appeared interference fringes when the cell windows were not perfectly aligned . typically , the interference fringes were concentric rings , less than ten in number , and clustered near the outside perimeter of the cell .this implies that the central region of the cell is quite parallel , and that the lack of parallelism is typically less than about 30 microns and occurs near the cell edges . in the event that the windows are not aligned properly , the screws which hold the cell togethermay be adjusted so as to establish optimal parallelism .the temperature near the sapphire windows was monitored using thermometers constructed from glass encapsulated thermistors .one such thermometer is shown in fig .[ fig08 ] .a thermistor hermetically sealed in a thin glass rod ( p60 ntc bead - in - glass thermoprobe , p.n .p60db303 m , ge thermometrics , edison , ny ) was inserted into a 1/16 inch hole in a nylon female luer bulkhead adapter ( p.n .b-1005126 , small parts , inc . , miami lakes , fl ) .waterproof epoxy ( loctite fixmaster high performance epoxy , p.n . 
99393 ,henkel corporation , rocky hill , ct ) was used to seal the thermistor to the luer connector .the two leads of the thermistor were electrically insulated with a short length of 0.025 inch diameter spaghetti tubing ( p.n .ett-30 , weico , edgewood , ny ) .a 40 centimeter long twisted pair of manganin wires ( mw 36-awg , lakeshore cryogenics , westerville , oh ) was then soldered to the tip of each of the leads of the thermistor .this configuration allowed for four - wire resistance measurements .the solder joints were insulated with heat - shrink tubing and each twisted pair was inserted into a length of teflon tubing ( ptfe 20 tw , mcmaster - carr , chicago , il ) .heat shrink tubing was wrapped around the joint between the luer connector and the wires for added strain relief .finally , pin connectors were soldered to the end of the wires .the assembled thermometers are screwed into threaded ports in the sidewall of the top and bottom bath spaces .a tiny o - ring establishes a water - tight seal .the tip of each thermistor projects about a half an inch into the bath space and resides approximately 1/16 of an inch from the outside surface of the sapphire windows . during an experiment ,the bath temperatures are sampled twice per second . shown in the top graph of fig .[ fig09 ] is the temperature versus time of one of the thermometers during a typical experiment . in this particular case ,the top and bottom bath spaces were both regulated near 20 degrees celsius .the root mean square temperature noise is less than 10 mk . shown in the bottom graph of fig .[ fig09 ] is the power spectral density computed from this same data .the following procedure is typically followed in performing a set of experiments .first , the convection cell is assembled with the sidewall appropriate for the sample fluid to be studied .the convection cell is tested for leaks and the sapphire windows are tested for parallelism .the remainder of the apparatus is assembled around the cell and the apparatus is placed on the tripod on the aluminum platform . before screwing on the top window ( in fig .[ fig01 ] ) , a small bubble level is placed on the top sapphire window to ensure that the apparatus is level .next , the bath circulators are filled with degassed water mixed with algicide ( chloramine - t , thermo fisher scientific , waltham , ma ) .the bath circulator tubing is then attached to the apparatus and the top and bottom bath spaces are filled with water . while the bath spaces are being filled , nitrogen gas is circulated through the convection cell to keep it clean . at this point ,the thermometers are plugged in to the data acquisition unit and the temperature monitoring software is started .next , a syringe is used to push the sample fluid into the convection cell .typically , the apparatus is tipped onto its side , with the cell entrance port ( in fig .[ fig01 ] ) on the bottom , while filling the convection cell .the cell is seen to be full when sample fluid squirts out of the exit port on the top , opposite the entrance port .the apparatus is returned to its upright position on the aluminum platform and it is visibly inspected for leaks . the ultrasound camera is then attached to the top of the apparatus and is plugged into the ultrasound controller box . the image capture software is started and the camera settings are adjusted for a clear ultrasound image . 
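A short sketch of the noise characterization described above (root-mean-square fluctuation and power spectral density of a bath-temperature record sampled twice per second) is given below; the synthetic record stands in for real thermometer data and the Welch parameters are arbitrary choices.

import numpy as np
from scipy.signal import welch

fs = 2.0                                    # samples per second
t = np.arange(0, 3600.0, 1.0 / fs)          # one hour of readings
# stand-in for a measured bath temperature: 20 C with a few mK of noise
temperature = 20.0 + 0.005 * np.random.default_rng(1).normal(size=t.size)

fluctuation = temperature - temperature.mean()
rms_noise_mk = 1e3 * np.sqrt(np.mean(fluctuation ** 2))
frequency, psd = welch(fluctuation, fs=fs, nperseg=1024)

print('rms temperature noise: %.1f mK' % rms_noise_mk)
print('PSD at %.4f Hz: %g K^2/Hz' % (frequency[1], psd[1]))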
in a typical set of experiments ,the temperature difference across the cell , , is initially maintained at close to zero degrees .several background images of the quiescent fluid are captured .next , is increased stepwise , maintaining the average fluid temperature constant .since the prandtl number is known to be weakly temperature dependent , this technique serves to keep the mean prandtl number of the fluid constant . at each step ,a series of ultrasound images is obtained .the system is allowed to equilibrate for a minimum of one horizontal diffusion time after each step .the horizontal diffusion time is given by and the vertical diffusion time is given by ( see tab . [ tab : properties ] ) .after data collection , the images are analyzed on a personal computer using image processing software ( igor pro v.5 , wavemetrics , inc . , lake oswego , or ) .the onset of convection is determined by the appearance of spatial variations in the temperature dependent acoustic refractive index .such spatial variations are analyzed so as to determine the convective planform and its wavenumber distribution .we present here the first acoustic images of the planform of rayleigh - bnard convection viewed from above .shown in fig .[ fig10 ] is the onset of convection in 5 cst polydimethylsiloxane ( pdms ) polymer fluid ( dow corning 200 fluid , dow corning , midland , mi ) , otherwise known as silicone oil .part ( a ) of the figure depicts the uniform conductive state , below the onset of convection . here , degrees and . just below ,in part ( b ) of the figure , is shown the convective state . here , degrees and . in both cases ,the average fluid temperature was 28.5 degrees and the prandtl number was 67.8 .the short tick marks along the horizontal and vertical axes indicate lateral distances in units of the cell height , .the images have been normalized by ( i ) subtracting a background image obtained at very low rayleigh number and then ( ii ) dividing the resulting image by the same background image . to the immediate right of images ( a ) and ( b )are shown their respective power spectral densities .these were obtained by the following procedure .first , a kaiser window was applied to the real - space image .next , the two - dimensional fourier transform of the image was computed .a high - pass filter was applied to the real and imaginary parts of the transformed image to filter out the very low frequency background components .the one - sided power spectral density was then computed from the sum of the squares of the real and imaginary parts , and this power spectral density was normalized so as to satisfy parseval s theorem .the axes of the computed power spectral density indicate the cartesian components of the wavenumber , .the wavenumber is defined as the ratio of the spatial wavelength , , and the cell height , .the lack of structure in the power spectral density of image ( a ) reflects a uniform conductive state .the ( black ) peak in the power spectral density of image ( b ) reflects the appearance of straight convection rolls at the indicated value of .investigation of the variation of the power spectral density with rayleigh number provides a quantitative method for studying the onset and the evolution of rayleigh - bnard convection . shown in fig .[ fig11 ] are acoustic images of two convective states observed in efh1 ferrofluid ( p.n .ff-310 , educational innovations , inc . 
,norwalk , ct ) .this opaque fluid is a colloidal dispersion comprised of magnetite particles ( 3 - 15% by volume ) and oil soluble dispersant ( 6 - 30% by volume ) suspended in a carrier liquid ( 55 - 91% by volume ) . for this particular set of experiments ,the convection cell was tilted by 2.89 degrees so as to induce a large scale lateral flow and to facilitate roll formation .part ( a ) of the figure depicts a state of cellular convection just above onset . here , degrees and .part ( b ) of the figure depicts a straight - roll convection state at somewhat higher rayleigh number . here , degrees and .for these experiments , the average fluid temperature was 29.9 degrees and the prandtl number was 101.2 .as in fig .[ fig10 ] , to the immediate right of each image is shown a grayscale image of its computed power spectral density .the broad peak near in the power spectral density associated with image ( a ) reflects the lack of orientational order observed in the cellular convection state , whereas the localized peak in the power spectral density associated with image ( b ) reflects the distinct straight roll pattern . as mentioned in sec .[ sec : intro ] , there exist detailed theoretical predictions for the stability diagram of binary fluids such as ferrofluids .a systematic experimental study of the variation of the power spectral density as a function of rayleigh number in ferrofluids using acoustic imaging is being prepared for a future publication .in addition to work on ferrofluids , future experiments will focus on the onset of rayleigh - bnard convection in liquid mercury .some of the properties of liquid mercury are presented in tab .[ tab : properties ] .liquid mercury is a good choice from among the low prandtl number liquid metals due to its relatively large thermal expansion coefficient ( witness the mercury thermometer ) and hence its propensity to exhibit convection .based on the theoretical stability diagram for liquid mercury , we anticipate that straight rolls should be stable over a range of rayleigh numbers , between 1708 and approximately 1900 .thus , in our cell number 1 , whose height is 0.514 centimeter , we anticipate straight rolls to be stable when the temperature difference across the cell is between 4.82 and 5.37 degrees celsius , giving a stability range of approximately 0.55 degrees celsius . for shorter cells ,the stability range is larger .this tendency is summarized in tab .[ tab : hg ] , where we list the anticipated stability ranges for straight rolls in liquid mercury for each of our five convection cells .thanks to jack gurney , formerly at imperium , inc . , for providing extensive engineering consultation and support during the initial stages of the project .thanks to richard scherr in the biophysics machine shop of the medical college of wisconsin for helping to build a preliminary version of our convection cell .finally , thanks to professor daniel ebeling in the chemistry department at wisconsin lutheran college and cindy kuehn for helpful suggestions .this work was supported by a major research instrumentation grant from the national science foundation ( dmr-0416787 ) and a grant from the office of basic energy sciences at the u.s .department of energy ( de - fg02 - 04er46166 ) ..[tab : cells]experimental cells constructed for study of different fluids .the sample space is circular and has a diameter of approximately 9 cm .cell 3 has an optional rectangular insert which converts its aspect ratio to . [cols="<,^,^,>",options="header " , ]
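The image-analysis chain described above for figs. [fig10] and [fig11] (background normalization, Kaiser window, two-dimensional Fourier transform, high-pass filtering and Parseval normalization of the power spectral density) can be sketched as follows; the window parameter, the wavenumber cutoff and the convention k = 2 pi d / lambda for the dimensionless wavenumber are illustrative choices rather than the exact values used here.

import numpy as np

def convection_psd(image, background, cell_height, pixel_size, beta=8.0, k_cut=0.5):
    # normalize by a below-onset background image, window, and return the
    # dimensionless wavenumber axes together with the filtered power spectrum
    normalized = (image - background) / background
    ny, nx = normalized.shape
    window = np.outer(np.kaiser(ny, beta), np.kaiser(nx, beta))
    windowed = normalized * window

    spectrum = np.fft.fftshift(np.fft.fft2(windowed))
    kx = 2 * np.pi * cell_height * np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    ky = 2 * np.pi * cell_height * np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size))
    kxx, kyy = np.meshgrid(kx, ky)

    psd = np.abs(spectrum) ** 2 / (nx * ny)   # Parseval: psd.sum() ~ (windowed**2).sum()
    psd[np.hypot(kxx, kyy) < k_cut] = 0.0     # high-pass filter removes the background peak
    return kx, ky, psd

A localized peak in the returned spectrum then corresponds to the straight-roll patterns discussed above, while a broad ring reflects cellular convection without orientational order.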
we have designed and built an apparatus for real - time acoustic imaging of convective flow patterns in optically opaque fluids . this apparatus takes advantage of recent advances in two - dimensional ultrasound transducer array technology ; it employs a modified version of a commercially available ultrasound camera , similar to those employed in non - destructive testing of solids . images of convection patterns are generated by observing the lateral variation of the temperature dependent speed of sound _ via _ refraction of acoustic plane waves passing vertically through the fluid layer . the apparatus has been validated by observing convection rolls in both silicone oil and ferrofluid .
This article investigates the violation of Bell inequalities in macroscopic situations and analyses how this indicates the presence of genuine quantum structure. We explicitly challenge the common belief that quantum structure is present only in micro-physical reality (and in macroscopic coherent systems), and present evidence that quantum structure can be present in macro-physical reality. We also give an example showing the presence of quantum structure in the mind. Let us begin with a brief account of the most relevant historical results. In the seventies, a sequence of experiments was carried out to test for the presence of nonlocality in the microworld described by quantum mechanics (Clauser 1976; Faraci et al. 1974; Freeman and Clauser 1972; Holt and Pipkin 1973; Kasday, Ullmann and Wu 1970), culminating in the decisive experiments by Aspect and his team in Paris (Aspect, Grangier and Roger, 1981, 1982). These experiments were inspired by three important theoretical results: the EPR paradox (Einstein, Podolsky and Rosen, 1935), Bohm's thought experiment (Bohm, 1951), and Bell's theorem (Bell 1964). Einstein, Podolsky and Rosen believed to have shown that quantum mechanics is incomplete, in the sense that there exist elements of reality that cannot be described by it (Einstein, Podolsky and Rosen, 1935; Aerts 1984, 2000). Bohm took their insight further with a simple example: the 'coupled spin-1/2 entity' consisting of two particles with spin 1/2, whose spins are coupled such that the quantum spin vector is a nonproduct vector representing the singlet spin state (Bohm 1951). It was Bohm's example that inspired Bell to formulate a condition that would test experimentally for incompleteness. The result of his efforts are the famous Bell inequalities (Bell 1964). The fact that Bell took the EPR result literally is evident from the abstract of his 1964 paper: "The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum theory could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty." Bell's theorem states that statistical results of experiments performed on a certain physical entity satisfy his inequalities if and only if the reality in which this physical entity is embedded is local. Bell believed that if experiments were performed to test for the presence of nonlocality as predicted by quantum mechanics, they would show quantum mechanics to be wrong and locality to hold. Therefore, he believed that he had discovered a way of showing experimentally that quantum mechanics is wrong. The physics community awaited the outcome of these experiments.
Today, as we know, all of these experiments agreed with the quantum predictions, and as a consequence it is commonly accepted that the micro-physical world is incompatible with local realism. One of the present authors, studying Bell inequalities from a different perspective, developed a concrete example of a situation involving macroscopic 'classical' entities that violates Bell inequalities (Aerts 1981, 1982, 1985a, b). This example makes it possible to understand more fully the origin of the violation of the inequalities, as well as in what sense this violation indicates the presence of quantum structure. In this section we review Bell inequalities, as well as the Clauser and Horne inequalities. We first consider Bohm's original example, which violates these inequalities in the microworld. Finally, we put forth an example that violates them in the macroworld. Bell inequalities are defined with the following experimental situation in mind. We consider a physical entity, and four experiments a, a', b and b' that can be performed on this entity. Each of the experiments has two possible outcomes, respectively denoted +1 and -1. Some of the experiments can be performed together, which in principle leads to 'coincidence' experiments. For example, a and b performed together will be denoted ab. Such a coincidence experiment has four possible outcomes, namely (+1,+1), (+1,-1), (-1,+1) and (-1,-1). Following Bell, we introduce the expectation values for these coincidence experiments as E(a,b) = p(+1,+1) - p(+1,-1) - p(-1,+1) + p(-1,-1), and similarly for the other pairs of experiments. From the assumption that the outcomes are either +1 or -1, and that the correlation can be written as an integral over some hidden variable of a product of the two local outcome assignments, one derives the Bell inequalities |E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| ≤ 2. When Bell introduced the inequalities, he had in mind the quantum mechanical situation originally introduced by Bohm (Bohm 1951) of correlated spin-1/2 particles in the singlet spin state.
here and refer to measurements of spin at the left location in spin directions and , and and refer to measurements of spin at the right location in spin directions and .the quantum theoretical calculation in this situation , for well chosen directions of spin , gives the value for the left member of equation ( [ bellineq ] ) , and hence violates the inequalities .since bell showed that the inequalities are never violated if locality holds for the considered experimental situation , this indicates that quantum mechanics predicts nonlocal effects to exist ( bell 1964 ) .we should mention that clauser and horne derived other inequalities , and it is the clauser horne inequalities that have been tested experimentally ( clauser and horne 1976 ) .clauser and horne consider the same experimental situation as that considered by bell .hence we have the coincidence experiments , , and , but instead of concentrating on the expectation values they introduce the coincidence probabilities , , and , together with the probabilities and .concretely , means the probability that the coincidence experiment gives the outcome , while means the probability that the experiment gives the outcome .the clauser horne inequalities then read : although the clauser horne inequalities are thought to be equivalent to bell inequalities , they are of a slightly more general theoretical nature , and lend themselves to pitowsky s generalization , which will play an important role in our theoretical analysis .let us briefly consider bohm s original example .our physical entity is now a pair of quantum particles of spin- that ` fly to the left and the right ' along a certain direction of space respectively , and are prepared in a singlet state for the spin ( see fig .1 ) . we consider four experiments , that are measurements of the spin of the particles in directions , that are four directions of space orthogonal to the direction of flight of the particles .we choose the experiments such that and are measurements of the spin of the particle flying to the left and and of the particle flying to the right ( see fig .1 ) . 1 cm 5 cm fig . 1 : the singlet spin state example .we call the probability that the experiment gives outcome . according to quantum mechanical calculation and considering the different experiments, it follows : for the bohm example , the experiment can be performed together with the experiments and , which leads to experiment and , and the experiment can also be performed together with the experiments and , which leads to experiments and .quantum mechanically this corresponds to the expectation value which gives us the well known predictions : let us first specify the situation that gives rise to a maximal violation of bell inequalities .let be coplanar directions such that , and ( see fig .then we have and .this gives : which shows that bell inequalities are violated .3.1 cm fig . 2 : the violation of bell inequalities by the singlet spin state example . to violate the clauser horne inequalities , we need to make another choice for the spin directions .let us choose again coplanar with and ( see fig .the set of probabilities that we consider for the clauser horne inequalities is then given by .this gives : which shows that also the clauser horne inequalities are violated ( see fig .3 ) . 3.1 cm fig . 
3 : the violation of clauser horne inequalities by the singlet spin state example .we now review an example of a macroscopic situation where bell inequalities and clauser horne inequalities are violated ( aerts 1981 , 1982 , 1985a , b ) . following this ,we analyze some aspects of the example in a new way . 4 cm = 7pt fig . 4 : the vessels of water example violating bell inequalities .the entity consists of two vessels containing 20 liters of water that are connected by a tube .experiments are performed on both sides of the entity by introducing syphons and in the respective vessels and pouring out the water and collecting it in reference vessels and .carefully chosen experiments reveal that bell inequalities are violated by this entity .consider an entity which is a container with 20 liters of transparent water ( see fig .4 ) , in a state such that the container is placed in the gravitational field of the earth , with its bottom horizontal .we introduce the experiment that consists of putting a siphon in the container of water at the left , taking out water using the siphon , and collecting this water in a reference vessel placed to the left of the container .if we collect more than 10 liters of water , we call the outcome , and if we collect less or equal to 10 liters , we call the outcome .we introduce another experiment that consists of taking with a little spoon , from the left , a bit of the water , and determining whether it is transparent .we call the outcome when the water is transparent and the outcome when it is not .we introduce the experiment that consists of putting a siphon in the container of water at the right , taking out water using the siphon , and collecting this water in a reference vessel to the right of the container .if we collect more or equal to 10 liters of water , we call the outcome , and if we collect less than 10 liters , we call the outcome . we also introduce the experiment which is analogous to experiment , except that we perform it to the right of the container . clearly , for the container of water being in state , experiments and give with certainty the outcome and , which shows that .experiments and give with certainty the outcome and , which shows that . the experiment can be performed together with experiments and , and we denote the coincidence experiments and .also , experiment can be performed together with experiments and , and we denote the coincidence experiments and . for the container in state , the coincidence experiment always gives one of the outcomes or , since more than 10 liters of water can never come out of the vessel at both sides .this shows that and .the coincidence experiment always gives the outcome which shows that and , and the coincidence experiment always gives the outcome which shows that and .clearly experiment always gives the outcome which shows that and .let us now calculate the terms of bell inequalities , and of the clauser horne inequalities , this shows that bell inequalities and clauser horne inequalities can be violated in macroscopic reality .it is even so that the example violates the inequalities more than the original quantum example of the two coupled spin- entities . 
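both situations just described can be checked with a few lines of code . the script below evaluates the chsh expression |e13 - e14| + |e23 + e24| for ( i ) the singlet state , using the quantum coincidence expectation value e(a , b) = -cos( angle between a and b ) with the standard maximizing angles ( an assumption on our part , since the precise directions were lost from the text ) , and ( ii ) the vessels of water example , whose coincidence outcomes are deterministic as described above .

```python
import numpy as np

def chsh(e13, e14, e23, e24):
    """bell-chsh expression; local hidden-variable models give a value <= 2."""
    return abs(e13 - e14) + abs(e23 + e24)

def singlet_corr(a_deg, b_deg):
    """singlet-state coincidence expectation value for spin measurements
    along directions a and b: e(a, b) = -cos(angle between a and b)."""
    return -np.cos(np.radians(a_deg - b_deg))

# assumed maximizing configuration: e1 = 0, e2 = 90 (one side), e3 = 45, e4 = 135 degrees
q = chsh(singlet_corr(0, 45), singlet_corr(0, 135),
         singlet_corr(90, 45), singlet_corr(90, 135))
print(f"singlet state : {q:.3f}  (= 2*sqrt(2) ~ 2.828 > 2)")

# vessels of water: e13 is always (up, down) or (down, up), the other pairs always (up, up)
v = chsh(e13=-1.0, e14=+1.0, e23=+1.0, e24=+1.0)
print(f"vessels model : {v:.3f}  (maximal value 4)")
```

the output , approximately 2.83 for the singlet and exactly 4 for the vessels , agrees with the remark that the macroscopic example violates the inequality even more strongly than the quantum one .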
in section 5we analyze why this is the case , and show that this sheds new light on the underlying mechanism that leads to the violation of the inequalities .in the macroscopic example of the vessels of water connected by a tube , we can see and understand why the inequalities are violated .this is not the case for the micro - physical bohm example of coupled spins .szabo ( pers . com . ) suggested that the macroscopic violation of bell inequalities by the vessels of water example does not have the same ` status ' as the microscopic violation in the bohm example of entangled spins , because events are identified that are not identical .this idea was first considered in aerts and szabo , 1993 . herewe investigate it more carefully and we will find that it leads to a deeper insight into the meaning of the violation of the inequalities .let us state more clearly what we mean by reconsidering the vessels of water example from section [ section03 ] , where we let the experiments , , and correspond with possible events and , and , and and and .this means that event is the physical event that happens when experiment is carried out , and outcome occurs .the same for the other events .when event occurs together with event , hence during the performance of the experiments , then it is definitely a different event from event that occurs together with event , hence during the performance of the experiment .szabo s idea was that this ` fact ' would be at the origin of the violation of bell inequalities for the macroscopic vessel of water example . in this sensethe macroscopic violation would not be a genuine violation as compared to the microscopic .of course , one is tempted to ask the same question in the quantum case : is the violation in the microscopic world perhaps due to a lack of distinguishing events that are in fact not identical ? perhaps what is true for the vessels of water example is also true of the bohm example ?let us find out by systematically distinguishing between events at the left ( of the vessels of water or of the entangled spins ) that are made together with different events at the right .in this way , we get more than four events , and unfortunately the original bell inequalities are out of their domain of applicability. however pitowsky has developed a generalization of bell inequalities where any number of experiments and events can be taken into account , and as a consequence we can check whether the new situation violates pitowsky inequalities . if pitowsky inequalities would not be violated in the vessels of water model , while for the microscopic bohm example they would , then this would ` prove ' the different status of the two examples , the macroscopic being ` false ' , due to lack of correctly distinguishing between events , and the microscopic being genuine .let us first introduce pitowsky inequalities to see how the problem can be reformulated .pitowsky proved ( see theorem [ theorem01 ] ) that the situation where bell - type inequalities are satisfied is equivalent to the situation where , for a set of probabilities connected to the outcomes of the considered experiments , there exists a kolmogorovian probability model . or , as some may want to paraphrase it , the probabilities are classical ( pitowsky 1989 ) . 
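pitowsky s criterion lends itself to a direct numerical test : a vector of single and pair probabilities admits a kolmogorovian representation exactly when it lies in the convex hull of the finitely many deterministic 0/1 vertex vectors , i.e. in the classical correlation polytope spelled out in the next passage . the sketch below carries out this membership test as a linear programming feasibility problem ; the pair set , the angles and the example vector are our own illustrative choices and are not taken from the paper .

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# four experiments; the measured pairs are (1,3), (1,4), (2,3), (2,4) as in the text
PAIRS = [(0, 2), (0, 3), (1, 2), (1, 3)]                 # 0-based indices

def vertices(n=4, pairs=PAIRS):
    """deterministic 0/1 vertex vectors u_eps of the classical correlation polytope:
    the single probabilities eps_i followed by the pair products eps_i * eps_j."""
    out = []
    for eps in itertools.product((0, 1), repeat=n):
        out.append(list(eps) + [eps[i] * eps[j] for i, j in pairs])
    return np.array(out, dtype=float)

def has_kolmogorovian_representation(p):
    """pitowsky's criterion: p admits a classical (kolmogorovian) representation
    iff it is a convex combination of the vertices, tested as an lp feasibility problem."""
    V = vertices()
    m = V.shape[0]
    A_eq = np.vstack([V.T, np.ones((1, m))])             # V^T lam = p  and  sum(lam) = 1
    b_eq = np.append(np.asarray(p, dtype=float), 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return res.success

# illustrative quantum vector (our choice of angles): singlet single probabilities are 1/2
# and pair probabilities are 0.5 * sin(theta_ij / 2)**2 for the angles below
theta = np.radians([135.0, 135.0, 135.0, 45.0])          # theta_13, theta_14, theta_23, theta_24
p_quantum = [0.5] * 4 + list(0.5 * np.sin(theta / 2.0) ** 2)
print("classical representation exists:", has_kolmogorovian_representation(p_quantum))
```

for the quantum vector built from the singlet pair probabilities the test reports that no classical representation exists , in agreement with the violated clauser horne inequality ; running it instead on the vessels of water vector with distinguished events , constructed further below in the text , should report that a representation does exist .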
to put forward pitowsky inequalities, we have to introduce some mathematical concepts .let be a set of pairs of integers from that is , let denote the real space of all functions .we shall denote vectors in by , where the appear in a lexicographic order on the .let be the set of all -tuples of zeroes and one s .we shall denote elements of by where .for each let be the following vector in : the classical correlation polytope is the closed convex hull in of all possible vectors , : [ pitowsky , 1989 ] [ theorem01 ] let be a vector in .then if there is a kolmogorovian probability space and ( not necessarily distinct ) events such that : where is ... , is the space of events and the probability measure to illustrate the theorem and at the same time the connection with bell inequalities and the clauser horne inequalities , let us consider some specific examples of pitowsky s theorem .the case and .the condition is then equivalent to the clauser - horne inequalities ( see ( [ bellineq ] ) ) : the case and .we find then the following inequalities equivalent to the condition : it can be shown that these inequalities are equivalent to the original bell inequalities ( pitowsky 1989 ) .let us now introduce a new situation wherein the events are systematically distinguished . for the vessels of water example ,we introduce the following events : event corresponds to the physical process of experiment , leading to outcome , performed together with experiment leading to outcome .event corresponds to the physical process of experiment leading to outcome , performed together with experiment , leading to outcome . in order to introduce the other events and avoid repetition ,we abbreviate event as follows : \ ] ] the other events can then be written analogously as : & \ \ \e_4= [ o(e_2)=o_2(up ) \ ; \ & \ ; o(e_4)=o_4(up ) ] \cr e_5= [ o(e_3)=o_3(up ) \ ; \ & \ ; o(e_1)=o_1(up ) ] & \ \\ e_6= [ o(e_3)=o_3(up ) \ ; \ & \ ; o(e_2)=o_2(up ) ] \cr e_7= [ o(e_4)=o_4(up ) \ ; \ & \ ; o(e_1)=o_1(up ) ] & \ \ \ e_8= [ o(e_4)=o_4(up ) \ ; \ & \ ; o(e_2)=o_2(up ) ] \end{array}\ ] ] the physical process of the joint experiment corresponds then to the joint event , the physical process of the joint experiment to the joint event , the physical process of the joint experiment to the joint event , and the physical process of the joint experiment to the joint event .having distinguished the events in this way , we are certain the different joint experiments give rise to real joint events .we can now apply pitowsky s theorem to the set of events .suppose that there is an equal probability of experiment being performed with or , and similarly for the joint performance of with or .according to this assumption , the observed probabilities are : the obtained probability vector is then .applying pitowsky s approach , we could directly calculate that this probability vector is contained in the convex hull of the corresponding space , and hence as a consequence of pitowsky s theorem it allows a kolmogorovian probability representation ( aerts and szabo 1993 ) .this means that after the distinction between events has been made , the vessels of water example no longer violates pitowsky inequalities .an important question remains : would the violation of the inequalities similarly vanish for the microscopic spin example ?let us , exactly as we have done in the vessels of water example , distinguish the events we are not certain we can identify , for the case of the correlated spin situation .again we find 8 events : event corresponds 
to the physical process of experiment leading to outcome , performed together with experiment , leading to outcome .analogously , events , , , , , , , are introduced .again , the physical process of the joint experiment corresponds then to the joint event , the physical process of the joint experiment to the joint event , the physical process of the joint experiment to the joint event , and the physical process of the joint experiment to the joint event .suppose that directions or , as well as or , are chosen with the same probability at both sides .according to this assumption the observed probabilities are : the question is whether the correlation vector admits a kolmogorovian representation .to answer this question , we have to check whether it is inside the corresponding classical correlation polytope .there are no derived inequalities for , expressing the condition .lacking such inequalities we must directly check the geometric condition .we were able to do this for the vessels of water example because of the simplicity of the correlation vector , but we had no general way to do this .it is however possible to prove the existence of a kolmogorovian representation in a general way : let events and a set of indices be given such that non of the indices appears in two different elements of .assume that for each pair the restricted correlation vector has an kolmogorovian representation .then the product space provides a kolmogorovian representation for the whole correlation vector .this theorem shows that if the distinctions that we have explained are made , the inequalities corresponding to the situation will no longer be violated .this also means that we can state that the macroscopic violation , certainly with respect to the distinction or identification of events , is as genuine as the microscopic violation of the inequalities .in this section we show how bell inequalities are violated in the mind in virtue of the relationship between abstract concepts and specific instances of them .we start with a thought experiment that outlines a possible scenario wherein this sort of violation of bell inequalities reveals itself .this example was first presented in aerts and gabora 1999 .we then briefly discuss implications for cognition . to make things more concrete we begin with an example .keynote players in this example are the two cats , glimmer and inkling , that live at our research center ( fig .the experimental situation has been set up by one of the authors ( diederik ) to show that the mind of another of the authors ( liane ) violates bell inequalities .the situation is as follows . on the table where liane preparesthe food for the cats is a little note that says : ` think of one of the cats now ' . to show that bell inequalities are violated we must introduce four experiments and .experiment consists of glimmer showing up at the instant liane reads the note . if , as a result of the appearance of glimmer and liane reading the note , the state of her mind is changed from the more general concept ` cat ' to the instance `glimmer ' , we call the outcome , and if it is changed to the instance ` inkling ' , we call the outcome . experiment consists of inkling showing up at the instant that liane reads the note. we call the outcome if the state of her mind is changed to the instance ` inkling ' , and if it is changed to the instance ` glimmer ' , as a result of the appearance of inkling and liane reading the note. 
the coincidence experiment consists of glimmer and inkling both showing up when liane reads the note .the outcome is ( , ) if the state of her mind is changed to the instance ` glimmer ' , and ( , ) if it changes to the instance ` inkling ' as a consequence of their appearance and the reading of the note .3.2 cm = 7pt fig 5 : inkling ( left ) and glimmer ( right ) .this picture was taken before glimmer decided that the quantum cat superstar life was not for him and started to remove his bell .now it is necessary to know that occasionally the secretary puts bells on the cats necks , and occasionally she takes the bells off .thus , when liane comes to work , she does not know whether or not the cats will be wearing bells , and she is always curious to know . whenever she sees one of the cats , she eagerly both looks and listens for the bell .experiment consists of liane seeing inkling and noticing that she hears a bell ring or does nt .we give the outcome to the experiment when liane hears the bell , and when she does not .experiment is identical to experiment except that inkling is interchanged with glimmer .the coincidence experiment consists of liane reading the note , and glimmer showing up , and her listening to whether a bell is ringing or not .it has four possible outcomes : ( , ) when the state of liane s mind is changed to the instance ` glimmer ' and she hears a bell ; ( , ) when the state of her mind is changed to the instance ` glimmer ' and she does not hear a bell ; ( , ) when the state of her mind is changed to the instance ` inkling ' and she hears a bell and ( , ) when the state of her mind is changed to the instance ` inkling ' and she does not hear a bell .the coincidence experiment is defined analogously .it consists of liane reading the note and inkling showing up and her listening to whether a bell is ringing or not .it too has four possible outcomes : ( , ) when she hears a bell and the state of her mind is changed to the instance ` inkling ' ; ( , ) when she hears a bell and the state of her mind is changed to the instance ` glimmer ' ; ( , ) when she does not hear a bell and the state of her mind is changed to the instance ` inkling ' and ( , ) when she does not hear a bell and the state of her mind is changed to the instance ` glimmer ' .the coincidence experiment is the experiment where glimmer and inkling show up and liane listens to see whether she hears the ringing of bells .it has outcome ( , ) when both cats wear bells , ( , ) when only inkling wears a bell , ( , ) when only glimmer wears a bell and ( , ) when neither cat wears a bell .we now formulate the necessary conditions such that bell inequalities are violated in this experiment : ( 1 ) : : the categorical concept ` cat ' is activated in liane s mind .( 2 ) : : she does what is written on the note . ( 3 ) : : when she sees glimmer , there is a change of state , and the categorical concept ` cat ' changes to the instance glimmer , and when she sees inkling it changes to the instance inkling. 
( 4 ) : : both cats are wearing bells around their necks .the coincidence experiment gives outcome ( , ) or ( , ) because indeed from ( 2 ) it follows that liane will think of glimmer or inkling .this means that .the coincidence experiment gives outcome ( , ) , because from ( 3 ) and ( 4 ) it follows that she thinks of glimmer and hears the bell .hence .the coincidence experiment also gives outcome ( , ) , because from ( 3 ) and ( 4 ) it follows that she thinks of inkling and hears the bell .hence .the coincidence experiment gives ( , ) , because from ( 4 ) it follows that she hears two bells .hence . as a consequencewe have : the reason that bell inequalities are violated is that liane s state of mind changes from activation of the abstract categorical concept ` cat ' , to activation of either ` glimmer ' or ` inkling ' .we can thus view the state ` cat ' as an entangled state of these two instances of it .we end this section by saying that we apologize for the pun on bell s name , but it seemed like a good way to ring in these new ideas .our example shows that concepts in the mind violate bell inequalities , and hence entail nonlocality in the sense that physicists use the concept .this violation of bell inequalities takes place within the associative network of concepts and episodic memories constituting an internal model of reality , or worldview .we now briefly investigate how this cognitive source of nonlocality arises , and its implications for cognition and our understanding of reality .as a first approximation , we can say that the nonlocality of stored experiences and concepts arises from their distributed nature .each concept is stored in many memory locations ; likewise , each location participates in the storage of many concepts . in order for the mind to be capable of generating a stream of meaningfully - related yet potentially creative remindings ,the degree of this distribution must fall within an intermediate range .thus , a given experience activates not just one location in memory , nor does it activate every memory location to an equal degree , but activation is distributed across many memory locations , with degree of activation falling with distance from the most activated one .6 shows schematically how this feature of memory is sometimes modeled in neural networks using a radial basis function ( rbf ) ( hancock et al ., 1991 ; holden and niranjan , 1997 ; lu et al . 1997 ; willshaw and dayan , 1990 ) . 3.2 cm = 7pt fig 6 : highly schematized diagram of a stimulus input activating two dimensions of a memory or conceptual space .each vertex represents a possible memory location , and black dots represent actual location in memory .activation is maximal at the center k of the rbf , and tapers off in all directions according to a gaussian distribution of width s. 
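the radial basis function picture of distributed memory invoked in fig . 6 is easy to make concrete : a stimulus activates not one memory location but a whole neighbourhood , with a gaussian fall - off of width s around the most activated location k . the grid , the width and the stimulus location in the sketch below are arbitrary illustrative values , not parameters from the cited models .

```python
import numpy as np

def rbf_activation(locations, center, width):
    """gaussian radial basis function: activation of each memory location
    falls off with distance from the most activated location (the center)."""
    d2 = np.sum((locations - center) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# a small 2-d "memory space": one location per grid vertex (illustrative)
xs, ys = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
locations = np.column_stack([xs.ravel(), ys.ravel()])

k = np.array([0.4, 0.7])        # most activated location for this stimulus
s = 0.15                        # width of the activation distribution
act = rbf_activation(locations, k, s)
print("fraction of locations receiving >10% of peak activation:",
      np.mean(act > 0.1 * act.max()).round(2))
```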
memory is also content addressable , meaning that there is a systematic relationship between the content of an experience and the place in memory where it gets stored ( and from which material for the next instant of experience is sometimes evoked ) . thus not only is it not localized as an episodic memory or conceptual entity in conceptual space , but it is also not localized with respect to its physical storage location in the brain . the more abstract a concept , the greater the number of other concepts that are expected to fall within a given distance of it in conceptual space , and therefore be potentially evoked by it . for example , fig . 7 shows how the concept of ` container ' is less localized than the concept of ` bag ' . the concept of ` container ' does not just activate concepts like ` cup ' , it derives its very existence from them . similarly , once ` cup ' has been identified as an instance of ` container ' , it is forever after affected by it . to activate ` bag ' is to instantaneously affect ` container ' , which is to instantaneously affect the concept ` thing ' , from which ` container ' derives its identity , and so forth . fig 7 : a four - dimensional hypercube that schematically represents a segment of memory space . the stimulus dimensions ` made of paper ' , ` flimsy ' , ` has holes ' and ` concave ' lie on the , , , and axes respectively . three concepts are stored here : ` cup ' , ` bag ' , and ` bag with holes ' . black - ringed dots represent centers of distributed regions where they are stored . the fuzzy white ball shows the region activated by ` cup ' . emergence of the abstract concept ` container ' implicit in the memory space ( central yellow region ) is made possible by the constrained distribution of activation . an extremely general concept such as ` depth ' is probably even more nonlocalized . it is latent in mental representations as dissimilar as ` deep swimming pool ' , ` deep - fried zucchini ' , and ` deeply moving book ' ; it is deeply woven throughout the matrix of concepts that constitute one s worldview . in ( gabora 1998 , 1999 , 2000 ) one author , inspired by kauffman s ( 1993 ) autocatalytic scenario for the origin of life , outlines a scenario for how episodic memories could become collectively entangled through the emergence of concepts to form a hierarchically structured worldview . the basic idea goes as follows . much as catalysis increases the number of different polymers , which in turn increases the frequency of catalysis , reminding events increase concept density by triggering abstraction , the formation of abstract concepts or categories such as ` tree ' or ` big ' , which in turn increases the frequency of remindings . and just as catalytic polymers reach a critical density where some subset of them undergoes a phase transition to a state where there is a catalytic pathway to each polymer present , concepts reach a critical density where some subset of them undergoes a phase transition to a state where each one is retrievable through a pathway of reminding events or associations . finally , much as autocatalytic closure transforms a set of molecules into an interconnected and unified living system , conceptual closure transforms a set of memories into an interconnected and unified worldview . episodic memories are now related to one another through a hierarchical network of increasingly abstract and , what for our purposes is more relevant , increasingly nonlocalized concepts .
over the past several decades, numerous attempts have been made to forge a connection between quantum mechanics and the mind . in these approaches , it is generally assumed that the only way the two could be connected is through micro - level quantum events in the brain exerting macro - level effects on the judgements , decisions , interpretations of stimuli , and other cognitive functions of the conscious mind . from the preceding arguments, it should now be clear that this is not the only possibility .if quantum structure can exist at the macro - level , then the process by which the mind arrives at judgements , decisions , and stimulus interpretations could itself be quantum in nature .we should point out that we are not suggesting that the mind is entirely quantum .clearly not all concepts and instances in the mind are entangled or violate bell inequalities .our claim is simply that the mind contains some degree of quantum structure .in fact , it has been suggested that quantum and classical be viewed as the extreme ends of a continuum , and that most of reality may turn out to lie midway in this continuum , and consist of both quantum and classical aspects in varying proportions ( aerts 1992 ; aerts and durt 1994 ) .we have seen that quantum and macroscopic systems can violate bell inequalities .a natural question that arises is the following : is it possible to construct a macroscopical system that violates bell inequalities in exactly the same way as a photon singlet state will ? aerts constructed a very simple model that does exactly this ( aerts 1991 ) .this model represents the photon singlet state before measurement by means of two points that live in the center of two separate unit spheres , each one following its own space - time trajectory ( in accordance with the conservation of linear and angular momentum ) , but the two points in the center remain connected by means of a rigid but extendable rod ( fig . 8) .next the two spheres reach the measurement apparatuses .when one side is measured , the measurement apparatus draws one of the entities to one of the two possible outcomes with probability one half .however , because the rod is between the two entities , the other entity at the center of the other sphere is drawn toward the opposite side of the sphere as compared with the first entity . only then this second entity is measured .this is done by attaching a piece of elastic between the two opposite points of the sphere that are parallel with the measurement direction chosen by the experimenter for this side .the entity falls onto the elastic following the shortest path ( i.e .. orthogonal ) and sticks there .next the elastic breaks somewhere and drags the entity towards one of the end points ( fig .9 ) . to calculate the probability of the occurrence of one of the two possible outcomes of the second measurement apparatus , we assume there is a uniform probability of breaking on the elastic .next we calculate the frequency of the coincidence counts and these turn out to be in exact accordance with the quantum mechanical prediction . 3 cm = 7pt fig 8 : symbolic representation of the singlet state in the model as two dots in the centers of their respective spheres . also shown is the connecting rod and the measurement directions chosen at each location a and b. there are two ingredients of this model that seem particularly important .first we have the rigid rod , which shows the non - separable wholeness of the singlet coincidence measurement ( i.e. 
the role of the connecting tube between the vessels of water , or the associative pathways between concepts ) .second , we have the elastic that breaks which gives rise to the probabilistic nature ( the role of the siphon in the vessels of water , or the role of stimulus input in the mind ) of the outcomes .these two features seem more or less in accordance with the various opinions researchers have about the meaning of the violation of bell inequalities .indeed , some have claimed that the violation of bell inequalities is due to the non - local character of the theory , and hence in our model to the ` rigid but extendable rod ' , while others have attributed the violation not to any form of non - locality , but rather to the theory being not ` realistic ' or to the intrinsic indeterministic character of quantum theory , and hence to the role of the elastic in our model . as a consequence, researchers working in this field now carefully refer to the meaning of the violation as the non - existence of local realism " .because of this dichotomy in interpretation we were curious what our model had to say on this issue . to explore thiswe extended the above model with the addition of two parameterizations , each parameter allowing us to minimize or maximize one of the two aforementioned features ( aerts d. , aerts s. , coecke b. , valckenborgh f. , 1995 ) .the question was of course , how bell inequalities would respond to the respective parameterizations .we will briefly introduce the model and the results .the way the parameterized model works is exactly analogous to the measurement procedure described above , with two alterations .first , we impose the restriction that the maximum distance the rigid rod can ` pull ' the second photon out of the center is equal to some parameter called ] . setting equal to 1 meanswe restore the model to the state it was in before parameterization . setting equal to zero meanswe have a classical ` deterministic ' situation , since the elastic can only break in the center ( there remains the indeterminism of the classical unstable equilibrium , because indeed the rod can still move in two ways , up or down ) .in fact , to be a bit more precise , we have as a set of states of the entity the set of couples \}\ ] ] each element of the couple belongs to a different sphere with center ( resp due to linear momentum conservation ) and whose radius is parameterized by the correlation parameter . at each sidewe have a set of measurements , \epsilon \in [ 0,1 ] \}\ ] ] the direction can be chosen arbitrarily by the experimenter at each side and denotes the direction of the polarizer .( of course , for the sake of demonstrating the violation of bell inequalities , the experimenter will choose at random between the specific angles that maximize the value the inequality takes ) . 3 cm = 7pt fig 9 : the situation immediately before and after the measurement at one side ( left in this case ) has taken place . in the first picture the breakable part of the elastic is shown ( i.e. the interval ] , and it will do so with a probability that is uniform over the entire interval ] we have a maximal lack of knowledge about the measurement and we recover the exact quantum predictions . 
as before ,the state of the entity before measurement is given by the centers of the two spheres .the first measurement projects one center , say orthogonally onto the elastic that is placed in the direction chosen by the experimenter at location a and that can only break in the interval ] , the breakable part between ] .if is in ] , the outcome will always be down. if , however , is in ] .this is the lebesgue measure of the interval divided by the total lebesgue measure of the elastic : this settles the probabilities related to .as before , we define the coincidence experiment as having four possible outcomes , namely , , and ( see section 2 ) . following bell , we introduce the expectation values for these coincidence experiments , as one easily sees that the expectation value related to the coincidence counts also splits up in three parts .* : + in this case we have and .hence the expectation value for the coincidence counts becomes we see that putting we get , which is precisely the quantum prediction .* : + in this case we have and .hence the expectation value for the coincidence counts becomes * : in this case we have and .hence the expectation value for the coincidence counts becomes let us now see what value the left hand side of the bell inequality takes for our model for the specific angles that maximize the inequality . for these angles and ,the condition is satisfied only if . in this case, we obtain : if , on the other hand we would have chosen and such that we find : we can summarize the results of all foregoing calculations in the following equation : for , we have two limiting cases that can easily be derived : for the left side of the inequality takes the value 0 , while for it becomes 4 . for what couples do we violate the inequality ?clearly , we need only consider the case . demanding that the inequality be satisfied, we can summarize our findings in the following simple condition : this result indicates that the model leaves no room for interpretation as to the source of the violation : for any , we can restore the inequality by increasing the amount of lack of knowledge on the interaction between the measured and the measuring device , that is by increasing .the only way to respect bell inequalities for all values of is by putting .likewise , for any it becomes impossible to restore the validity of the bell inequalities .the inevitable conclusion is that the correlation is the source of the violation .the violation itself should come as no surprise , because we have identified as the _ correlation _ between the two measurements , which is precisely the non - local aspect .this is also obvious from the fact that it is this correlation that makes not representable as an integral of the form as bell requires for the derivation of the inequality .what may appear surprising however , is the fact that increasing the indeterminism ( increasing ) , decreases the value the inequality takes ! for example , if we take and , we see that the value of the inequality is 4 , which is the largest value the inequality possibly can have , just as in the case of the vessels of water model .we have presented several arguments to show that bell inequalities can be genuinely violated in situations that do not pertain to the microworld . 
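the mechanisms summarized here can also be checked numerically . the sketch below simulates the rigid rod and breaking elastic model of the previous section in its unparameterized form ( maximal correlation , breaking point uniform over the whole elastic , i.e. the case that recovers the quantum predictions ) and estimates the four coincidence expectation values ; the sampling scheme is our paraphrase of the measurement procedure described above , and the measurement directions are the standard maximizing choice .

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(theta, n=200_000):
    """monte carlo estimate of the coincidence expectation value e(a, b) in the
    rod-and-elastic model with maximal correlation and a uniformly breaking elastic."""
    # left apparatus: the entity is drawn to +a or -a with probability 1/2
    s_left = rng.choice([-1.0, 1.0], size=n)
    # the rigid rod pulls the right entity to the point antipodal to the left outcome;
    # its orthogonal projection onto the elastic placed along b is then -s_left * cos(theta)
    x = -s_left * np.cos(theta)
    # the elastic breaks at a uniform point u in [-1, 1]; the entity is dragged to the
    # +b end whenever the break happens below its projection point
    u = rng.uniform(-1.0, 1.0, size=n)
    s_right = np.where(u < x, 1.0, -1.0)
    return float(np.mean(s_left * s_right))              # approaches -cos(theta)

# assumed maximizing directions: a = 0, a' = 90, b = 45, b' = 135 degrees
ang = {"13": 45.0, "14": 135.0, "23": 45.0, "24": 45.0}  # relative angles for each pair
E = {k: correlation(np.radians(v)) for k, v in ang.items()}
chsh = abs(E["13"] - E["14"]) + abs(E["23"] + E["24"])
print({k: round(v, 3) for k, v in E.items()}, "chsh =", round(chsh, 3))   # ~ 2.83
```

the estimated correlations approach -cos( theta ) and the chsh value approaches 2\sqrt{2} , the quantum value , illustrating that this macroscopic model reproduces the microscopic violation exactly .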
of course, this does not decrease the peculiarity of the quantum mechanical violation in the eprb experiment .what it does , is shed light on the possible underlying mechanisms and provide evidence that the phenomenon is much more general than has been assumed .the examples that we have worked out the ` vessels of water ' , the ` concepts in the mind ' , and the ` spheres connected by a rigid rod ' each shed new light on the origin of the violation of bell inequalities .the vessels of water and the spheres connected by a rigid rod examples , show that ` non - local connectedness ' plays an essential role in bringing the violation about .the spheres connected by a rigid rod example shows that the presence of quantum uncertainty does not contribute to the violation of the inequality ; on the contrary , increasing quantum uncertainty decreases the violation .all three examples also reveal another aspect of reality that plays an important role in the violation of bell inequalities : the potential for different actualizations that generate the violation .the state of the 20 liters of water as present in the connected vessels is potentially , but not actually , equal to ` 5 liters ' plus ` 15 liters ' of water , or ` 11 liters ' plus ` 9 liters ' of water , _etc_. similarly , we can say that the concept ` cat ' is potentially equal to instances such as our cats , ` glimmer ' and ` inkling ' .it is this potentiality that is the ` quantum aspect ' in our nonmicroscopic examples , and that allows for a violation of bell - type inequalities . indeed , as we know , this potentiality is the fundamental characteristic of the superposition state as it appears in quantum mechanics .this means that the aspect of quantum mechanics that generates the violation of bell inequalities , as identified in our examples , is the potential of the considered state . in the connected vessels example, it is the potential ways of dividing up 20 liters of water . in the concepts in the mind example , it is the potential instances evoked by the abstract concept ` cat ' . in the rigid rod example, it is the possible ways in which the rod can move around its center .aspect , a. , grangier , p. and roger , g. , 1981 , experimental tests of realistic local theories via bell s theorem " , _ phys ._ , * 47 * , 460 .aerts , d. , 1985a , the physical origin of the epr paradox and how to violate bell inequalities by macroscopical systems " , in _ on the foundations of modern physics _ , eds .lathi , p. and mittelstaedt , p. , world scientific , singapore , 305 .aerts , d. , 1985b , a possible explanation for the probabilities of quantum mechanics and a macroscopical situation that violates bell inequalities " , in _ recent developments in quantum logic _ , eds .p. mittelstaedt et al . , in grundlagen der exacten naturwissenschaften , vol.6 , wissenschaftverlag , bibliographisches institut , mannheim , 235 .aerts , d. , 1991 , a mechanistic classical laboratory situation violating the bell inequalities with , exactly in the same way as its violations by the epr experiments " , helv .acta , * 64 * , 1 - 24 .aerts , d. and gabora , l. , 1999 , quantum mind web course lecture week 10 " part of ` consciousness at the millennium : quantum approaches to understanding the mind ' , an online course offered by consciousness studies , the university of arizona , september 27 , 1999 through january 14 , 2000 .gabora , l. 
, 2000 , conceptual closure : weaving memories into an interconnected worldview " , in closure : emergent organizations and their dynamics " , eds .van de vijver , g. and chandler , j. , vol . 901 of the annals of the new york academy of sciences .lu , y.w . , sundararajan , n. and saratchandran , p. , 1997, a sequential learning scheme for function approximation using minimal radial basis function neural networks " , _ neural computation _ ,* 9 * , ( 2 ) , 461 - 478 .
we show that bell inequalities can be violated in the macroscopic world . the macroworld violation is illustrated using an example involving connected vessels of water . we show that whether the violation of inequalities occurs in the microworld or in the macroworld , it is the identification of nonidentical events that plays a crucial role . specifically , we prove that if nonidentical events are consistently differentiated , bell - type pitowsky inequalities are no longer violated , even for bohm s example of two entangled spin 1/2 quantum particles . we show how bell inequalities can be violated in cognition , specifically in the relationship between abstract concepts and specific instances of these concepts . this supports the hypothesis that genuine quantum structure exists in the mind . we introduce a model where the amount of nonlocality and the degree of quantum uncertainty are parameterized , and demonstrate that increasing nonlocality increases the degree of violation , while increasing quantum uncertainty decreases the degree of violation . center leo apostel , brussels free university krijgskundestraat 33 , 1160 brussels , belgium . diraerts.ac.be , saerts.ac.be jbroekae.ac.be , lgabora.ac.be _ dedication : _ marisa always stimulated interdisciplinary research connected to quantum mechanics , and more specifically she is very enthusiastic to the approach that we are developing in clea on quantum structure in cognition . it is therefore a pleasure for us to dedicate this paper , and particularly the part on cognition , to her for her 60 th birthday .
gene networks are extremely robust against genetic perturbations . for example , systematic gene knock - out studies on yeast showed that almost 40% of genes on chromosome v have no detectable effects on indicators like cell division rate .similar studies on other organisms agree with these results .it is also known that phenotypically , most species do not vary much , although they experience a wide range of environmental and genetic perturbations .this striking resilience makes one wonder about the origins , evolutionary consequences , and mechanistic causes of genetic robustness .it has been proposed that genetic robustness evolved through stabilizing selection for a phenotypic optimum .wagner showed that this in fact could be true by modeling a developmental process within an evolutionary scenario , in which the genetic interaction sequence represents organismal development , and the equilibrium configuration of the gene network represents the phenotype .his results show that the genetic robustness of a population of model genetic regulatory networks can increase through stabilizing selection for a particular equilibrium configuration ( phenotype ) of each network . in this paperwe investigate the effects of the biological evolution of genetic robustness on the dynamics of gene regulatory networks in general .in particular , we want to answer the question whether the evolution process moves the system to a different point in the phase diagram .below , we present some preliminary results .we use a model by wagner , which has also been used by other researchers with minor modifications . each individualis represented by a regulatory gene network consisting of genes .the expression level of each gene , has only two values , or , expressed or not , respectively .the expression states change in time according to regulatory interactions between the genes .the time evolution of the system configuration represents an ( organismal ) developmental pathway .the discrete - time dynamics are given by a set of nonlinear difference equations representing a random threshold network ( rtn ) , where sgn is the sign function and is the strength of the influence of gene on gene .nonzero elements of the matrix are independent random numbers drawn from a standard normal distribution .( the diagonal elements of * * are allowed to be nonzero , corresponding to self - regulation . )the ( mean ) number of nonzero elements in is controlled by the connectivity density , , which is the probability that a is nonzero .the dynamics given by eq .( [ eq : main ] ) can have a wide variety of features .for a specified initial configuration , the system reaches either a fixed - point attractor or a limit cycle after a transient period .the lengths of transients , number of attractors , distribution of attractor lengths , etc . can differ from system to system , depending on whether the dynamics are ordered , chaotic , or critical .the fitness of an individual is defined by whether it can reach a developmental equilibrium , a certain fixed gene - expression pattern , , in a `` reasonable '' transient time .further details of the model are explained in the next section .we studied populations of random networks ( founding individuals ) with .each network was assigned a matrix and an initial configuration . 
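before turning to how the interaction matrices and initial configurations were generated ( described in the next paragraph ) , the dynamics just defined can be sketched in a few lines : n genes with states +1 or -1 , a sparse gaussian interaction matrix with connectivity density c , and a synchronous sign - threshold update iterated until a fixed point ( the developmental equilibrium ) is reached or a time limit is exceeded . the parameter values below are illustrative assumptions , not the ones used in the paper , and the rule that a gene with no inputs keeps its state follows the description given later in the text .

```python
import numpy as np

rng = np.random.default_rng(1)

def random_network(n, c):
    """interaction matrix w: each entry is nonzero with probability c,
    nonzero entries drawn from a standard normal distribution."""
    w = rng.standard_normal((n, n))
    w[rng.random((n, n)) >= c] = 0.0
    return w

def step(w, s):
    """one synchronous update; genes whose inputs are all zero keep their state
    (the memory-preserving rule for spins without inputs mentioned in the text)."""
    h = w @ s
    new = np.where(h > 0, 1.0, -1.0)
    no_input = np.all(w == 0.0, axis=1)
    new[no_input] = s[no_input]
    return new

def develop(w, s0, t_max=100):
    """iterate the dynamics; return the fixed point if one is reached within t_max steps."""
    s = s0.copy()
    for _ in range(t_max):
        nxt = step(w, s)
        if np.array_equal(nxt, s):
            return s                      # stable developmental equilibrium (phenotype)
        s = nxt
    return None                           # no fixed point found: unstable network

n, c = 16, 0.25                           # illustrative values only
w = random_network(n, c)
s0 = rng.choice([-1.0, 1.0], size=n)
print("fixed point:", develop(w, s0))
```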
was generated as follows .each was independently chosen to be nonzero with probability .if so , it was assigned a random number drawn from a standard gaussian distribution , .then , each `` gene '' of the initial configuration , , was assigned either or at random , each with probability 1/2 .after and were created , the dynamics were started and the network s stability was evaluated . if the system reached a fixed point , in timesteps , then it was considered stable and kept .otherwise it was considered unstable , both and were discarded , and the process was started over and repeated until a stable network was generated . for each stable network , its fixed point , , was regarded as the `` optimal '' gene - expression state ( phenotype ) of the system .this is the only modification we made to wagner s model : he generated networks with preassigned and , whereas we accept any as long as it can be reached within timesteps from .after generating individual stable networks , we analyzed their state - space structures and evaluated their robustness as discussed in subsection [ sub : assesment - of - epigenetic ] . in order to generate a breed of more robust networks , a mutation - selection process was simulated for all of the random , stable networks as followsfirst , a clan of identical copies of each network was generated .for each clan , a four - step process was performed for generations : 1 .recombination : each pair of the rows of consecutive matrices in the clan were swapped with probability 1/2 .since the networks were already shuffled in step 4 ( see below ) , there was no need to pick random pairs .2 . mutation : each nonzero was replaced with probability by a new random number drawn from the same standard gaussian distribution .thus , on average , one matrix element was changed per matrix per monte carlo step .fitness evaluation : each network was run starting from the original initial condition , . if the network reached a fixed point , within timesteps , then its fitness , , was calculated .here , denotes the normalized hamming distance between and , and denotes the strength of selection , is the optimal gene - expression state , which is the final gene - expression state of the original network that `` founded '' the clan .we used .if the network could not reach a fixed point , then it was assigned the minimum nonzero fitness value , 4 .selection / asexual reproduction : the fitness of each network was normalized to the fitness value of the most fit network in the clan .then , a network was chosen at random and duplicated into the descendant clan with probability equal to its normalized fitness .this process was repeated until the size of the descendant clan reached . then the old clan was discarded , and the descendant clan was kept as the next generation .note that this process allows multiple copies ( offspring ) of the same network to appear in the descendant clan , while some networks may not make it to the next generation due to genetic drift . at the end of the generation selection ,any unstable networks were removed from the evolved clan .the mutational robustness of a network was assessed slightly differently for random and evolved networks . 
for a random network ,first , one nonzero was picked at random and replaced by a new random number with the same standard gaussian distribution .then , the dynamics were started , and it was checked if the system reached the same equilibrium state , , within timesteps .this process was repeated times using the original matrix ( i.e. , each mutated matrix was discarded after its stability was evaluated ) . the robustness of the original network before evolution was defined as the fraction of singly - mutated networks that reached . for the evolved networks ,clan averages were used . for each of networks in a clan ,robustness was assessed as described above with one difference : the number of perturbations was reduced to per network to keep the total number of perturbations used to estimate robustness of networks before and after evolution approximately equal .the mean robustness of the those networks was taken as the robustness of the founder network after evolution .therefore , the robustness of a network after evolution is the mean robustness of its descendant clan of stable networks .as wagner pointed out , the stabilizing selection described above increases the robustness of the model population against mutations .however , it is not very clear what kind of a reorganization in the state space occurs during the evolution .also , it is not known whether this robustness against mutations leads to robustness against environmental perturbations . in this paper , we focus on the effects of evolution in terms of moving the system to another point in the phase diagram . in other words ,we investigate whether the system becomes more chaotic or more ordered after evolution .( a ) shown vs. for and .the theory , eq .( [ eq : wagneroverlapmap ] ) , is in good agreement with the simulations .the deviations are due to the small size of the simulated system as the theoretical calculation assumes .( b ) damage - spreading rate , vs. , for random and evolved networks with and and 7 , showing the difference between the `` random '' and `` evolved '' curves .only the first half of the curves are shown since vs. is point - symmetric about . the results were averaged over 10 random networks and all of their evolved descendants ( evolved networks per random network ) . the evolved curves for each lie very close to their `` random '' counterparts .however , they are outside twice the error bar range of each other at most data points . , title="fig : " ] ( a ) shown vs. for and . the theory , eq .( [ eq : wagneroverlapmap ] ) , is in good agreement with the simulations .the deviations are due to the small size of the simulated system as the theoretical calculation assumes .( b ) damage - spreading rate , vs. , for random and evolved networks with and and 7 , showing the difference between the `` random '' and `` evolved '' curves .only the first half of the curves are shown since vs. is point - symmetric about .the results were averaged over 10 random networks and all of their evolved descendants ( evolved networks per random network ) .the evolved curves for each lie very close to their `` random '' counterparts . 
however , they are outside twice the error bar range of each other at most data points . a standard method for studying damage spreading in systems such as the one considered here is the derrida annealed approximation , in which one calculates changes with time of the overlap of two distinct states , and . the change of the overlap over one time step is given by eq . ( [ eq : wagneroverlapmap ] ) , where the poisson distribution is the probability of finding a gene with inputs , the binomial distribution is the probability of finding of these inputs in the overlapping parts of or , and ( for ) is the probability of the sum of matrix elements being larger than the sum of matrix elements , which are independent and distributed . here , denotes the mean number of inputs per node . for most rtns that have been studied so far , eq . ( [ eq : wagneroverlapmap ] ) can be iterated as a map to give the full time evolution of the overlap . changes in the fixed - point structure of this map with changing would then signify phase transitions of the system . as seen in fig . [ fig : derridaplot]a , for , such a map would have a stable fixed point at . one can also show that for all ( this implies and ) , and so it would seem that the system has no phase transition and always stays chaotic for nonzero . however , simulations of damage spreading for longer times indicate that the system studied here has strong memory effects due to the update rule for spins with no inputs , given by the last line in eq . ( [ eq : main ] ) , which retard the damage spreading . in fact , like other rtns the system undergoes a phase transition near from a chaotic phase at larger to an ordered phase at smaller . the strong , retarding memory effects mean that eq . ( [ eq : wagneroverlapmap ] ) can not be iterated as a map , and the naive prediction based on the derrida annealed approximation is erroneous . despite its irrelevance for the long - time damage spreading , the damage - spreading rate shown in fig . [ fig : derridaplot]b properly describes the short - time dynamical character of the system . however , as eq . ( [ eq : wagneroverlapmap ] ) assumes that the interaction constants are statistically independent , it may not apply to evolved networks , as we do not know whether the selection process creates correlations between the matrix elements . nevertheless , we can still compute as a function of numerically to see if there is a change in the degree of chaoticity ( or order ) of the dynamics . as seen in fig . [ fig : derridaplot]b , the damage - spreading rates for evolved networks are slightly ( but statistically significantly ) lower than for their random predecessors , which are thus slightly more chaotic . to summarize , we have presented preliminary results on some general properties of a popular rtn model of a gene regulatory network and on how the biological evolution of genetic robustness affects its dynamics . we have also shown that the update rule for spins without inputs leads to strong memory effects that invalidate naive iteration of the derrida annealed approximation as a map . the evolutionary process that improves the genetic robustness of such networks has only a very small effect on their dynamical properties : after evolution , the system moves slightly toward the more ordered part of the phase diagram . we thank d. balcan , b. uzunolu , and t. f. hansen for helpful discussions . this research was supported by u.s.
national science foundation grant nos .dmr-0240078 and dmr-0444051 , and by florida state university through the school of computational science , the center for materials research and technology , and the national high magnetic field laboratory .
we study a genetic regulatory network model developed to demonstrate that genetic robustness can evolve through stabilizing selection for optimal phenotypes . we report preliminary results on whether such selection could result in a reorganization of the state space of the system . for the chosen parameters , the evolution moves the system slightly toward the more ordered part of the phase diagram . we also find that strong memory effects cause the derrida annealed approximation to give erroneous predictions about the model s phase diagram .
let be independent random variables collected along observation points according to the model where the s are i.i.d . symmetric random variables with distribution . in _ isotonic regression _ the trend term is monotone non - decreasing , i.e. , , but it is otherwise arbitrary . in this set - up , the classical estimator of is the function which minimizes the distance between the vector of observed and fitted responses , i.e. , it minimizes the sum of squared residuals given in ( [ eqn : l2 ] ) in the class of non - decreasing piecewise continuous functions . it is trivial but noteworthy that equation ( [ eqn : l2 ] ) posits a finite dimensional convex constrained optimization problem . its solution was first proposed by brunk ( 1958 ) and has received extensive attention in the statistical literature ( see e.g. , robertson , wright and dykstra ( 1988 ) for a comprehensive account ) . it is also worth noting that any piecewise continuous non - decreasing function which agrees with the optimizer of ( [ eqn : l2 ] ) at the s will be a solution . for that reason , in order to achieve uniqueness , it is traditional to restrict further the class to the subset of piecewise constant non - decreasing functions . another valid choice consists in the interpolation at the knots with non - decreasing cubic splines or any other piecewise continuous monotone function , e.g. , meyer ( 1996 ) . we will call this estimator the _ l isotonic estimator _ . the sensitivity of this estimator to extreme observations ( outliers ) was noted by wang and huang ( 2002 ) , who propose minimizing instead using the norm , i.e. , minimizing the corresponding sum of absolute residuals . this estimator will be called here the _ l isotonic estimator _ . wang and huang ( 2002 ) developed the asymptotic distribution of the trend estimator at a given observation point and obtained the asymptotic relative efficiency of this estimator compared with the classical l . interestingly , this efficiency turned out to be , the same as in the i.i.d . location problem . in this paper we will propose instead a robust _ isotonic m - estimator _ aimed at balancing robustness with efficiency . specifically we shall seek the minimizer of where is an estimator of the error scale previously obtained and satisfies the following properties a1 : : ( i ) is non - decreasing in , ( ii ) , ( iii ) is even , ( iv ) is strictly increasing for and ( v ) has two continuous derivatives and is bounded and monotone non - decreasing . clearly , the choice corresponds to taking while the option is akin to opting for . these two estimators do not require the scale estimator . note that the class of m - estimators satisfying a1 does not include estimators with a redescending choice for . we believe that the strict differentiability conditions required in a1 are not strictly necessary , but they make the proofs for the asymptotic theory simpler . moreover , some functions which are not twice differentiable everywhere , such as or the huber functions defined below in ( [ hubfam ] ) , can be approximated by functions satisfying a1 . the asymptotic distribution of the l isotonic estimators at a given point was found by brunk ( 1970 ) and wright ( 1981 ) and the one of the l estimator by wang ( 2002 ) . they prove that the distribution of these estimators , conveniently normalized , converges to the distribution of the slope at zero of the greatest convex minorant of the two - sided brownian motion with parabolic drift .
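the classical l isotonic fit described above is usually computed with the pool adjacent violators algorithm ( pava ) , which returns the piecewise constant non - decreasing minimizer of the sum of squared residuals . the implementation below is a standard textbook version , included only for illustration ; it is not code from the paper .

```python
import numpy as np

def isotonic_l2(y, w=None):
    """pool adjacent violators: returns the non-decreasing vector minimizing
    sum_i w_i * (y_i - m_i)^2, i.e. the classical l2 isotonic fit."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # each block keeps (weighted mean, total weight, number of points)
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

y = np.array([1.0, 3.0, 2.0, 4.0, 10.0, 5.0])
print(isotonic_l2(y))   # [1.  2.5 2.5 4.  7.5 7.5]
```

the small example also shows the sensitivity referred to above : the single large observation is averaged into its block and pulls the fitted level up , rather than being downweighted .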
in this paper, we prove a similar result for isotonic m - estimators .the focus of this paper is on estimation of the trend term at a single observation point .we do not address the issue of distribution of the whole stochastic process .recent research along those lines are given by kulikova and lopuha ( 2006 ) and a related result with smoothing was also obtained simultaneously in pal and woodroofe ( 2006 ) .this article is structured as follows . in section [ sec : rire ] we propose the robust isotonic m - estimator . in section[ sec : ad ] we obtain the limiting distribution of the isotonic m - estimator when the error scale is known . in section [ scaleeq ]we prove that under general conditions the m - estimators with estimated scale have the same asymptotic distribution than when the scale is known . in section[ sec : if ] we define an influence function which measures the sensitivity of the isotonic m - estimator to an infinitesimal amount of pointwise contamination . in section [ sec: bp ] we calculate the breakdown point of the isotonic m - estimators . in section [ simul ]we compare by monte carlo simulations the finite sample variances of the estimators for two error distributions : normal and student with three degrees of freedom . in section [ sec: gw ] we analyze two real dataset using the l and the isotonic m - estimators .section [ appendix ] is an appendix containing the proofs .in similarity with the classical setup , we consider isotonic m - estimators that minimize the objective function ( [ eqn : mrepres ] ) within the class of piecewise constant non - decreasing functions . as in the l and l cases ,the isotonic m - estimator is a step function with knots at ( some of ) the s . in robertson and waltman ( 1968 )it is shown that maximum - likelihood - type estimation under isotonic restrictions can be calculated via _ min - max _ formulae .assume first that we know that the scale parameter ( e.g. , the mad , of the ) is .since we are considering m - estimators with non - decreasing ( see a1 ) , they can be view as the maximum likelihood estimators corresponding to errors with density } \ ] ] then we can compute the isotonic m - estimator at a point using the min - max calculation formulae where is the unrestricted m - estimator which minimizes where . alternatively ,if is convex and differentiable , as we are assuming , the terms in ( [ eqn : minmax ] ) can be represented uniquely as a zero of in particular , when where is a probability density , the isotonic m - estimator coincides with the maximum likelihood estimator when is is assumed to have density . in particularif is the n density , the mle is the m - estimator which defined by and therefore it coincides with the classical estimator .when is the density of a double exponential distribution , the mle is the m - estimator defined by and therefore it coincides with the l isotonic estimator . 
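as an illustration of the min - max representation in ( [ eqn : minmax ] ) , the sketch below evaluates the isotonic m - estimate at a single design point with a huber psi and a known scale . the huber constant k = 0.98 matches the value used later in the data examples ; the brute - force double loop , the helper names and the normalized mad used as scale are our own choices , not the authors' implementation .

```python
import numpy as np
from scipy.optimize import brentq

def huber_psi(u, k=0.98):
    """Huber psi function: identity in the centre, clipped at +/- k."""
    return np.clip(u, -k, k)

def block_m_estimate(y_block, sigma, k=0.98):
    """Unrestricted M-estimate of location on one block of responses,
    i.e. the zero of sum_i psi((y_i - m) / sigma) as a function of m."""
    lo, hi = y_block.min() - 1.0, y_block.max() + 1.0
    return brentq(lambda m: np.sum(huber_psi((y_block - m) / sigma, k)), lo, hi)

def isotonic_m_at(y, i0, sigma, k=0.98):
    """Isotonic M-estimate at design index i0 via the min-max formula:
    the max over left endpoints r <= i0 of the min over right endpoints
    s >= i0 of the block M-estimate computed on y[r..s]."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best = -np.inf
    for r in range(i0 + 1):
        worst = np.inf
        for s in range(i0, n):
            worst = min(worst, block_m_estimate(y[r:s + 1], sigma, k))
        best = max(best, worst)
    return best

# Toy usage: a monotone trend with heavy-tailed errors; the scale is set by the
# normalized MAD (one simple choice; the paper uses an M-estimate of scale).
rng = np.random.default_rng(1)
n = 25
trend = np.linspace(0.0, 2.0, n)
y = trend + rng.standard_t(3, size=n)
sigma = 1.4826 * np.median(np.abs(y - np.median(y)))
print(isotonic_m_at(y, i0=n // 2, sigma=sigma))
```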
in these two cases the estimators are independent of the value of popular family of functions to define m - estimators is the huber family clearly , when is replaced by equations ( [ eqn : minmax])-([eqn : partialsums ] ) still holds with replaced by since is non - decreasing , the function defined in equation ( [ eqn : partialsums ] ) is non - increasing as a function of .this entails the fundamental identities given below these identities will be very useful in the development of the asymptotic distribution .in this section we derive the asymptotic distribution of the isotonic m - estimator of we first make the sample size explicit in the formulation of the model by postulating where the errors form a triangular array of i.i.d .random variables with distribution and is a triangular array of observation points .their exact location is described by the function .the values may be fixed or random but we will assume that there exists a continuous distribution function which has as support a finite closed interval such that without loss of generality we shall assume in the sequel it is the interval ] the classical l isotonic estimator with at the boundary of the support of is known to suffer from the so - called _ spiking problem _( e.g. , sun and woodroofe , 1999 ) , i.e. , is not even consistent .we further make the following assumptions .a2 : : the function is continuously differentiable in a neighborhood of with .a3 : : for a fixed , we assume the function has two continuous derivatives in a neighborhood of , and .a4 : : the error distribution has a density symmetric and continuous with . we consider first the case where is known .our first aim is to show that isotonic m - estimation is asymptotically a local problem .specifically , we will see in lemma [ lemma : rao1 ] that depends only on those corresponding to observation points lying in a neighborhood of order about .this result is similar to prakasa rao ( 1969 ) , lemma 4.1 , who stated it in the context of density estimation .our treatment here will parallel that of wright ( 1981 ) , who worked on the asymptotics of the isotonic regression estimator when the smoothness of the underlying trend function is specified via the number of its continuous derivatives . specifically , since we may choose for an arbitrary and sufficiently large , positive numbers and for which with this , define the_ localized version _ of the isotonic m - estimator as then we have the following lemma [ lemma : rao1 ] assume a1-a4 and ( [ condh ] ) .then if is defined by ( [ eqn : minmax ] ) , we have , =0 .\label{eqn : rao}\ ] ] is is also noteworthy that the estimator in equation ( [ eqn : modest ] ) is not computable , for and depend on the distribution which is generally unknown . for computational purposesthis implies that the calculation of these estimators will indeed be global for fixed sample sized .lemma [ lemma : rao1 ] is , however , crucial to study the asymptotic properties of . 
given an stochastic process , we denote by `` _{g} ] and in .the asymptotic breakdown point of at is defined by we start considering the case that is known .then we have the following theorem .consider the isotonic regression model given in ( [ eqn : model ] ) and let be an isotonic m - estimating functional where is an interior observation point .then under assumptions a1-a4 we have in the special case when is uniform , this becomes which takes a maximum value of at in the case that is replaced by an estimator derived from a continuous functional it can be proved that the breakdown point of satisfies ghement et al .( 2008 ) showed that if is defined as in remark [ rescalem ] , where is defined by ( [ scalem])-([limssca ] ) with and .moreover in this case coincides with the standard deviation when the error has a normal distribution .[ inmorta]in this section we consider data on infant mortality across countries .the dependent variable , the number of infant deaths per each thousand births is assumed decreasing in the country s per capita income .these data are part of the r package faraway and was used in faraway ( 2004 ) .the manual of this package only mentions that the data are not recent but it does not give information on the year and source . in figure [ morinf ]we compare the l isotonic regression estimator with the isotonic m - estimator computed with the huber s function with and as in remark 2 , where is defined by ( [ scalem])-([limssca ] ) with and there are four countries with mortality above 250 : saudi arabia ( 650 ) , afghanistan ( 400 ) , libya ( 300 ) and zambia ( 259 ) . these countries , specially saudi arabia and libya due to their higher relative income per capita ,exert a large impact on the l estimator .the robust choice , on the other hand , appears to resistant to these outliers and provides a good fit .[ ptb ] morewm.emf we reconsider the global warming dataset first analyzed in the context of isotonic regression by wu , woodroofe and mentz ( 2001 ) from a classical perspective and subsequently analyzed from a bayesian perspective in alvarez and dey ( 2009 ) .the original data is provided by jones _( see http//cdiac.esd.ornl.gov / trends / temp / jonescru / jones.html ) containing annual temperature anomalies from 1858 to 2009 , expressed in degrees celsius and are relative to the 1961 - 1990 mean .even though the global warming data , being a time series , might be affected by serial correlation , e.g. fomby and vogelsang ( 2002 ) , we opted for simplicity as an illustration to ignore that aspect of the data and model it as a sequence of i.i.d . observations . [ ptb ] gw3.eps _ _ in figure [ fig : gw1 ] we plot the l__ _ _ isotonic estimator , which for these data is identical to the isotonic m - estimate with k=0.98 .visual inspection of the plot shows a moderate outlier corresponding to the year 1878 ( shown as a solid circle ) . that apparent outlier , however , has no effect on the estimator due to the isotonic character of the regression.the fact that the l__ _ and the isotonic m - estimates coincide for these data seems to indicate that the phenomenon of global warming is not due to isolated outlying anomalies , but it is due instead to a steady increasing trend phenomenon . in our view , that validates from the point of view of robustness , the conclusions of other authors on the same data ( e.g. 
wu , woodroofe and mentz ( 2001 ) , and lvarez and dey ( 2009 ) ) who have rejected the hypothesis of constancy in series of the worlds annual temperatures in favor of an increasing trend . _interestingly the limiting distribution of the isotonic _ m_-estimator is based on the ratio as in the the i.i.d .location problem ( e.g. maronna , martin and yohai , 2006 ). the slower convergence rate , however , entails that the respective asymptotic relative efficiencies are those of the location situation taken to the power .specifically , note that from theorem 1 for any isotonic m - estimator \right\ } \\ & = \left [ \frac{1}{2}\mu^{\prime}(t_{0})h^{\prime}(t_{0})\frac { \mathrm{e}_{g}[\psi(u)^{2}]}{[\mathrm{e}_{g}\psi^{\prime}(u)]^{2}}\right ] ^{2/3}\text{var}[\text{slogcm}\left ( \mathbb{w}(v)+v^{2}\right ) ] , \end{aligned}\ ] ] where avar stands for asymptotic variance and var for variance . in order to determine the finite sample behavior of the isotonic m - estimators we have performed a monte carlo study .we took i.i.d .samples from the model ( [ eqn : model ] ) with trend term and where the distribution is n(0,1 ) and student with three degrees of freedom .the values corresponds to a uniform limiting distribution for .we estimated at , the true value of which is using three isotonic estimators : the l isotonic estimator , the l isotonic estimator and the same isotonic m - estimator that was used in the examples .we performed replicates at two sample sizes , and .dykstra and carolan ( 1998 ) have established that the variance of the random variable is approximately 1.04 . using this value , we present in table 1 sample mean square errors ( mse ) times as well as the corresponding asymptotic variances .{ccccccc|}\hline estimator & \multicolumn{2}{c}{n = 100 } & \multicolumn{2}{c}{n=500 } & \multicolumn{2}{c|}{avar}\\\cline{2 - 7 } & normal & student & normal & student & normal & student\\\hline l & 1.93 & 3.78 & 1.85 & 3.65 & 1.92 & 3.98\\ l & 2.38 & 2.89 & 2.67 & 2.76 & 2.59 & 2.89\\ m & 2.04 & 2.86 & 2.11 & 2.51 & 2.06 & 2.53\\\hline \end{tabular } \ \ \ ] ] table 1 .sample mse and avar for isotonic regression estimators .we note that for both distributions , the empirical mses for are close to the avar values . we also see that under both distributions the m - estimator is more efficient that the l one , that the m- estimator is more efficient than the l one for both distributions and that the l estimator is slightly less efficient than the l estimator for the normal case but much more efficient for the student distribution . in summary , the isotonic m - estimate seems to have a good behavior under both distributions .without loss of generality we can assume that given , for sufficiently large there exist positive numbers and for which as in wright ( 1981 ) , we first argue that \leq \mathrm{p}(\omega_{1n})+\mathrm{p}(\omega_{2n}),\ ] ] where \right .< \left .\max_{u\leq t_{0}-\alpha_{l}(n)}\hat{\mu}_{n}[u , t_{0}-\beta_{l}(n)]\right\ } , \label{omega1}\\ \omega_{2n } & = \left\ { \max_{u\leq t_{0}}\hat{\mu}_{n}[u , t_{0}+\beta _ { u}(n))\right . > \left .\min_{v\geq t_{0}+\alpha_{u}(n)}\hat{\mu}_{n}[t_{0}+\beta_{u}(n),v]\right\ } .\label{eqn : omega2}\ ] ] to see this , note that the complement of is the set in which , for all and all we have that \right\ } ] , we obtain that for some constant which does not depend on we can write (h_{n}(s)-h(s))\right\vert \leq acn^{-2/3}o(1 ) .\label{intbrav1}\ ] ] consider now the first term in the right hand side of equation ( [ decomp ] ) . 
using ( [ condh ] ) we have } & [ \mu(s)-\mu(t_{0}-\beta _ { l}(n))]dh(s)\\ & = \int_{t_{0}-\beta_{l}(n)}^{t_{0}}[\mu(t_{0})-\mu(t_{0}-\beta _ { l}(n))]dh(s)-\int_{t_{0}-\beta_{l}(n)}^{t_{0}}[\mu(t_{0})-\mu(s)]dh(s)\\ & = [ \mu(t_{0})-\mu(t_{0}-\beta_{l}(n))][h(t_{0})-h(t_{0}-\beta_{l}(n))]-\int_{t_{0}-\beta_{l}(n)}^{t_{0}}[\mu(t_{0})-\mu(s)]dh(s)\\ & \leq\left ( \frac{\mu(t_{0})-\mu(t_{0}-\beta_{l}(n))}{\beta_{l}(n)}\right ) \left ( \frac{h(t_{0})-h(t_{0}-\beta_{l}(n))}{\beta_{l}(n)}]\right ) \beta _ { l}(n)^{2}\\ & = \mu^{\prime}(t_{0})[1+o(1)]h^{\prime}(t_{0})[1+o(1)]\beta_{l}(n)^{2}\\ & = \mu^{\prime}(t_{0})[h^{\prime}(t_{0})]^{-1}c^{2}n^{-2/3}[1+o(1)].\end{aligned}\ ] ] therefore {n}(s)\\ & \leq\mu^{\prime}(t_{0})c^{2}[h^{\prime}(t_{0})]^{-1}n^{-2/3}(1+o(1))+2\mu ^{\prime}(t_{0})c[h^{\prime}(t_{0})]^{-1}n^{-2/3}o(1)\\ & \leq\mu^{\prime}(t_{0})c^{\star}[h^{\prime}(t_{0})]^{-1}n^{-1/3}\left\ { n^{-1/3}[1+o(1)]+2n^{-1/3}o(1)\right\}\end{aligned}\ ] ] with .then , for some constant which does not depend on we can write {n}(s)\leq bc^{\star}n^{-1/3}. \label{intb}\ ] ] from ( [ eqn : l ] ) , ( [ eqn : boundd ] ) , ( [ delta1 < ] ) , ( [ eqn : eqsum ] ) , ( [ decomp ] ) , ( [ intbrav1 ] ) and ( [ intb ] ) we derive that there exists a constant independent of such that for large enough and at this point , we use the hjek - renyi maximal inequality ( e.g. , shorack , 2000 ) which asserts that for a sequence of independent random variables with mean and finite variances and for a positive non - decreasing real sequence , using this inequality from ( [ eqn : tohr ] ) we get that approximating the riemann sum we obtain and since by ( [ condh ] ) , for large enough we have from ( [ file1 ] ) , ( [ file2 ] ) and ( [ file3 ] ) we derive that for large enough then the lemma follows immediately . without loss of generalitywe can assume that ^{-1}n^{-1/3}[1+o(1)] ] .so that the relabelling implies .consequently , now a taylor expansion of around for any gives which entails that using the equivariance of m - estimators , the monotonicity of and the fact that is bounded , it can be proved that where solves and thus , using that the are bounded over we have this entails that =\max_{r\leq0}\min_{v\geq0}n^{1/3}\tilde{\mu}_{n}^{c}(r , s)+o_{rs}^{\ast}(1),\ ] ] where then , we only need to obtain the asymptotic distribution of let be the solution of since we have that where we will approximate now as follows\nonumber\\ & + [ h(t_{0}+sn^{-1/3})-h(t_{0}-rn^{-1/3})]-[h_{n}(t_{0})-h(t_{0})]\nonumber\\ & = h^{\prime}(t_{0})(s - r)n^{-1/3}+o(n^{-1/3})\nonumber\\ & = n^{-1/3}h^{\prime}(t_{0})(s - r)[1+o(1 ) ] , \label{nrsfor}\ ] ] and therefore , \label{nrsfor2}\]] and then , taking and applying the law of large numbers is easy to show that a.s . 
and therefore by ( [ dnrs1 ] ) and ( [ dnrs2 ] ) too .since satisfies ( [ murs ] ) , by a taylor expansion of we get from here we obtain and then by ( [ murs ] ) , the law of the large numbers , and bounded we have and then , ( [ largui1 ] ) entails let by ( [ nrsfor3 ] ) and the central limit theorem we have that for any set of finite numbers , the random vector converges in distribution to n(0 , where with moreover , using standard arguments , it can be proved that is tight .then , we have where is a two sided brownian motion as for the second term in the right hand side of ( [ nosac ] ) define for we can write and then integrating by parts we get and by ( [ condh ] ) we have we can write and by([condh ] ) we get therefore by ( [ lan2 ] ) , ( [ lan3 ] ) and ( [ lan4 ] ) we get and for ( [ lan1 ] ) we get that for now we compute the variance of from ( [ lan0 ] ) we have then by ( [ lan5 ] ) we obtain similarly we can prove that therefore from ( [ nosac ] ) , ( [ bns ] ) , ( [ bnsac1 ] ) , ( [ bnsac2 ] ) , ( [ bnscac3 ] ) and ( [ bnsac4 ] ) we get that now the rest of the proof is as in wright ( 1981 ) .we require the following lemma assume a1-a5 then , proof taking the first derivative of equation ( [ eqn : zsigma ] ) with respect to yields and then let then by a5 we obtain therefore * proof of theorem 2 * by the mean value theorem where is some intermediate point between and .hence , by lemma 2 we have and a6 implies without loss of generality we can assume that we consider first the case assume that .then represents a contamination model where an outlier is placed at the observation point with value which is below the trend at the point .let be the value such that it is immediate that then should be the value of satisfying and , since by ( [ 1 ] ) we have applying the mean value theorem to the first term of ( [ befapp ] ) we can find such that as for the second term in ( [ befapp ] ) we also have that where . since is odd and even , , so that the first term above vanishes . as for the second term , notice that \left [ \int\limits_{-\infty}^{\infty}\psi^{\prime}(u+\gamma)g(u)du\right ] .\label{6}\ ] ] the first integral factor in the right hand side of the above display can be further approximated . by the mean value theorem, there exists such that and _ { t_{0}-k(\varepsilon)}^{t_{0}}\nonumber\\ & = \frac{1}{2}\mu^{\prime}(t_{0})h(t_{0})k^{2}(\varepsilon ) .\label{7}\ ] ] from expressions ( [ 3])-([7 ] ) we obtain that equation ( [ befapp ] ) , can be written as + ( 1-\varepsilon)\frac{1}{2}\mu^{\prime}(t_{0})h(t_{0})k^{2}(\varepsilon)\int\limits_{-\infty}^{\infty}\psi^{\prime } ( u)g(u+\gamma)du=0.\ ] ] dividing both sides of this equation by and using that and when we obtain finally , according to ( [ 1 ] ) and using the mean value theorem , we can write where . 
then using equation ( [ 8 ] ) we obtain that the proof in the case the that is similar we consider now the case to prove this part of the theorem is enough to show that there exists , so that implies and to prove this is enough to show that when this is immediate .consider the case that clearly for it is also easy to show that implies and for and for all then , using ( [ hinch1])-([hinch3 ] ) and the fact that in order to prove ( [ impeq ] ) , it is enough to show that recall that is the solution of where clearly and since and are both increasing in we get , since is bounded , we can find so that for we have and therefore then ( [ enough1 ] ) holds and this proves the theorem for the case the proof for the case is similar .without loss of generality we can assume that it is easy to see that the least favorable contaminating distribution is concentrated at where tends to or to a necessary and sufficient condition for is that the equation have a bounded solution solution for all and that the equation have a solution for all .taking we obtain that a sufficient condition for the existence of solution of ( [ bdp2 ] ) for all is that and this equivalent to the theorem follows from ( [ bdp1 ] ) and ( [ bdp2 ] ) .
in this paper we propose a family of robust estimates for isotonic regression : isotonic m - estimators . we show that their asymptotic distribution is , up to a scalar factor , the same as that of brunk's classical isotonic estimator . we also derive the influence function and the breakdown point of these estimates . finally , we perform a monte carlo study showing that the proposed family includes estimators that are simultaneously highly efficient under gaussian errors and highly robust when the error distribution has heavy tails . * keywords * : isotonic regression , m - estimators , robust estimates .
cryptocurrencies are digital currencies alternative to the legal ones .a cryptocurrency is a computer currency whose implementation is based on the principles of cryptography , used both to validate the transactions and to generate new currency .the cryptocurrency implementation often use a proof - of - work scheme recording all transactions in a public ledger in order to protect sellers from fraud .most of cryptocurrencies are designed to gradually introduce new currency , placing a ceiling on the total amount of money in circulation , to avoid the inflation phenomena as often happens for `` fiat '' currencies .the most popular cryptocurrency is undoubtedly bitcoin .it was created by a computer scientist known as `` satoshi nakamoto '' whose real identity is still unknown . like the other cryptocurrencies , bitcoins use cryptographic techniques , and thanks to an open source systemanyone is allowed to control and modify the source code of the bitcoin software .the bitcoin network is a peer - to - peer network that checks and monitors both the generation of new bitcoins , ( aka `` mining '' ) and the transactions in bitcoins .this network includes a high number of computers connected to each other through the internet .it performs complex mathematical procedures which give life to the mining and verify the correctness and truthfulness of the bitcoin transactions .the bitcoin system provides a ceiling on the amount of money in circulation , equal to 21 million of bitcoins , consequently there is not the risk that the number of coins increases too much , devaluating the currency .bitcoin has several attractive properties for consumers . at first, it does not rely on a central bank or a government to regulate the money supply .it enables quasi- anonymous transactions , providing a greater anonymity than traditional electronic payments .in addition , bitcoin transactions are irreversible and can also be very small .indeed , a bitcoin transaction can involve only one `` satoshi '' , a subunit equal to of a bitcoin .the bitcoin can be purchased on appropriate websites such as crypto trade coinmkt , btc - and vircurex , cryptsy , coinbase , upbit and vault of satoshi , that allow to change fiat cash in bitcoins .other sites offer online services , or goods exchange for goods , and accept payments in bitcoins .the bitcoin allow everyone to send cryptocurrency internationally at a very small expense . over the past years, interest in digital currencies has increased .indeed , bitcoin had a rapid growth , both in value and in the number of transactions since its beginning in early 2009 .the _ _ blockchain _ _ web site provides different graphs and statistical analysis about bitcoins .in particular , we can observe the time trend of the bitcoin price . between january 2009 and january 2010there were no exchanges on the market .between february 2010 and may 2010 two consumers made the first real - world transactions .one bought 2 pizzas for 10,000 btc , and another auctioned 10,000 btc for 0.008 to 1,150 was reached in december 2013 . in the same month , the bitcoin price crashed to 1,000 , then crashed again to the 800- 400 .the recent attention given to bitcoin and in general to the cryptocurrencies shows that many consumers are turning their attention toward new trading means to simplify their financial lives .online purchases performed with cryptocurrency are anonymous , faster and simpler than the traditional credit cards ones . 
while the popularity of cryptocurrencis has grown quickly , they still face important argument because of their unconventional way of working .a lively debate is ongoing about promise , perils and risks of digital currencies and in particular of bitcoin .several papers appeared on these topics , but the attempts to study the cryptocurrency market as a whole are very few . in this work ,we propose an agent - based model aimed to study and analyse the bitcoin market as a whole .we try to reproduce the main stylized facts present in the real bitcoin market , such as the autocorrelation and the cumulative distribution function of the price absolute returns .the model proposed simulates the bitcoin transactions , by implementing a mechanism for the formation of the bitcoin price , and a specific behavior for each typology of trader .the paper is organized in the following . in section [ sec:1 ]we discuss other works related to this paper , in section [ sec:2 ] we present our model in detail ; section [ sec:6 ] deals with the calibration of the model , and with the values given to several parameters of the model .section [ sec:7 ] presents the results of the simulations , including an analysis of bitcoin real prices .the conclusions of the paper are reported in section [ sec:8 ] .the study and analysis of the cryptocurrency market is a relatively new field . in the last years, several papers appeared on this topic given its potential interest , and the many issues related to it .androulaki et al . studied the privacy guarantees of bitcoin when bitcoin is used as a primary currency for the daily transactions .moore , hout et al . , eyal et al . , brezo et al . and hanley analysed promise , perils , risks and issues of digital currencies .bergstra et al . investigated technical issues about protocols and security , and issues about legal , ethical and phychological aspects of cryptocurrencies .singh et al . 
proposed an additional layer of mutual trust between the payer and the paye , e in order to enhance the security associated with fast transactions for the real bitcoin transaction network .only very few attempts have been made so far in order to model the cryptocurrency market as a whole .luther studied why cryptocurrencies failed to gain widespread acceptance using a simple agent model .the proposed a model in which crypto - anarchists , computer gamers , tech savy and black market agents derive a specific utility by using the fiat currency or the crypto currency .the utility s value varies with the typology of currency and traders .it takes into account the network related benefits from using the same money as other agents , the benefits unrelated to networks , and the switching costs that incur to switch to the alternative currency .the author showed that cryptocurrencies like bitcoins can not generate widespread acceptance in absence of significant monetary instability , or of government support , because of the high switching costs and of the importance of the network effects .considering that hundreds of cryptocurrencies have be already proposed on the internet , and that most of them are waiting for acceptance , bornholdt and sneppen proposed a model based on moran process to study the cryptocurrencies able to emerge .this model simulate the interchange between several markets where different cryptocurrencies are traded .in particular , the authors simulate the agent trading , and the mining of new coins at a constant rate , and the communications among traders .they showed that all the crypto - currencies are innately interchangeable , and that the bitcoin currency in itself is not special , but may rather be understood as the contemporary dominating crypto - currency that might be replaced by other currencies .since very few works have been made to model the crypto market , in this paper we propose a complex agent - based model to study the cryptocurrencies market as a whole and to reproduce the main stylized facts present in this market , such as autocorrelation and distribution function of the price absolute returns .our model is inspired by artificial financial market models .these models are stylized heterogeneous agent models ( hams ) and reproduce the real functioning of markets , trying to explain the main stylised facts observed in financial markets , such as the fat - tailed distribution of returns , the volatility clustering , and the unit - root property .many works have been published on this topic .lebaron offered a first review of the work appeared in this field .more recently , chakraborti et al . , and chen et al . offered other , updated reviews .palmer et al . and arthur et al . proposed an artificial markets combining market trading mechanisms with inductive agent learning . in the 2000s, researchers at the university of genoa and cagliari developed the genoa artificial stock market ( gasm ) .in particular , raberto et al . proposed an agent - based artificial financial market that , through a realistic trading mechanism for price formation , is able to reproduce some of the main stylised facts observed in real financial markets .raberto et al . , and cincotti et al . 
studied the long - run wealth of traders characterized by different trading strategies .moreover , raberto et al presented an extension of the gasm including a limit order book mechanism for price formation .they demonstrated that the main stylized facts in financial market can be reproduced as a consequence of the limit order book , not introducing any assumption on agents behavior .alfarano et al . , starting from the study of relatively complicated agent - based models which do not allow for analytical solutions , proposed a closed - form model that gives rise to realistic behavior of the resulting time series , like fat tails of returns and temporal dependence of volatility .liua et al . developed two simple models to investigate important statistical features of stock price series .with the first model , the authors found that the clearing house microstructure can explain fat tail , excess volatility and autocorrelation phenomena of high - frequency returns . with the second model the authors investigated the effects of agents behavioral assumptions on daily returns .ponta et al . studied the statistical properties of prices and returns by using a heterogeneous agent model .they simulate an artificial stock market where agents are modelled as nodes of sparsely connected graphs .the agents own an amount of cash and stocks , share information by means of interactions determined by graphs and trade risky assets ; whereas a central market maker determines the price at the intersection of the demand and supply curves .ponta et al . proposed a heterogeneous agent model for the simulation of high - frequency market data by using the genoa artificial stock market . in this market , agents have zero intelligence and trade a risky asset , the price being cleared by means of a limit order book in which the waiting - time distribution between consecutive orders follows a weibull distribution .the authors demonstrated that this mechanism is able to reproduce fat - tailed distributions of returns without ad - hoc behavioral assumptions on agents .recently , feng et al . combine the agent - based approach with the stochastic process approach and propose a model based on the empirically proven behavior of individual market participants that quantitatively reproduces fat - tailed return distributions and long - term memory properties .westerhoff and franke propose a model using three groups of traders : chartists , fundamentalists and investors , demonstrateing that this combination , together with a simple asset pricing model , can contribute to explaining the stylized facts of the daily returns of financial markets .the proposed model presents an agent - based artificial cryptocurrency market in which heterogeneous agents buy or sell cryptocurrency . in particular, we used the bitcoin market as a reference to calibrate the model and to compare the results .for the same reason , the fiat currency is referred as `` dollars '' , or `` , whereas the total cash of these traders is about 8.7 million , 100 . 
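to convey the flavour of the interaction between the two trader populations , the toy loop below lets random traders and chartists move a price through a simple excess - demand rule . this is a deliberately simplified illustration written by us : it replaces the paper's order - book clearing and calibrated pareto endowments with arbitrary parameter values , so it should not be read as the model's implementation .

```python
import numpy as np

rng = np.random.default_rng(7)

N_RANDOM, N_CHARTIST, T = 160, 40, 1000
price = [100.0]                       # arbitrary starting price in "dollars"

# Each trader holds some cash and some coins (a toy stand-in for the paper's
# Pareto-distributed endowments; holdings are not updated in this sketch).
cash = rng.pareto(1.5, N_RANDOM + N_CHARTIST) * 50 + 50
coins = rng.pareto(1.5, N_RANDOM + N_CHARTIST) * 0.5 + 0.5

for t in range(1, T):
    p = price[-1]
    demand = 0.0
    # Random traders: buy or sell a small random fraction with equal probability.
    for i in range(N_RANDOM):
        if rng.random() < 0.5:
            demand += 0.1 * rng.random() * cash[i] / p      # coins demanded
        else:
            demand -= 0.1 * rng.random() * coins[i]         # coins offered
    # Chartists: follow the recent trend of the price window.
    window = price[max(0, t - 20):]
    trend = (window[-1] - window[0]) / window[0]
    for j in range(N_RANDOM, N_RANDOM + N_CHARTIST):
        if trend > 0.01:
            demand += 0.1 * cash[j] / p
        elif trend < -0.01:
            demand -= 0.1 * coins[j]
    # Toy price clearing: the price reacts to excess demand (not an order book).
    price.append(p * (1 + 0.001 * np.tanh(demand)))

returns = np.diff(np.log(price))
print("mean |return|:", np.abs(returns).mean())
```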
these values are much lower than the real bitcoin price spikes . we attribute this to the fact that the intrinsic mean - reverting behavior of the simulated model prevents prices from spiking as in the real market . all other statistical properties of prices and returns , however , are reproduced quite well by our model . we applied the augmented dickey - fuller test to the series of simulated bitcoin daily prices . the results for one of the simulations are as follows . the statistic for the null hypothesis is , and its corresponding critical values at the 1% , 5% , and 10% levels with 830 observations are , and respectively . at these levels , also for the simulated data we cannot reject the null hypothesis . all simulations yield similar results . the log - log plot of the ccdf of the price absolute returns in a simulated bitcoin market is shown in fig. [ fig : ccdfsim ] , together with its real counterpart . the linear behavior in the tail , which denotes a power law , is evident . the two curves are quite similar , though the real ccdf has a slightly broader tail . the autocorrelations of the simulated price returns and absolute returns at different time lags are shown in figs. [ fig : acfccdf ] ( a ) and ( b ) , respectively . the autocorrelation of raw returns ( a ) is much lower than that of absolute returns ( b ) . both are very similar to those of real bitcoin prices , shown in fig. [ fig : realacf ] ( a ) and ( b ) . this confirms the presence of volatility clustering also in the simulated price series . as described in the previous sections , each trader owns a specific amount of dollars and bitcoins , so the amount of bitcoins to be traded depends on these quantities . the total wealth of the -th trader at time is defined as the sum of her cash plus her bitcoins multiplied by the current price , as reported in eq. [ eq.wealth ] . of course , this quantity varies with the price . some data about the distribution of wealth among the different populations of traders , the limit prices , and the average amount of bitcoins exchanged in each transaction are reported in the next figures . figs. [ fig : cash ] ( a ) and ( b ) show the total amount of bitcoins and cash owned by both populations , for all time steps . the `` spikes '' visible in fig. [ fig : cash ] ( a ) in the bitcoin endowment of random traders are an artifact due to the removal of traders . when a trader is removed from the market , she sells all her bitcoins through a market order . however , if there are not enough matching buy orders , these bitcoins `` disappear '' from the market until some trader buys them ; hence the spikes . during the simulation shown there is no chartist in a similar condition .
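the stylized - fact diagnostics discussed above ( augmented dickey - fuller test , autocorrelation of raw and absolute returns , ccdf of absolute returns ) can be computed on any daily price series along the following lines . the sketch assumes the statsmodels package is available and feeds in a placeholder random - walk series where the simulated prices would go .

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, acf

def price_diagnostics(prices, max_lag=50):
    """Stylized-fact diagnostics for a daily price series."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(np.log(prices))

    # Augmented Dickey-Fuller test on log prices (null: unit root / random walk).
    adf_stat, p_value = adfuller(np.log(prices))[:2]

    # Autocorrelation of raw and absolute returns; volatility clustering shows
    # up as a slowly decaying autocorrelation of |returns|.
    acf_raw = acf(returns, nlags=max_lag, fft=True)
    acf_abs = acf(np.abs(returns), nlags=max_lag, fft=True)

    # Empirical complementary CDF of absolute returns; on a log-log scale an
    # approximately straight tail indicates power-law behaviour.
    x = np.sort(np.abs(returns))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return adf_stat, p_value, acf_raw, acf_abs, x, ccdf

# Usage with a placeholder random-walk series standing in for simulated prices.
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.05, 800)))
adf_stat, p, acf_raw, acf_abs, x, ccdf = price_diagnostics(prices)
print(f"ADF statistic {adf_stat:.2f} (p={p:.2f}), "
      f"lag-10 ACF raw={acf_raw[10]:.3f}, abs={acf_abs[10]:.3f}")
```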
the sum of bitcoins owned by random and chartist traders follows the trends shown in fig. [ fig : basicdata ] ( b ) . it is worth noting that the total amount of bitcoins owned by chartists remains fairly constant . fig. [ fig : cash1 ] shows the total wealth of both populations , as defined in eq. [ eq.wealth ] . the total wealth is heavily influenced by the price behavior . we also computed the ratio between the total wealth of random traders and that of chartists , finding that it fluctuates around a constant value equal to the ratio of the total initial wealth of all random traders over the total initial wealth of chartists . this ratio is on average equal to the ratio of the number of random traders over chartist traders , but it can differ substantially from the average , due to the pareto distribution of traders' wealth . in fact , in different runs there can be relatively large differences between the total wealth of the various kinds of traders , whereas the total wealth of all traders is constant . the relative invariance of this ratio during the simulation means that the chartist strategy does not outperform the random strategy ; the two are roughly equivalent . in order to assess the robustness of our model and the validity of our statistical analysis , we repeated 100 simulations with the same initial conditions but different seeds of the random number generator . figs. [ fig : averageprice ] ( a ) and ( b ) report the average and the standard deviation of the simulated price , taken over all 100 simulations . the average price follows quite closely the number of traders present in the market , reported in fig. [ fig : basicdata ] . this is reasonable because , given the slowly varying amount of bitcoins , the price should be roughly proportional to the cash available in the market to buy them , which is in turn proportional to the number of traders . the standard deviation of prices is roughly equal to of the average , meaning that the 100 simulations are quite consistent as regards price behavior . figs. [ fig : acf ] ( a ) and ( b ) show the raw return autocorrelation functions ( acf ) of two simulations extracted from the 100 performed . both acfs show the same behavior as reported in fig. [ fig : acfccdf ] ( a ) . all 100 acfs behave similarly . the acf of absolute returns also shows a behavior similar to that reported in fig. [ fig : acfccdf ] ( b ) in all the considered cases .
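the aggregation over the 100 independent runs can be organized as in the short sketch below , in which `run_simulation` is only a placeholder standing in for the full market simulation and the only quantity that changes between runs is the seed of the random number generator .

```python
import numpy as np

def run_simulation(seed, T=800):
    """Placeholder for the market simulation; returns a daily price path."""
    rng = np.random.default_rng(seed)
    return 100 * np.exp(np.cumsum(rng.normal(0, 0.05, T)))

# One price path per seed, stacked into a (runs x days) array.
runs = np.array([run_simulation(seed) for seed in range(100)])
mean_price = runs.mean(axis=0)   # average simulated price, day by day
std_price = runs.std(axis=0)     # dispersion across the 100 runs, day by day
print(std_price.mean() / mean_price.mean())
```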
figs .[ fig : acfsomeccdf ] show the cccdf of the price absolute returns in the real case and for ten simulations extracted from the 100 simulations quoted .also in this case , the power - law behavior is patent in all reported simulations , though with a greater slope than in the real case .overall , all performed simulations presented a consistent behavior , with no significant variations of statistical properties of the bitcoin prices and of traders endowment of bitcoins and cash ., scaledwidth=45.0% ]in this paper , we present an heterogenous agent model of the bitcoin market , accurately modeling many of the characteristics of the real market .namely , the model includes different trading strategies , an initial distribution of wealth following pareto law , a realistic trading and price clearing mechanism based on an order book , the increase with time of the total number of bitcoins due to mining , the arrival of new traders interested in bitcoins .the model is simulated and its outputs especially the bitcoin prices are analysed and compared to real prices .the main result of the model , besides being to our knowledge the first model of a cryptocurrency market following the artificial financial market approach , is the fact that some key stylized facts of bitcoin real price series are very well reproduced .the computational experiments performed produce price series for which we are unable to reject the hypothesis that they follow a random walk .the autocorrelation of raw returns is very low for all time lags , whereas the autocorrelation of absolute returns is much higher , confirming the presence of volatility clustering .also , the ccdf of the absolute returns exhibit a power - law behavior in its tail , like that of real absolute returns .note that the results obtained are quite sensitive to the traders behavior .we found that the presence of different traders populations , and hence the trading between random traders and chartists , is essential to reproduce the autocorrelation and the ccdf of the returns of the bitcoin price .in particular , the chartists behaviour is essential to reproduce autocorrelations of the returns that confirm periods of quiescence and turbulence in the simulated bitcoin price .alfarano s. , lux t. , wagner f. , time variation of higher moments in a nancial market with heterogeneous agents : an analytical approach , journal of economic dynamics & control 32 , 101136 ( 2008 ) androulaki e. , karame g. , roeschlin m. , scherer t. and capkun s. , evaluating user privacy in bitcoin , proceedings of the financial cryptography and data security conference ( fc ) ( 2013 ) arthur w.b . ,holland j.h . , lebaron b. , palmer r. and tayler p. , asset pricing under endogenous expectations in an artificial stock market , the economy as an evolving complex system ii , ser .sfi studies in the sciences of complexity , arthur w.b , durlauf s.n . , and lane d.a .( eds ) , vol .addison wesley longman , pagg .15 - 44 ( 1997 ) bergstra j. a. and leeuw dk . , questions related to bitcoin and other informational money , corr , vol . 1305.5956 ( 2013 )bornholdt s. and sneppen k. , do bitcoins make the world go round ? on the dynamics of competing crypto - currencies ( 2014 ) brezo f. and bringas p. g. , issues and risks associated with cryptocurrencies such as bitcoin , the second international conference on social eco - informatics ( 2012 ) chakraborti a. , toke i.m ., patriarca m. and abergel f. 
, econophysics review : ii .agent - based models , quantitative finance 11:7 , 1013 - 1041 ( 2011 ) chen s.h . ,chang c.l . anddu y.r . , agent - based economic models and econometrics , the knowledge engineering review , 27,187 - 219 ( 2012 ) cincotti s. , focardi s. , marchesi m. and raberto m. , who wins ?study of long - run trader survival in an artificial stock market , physica a : statistical mechanics and its applications , elsevier , vol .324(1 ) , pagg .227 - 233 ( 2003 ) eyal i. and sirer e. , majority is not enough : bitcoin mining is vulnerable , corr , vol .1311.0243 ( 2013 ) feng l. , podobnik b. l. , b. , preis t. , and stanley h.e . ,linking agent - based models and stochastic models of financial markets , pnas 109:22 , 83888393 ( 2012 ) hanley b.p . , the false premises and promises of bitcoin , corr ( 2013 ) hout m. c. v. and bingham t. , responsible vendors , intelligent consumers : silk road , the online revolution in drug trading , international journal of drug policy , elsevier , vol . 25 , pagg . 183 - 189 ( 2014 ) lebaron b. , agent - based computational finance , handbook of computational economics , agent - based computational economics , vol. 2 ( 2006 ) levy m. and solomon s. , new evidence in the power - law distribution of wealth , physica a , 242(12 ) , 9094 ( 1997 ) .liua x. , gregorc s. , yang j. , the effects of behavioral and structural assumptions in articial stock market , physica a 387 25352546 ( 2008 ) luthe w. , cryptocurrencies , network effects , and switching costs , mercatus center working paper no .13 - 17 ( 2013 ) lux t. , marchesi m. , volatility clustering in financial markets : a microsimulation of interacting agents , international journal of theoretical and applied finance 3:4 675 - 702 ( 2000 ) moore t. , the promise and perils of digital currencies , international journal of critical infrastructure protection , agent - based computational economics , pagg .147 - 149 ( 2013 ) nakamoto s. , bitcoin : a peer - to - peer electronic cash system , www.bitcoin.org .newman m.e.j ., power laws , pareto distributions and zipf s law , contemporary physics 46:5 323 - 351 ( 2005 ) .pagan a. , the econometrics of financial markets , j. empirical finance 3 15 - 102 ( 1996 ) .palmer r. , arthur w.b ., holland j.h ., lebaron b. and tayler p , artificial economic life : a simple model of a stock market , physica d , vol . 75 , pagg . 264 - 274 ( 1994 ) ponta l. , pastore s. , cincotti s. , information - based multi - assets artificial stock market with heterogeneous agents , nonlinear analysis : real world applications , vol .12 , pagg .1235 - 1242 ( 2011 ) ponta l. , scalas e. , raberto m. , and cincotti s. , statistical analysis and agent - based microstructure .modeling of high - frequency financial trading , ieee journal of selected topics in signal processing , vol .6 , no . 4 ( 2012 ) raberto m. and cincotti s. and focardi s. and marchesi m. , agent - based simulation of a financial market , physica a , vol.299(1 ) , pagg .319 - 327 ( 2001 ) raberto m. and cincotti s. and focardi s. and marchesi m. , traders long - run wealth in an artificial financial market , society for computational economics , computing in economics and finance , vol . 301 ( 2002 ) raberto m. and cincotti s. and dose c. and focardi s. and marchesi m. , price formation in an artificial market : limit order book versus matching of supply and demand , nonlinear dynamics and heterogenous interacting agents , springer - verlag , berlin ( 2005 ) singh p. and chandavarkar b. r. 
and arora s. and agrawal n. , performance comparison of executing fast transactions in bitcoin network using verifiable code execution , second international conference on advanced computing , networking and security ( 2013 ) takayasu , h. , fractals in the physical sciences , john wiley and sons , new york ( 1990 ) .westerhoff f. , franke r. , converse trading strategies , intrinsic noise and the stylized facts of financial markets , quantitative finance , 12:3 , 425 - 436 ( 2012 ) .
this paper presents an agent - based artificial cryptocurrency market in which heterogeneous agents buy or sell cryptocurrencies , in particular bitcoins . in this market there are two typologies of agents , random traders and chartists , which interact with each other by trading bitcoins . each agent is initially endowed with a finite amount of crypto and/or fiat cash and issues buy and sell orders according to her strategy and resources . the number of bitcoins increases over time at a rate proportional to the real one , even though the mining process is not explicitly modelled . the proposed model is able to reproduce some of the statistical properties of the price absolute returns observed in the real bitcoin market . in particular , it reproduces the autocorrelation of the absolute returns and their cumulative distribution function . the simulator has been implemented using object - oriented technology and can be considered a valid starting point to study and analyse the cryptocurrency market and its future evolutions . * keywords * : artificial financial market , cryptocurrency , bitcoin , heterogeneous agents , market simulation
it has been hypothesized that , in many areas of the brain , having different brain functionality , repeatable precise spatiotemporal patterns of spikes play a crucial role in coding and storage of information .temporally structured replay of spatiotemporal patterns have been observed to occur during sleep , both in the cortex and hippocampus , and it has been hypothesized that this replay may subserve memory consolidation . the sequential reactivation of hippocampal place cells , corresponding to previously experienced behavioral trajectories , has been observed also in the awake state ( awake replay ) , namely during periods of relative immobility .awake replay may reflect trajectories through either the current environment or previously , spatially remote , visited environments .a possible interpretation is that spatiotemporal patterns , stored in the plastic synaptic connections of hippocampus , are retrieved when a cue activates the emergence of a stored pattern , allowing these patterns to be replayed and then consolidated in distributed circuits beyond the hippocampus .cross - correlogram analysis revealed that in prefrontal cortex the time scale of reactivation of firing patterns during post - behavioral sleep was compressed five- to eightfold relative to waking state , a similar compression effect may also be seen in primary visual cortex .internally generated spatiotemporal patterns have also been observed in the rat hippocampus during the delay period of a memory task , showing that the emergence of consistent pattern of activity may be a way to maintain important information during a delay in a task . among repeating patterns of spikes a central roleis played by phase - coded patterns , i.e. patterns with precise relative phases of the spikes of neurons participating to a collective oscillation , or precise phases of spikes relatively to the ongoing oscillation .first experimental evidence of the importance of spike phases in neural coding was observed in experiments on theta phase precession in rat s place cells , showing that spike phase is correlated with rat s position .recently , the functional role of oscillations in the hippocampal - entorinal cortex circuit for path - integration has been deeply investigated , showing that place cells and grid cells form a map in which precise phase relationship among units plays a central role .in particular it has been shown that both spatial tuning and phase - precession properties of place cells can arise when one has interference among oscillatory cells with precise phase relationship and velocity - modulated frequency .further evidence of phase coding comes from the experiments on spike - phase coding of natural stimuli in auditory and visual primary cortex , and from experiments on short - term memory of multiple objects in prefrontal cortices of monkeys .these experimental works support the hypothesis that collective oscillations may underlie a phase dependent neural coding and an associative memory behavior which is able to recognize the phase coded patterns .the importance of precise timing relationships among neurons , which may carry information to be stored , is supported also by the evidence that precise timing of few milliseconds is able to change the sign of synaptic plasticity .the dependence of synaptic modification on the precise timing and order of pre- and post - synaptic spiking has been demonstrated in a variety of neural circuits of different species .many experiments show that a synapse can be potentiated or 
depressed depending on the precise relative timing of the pre- and post - synaptic spikes .this timing dependence of magnitude and sign of plasticity , observed in several types of cortical and hippocampal neurons , is usually termed spike timing dependent plasticity ( stdp ) .the role of stdp has been investigated both in supervised learning framework , in unsupervised framework in which repeating patterns are detected by downstream neurons , cortical development , generation of sequences and polychronous activity , and in an associative memory framework with binary units . however, this is the first time that this learning rule has been used to make a if network to work as associative memory for phase - coded patterns of spike , each of which becomes a dynamic attractor of the network .notably , in a phase coded pattern not only the order of activation matters , but the precise spike timing intervals between units .we therefore present a possibility to build a circuit with stable phase relationships between the spikes of a population of if neurons , in a robust way with respect to noise and changes of frequency .the first important result of the paper is the measurement of the storage capacity of the model , i.e. the maximum number of distinct spatiotemporal patterns that can be stored and selectively retrieved , since it has never been computed in a spiking model for spatiotemporal patterns .several classic papers ( see and references therein ) have focused on storage capacity of binary model with static binary patterns , and much efforts have been done to use more biophysical models and patterns , but , up to our knowledge , without any calculation of the storage capacity of spatiotemporal patterns in if spiking models .notably , by introducing an order - parameter which measures the overlap between phase coded spike trains , we are able quantitatively measure of the overlap between the stored pattern and the replay activity , and to compute the storage capacity as a function of the model parameters . + another important result is the study of the different regimes observed by changing the excitability parameters of the network . in particular , we find that near the region of the parameter space where the network tends to become unresponsive and silent there is a regime in which the network responds selectively to cue presentation with a short transient replay of the phase - coded pattern . differently , in the region of higher excitability , the patterns are replayed persistently and selectively , and eventually with more then one spike per cycle .the paper is organized as follows : section [ sec - model ] introduces the leaky - integrate - and - fire ( if ) neuronal model ; section [ sec - connections ] describes the stdp learning rule used to design the connections ; in section [ sec - dyn ] we study the emergence of collective dynamics and introduce an order parameter to measure the overlap between the collective dynamics and the stored phase coded patterns ; section [ sec - capacity ] reports on the storage capacity of the network , i.e. 
the maximum number of patterns that can be stored and selectively retrieved in the network ; the parameter space and the different working regions are also investigated in section [ sec - capacity ] ; in section [ sec - noise ] we study the robustness of the retrieval dynamics wrt noise and heterogeneity ; section [ sec - oscillators ] reports on the implication of this model in the framework of oscillatory interference model of path - integration ; summary and discussion are outlined in section [ sec - summary ] .we consider a recurrent neural network with possible connections , where is the number of neural units .the connections are designed during the learning mode , when the connections change their efficacy according to a learning rule inspired to the stdp .after the learning stage , the connections values are frozen , and the collective dynamics is studied .this distinction in two stages , plastic connection in the learning mode and frozen connections in the dynamics mode , is a useful framework to simplify the analysis .it also finds some neurophysiological motivations in the effects of neuromodulators , such as dopamine and acetylcholine , which regulate excitability and plasticity .the single neuron model is a leaky integrate - and - fire ( if ) .this simple choice , with few parameters for each neuron , is suitable to study the emergence of collective dynamics and the diverse regimes of the dynamics , instead of focusing on the complexity of the neuronal internal structure .we use the spike response model ( srm ) formulation of the if model , which allows us to use an event - driven programming and makes the numerical simulations faster with respect to a differential equation formulation . in this picture , the postsynaptic membrane potential is given by : where are the synaptic connections , describes the response kernel to incoming spikes on neuron , and the sum over runs over all presynaptic firing times following the last spike of neuron .namely , each presynaptic spike , with arrival time , is supposed to add to the membrane potential a postsynaptic potential of the form , where \theta(t-{\hat t}_j ) \label{tre}\ ] ] where is the membrane time constant ( here 10 ms ) , is the synapse time constant ( here 5 ms ) , is the heaviside step function , and k is a multiplicative constant chosen so that the maximum value of the kernel is 1 .the sign of the synaptic connection sets the sign of the postsynaptic potential s change , so there s inhibition for negative and excitation for positive .when the membrane potential exceeds the spiking threshold , a spike is scheduled , and the membrane potential is reset to the resting value zero .we use the same threshold for all the units , except in sec [ sec - noise ] where different values are used and the robustness w.r.t .the heterogeneity is studied .clearly the spiking threshold of the neurons is related to the excitability of the network , an increase of the value of is also equivalent to a decrease of k , the size of the unitary postsynaptic potential , or , equivalently to a global decrease in the scaling factor of synaptic connections .+ numerical simulations of this dynamics are performed for a network with stored patterns , where connections are determined via a learning rule described in the next paragraph .we found that a few number of spikes , given a in proper time order , are able to selectively induce the emergence of a persistent collective spatiotemporal pattern , which replays one of the stored pattern ( see sec [ sec - 
dyn ] ) .in a learning model previously introduced in , the average change in the connection , occurring in the time interval ] , ^{-1} ] , where the value represents the time shift of the spike of unit with respect to the collective rhythm , i.e the time delay among units . however , as for the hopfield model , the patterns stored in the network are attractors of the dynamics , when do not exceeds storage capacity , and the dynamics during the retrieval is robust with respect to noise .we firstly check robustness w.r.t .input noise , i.e when a poissonian noise is added to the postsynaptic potential given in eq.[if ] .the total postsynaptic potential of each neuron is then given by where is modelled as the times are randomly extracted for each neuron , and are random strengths , extracted independently for each neuron and time .the intervals between times are extracted from a poissonian distribution , while the strength is extracted from a gaussian distribution with mean and standard deviation .the network dynamics during the retrieval of a pattern in presence of noise is shown in fig .[ noise ] with different levels of noise ( , and in a , b , c , d respectively ) .results show that when the noise is not able to move the dynamics out of the basin of attraction , the errors do not sum up , and the phase relationship is preserved over time ( see fig .[ noise]a , b , c ) . if the input noise is very high , as in the example of fig .[ noise]d , the dynamics moves out of the basin of attraction . + in order to see the effects of input noise level used in fig [ noise]cd , we report in fig .[ noise]ef the network dynamics when the pattern retrieval is not initiated ( ) .in particular , fig .[ noise]e shows that the noise level used in fig .[ noise]c is strong enough to generate spontaneous random activity in absence of the initial triggering , but is not sufficient to destroy the attractive dynamics during a successful retrieval . as in the hopfield model , errors do not sum up and the dynamics spontaneously goes back to the retrieved phase - coded pattern for all the perturbations that leave the system inside the basin of attraction . respectively ) , and pattern is triggered with as in previous cases . only in dthe level of noise is too high and the system goes out of the basin of attraction .( e , f ) for comparison , the dynamics , when the retrieval is not triggered ( ) , is shown in subplot e , f in presence of the same noise used in c , d .figure e shows that the noise used in c usually affects strongly the dynamics of the network , however if the collective oscillation is retrieved the system is robust wrt noise .thresholds in all figures are , , and synaptic connections are build learning phase - patterns at . , width=340 ] lastly , the robustness of retrieval w.r.t .heterogeneity of the spiking thresholds is investigated .this analysis can be carried out by using a different value of spiking threshold for each neuron : where is a random number extracted from a uniform distribution in ] .+ the role of stdp in the formation of sequences has been recently investigated in .these studies have shown how it s possible to form long and complex sequences , but they did not concern themselves on how it s possible to learn and store not only the order of activation in a sequence , but the precise relative times between spikes in a closed sequence , i.e. a phase - coded pattern . in our model not only the order of activation is preserved , but also the precise phase relationship among units . 
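returning to the neuron model of section [ sec - model ] , the response kernel of eq . ( [ tre ] ) with the 10 ms membrane and 5 ms synaptic time constants given above can be evaluated directly , normalizing the constant k so that the kernel peaks at 1 as stated in the text . the sketch below also shows the spike - response sum over presynaptic spikes that follow the neuron's last reset ; the function names and the toy spike trains are ours , and this is not the authors' event - driven implementation .

```python
import numpy as np

TAU_M, TAU_S = 10.0, 5.0   # membrane and synaptic time constants (ms)

# Peak time of exp(-s/tau_m) - exp(-s/tau_s), used to normalize the kernel to 1.
_S_PEAK = (TAU_M * TAU_S / (TAU_M - TAU_S)) * np.log(TAU_M / TAU_S)
_K = 1.0 / (np.exp(-_S_PEAK / TAU_M) - np.exp(-_S_PEAK / TAU_S))

def eps_kernel(s):
    """Postsynaptic response kernel of eq. (tre): zero for s < 0 and a
    difference of exponentials, normalized to a unit peak, for s >= 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0,
                    _K * (np.exp(-s / TAU_M) - np.exp(-s / TAU_S)),
                    0.0)

def membrane_potential(t, J_row, spike_times, last_reset):
    """Spike-response-model potential of one neuron at time t (ms): the sum over
    presynaptic neurons j of J_ij times the kernels of the spikes of j that
    arrived after the neuron's own last spike (its last reset)."""
    u = 0.0
    for J_ij, times_j in zip(J_row, spike_times):
        for t_hat in times_j:
            if last_reset < t_hat <= t:
                u += J_ij * eps_kernel(t - t_hat)
    return u

# Toy usage: two presynaptic neurons, one excitatory and one inhibitory.
J_row = [0.8, -0.3]
spike_times = [[2.0, 14.0], [9.0]]
print(membrane_potential(t=20.0, J_row=J_row, spike_times=spike_times,
                         last_reset=0.0))
```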
the tendency to synchronization of units is avoided in our model , without need to introduce delays or adaptation , due to the balance between excitation and inhibition that is in the connectivity of large networks when the phase coded pattern with random phases is learned using the rule in eqs .( 5 - 8 ) . in our ruleall the connections , both positive and negative , scale with the time of presentation of patterns , keeping always a balance . indeed , since ( 1 ) the stored phase are uniformly distributed in and ( 2 ) the learning window has the property , then the connectivity matrix in eq .( 7 ) has the property that the summed excitation and the summed inhibition are equal in the thermodynamic limit ( indeed they are of order unity , while their difference is of order ) .the task of storing and recalling phase - coded memories has been also investigated in in the framework of probabilistic inference . while we study the effects of couplings given by eqn.(5 ) in a network of if neurons , the paper studies this problem from a normative theory of autoassociative memory , in which real variable of neuron represents the neuron spike timing with respect to a reference point of an ongoing field potential , and the interaction among units is mediated by the derivative of the synaptic plasticity rule used to store memories .the model proposed here is a mechanism which combines oscillatory and attractor dynamics , which may be useful in many models of path - integration , as pointed out in sec .( [ sec - oscillators ] ) .our learning model offers a if circuit able to keep robust phase - relationship among cells participating to a collective oscillation , with a modulated collective frequency , robust with respect to noise and heterogeneities .notably the frequency of the collective oscillation in our circuit is not sensible to the single value of the threshold of each unit , but to the average value of the threshold of all units , since all units participate to a single collective oscillating pattern which is an attractor of the dynamics .recently there is renewed interest in reverberatory activity and in cortical spontaneous activity whose spatiotemporal structure seems to reflect the underlying connectivity , which in turn may be the result of the past experience stored in the connectivity .+ similarity between spontaneous and evoked cortical activities has been shown to increase with age , and with repetitive presentation of the stimulus .interestingly , in our if model , in order to induce spontaneous patterns of activity reminiscent of those stored during learning stage , few spikes with the right phase relationship are sufficient .it means that , even in absence of sensory stimulus , a noise with the right phase relationships may induce a pattern of activity reminiscent of a stored pattern .therefore , by adapting the network connectivity to the phase - coded patterns observed during the learning mode , the network dynamics builds a representation of the environment and is able to replay the patterns of activity when stimulated by sense or by chance .this mechanism of learning phase - coded patterns of activity is then a way to adapt the internal connectivity such that the network dynamics have attractors which represent the patterns of activity seen during experience of environment .nadasdy z , hirase h , czurko a , csicsvari j , buzsaki g. 
1999 .replay and time compression of recurring spike sequences in the hippocampus .the journal of neuroscience 19 : 9497 - 9507 .ji d wilson ma .2007 . coordinated memory replay in the visual cortex and hippocampus during sleep .nature neuroscience 10 : 100 - 107 .carien s. lansink , pieter m. goltstein , jan v. lankelma , bruce l. mcnaughton , cyriel m. a. pennartz 2009 .hippocampus leads ventral striatum in replay of place - reward information .plos biol 7(8 ) : e1000173 .diba k , buzsaki g. 2007 .forward and reverse hippocampal place - cell sequences during ripples .nat neurosci 10 : 1241 - 1242 .davidson tj , kloosterman f , wilson ma .2009 hippocampal replay of extended experience .neuron 63 : 497 - 507 .a. c. welday , i. g. shlifer , m. l. bloom , k. zhang , and hugh t. blair 2011 .cosine directional tuning of theta cell burst frequencies : evidence for spatial coding by oscillatory interference the journal of neuroscience , november 9 , 2011 31(45):1615716176 1615 anishchenko a , treves a. 2006 .autoassociative memory retrieval and spontaneous activity bumps in small - world networks of integrate and fire neurons .journal of physiology - paris 100 : 225 - 236 + battaglia fp , treves a. 1998 . stable and rapid recurrent processing in realistic autoassociative memories .neural computation 10 : 431 - 450 .debanne d , gahwiler bh , thompson sm .1998 . event driven programming long - term synaptic plasticity between pairs of individual ca3 pyramidal cells in rat hippocampal slice cultures .j physiol 507 : 237 - 247 .shouval hz , wang ss , wittenberg gm . 2010 .spike timing dependent plasticity : a consequence of more fundamental learning rules .front comput neurosci 4 : 19 .graupner m , brunel n. 2010 . mechanisms of induction and maintenance of spike - timing dependent plasticity in biophysical synapse models .front comput neurosci 4 : 136 .yoshioka m , scarpetta s , marinaro m. 2007 .spatiotemporal learning in analog neural networks using spike - timing - dependent synaptic plasticity .phys rev e 75 : 051917 .scarpetta s , zhaoping l , hertz j. 2002 .hebbian imprinting and retrieval in oscillatory neural networks .neural computation 14 : 2371 - 96 .scarpetta s , zhaoping l , hertz j. 2001 .spike - timing - dependent learning for oscillatory networks .advances in neural information processing systems 13 , mit press .scarpetta s , marinaro m. , a learning rule for place fields in a cortical model : theta phase precession as a network effect , hippocampus volume 15 , issue 7 , pages 979989 , 2005 silvia scarpetta , masahiko yoshioka and maria marinaro , encoding and replay of dynamic attractors with multiple frequencies analysis of a stdp based learning rule , dynamic brain - from neural spikes to behaviors lecture notes in computer science , 2008 , volume 5286/2008 abarbanel h , huerta r , rabinovich mi .2002 . dynamical model of long - term synaptic plasticity .pnas 99 : 10132 - 10137 .legenstein r , christian nager c , maass w. 2005 . what can a neuron learn with spike - timing - dependent plasticity ?neural computation 17 : 2337 - 2382 .song s , abbot lf .cortical development and remapping through spike timing - dependent plasticity .neuron 32 : 339350 .fiete , i.r . ,senn , w. , wang , c.z.h ., hahnloser , r.h.r .spike - time - dependent plasticity and heterosynaptic competition organize networks to produce long scale - free sequences of neural activity .neuron 65(4 ) , 56376 verduzco - flores so , bodner m , ermentrout b. 2011 . 
a model for complex sequence learning and reproduction in neural populations .journal of computational neuroscience 2011 sep 2 .em izhikevich 2006 .polychronization : computation with spikes .neural computation vol .2 , pages 245 - 282 scarpetta s , giacco f , de candia a. 2011 . storage capacity of phase - coded patterns in sparse neural networks . epl 95 : 28006 .silvia scarpetta , antonio de candia , ferdinando giacco , dynamics and storage capacity of neural networks with small - world topology , frontiers in artificial intelligence and applications volume 226 , 2011 zilli ea , yoshida m , tahvildari b , giocomo lm , hasselmo me . 2009 . evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators .plos comput biol 5 : e1000573 .dangelo e , koekkoek sk , lombardo p , solinas s , ros e , garrido j , schonewille m , de zeeuw ci .timing in the cerebellum : oscillations and resonance in the granular layer .neuroscience 162 : 805 - 815. laura lee colgin , edvard i. moser and may - britt moser 2008 understanding memory through hippocampal remapping trends in neurosciences vol.31 no.9 ringach dl .2009 spontaneous and driven cortical activity : implications for computation .curr opin neurobiol 19 : 439 - 44 .luczak a , maclean jn . default activity patterns at the neocortical microcircuit level .front integr neurosci . 2012;6:30 .lau and g - q .synaptic mechanisms of persistent reverberatory activity in neuronal networks .usa , 102:10333 10338
we study the collective dynamics of a leaky integrate-and-fire network in which precise relative phase relationships of spikes among neurons are stored as attractors of the dynamics and selectively replayed at different time scales. using an stdp-based learning process, we store several phase-coded spike patterns in the connectivity and find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. we introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and we measure the storage capacity. modulation of the spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle while preserving the phase relationships; this allows a coding scheme in which phase, rate and frequency are dissociable. robustness with respect to noise and to heterogeneity of the neuron parameters is also studied, showing that, since the dynamics is a retrieval process, the units preserve stable and precise phase relationships and a single oscillation frequency even in noisy conditions and with heterogeneous internal parameters.
the carrier of genetic information in the nature , the dna molecule , has very peculiar structure .this is a spiral - staircase shaped object , whose steps are made from pairs selected from among only four other kinds of molecules .those are : cytosine ( c ) , guanine ( g ) , adenine ( a ) and thymine ( t ) .there are only two correct kinds of those pairs , not , as one might naively suspect .namely , the cytosine always forms a pair with guanine ( ) , and adenine `` likes '' thymine ( ) .so , if the sequence in a part of dna molecule is something like then the other half of dna s double helix has the structure such a redundancy has not yet been exploited by genetic algorithms , at least the present author failed to find in available literature anything similar in conjunction with this class of optimization algorithms ( or any other class ) . in this paper we will show , that the idea of doubled ( or duplicated ) genetic information may be very useful in fine - tuning of genetic algorithms and in formulating the stopping criteria , which are quite general , independent of the problem under study .many elements of existing genetic algorithms are straightforward computer implementations of the natural evolutionary phenomena .we have an evolving population , usually fixed in size , in which individuals mate , have offspring and are mutated .all those processes are driven by one or more random number generators , that is by purely stochastic forces , and , additionally , by the darwinian rule of _ survival of the fittest ._ it is well known that the crossover processes alone can not guarantee finding the optimal region in a search space , at least when the population is small . without mutations , the _ premature convergence _ is then almost certain , thus revealing the inability of the algorithm to find the desired solution .the mutations cause rapid changes of location in the search space thus making possible to reach an explore the regions , which are at all not accessible without them .the majority of researchers and practitioners prefer rather low rate of mutations .this is because the very frequent mutations turn the genetic algorithm into generic monte carlo procedure , which is to be avoided , since we hope that some kind of intelligence will lead us _ much _faster to the desired solution than completely blind trials .suppose , for simplicity , that the smallest part of the chromosome , called here gene , consists of only a single bit .let this bit has initial value of .the mutation process is then nothing else than negating ( flipping ) this bit .the question we want to examine now is : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ what will be the state of this bit after generations ? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let the probability of bit flip per generation is equal to . we can write where numerates consecutive generations and the number in parenthesis means the state of the bit in question . 
taking into account that for any we can rewrite the first row of relation ( [ master_eq ] ) as p + p_{k}(1)\cdot ( 1-p)\\ & = & p_{k}(1 ) \left(1 - 2 p\right ) + p\end{aligned}\ ] ] or , in shorter form where is the probability that the examined bit is in the state after generations .the recurrence formula ( [ recur ] ) seems very similar to the construction of a simple random number generator , so one might expect that it produces chaotic sequences of numbers .surprisingly ( ? ) this is not the case , as we shall see .consider the behavior of when .it is easy to see , that for ( no mutations ) the sequence is constant ( and thus convergent ) ; it remains at whatever initial state in our case .the other limiting case , , is also easy : now the recurrence relation ( [ recur ] ) simplifies to and our bit permanently oscillates between two possible states there is no convergence . for all other cases , i.e. , the sequence ( [ recur ] ) is always convergent and we have we skip the easy proof .it is interesting , however , that the convergence has oscillatory character when , while for the sequence grows monotonically .there is no chaotic behavior .the case is special again : it produces constant sequence .it is widely believed , that the mutations are essentially wrong thing for any individual .they usually damage the genetic material , sometimes to the extent which makes further existence of the individual impossible .it only rarely happens that the mutated individual is better fitted to its environment than the average one in a given population . on the other handnot all mutations are lethal . in the well known penna , model of biological evolution , for example, the subsequent mutations are accumulated in every chromosome .the older the chromosome , the more mutations it carries and finally dies either after single lethal mutation or due to the accumulation of many less damaging defects ( we do not discuss here the so called verhulst factor describing the decrease of the population size due to overcrowding and thus the limited accesss to necessary resources by any single individual ) . if no replication occurs , then the entire population is eventually extinct .by contrast , the `` small '' changes in the genotypes are silently transmitted to the offspring chromosomes . in the nature , however , an error correcting capability is at work .the double helix structure of the dna strand limits the proliferation of defective genes .the two parts of dna ( and rna as well ) are not independent , but complementary . after unzipping , just before the replication , only one strand of dna contains the distorted genetic information , while the other sequence is correct . therefore only % of offspring dna helices will be damaged .the idea is to use in genetic algorithms the chromosomes , which are `` doubled '' , i.e. which consist of regular , well known part ( `` visible '' ) and the other ( `` invisible '' ) parallel structure , of identical length , which is initially the exact negation of the first part , as below : we do not mark any gene boundaries here .a single gene may consist of one or more bits ; the lengths of consecutive genes need not to be equal to each other .the chromosome becomes mutated when at least one of its bits is flipped . 
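the convergence claims above can be checked with a few lines of python; the per-generation flip probabilities used here are illustrative choices:

def flip_state_probability(p, k_max, p0=0.0):
    # iterate p_{k+1} = p_k * (1 - 2p) + p, the probability that a bit initially in
    # the unflipped state is found flipped after k generations
    probs = [p0]
    for _ in range(k_max):
        probs.append(probs[-1] * (1.0 - 2.0 * p) + p)
    return probs

for p in (0.05, 0.45, 0.75, 1.0):
    p_final = flip_state_probability(p, 60)[-1]
    if p == 1.0:
        kind = "never converges (permanent oscillation)"
    elif p > 0.5:
        kind = "oscillatory convergence to 1/2"
    elif p == 0.5:
        kind = "constant at 1/2"
    else:
        kind = "monotonic convergence to 1/2"
    print(f"p = {p:4.2f}  p_60 = {p_final:.4f}  {kind}")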
the genetic material , representing trial solution of the original problem , is placed in the visible part .every mutation is reflected in the state of this very part , the invisible part being completely insensitive to mutations .quite contrary , the crossover operation is performed on both parts , mixing independently two visible and two invisible parts of the involved chromosomes , at the same crossover point(s ) .it is easy to see , that the crossover operation alone , no matter one- or multipoint , will never destroy the complementarity of the visible and invisible part of any chromosome .of course , the fitness factor will always be computed for the visible part only .now the process of aging of the population , generation after generation , may be tracked by looking at the invisible part of every chromosome .we can easily count all the bits in the entire population , which were changed by mutations it is enough to compare the visible and invisible parts of all chromosomes and count the bits , which are identical in both parts .of course , the even number of mutations applied to the same bit will go unnoticed .this is sometimes called _backward evolution_. from the earlier considerations , and assuming that the mutation probability per bit and per generation is small ( more precisely , as is always the case ) , we can conclude that the fraction of ( effectively ) mutated bits in the entire population will be an increasing function of time .the stochastic limit for this fraction is equal to .the value of uniquely separates the genetic algorithms from monte carlo type of optimization .for the fraction of mutated bits initially increases linearly with the number of generations .this observation does not take into account directly neither the structure of the individual chromosome ( number of genes or bits it consists of ) nor the number of chromosomes in the population .such information is hidden in the value of parameter probability of a particular bit being flipped during a single cycle of simulation ( single epoch ) .thus the the number of generations to reach % of flipped bits may be roughly estimated as . in practice ,as the performed simulations show , the fraction of effectively mutated bits reaches the value of for the first time after roughly to times greater number of epochs .the exact analytical formula for as a function of is neither easy to obtain nor very informative . for any , is a polynomial of order in variable .the graph of vs. , however , shows striking similarity to the graph of the expression : \ ] ] this is only an observation based on several evolutionary processes , guessed rather than formally derived .anyway , utilizing this approximation , we can say that after epochs the population is mature and gradually looses its innovative forces , and after epochs it becomes old and practically useless .it is time then to switch to other locally searching routine to improve further the optimal set of unknown parameters , if appropriate .the formula ( [ result ] ) is the main result of this paper .it sets the _limit for the number of necessary generations . in conclusion ,during calculations we should monitor the behavior of the fraction of mutated bits .* when this variable reaches the value of * , * then we may say that every second bit in the population was flipped at least once .this is the other way of saying that the search space has been explored quite thoroughly and no significant improvement of the fitness should be expected*. 
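a minimal sketch of this bookkeeping is given below, assuming the stopping threshold 1/2 suggested by the limit of the recurrence above; the population size, chromosome length and mutation rate are illustrative placeholders, not the values used in the paper.

import math
import random

class DoubledChromosome:
    # the visible part carries the trial solution; the invisible part starts as its
    # exact negation and is never mutated, so a bit that agrees in both parts has
    # been flipped an odd number of times
    def __init__(self, n_bits, rng):
        self.visible = [rng.randint(0, 1) for _ in range(n_bits)]
        self.invisible = [1 - b for b in self.visible]

    def mutate(self, p, rng):
        for i in range(len(self.visible)):
            if rng.random() < p:
                self.visible[i] ^= 1           # only the visible part is mutated

    def effectively_mutated(self):
        return sum(v == h for v, h in zip(self.visible, self.invisible))

def mutated_fraction(population):
    total_bits = sum(len(c.visible) for c in population)
    return sum(c.effectively_mutated() for c in population) / total_bits

rng = random.Random(7)
n_chrom, n_bits = 30, 20                       # illustrative sizes
pop = [DoubledChromosome(n_bits, rng) for _ in range(n_chrom)]
p = 1.0 / (n_chrom * n_bits)                   # about one flipped bit per epoch on average

# expected fraction from the recurrence with p_0 = 0: f_k = (1 - (1 - 2p)**k) / 2,
# which only approaches 1/2 asymptotically; the simulated fraction crosses 1/2
# through statistical fluctuations
generation = 0
while mutated_fraction(pop) < 0.5:             # stopping criterion discussed above
    for c in pop:
        c.mutate(p, rng)
    generation += 1
print("population declared mature after", generation, "generations")
print("epochs for the expected fraction to reach 0.49:",
      math.ceil(math.log(1 - 2 * 0.49) / math.log(1 - 2 * p)))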
if the fitness of all chromosomes was the same , i.e. there was no evolutionary pressure of any kind , then after generations the search space would be covered quite uniformly , although irregularly , with trial points .further sampling would only make finer the already existing irregular grid of trial points . when there _ is _ a preference for better fitted chromosomes , then after generations all interesting regions , or at least their neighborhoods , should have already been found .observing the values of the fitness function alone , either as an average for the population or for the best individual only , may be _ very _ misleading . consideringthe efficiency of the algorithm , understood as the number of generations necessary to find the optimal fitness , we can see that it is inversely proportional to . for every practical purpose the condition ( average number of bits flipped in the whole population during one epoch is equal to one ) sets the lowest sensible limit for . is the total number of bits in the population ( in its visible part ) . of course, higher values of will speed up the evolutionary search ._ certainly not .it gives the reasonable estimate of the number of generations needed in fairly regular cases .the genotype space with only one well fitted chromosome and all other equally bad is an evident exception . what can be done in such casesis to increase the number of bits assigned to every continuous unknown ( kind of _ oversampling _ ) , for the price of increased computational effort , of course , since this approach is equivalent to the use of finer grid of points in the search space . to purely combinatorial problems ( only integer and/or logical unknowns )our estimate should be of similar value .as an example let us take the data from . herethe population consisted of 30 chromosomes , bits long each . and was set equal to .according to ( [ min - mut ] ) , should be set as at least while the quoted mutation rate was times higher than that number .the population should become mature after some generations and the evolutionary search should be terminated after generations at the latest . in fact , the author of reports that satisfactory results were achieved after generations ( every run was limited to generations ) .the search space had only points , so this example may be considered small and not representative .nevertheless , no more than only evaluations of the fitness factor were enough to reach valuable conclusions .we must be aware , that the search space , no matter how large , is always _ discrete _ and finite for this class of optimization algorithms .it may be considered as a random graph , in which every chromosome is a vertex ( we are not interested in the edges ) .this graph is by no means random , but clearly exhibits the _ small world property _ , i.e. the average ( hamming ) distance between its vertices scales as the logarithm of their number .indeed , -bit chromosomes are points in element universe .the maximum distance between any two chromosomes is equal to , so the average distance must never exceed this number .incidentally .this fact alone explains why genetic algorithms are fairly insensitive with respect to the number of unknowns . on the other hand ,speaking of the neighboring points in the search space makes sense , especially when we think of _ nearest neighbors_. 
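the logarithmic scaling mentioned above is easy to see numerically: for l-bit chromosomes the universe has 2**l points, the maximum hamming distance is l, and the average distance over random pairs is close to l/2. the chromosome length and sample sizes below are illustrative.

import numpy as np

rng = np.random.default_rng(5)
L, n_samples, n_pairs = 16, 2000, 5000          # illustrative sizes
pop = rng.integers(0, 2, size=(n_samples, L))
pairs = rng.integers(0, n_samples, size=(n_pairs, 2))
dists = (pop[pairs[:, 0]] != pop[pairs[:, 1]]).sum(axis=1)
print("universe size            :", 2 ** L)
print("log2 of universe size    :", L)
print("maximum hamming distance :", L)
print("average hamming distance :", dists.mean())   # close to L / 2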
it is therefore intuitively appealing that we should be able to find a ( hopefully small ) subset of points in the search space with the property that _ any _ point is distant from this set no more than unit something similar to the backbone or spanning tree known from the graph theory . in case it is easy to check that exactly two points , namely and , are enough to form such a subset with requested property . evaluating fitness for each member of this subset should be nearly equivalent to the exhaustive ( `` brute force '' ) search , since then , for arbitrarily chosen point in the search space either this point itself or one of its nearest neighbors was visited and evaluated during evolutionary process .unfortunately , today we do nt know how to find such a set in general case ; we do nt even know what is its cardinality hopefully much lower than that of original search space .it is certain , however , that the solution of this problem need not to be unique .-bit chromosomes .the universe consists of just elements : , , and .there are two subsets with desired property : and . ]perhaps the genetic algorithm is the best available tool for approximate construction of such subsets ?using `` doubled '' chromosomes we successfully mimic the double helix structure of dna .the cost is moderate : we need to double the storage for the population . counting the flipped bitsis performed only once per generation , so this cost should be negligible in comparison with evaluations of fitness function . instead of the true implementation of the `` double helix '' , one can use the simplified formula ( [ result ] ) as a stopping criterion . direct observation of the fraction of effectively mutated bits will signal the end of calculations usually much earlier .this work was done as part of author s statutory activity in institute of physics , polish academy of sciences .grzegorz dudek , _ zastosowanie algorytmu genetycznego do selekcji symptomw w badaniach diagnostycznych _ ( in polish , _ genetic algorithm for selection of symptoms in diagnostic research _ ) , proceedings of iii domestic conference on evolutionary algorithms and global optimization , potok zoty ( poland ) , may 2528 1999 , p. 99
over a quarter of a century after the invention of genetic algorithms, and despite myriads of modifications and successful implementations, we still lack many essential details of a thorough analysis of their inner workings. one such fundamental question is _ how many generations do we need to solve the optimization problem ? _ this paper tries to answer this question, albeit in a fuzzy way, making use of the double helix concept. as a byproduct we gain a better understanding of the ways in which a genetic algorithm may be fine-tuned. keywords: genetic algorithms, aging, double helix, efficiency, stopping criteria, fine tuning, small world property
[ introduction ] the main goal of this paper is to give a pedagogical introduction to quantum information theory to do this in a new way , using network diagrams called quantum bayesian ( qb ) nets .the paper assumes no prior knowledge of classical- or quantum- information theory .it does assume a good understanding of the machinery of quantum mechanics , such as one would obtain by reading any reasonable textbook that explains dirac bra - ket formalism . the paper reviews qb nets in an appendix .if you have difficulty understanding said appendix , you might want to read ref. before continuing this paper .most of the ideas discussed in this paper are not new .they are well - known , standard ideas invented by the pioneers ( bennett , holevo , peres , schumacher , wootters , etc . ) of the field of quantum information theory .what is new about this paper is that , whenever possible and advantageous , we rephrase those ideas in the visual language of qb nets. the paper does present a few new ideas , such as associating with each qb net a very useful density matrix that we call the meta density matrix of the net .the topics covered in this paper are shown in the table of contents .the paper , in its present form , is far from being a complete account of the field of quantum information theory .some important topics that were left out ( because the author did nt have enough time to write them up ) are : quantum compression , quantum error correction , channel capacities , quantum approximate cloning , entanglement quantification and manipulation .future editions of this paper may include some of these topics .i welcome any suggestions or comments . to fill in gaps left by this paper , or to find alternative explanations of difficult topics , see refs.- and references therein .[ notation ] in this section , we will introduce certain notation which is used throughout the paper .we define to be the set for any integers and .let .for any finite set , let denote the number of elements in . the kronecker delta function equals one if and zero otherwise .we will often abbreviate by .we will often use the symbol to mean that one must sum whatever is on the right - hand side of this symbol over all repeated indices ( a sort of einstein summation convention ) .likewise , will mean that one should sum over all indices .if we wish to exclude a particular index from the summation , we will indicate this by a slash followed by the name of the index .for example , in or we wish to exclude summation over .the pauli matrices , and are defined by = ( cc 0&1 + 1&0 ) , = ( cc 0&-i + i&0 ) , = ( cc 1&0 + 0&-1 ) . for any real ] , ] can be obtained by summing over the unwanted arguments , a process called marginalization .we define : h[(.)__1 ] = -_(x.)__1 p[(x.)__1 ] _ 2 p[(x.)__1 ] , [ eq : ce - h - simple ] h[(.)__1 | ( .)__2 ] = -_(x.)__1 _ 2 p[(x.)__1 _ 2 ] _ 2 ( ) , [ eq : ce - h - cond ] h[(.)__1 : ( .)__2 ] = _ ( x.)__1 _ 2 p[(x.)__1 _ 2 ] _ 2 ( ) .[ eq : ce - h - mutual]for example , if and are nodes of a cb net , then h ( ) = -_a p(a ) _ 2 p(a ) , h ( , ) = -_a , b p(a , b ) _ 2 p(a , b ) , h(| ) = -_a , b p(a , b ) _ 2 p(a | b ) , h ( : ) = _ a , b p(a , b ) _ 2 ( ) , where , , and the sums over ( ditto , ) range over all ( ditto , ) .note that definitions eqs.([eq : ce - h - simple ] ) to ( [ eq : ce - h - mutual ] ) are independent of the order of the node random variables within and . 
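as a concrete numerical companion to the definitions in eqs.([eq : ce - h - simple]) to ([eq : ce - h - mutual]), the sketch below evaluates the joint, conditional and mutual entropies of a small two-variable distribution; the probability table is an illustrative example, not taken from the paper.

import numpy as np

def entropy(p):
    # shannon entropy in bits; zero probabilities are ignored
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# illustrative joint distribution p(a, b) for two binary node random variables
p_ab = np.array([[0.40, 0.10],
                 [0.05, 0.45]])
p_a = p_ab.sum(axis=1)        # marginalization over b
p_b = p_ab.sum(axis=0)        # marginalization over a

h_joint = entropy(p_ab)
h_cond = -sum(p_ab[a, b] * np.log2(p_ab[a, b] / p_b[b])
              for a in range(2) for b in range(2))            # conditional entropy
h_mutual = sum(p_ab[a, b] * np.log2(p_ab[a, b] / (p_a[a] * p_b[b]))
               for a in range(2) for b in range(2))           # mutual entropy
print(f"H(x,y) = {h_joint:.4f}   H(x|y) = {h_cond:.4f}   H(x:y) = {h_mutual:.4f}")
# consistency checks of the decomposition identities discussed in the text
print(np.isclose(h_cond, h_joint - entropy(p_b)),
      np.isclose(h_mutual, entropy(p_a) + entropy(p_b) - h_joint))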
for example , if are nodes of a cb net , then h ( , , ) = h ( , , ) , h[| ( , ) ] = h[| ( , ) ] .it is convenient to extend definitions eqs.([eq : ce - h - simple ] ) to ( [ eq : ce - h - mutual ] ) in the following two ways .first , we will allow ( ditto , ) to contain repeated random variables .if it does , then we will throw out any extra copies of a random variable .for example , if are nodes of a cb net , then h ( , , , ) = h ( , , ) , h[| ( , , ) ] = h[| ( , ) ] .second , we will allow ( ditto , ) to contain internal parentheses .if it does , then we will ignore the internal parentheses .for example , if are nodes of a cb net , then h [ ( , ) , ] = h ( , , ) , h[| ( ( , ) , ) ] = h[| ( , , ) ] .let and . measures the spread of the distribution . is called the _ conditional entropy _ of given . is called the _ mutual entropy _ of and , and it measures the dependency of and : it is non - negative , and it equals zero iff are independent random variables ( i.e. , for all and ) .note that eqs.([eq : ce - h - simple ] ) to ( [ eq : ce - h - mutual ] ) imply that h(| ) = h ( , ) - h ( ) , [ eq : ce - decomp - cond ] h ( : ) = h ( ) + h ( ) - h ( , ) , [ eq : ce - decomp - mutual ] h ( : ) = h ( ) - h(| ) , [ eq : ce - info - trans ] h ( : ) = h ( ) - h(| ) . in eq.([eq : ce - info - trans ] ) , one may think of as the information about prior to transmitting it , and as the information about once is transmitted and is found out . since is the difference between the two , one may think of it as the information ( or entropy ) transmitted " from to .this interpretation of is an alternative to the dependency interpretation mentioned above .let , and , where the are non - empty , possibly overlapping , subsets of .we can extend further the domain of the function by introducing the following axioms h [ ( , ): ] = h [ ( :) , ( :) ] , [ eq : ce - colon - distri ] h [ (: ) , ] = h [ ( , ) : ( , ) ] .[ eq : ce - comma - distri]eq.([eq : ce - colon - distri ] ) means that : " distributes over , " . according to eq.([eq : ce - decomp - mutual ] ) , the left hand side of eq.([eq : ce - colon - distri ] ) equals .eq.([eq : ce - comma - distri ] ) means that , " distributes over : " . according to eq.([eq : ce - decomp - mutual ] ) , the right hand side of eq.([eq : ce - comma - distri ] ) equals . with the help of the above distributive laws , the entropy of a compound expression with any number of : " and " operators can be expressed as a sum of functions containing , " but not containing : " and " in their arguments .for example , if are nodes of a qb net , then l h [ (: ) | ] = h [ ( : ) , ] - h ( ) + = h [ ( , ): ( , ) ] - h ( ) + = h ( , ) + h ( , ) - h ( , , ) - h ( ) .if some parentheses are omitted within the argument of , the argument may become ambiguous .for example , does mean or ?ambiguous arguments should be interpreted using the following operator precedence order , from highest to lowest precedence : comma ( , ) , colon ( : ) , vertical line( ) .thus , should be interpreted as . in the mathematical field called set theory , one defines the union , the intersection and the difference of two sets and .one also defines functions called measures . a _ measure _ assigns a non - negative real number to any measurable " set . 
satisfies ( ) = 0 , ( _ i=0^ e_i ) = _ i=0^ ( e_i ) , where is the empty set , and the s are disjoint measurable sets .for example , for any set \cup [ a_2 , b_2 ] \cup \ldots \cup [ a_n , b_n] ] s are disjoint closed intervals of real numbers , one can define .there is a close analogy between the properties of entropy functions in information theory(it ) and those of measure functions in set theory(st ) .if are sets and are node random variables , then it is fruitful to imagine the following correspondences : llll atoms : & a & & + binaryoperators : & ab & & ( , ) + & ab & & (: ) + & a- b & & ( | ) + real - valuedfunction : & ( a ) & & h ( ) . in both st and it , one defines a real - valued function ( i.e. , in st versus in it ) .this real - valued function takes as arguments certain well - formed expressions .a well - formed expression consists of either a single atom ( a set in st versus a node random variable in it ) or a compound expression . a compound expression is formed by using binary operators ( in st versus in it ) to bind together either ( 1 ) 2 atoms or ( 2 ) an atom and another compound expression or ( 3 ) two compound expressions .table [ table - ent ] gives a list of properties ( identities and inequalities ) satisfied by the classical entropy .whenever possible , table [ table - ent ] matches each property of entropy functions with an analogous property of measure functions .see refs.- to get proofs of those statements in table [ table - ent ] that are not proven in this paper .[ table - ent ] ( compiled by r.r.tucci , report errors to tucci-tiste.com ) [ cols= " < , < , < " , ] .the columns of are clearly orthonormal so is a unitary matrix . the matrix mentioned above can be defined in terms of by r(b | f , y ) = u ( f | b , |y ) ( -1)^|y ( -1)^f_1 f_2 .our reasons for defining in this way will become clear as we go on .note that _b | r(b | f , y)|^2 = 1 , as required by the definition of qb nets .it is convenient to define a function by k(x , y , a , f , b ) = r(b | f , y ) u ( f | a , x ) _epr(x , y ) . substituting explicit expressions for and into the last equationyields k(x , y , a , f , b ) = ^a , x_b , |y ( ^a , x_0 , f_1 + ^a , x_1 , |f_1 ) . from this expression for , it follows that [ eq : te - k - ids ] _ x , y k = ^a_b , _ x , y , f k = ^a_b , _ x , y |k|^2 = ^a_b , _ x , y , f |k|^2 = ^a_b .define the following kets : [ eq : te - kets ] = _ a _ a , = _ a _ a , = _ x. a(x . )= _ all k(x , y , a , f , b)_a , = 2 _ all / f k(x , y , a , f , b)_a .note that we do nt sum over in the equation for .it follows by eqs.([eq : te - k - ids ] ) that the kets of eqs.([eq : te - kets ] ) have unit magnitude and that = ( -1)^f_1 f_2 , [ eq : te - tele ] = .[ eq : te - pseudo - tele]because of eq.([eq : te - tele ] ) , one says that the qb net of fig.([fig : tele ] ) teleports " a quantum state from node to node . without knowing the state , alice at measures the joint state delivered to her by and .she obtains result which she sends by classical means to bob at .bob can choose to allow any value of , or he can ignore those repetitions of the experiment in which does not equal a particular value , say . in either case , the state emerging from bob s lab is equal to . note that according to eq.([eq : te - pseudo - tele ] ) , even if alice does not measure , and instead she sends a quantum message to bob , equals . 
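the teleportation identity of eq.([eq : te - tele]) can also be checked numerically. the sketch below uses the standard bell-basis and pauli-correction conventions, which need not coincide with the particular phase convention chosen for r(b | f , y) above; it only verifies that, for every measurement outcome (f1, f2), the corrected state at bob's node reproduces the input state with unit fidelity.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bell(f1, f2):
    # bell state labelled by the two classical bits (f1, f2)
    v = np.zeros(4, dtype=complex)
    v[f2] = 1.0                        # |0, f2>
    v[2 + (1 - f2)] = (-1.0) ** f1     # (+/-) |1, 1 - f2>
    return v / np.sqrt(2)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                          # the unknown state at alice's node

epr = (np.eye(4)[0] + np.eye(4)[3]) / np.sqrt(2)    # the shared epr pair
full = np.kron(psi, epr).reshape(4, 2)              # rows: alice's two qubits, cols: bob's

for f1 in (0, 1):
    for f2 in (0, 1):
        bob = bell(f1, f2).conj() @ full            # bob's unnormalized conditional state
        bob = np.linalg.matrix_power(Z, f1) @ np.linalg.matrix_power(X, f2) @ bob
        fidelity = abs(np.vdot(psi, bob / np.linalg.norm(bob)))
        print(f"outcome ({f1},{f2}): fidelity = {fidelity:.6f}")   # 1.0 in every branch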
however , this is not true " teleportation .in true " teleportation , we allow alice to receive quantum messages but not to send them .the meta density matrix for the net of fig.([fig : tele ] ) is = , where = _ all k(x , y , a , f , b ) _ a .note that by eqs.([eq : te - k - ids ] ) , has unit magnitude .define the reduced matrix by = ( ) .it is easy to show that = , where = _ a _ a , = _ f .define h^in = -_abool |_a|^2 _ 2 ( |_a|^2 ) . next we will calculate classical and quantum entropies for various possible density matrices : then = ( _ a |_a|^2 ) .[ eq : te - trace - b]it is easy to show from eq.([eq : te - trace - b ] ) that lll & & ( zero coherence ) + & & ( max .coherence ) + & & + + & & + & & + & & +. then = .[ eq : te - trace - f]note that we get the same density matrix if we reduce by projecting , tracing or e - summing over node : = = .it is easy to show from eq.([eq : te - trace - f ] ) that lll & & ( zero coherence ) + & & ( zero coherence ) + & & + + & & + & & + & & transmitted info : quantum = 2 classical + .[ sec : qbit - bouncing ] ref. was the first to discuss a phenomenon that we will call qubit bouncing .qubit bouncing is often called quantum super dense coding " . in this section, we will consider a qb net that represents qubit bouncing .consider the qb net of fig.([fig : bounce ] ) , where & & ] to each node .we call ] determines a matrix that we call the _ node matrix of node _ . is the matrix s _ row index _ and is its _column index_. we require that the values ] to each node .we call ] determines a matrix that we call the _ node matrix of node _ . is the matrix s _ row index _ and is its _column index_. we require that the quantities ] by its magnitude squared .we call this special cb net the _ parent cb net _ of the qb net from which it was constructed .we call it so because , given a parent cb net , one can replace the value of each node by its square root times a phase factor . for a different choice of phase factors , one generates a different qb net .thus , a parent cb net may be used to generate a whole family of qb nets . a _ qb pre - net _ is a labelled graph and an accompanying set of node matrices that satisfy eqs.([eq : rv - norm - aj ] ) , ( [ eq : rv - def - a ] ) and ( [ eq : rv - io - norm - a ] ) , but do nt necessarily satisfy eq.([eq : rv - norm - a ] ) .a qb pre - net that is acyclic satisfies eq.([eq : rv - norm - a ] ) , because its parent cb pre - net is acyclic and this implies that eq.([eq : rv - norm - a ] ) is satisfied .if one considers only acyclic graphs as we do in this paper , then there is no difference between qb nets and qb pre - nets .one can check that all the examples of qb nets considered in this paper satisfy eq.([eq : rv - norm - a ] ) .eq.([eq : rv - norm - a ] ) is true iff the meta state defined by eq.([eq : adm - ketmeta ] ) has unit magnitude .r. r. tucci , int .jour . 
of mod .physics * b9 * , 295 ( 1995 ) .available as los alamos eprint quant - ph/9706039 .the theory of this paper is implemented by a computer program called quantum fog " , available at www.ar-tiste.com .this analogy between information theory and set theory , and its pictorial representation in terms of venn diagrams , has been known since time immemorial .i m not sure who was the first to point it out , but it seems to have been common knowledge less than five years after shannon s 1948 paper that started it all .i suspect that the analogy can be phrased more generally and rigorously within the mathematical field of lattice algebras , but i know of no references to support this claim .this is very much in the spirit of n. j. cerf , c. adami , negative entropy and information in quantum mechanics " , phys.rev.lett .79 ( 1997 ) 5194 ( available as los alamos eprint quant - ph/951202 .note other los alamos eprints by same authors on similar topics . ) like us , cerf and adami advocate defining quantum conditional and mutual entropies so as to preserve the venn diagrams which have been used in classical information theory for decades . however , there are some big differences between our work and theirs ( apart from the obvious fact that they do nt use bayesian nets ) . for them the and in refer to separate sub - systems " at the same instant of time . for usthey are node random variables which need not represent separate subsystems .they might , for example , represent the same sub - system at different instants .in classical probability , one speaks of an event space and a function called a real - valued measure .a random variable on is a function , where is the set of values that may assume . is defined by in quantum mechanics , one speaks of an event space , a hilbert space , a density matrix acting on , and a function called an operator - valued measure .a random variable is still a function . for each , one defines an operator acting on by then is defined by it s really that is a pom , but since the set partly specifies , we call this set a pom too . for more information about poms , see and references therein . andreas winter , quant - ph/9907077 ; r. ahlswede , p. loeber , quant - ph/9907081 .these workers from the uni . of bielefeld have also shown ( working independently from me , and using a algebra approach ) that holevo s inequality follows from a data processing inequality .
the main goal of this paper is to give a pedagogical introduction to quantum information theory, and to do this in a new way, using network diagrams called quantum bayesian nets. a lesser goal of the paper is to propose a few new ideas, such as associating with each quantum bayesian net a very useful density matrix that we call the meta density matrix.
atrial fibrillation ( af ) is the most frequently appearing heart arrhythmia since it accounts for one third of all hospitalizations caused by heart arrhythmia in the industrialized countries . during af the electric conduction system of the heartis disturbed and an increased rate of activation by a factor of 3 - 12 compared to normal sinus rhythm occurs .special spatio - temporal patterns of the electric potential like spiral waves , mother waves or ectopic foci are thought to be underlying generating mechanisms of af .these patterns are often located near physiologically modified regions of the heart tissue in the left atrium .the question hence arises , how these physiologically modified regions can be responsible for the generation of spiral waves or ectopic foci and how they influence the properties of these patterns . to tackle these questions , we study generating mechanism for af on the basis of the fitzhugh - nagumo model , which is a simple model for action potential generation and propagation . by modeling physiologically modified regions using a spatial variation of the parameters characterizing cell properties like excitability or resting state stability, we calculate phase diagrams , which specify the type of spatio - temporal excitation pattern in dependence of the extent of the modified region and the strength of the modification . thereupon we investigate how self - excitatory sources as spiral waves or ectopic foci with rather regular dynamics in one region can induce irregular , fibrillatory excitation patterns in some other region .irregular , fibrillatory states are often observed in the right atrium and it was conjectured that these are caused by the perturbation of regular waves generated by the sinus node by waves emanating from an additional pacemaker like a spiral wave or ectopic foci in the left atrium .the fitzhugh - nagumo ( fhn ) equations are a set of two coupled nonlinear ordinary differential equations , which describe excitable media via an inhibitor - activator mechanism .they were originally developed by searching for a simplified version of the hodgkin - huxley equations for electric pulse propagation along nerves . when combined with a spatial diffusion term ,the equations are this set of partial differential equations serves as a prototype for a large variety of reaction - diffusion systems , which occur , for example , in chemical reactions as the bhelousov - zhabotinsky reaction or the catalysis of carbon monoxide , in population dynamics , in biology in connection with aggregation processes or plancton dynamics , as well as in the spreading of forest fires . herewe will use eqs .( [ eq : fhn ] ) in their original context as a model to investigate the spatio - temporal evolution of electric excitations in the heart . in this approach the variable roughly associated with the membrane potential and the variable with the ion currents through the cell membrane .the resting state is given by the pair of values and .the diffusion coefficient describes the coupling between the cells , and is an applied external current ( stimulus ) .the influence of the parameters , and can be inferred by numerical solutions of eqs .( [ eq : fhn ] ) without the diffusive term .the parameter values have to be limited to some range in order to generate excitability , and their detailed effect on the pulses is complicated due to mutual interdependencies originating from the nonlinearity in eq .( [ eq : fhn ] ) . 
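the single-cell behaviour just described can be reproduced by integrating the two equations without the diffusive term. the sketch below uses a common textbook fitzhugh-nagumo parameterization purely for illustration; the parameter values, stimulus timing and stimulus amplitude are not the values adopted in the paper.

import numpy as np

def fhn_single_cell(t_max=200.0, dt=0.01, stim_window=(10.0, 12.0), stim_amp=0.8):
    # explicit euler integration of one excitable cell; parameters follow a common
    # textbook fitzhugh-nagumo choice, not the values used in the paper
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.2, -0.625                     # approximate resting state of this variant
    n_steps = int(t_max / dt)
    trace = np.empty((n_steps, 2))
    for i in range(n_steps):
        t = i * dt
        i_ext = stim_amp if stim_window[0] <= t < stim_window[1] else 0.0
        dv = v - v ** 3 / 3.0 - w + i_ext
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace[i] = (v, w)
    return trace

trace = fhn_single_cell()
print("peak of the action potential:", trace[:, 0].max())
print("final state (back near rest):", trace[-1])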
roughly speaking, affects the length of the refractory period , influences the stability of the resting state , and controls the excitability and strength of the cells response to a stimulus . to capture the propagation and form of a typical action potential , the following set of parameterscan be used : , , , and .these values will be associated with a `` healthy tissue '' in the following .figure [ fig : fig1 ] shows the time development of and during an excitation after a stimulus with these parameters .note that the variable mirrors the form of a pulse in an usual representation of an ecg recording .ectopic foci and spiral waves are thought to be caused and influenced by physiologically modified regions of the tissue , which in the modeling correspond to spatial variations of the parameters .to simplify the analysis , we fix and , and consider variations of the parameters and according to where the amplitudes , characterize the strength , and the correlation lengths and characterize the spatial range of modification .the calculations are carried out on a two - dimensional simulation area of size , which represents an isolated section of atrial heart tissue , as it is used often in experiments .the boundary conditions of the simulation area are of von neumann type , i. e. , where denotes the normal derivative . to solve the two nonlinear coupled partial differential equations ( [ eq : fhn ] ) we use the finite element method ( fem ) with a triangulation consisting of 4225 nodes and 8192 triangles , and a constant integration time step .a simulation time of corresponds to a time of roughly to ms .the nonlinearity in eq .( [ eq : fhn ] ) is treated as an inhomogeneity , which means that for the value of the preceding time step is used .ectopic foci are regions in the atria , which generate activation waves emanating from self - excitatory hyperactive cells . in these cellsthe transmembrane potential raises without external stimulation until the threshold value is reached and an action potential results . in optical mapping studies and spatially resolved ecg recordings , ectopic foci are often localized in the regions of the pulmonary veins . to model a tissue with physiologically modified properties that result in ectopic activity , we fix ( ) and vary the resting state stability around the center of the simulation area ( ) according to eq .( [ eq : obstacle b ] ) .initially the system is in the excitable resting state ( and ) .figure [ fig : fig2]a shows the resulting activation pattern for and .the modified tissue is self - excitatory and acts as a pacemaker for activation waves , which propagate radially . in the time evolution shown in fig . [fig : fig3 ] , decreases until the threshold value for activation is reached and an action potential with a steep fall in occurs . in response to this activation , the inhibitor variable increases and pulls back to a value even larger than its initial value ( overshoot ) before returns to it , and the self - excitatory process starts anew . in order to systematically characterize the occurrence of ectopic activity, we calculate a phase diagram , where in dependence of and regions of ectopic activity can be distinguished from that without self - excitatory behavior .the results in fig .[ fig : fig2]b show that there exists a minimal , below which no ectopic activity occurs .the corresponding value is the critical value of resting state stability in the fhn equations ( [ eq : fhn ] ) in the absence of the diffusion term . 
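for readers who want to reproduce the spatial setup, the sketch below builds a parameter field with a gaussian-shaped modification in the centre of a square domain, in the spirit of eqs.([eq : obstacle b]) and ([eq : obstacle c]), together with a zero-flux (von neumann) laplacian. it uses a finite-difference grid instead of the finite element discretization of the paper, generic illustrative kinetics, and placeholder values for the base parameter, the modification amplitude, the correlation length, the diffusion coefficient and the time step.

import numpy as np

def modified_parameter_map(n=65, base=0.8, amplitude=-0.3, corr_len=0.15):
    # parameter field with a gaussian modification centred in the unit square
    x = np.linspace(0.0, 1.0, n)
    xx, yy = np.meshgrid(x, x, indexing="ij")
    r2 = (xx - 0.5) ** 2 + (yy - 0.5) ** 2
    return base + amplitude * np.exp(-r2 / corr_len ** 2)

def laplacian_neumann(u, h):
    # five-point laplacian with zero-flux boundaries via edge reflection
    padded = np.pad(u, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * u) / h ** 2

# one illustrative explicit time step of a generic two-variable excitable medium;
# the explicit scheme requires diff * dt / h**2 to stay well below 1/4
n, h, dt, diff = 65, 1.0 / 64, 2e-5, 1.0
b_map = modified_parameter_map(n)
v = np.full((n, n), -1.2)
w = np.full((n, n), -0.625)
dv = v - v ** 3 / 3.0 - w + diff * laplacian_neumann(v, h)
dw = 0.08 * (v + 0.7 - b_map * w)          # the locally modified parameter enters here
v, w = v + dt * dv, w + dt * dw
print("parameter value inside / outside the modified region:", b_map[32, 32], b_map[0, 0])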
for fixed the ectopic activity vanishes , when falls below the dashed transition line in fig .[ fig : fig2]b . in this regionthe diffusive current from the modified tissue to the surrounding causes the initial decrease of to become so slow , that the counter - regulation by eventually hinders to reach its activation threshold , cf.fig .[ fig : fig3 ] . only small oscillation of around a reduced resting state value can be seen in fig .[ fig : fig3 ] , which become weaker with growing time .the temporal - spatial pattern of the activation in the phase of ectopic activity is characterized by the frequency of the ectopic focus .we calculate this frequency as the inverse mean time interval between consecutive action potentials . as shown in fig .[ fig : fig4 ] , the frequency becomes larger with increasing ( at fixed ) and ( at fixed ) , and it tends to saturate for large . with increasing ,the diffusive current of the inner cells of the modified tissue decreases and thus a larger frequency is obtained . in the saturation limit the frequency is nearly the same as in the absence of diffusion and thus is mainly determined by the refractory period .in this section we study the influence of physiologically modified regions , called `` obstacles '' henceforth , on spiral wave behavior .it was observed that spiral waves in the atria can be generated by a perturbation of the propagation of planar excitation waves by anatomic obstacles as , for example , the pulmonary veins , the venae cavae , the pectinate muscle bundles or some localized region of modified tissue .these regions are considered as not fully excitable and are thus modeled as regions with a reduced parameter according to eq .( [ eq : obstacle c ] ) . in an experiment by ikeda and coworkers a nearly rectangular area of atrial tissue was placed on an electrode plaque in a tissue bath .holes with different diameters were created and a reentrant wave was initiated by cross - field stimulation .the resulting behavior of the wavefront was , amongst others , classified according to whether the spiral is anchored by the obstacle , and by the relationship between hole size and cycle length of the reentry .it was observed that for large obstacle sizes ( 6 , 8 and 10 mm ) , the reentrant wave attaches to the obstacle , leading to a linear increase of the cycle length with the hole diameter . for small obstacle diameters below about 4 mm , by contrast , meandering spirals with a tip getting variably closer to or further away from the hole were found . in this casethe cycle length becomes independent of the hole diameter .similar results were observed by lim and coworkers .they analyzed the behavior of spiral waves near holes with diameters ranging from to mm and obtained a higher attachment rate for larger obstacle diameters as well as a positive linear correlation of the reentry conduction velocity and wave length with the obstacle diameter in the case of attached spirals for smaller obstacle diameters the spiral waves were found to attach to and detach from the obstacle .the missing anchoring for small hole sizes was explained in by invoking a `` source - sink relationship '' .the `` source '' is the activation wavefront and provides a diffusive current to the surrounding tissue in the resting state , which constitutes the `` sink '' . 
the sink becomes larger for smaller obstacles , where more cells become depolarized by the activation wavefront .if the source - to - sink ratio is decreased below a certain critical value , the wavefront detaches from the obstacle . to elucidate these experimental findings , we perform numerical calculations for a geometry corresponding to the experiments with the following initial state and parameters settings : the modified region , is , as in the previous sec .[ subsec : ea ] , placed in the center of the simulation area at .initially a `` planar '' ( linear ) wave is generated by inducing a current in the stripe , , and by setting the area , into a refractory state with and , while the rest of the simulation area is in the resting state ( , ) .this initial state resembles the activation pattern directly after application of a cross - field or paired - pulse stimulation ( two rectangular pulses ) . at the `` upper part '' of the initial planar wavefront ( , ) , diffusive currents flow `` radially '' in all forward directions ( ) , while at the `` right boundary '' ( , ) the diffusive currents can flow only in positive direction ( due to the refractory state in the area , )this higher loss by diffusion leads to a smaller propagation speed of the initial wavefront at its upper boundary compared to its right boundary . as a consequence ,the wavefront becomes curved , and a reentrant spiral wave develops for all reductions in excitability and obstacle sizes , in accordance with the experimental observations .figure [ fig : fig5 ] shows activation patterns for and a ) , and b ) . the stronger reduction of excitability in fig .[ fig : fig5]a leads to an anchoring of the spiral wave , while in fig .[ fig : fig5]b the spiral is meandering . to analyze the parameter regimes of the occurrence of anchored or meandering spiral waves, we perform a frequency analysis for different values of and .therefore , we determine the peak positions in the time series of at positions far away from the center of the spiral and calculate the peak - to - peak intervals .the frequency of one point is one over the mean of the peak - to - peak intervals and the mean cycle length is one over the average of all these local frequencies .the results in fig .[ fig : fig6 ] show that , as in the experiments , attached spiral waves occur for large and for sufficiently large , where decreases with increasing . for these anchored spirals ,the frequency is proportional to , where is the conduction velocity in the fhn model .accordingly , increases linearly with for in fig .[ fig : fig6 ] . for small , only meandering spirals are observed .the transition from large to small reflects the transition from anatomical to functional reentry , as it has been reported in medical studies .the fact that for small always meandering spirals occur , can be interpreted by the small source - sink ratio .the same mechanism can also lead to meandering spirals for large if becomes small .note , that nevertheless the spiral wave is not anchored to the obstacle , its movement is still influenced by the obstacle .in previous studies on interactions of paced waves with self - excitatory waves , the influence of the pacing on a spiral wave was studied with the aim to suggest a possible therapy to suppress fibrillation or tachycardia .the pacing was applied to the region , where the spiral wave was located .it was found that the pacing leads to an annihilation of the reentrant activity or to a shift of the spiral core . 
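the frequency analysis used for the spiral waves above, and again for the local activation frequencies in the next section, can be sketched as follows; the peak-detection thresholds and the synthetic test traces are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def local_frequency(v_trace, dt, min_height=0.5, min_distance=60.0):
    # frequency at one recording point: one over the mean peak-to-peak interval
    peaks, _ = find_peaks(v_trace, height=min_height,
                          distance=max(1, int(min_distance / dt)))
    if len(peaks) < 2:
        return np.nan
    return 1.0 / (np.diff(peaks) * dt).mean()

def mean_cycle_length(traces, dt):
    # one over the average of the local frequencies over all recording points
    freqs = np.array([local_frequency(tr, dt) for tr in traces])
    freqs = freqs[np.isfinite(freqs)]
    return 1.0 / freqs.mean()

rng = np.random.default_rng(4)
dt = 0.5
t = np.arange(0.0, 2000.0, dt)
traces = [np.sin(2 * np.pi * t / 120.0) + 0.05 * rng.normal(size=t.size),
          np.sin(2 * np.pi * (t - 10.0) / 120.0) + 0.05 * rng.normal(size=t.size)]
print("mean cycle length:", mean_cycle_length(traces, dt))    # close to 120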
herewe investigate the perturbation of regular paced waves from a source representing the sinus node by waves emanating from an additional pacemaker located in a distant region .electrocardiogram recordings and their frequency analyses show that regular excitation patterns are often observed in the left atrium , where additional pacemakers like spiral waves or ectopic foci are located , and that at the same time irregular , fibrillatory - like states in the right atrium occur .this led to the conjecture that fibrillatory states can be induced in the right atrium by self - excitatory pacemakers in the left atrium . in this connection it is important to better understand how a fibrillatory state can occur ,if regular paced waves , as generated by the sinus node , are disturbed by additional pacemaker waves . to this end, we consider the waves to be located in spatially separated regions that are connected by a small region . to be specific , we choose a simulation area of size , which is divided into three regions ( see fig .[ fig : fig7 ] ) .the rectangular area l with , representing the left atrium , the rectangular area r with , representing the right atrium , and the small bridge b with , representing the connection between the atria .the grid used in the finite element calculations consists of in total 8871 nodes and 17350 triangles .we focus on situations where the pacemaker in the region l is located far outside the left part of the simulation area , so that the resulting wavefronts become `` planar '' ( linear ) . in the simulation they are generated by application of a stimulating current with duration and a period in the region and .the activation waves representing the pacemaker in region r are generated by the application of a current with duration and period in the region and .the irregularity of the resulting patterns in region r is quantified by calculating the shannon entropy of the distribution of local activation frequencies for every grid point in r. to this end we divide the frequency range into bins of size ^{-1},0.01 \right\rbrace $ ] and calculate the probabilities of finding frequency in bin , where is the total number of grid points .the normalized entropy then is given by for a single frequency ( ) , , while for a chaotic activation pattern with a uniform distribution ( ) , .for small perturbation frequencies ( ) the influence of the activation wavefronts from the additional pacemaker onto the sinus node waves is almost negligible .small deformations of the linear wavefronts are observed , but the measured frequencies are close to the pacing frequency , and the overall spatiotemporal pattern in r is regular. with increasing perturbation frequency the spatiotemporal pattern in region r becomes more irregular and a breakup of the regularly paced waves can occur .figure [ fig : fig8 ] shows the time evolution of the excitation patterns for a perturbation frequency and a pacing frequency . for small times , the waves are only slightly perturbed and the pattern remains regular , as can be seen from the four consecutive snapshots in fig .[ fig : fig8]a . at a later time , however , the perturbation by the waves from region l results in a breakup of the waves close to the bridge b in region r , as can be seen from the four consecutive snapshots in fig .[ fig : fig8]b .the onset of this breakup was found to occur at a time . in order to investigatehow the spatial irregularity is reflected in the time evolution , we consider three different points and in region r. 
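the irregularity measure defined above can be implemented in a few lines. the bin width and the frequency range below are illustrative, and the normalization by the logarithm of the number of bins is one natural choice, under which a single dominant frequency gives an entropy near 0 and a uniform spread gives a value near 1.

import numpy as np

def normalized_frequency_entropy(freqs, f_min, f_max, bin_width=0.01):
    # normalized shannon entropy of the histogram of local activation frequencies
    edges = np.arange(f_min, f_max + bin_width / 2.0, bin_width)
    counts, _ = np.histogram(freqs, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(edges) - 1))

rng = np.random.default_rng(2)
regular = 0.042 + rng.normal(0.0, 1e-4, size=500)     # nearly a single frequency
irregular = rng.uniform(0.01, 0.10, size=500)         # broad, fibrillation-like spread
print("regular pattern  :", normalized_frequency_entropy(regular, 0.01, 0.10))
print("irregular pattern:", normalized_frequency_entropy(irregular, 0.01, 0.10))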
the time evolution of for these three points is shown for two different perturbation frequencies and in fig .[ fig : fig9 ] ( : black solid curve , : red dashed curve , : blue dash - dotted curve ) . for the lower frequency the evolution at all three points is regular , see fig .[ fig : fig9]a . for the higher frequency , by contrast , the break ups of the waves seen in fig .[ fig : fig8]b yield unsuccessful activations during refractory periods , as can be seen , for example , at time in point p ( red dashed curve ) and at time in point p ( black solid curve ) .these unsuccessful activations are caused by a rapid pacing of the region by the curled wave .another feature is that the shape of the action potential varies .this can be seen , for example , at point p ( black solid line ) when comparing the pulses at and .it is important to note that these irregularities are hardly observed at point p ( blue dash - dotted curve ) , showing that they exhibit a spatial heterogeneity .the total irregularity in region r quantified by the normalized shannon entropy of the local frequency distribution is shown in fig .[ fig : fig10 ] as a function of the frequency of the perturbing waves from region l. for small perturbation frequencies the entropy equals the unperturbed case , while for , sharply increases until it reaches a maximum at . for higher a return to more regular activation patternis found , indicating that the disturbance is most pronounced if is close to .to conclude , the disturbance of the wavefronts in region r by waves emanating from an additional source in region l and propagating through the bridge region b can lead to irregular , fibrillatory - like activation patterns in region r. on the other hand , the waves in region l are almost unaffected by the waves in r with primary wavefront orthogonal to the cross section of the bridge .the irregularities in region r are most pronounced at a certain perturbation frequency .how this value is influenced by the geometry of the bridge and the parameters characterizing the cell properties remains further investigation .the influence of physiologically modified regions on the generation and properties of spatio - temporal activation patterns was investigated on the basis of the fitzhugh - nagumo equations with von neumann boundary conditions , in particular the occurrence of ectopic foci and spiral waves under spatial inhomogeneities of the parameters characterizing the cell properties .it was shown that the reduction of the resting state stability in circular regions of the tissue can lead to ectopic activity .a minimal size of hyperactive tissue is necessary for ectopic activity to occur , as well as a minimal strength of the reduction of resting state stability with respect to the `` healthy '' reference value . with increasing size of the hyperactive tissue , the frequency of the ectopic focus first increases and eventually saturates .the saturation frequency depends on the strength of the modification .for spiral wave patterns it was found that an anchoring of the wave to the obstacle can occur . to uncover this mechanism ,an obstacle was modeled as a patch of modified tissue with reduced excitability by a reduction of the parameter in the fhn equations .the obstacle was placed in the middle of a two - dimensional square simulation area and a planar excitation wave was generated aside of the obstacle in front of a refractory region , which represents an activation pattern observed after cross - field stimulation in experiments . 
as in the experiments , reentrant waves are observed . these exhibit either functional or anatomical reentry in dependence of the obstacle size and reduction strength of excitability .an analysis of the spiral wave frequency in dependence of the obstacle size yields results in accordance with the experimental observations .finally we studied the question , if and how fibrillatory - like states can arise in the right atrium due to the presence of self - excitatory spiral waves or ectopic foci in the left atrium . to this endthe simulation area was separated into two rectangular regions l and r connected by a small bridge b. planar excitation waves were generated with different frequencies in the left region l to model a pacemaker far outside the left part of the simulation area .planar excitation waves resembling stimulation by the sinus node were generated by periodic application of a stimulating current at one boundary in the right part r of the simulation area .for small perturbation frequencies in l , the disturbance of the waves in r turned out to be small . for higher perturbation frequencies ,the waves in r become significantly disturbed and the spatio - temporal activation pattern eventually becomes irregular .the time evolution of the activation variable , representing the electric potential in the fhn equations , shows features in close resemblance to the ones found in intra - atrial electrocardiogram recordings during fibrillation in the right atrium .the spatial variation of the excitation frequency was quantified in terms of an entropy , which showed , for a given pacing frequency , a maximum as a function of the perturbation frequency .further investigations will focus on the influence of the geometry of the bridge and the wavefronts as well as analyse the behavior for different pacing frequencies .the reliability of the measure of irregularity should be analysed and if necessary other methods to characterise the system behavior should be searched .v. fuster , l. e. ryden , d. s. cannom , h. j. crijns , a. b. curtis , k. a. ellenbogen , j. l. halperin , j .- y .le heuzey , g. n. kay , j. e. lowe , s. b. olsson , e. n. prystowsky , j. l. tamargo , s. wann , s. c. smith , a. k. jacobs , c. d. adams , j. l. anderson , e. m. antman , s. a. hunt , r. nishimura , j. p. ornato , r. l. page , b. riegel , s. g. priori , j .- j .blanc , a. budaj , a. j. camm , v. dean , j. w. deckers , c. despres , k. dickstein , j. lekakis , k. mcgregor , m. metra , j. morais , a. osterspey , j. l. zamorano , europace * 8 * , 651 ( 2006 ) .p. sanders , o. berenfeld , m. hocini , p. jais , r. vaidyanathan , l .- f .hsu , s. garrigue , y. takahashi , m. rotter , f. sacher , c. scavee , r. ploutz - snyder , j. jalife , m. haisaguerre , circulation * 112 * , 789 ( 2005 ) .
we study the interplay between traveling action potentials and spatial inhomogeneities in the fitzhugh - nagumo model to investigate possible mechanisms for the occurrence of fibrillatory states in the atria of the heart . different dynamical patterns such as ectopic foci and localized or meandering spiral waves are found , depending on the characteristics of the inhomogeneities . their appearance as a function of the size and strength of the inhomogeneities is quantified by phase diagrams . furthermore , it is shown that regularly paced waves in a region r , which is connected by a small bridge to another region l containing an additional pacemaker , can be strongly disturbed by the waves emanating from that pacemaker , so that a fibrillatory state emerges in region r after a transient time interval . this finding supports the conjecture that fibrillatory states in the right atrium can be induced by self - excitatory pacemakers in the left atrium .
when using traditional reinforcement learning ( rl ) algorithms ( such as -learning or sarsa ) to evaluate policies in environments with large state or action spaces , it is common to introduce some form of architecture with which to approximate the value function ( vf ) , for example a parametrised set of functions .this approximation architecture allows algorithms to deal with problems which would otherwise be computationally intractable .one issue when introducing vf approximation , however , is that the accuracy of the algorithm s vf estimate is highly dependent upon the exact form of the architecture chosen .accordingly , a number of authors have explored the possibility of allowing the approximation architecture to be _ learned _ by the agent , rather than pre - set manually by the designer ( see for an overview ) .it is common to assume that the approximation architecture being adapted is linear ( so that the value function is represented as a weighted sum of basis functions ) in which case such methods are known as _ basis function adaptation_. when designing a method to adapt approximation architectures , we assume that we have an underlying rl algorithm which , for a given linear architecture , will generate a vf estimate .if we assume that our basis function adaptation occurs on - line , then the rl algorithm will be constantly updating its vf estimate as the basis functions are updated ( the latter typically on a slower time - scale ) . a simple andperhaps , as yet , under - explored method of basis function adaptation involves using an estimate of the frequency with which an agent has visited certain states to determine which states to more accurately represent .such methods are _ unsupervised _ in the sense that no direct reference to the reward or to any estimate of the value function is made .the concept of using visit frequencies in an unsupervised manner is not completely new however it remains relatively unexplored when compared to methods based on direct estimation of vf error .however it is possible that such methods can offer some unique advantages . in particular : ( i ) estimates of visit frequencies are very cheap to calculate , ( ii ) accurate estimates of visit frequencies can be generated with a relatively small number of samples , and , perhaps most importantly , ( iii ) in many cases visit frequencies contain a lot of the most important information regarding where accuracy is required in the vf estimate .our aim here is to further explore and quantify , where possible , these advantages . in the next sectionwe outline the details of an algorithm ( pasa , short for `` probabilistic adaptive state aggregation '' ) which performs unsupervised basis function adaptation based on state aggregation .this algorithm will form the basis upon which we develop theoretical results in section [ theoretic ] .the agent interacts with an environment over a sequence of iterations . for each it will be in a particular state ( ) andwill take a particular action ( ) according to a policy ( we denote as the probability the agent takes action in state ) .transition and reward functions are denoted as and respectively ( and are unknown , however we assume we are given a prior distribution for both ). 
hence we use to denote the probability the agent will transition to state given it takes action in state .we assume the reward function ( which maps each state - action pair to a real number ) is bounded : for all .we are considering the problem of policy evaluation , so we will always assume that the agent s policy is fixed .a _ state aggregation _ approximation architecture we will define as a mapping from each state to a _ cell _ ( ) , where typically . given a state aggregation approximation architecture ,an rl algorithm will maintain an estimate of the true value function , where specifies a weight associated with each cell - action pair .provided for any particular mapping the value converges , then each mapping has a corresponding vf estimate for the policy .many different methods can be used to score a vf estimate ( i.e. measure its accuracy ) .a common score used is the squared _ bellman error _ for each state - action , weighted by the probability of visiting each state .this is called the _mean squared error _ ( mse ) .when the true vf is unknown we can use , the _ bellman operator _, to obtain an approximation of the mse .this approximation we denote as .hence : this is where is a discount factor and where is a vector of the probability of each state given the stationary distribution associated with ( given some fixed policy , the transition matrix , obtained from and , has a corresponding stationary distribution ) .we assume that our task in developing an adaptive architecture is to design an algorithm which will adapt so that is minimised .the pasa algorithm , which we now outline , adapts the mapping .pasa will store a vector of integers of dimension , where .suppose we start with a partition of the state space into cells , indexed from to , each of which is approximately the same size .using we can now define a new partition by splitting ( as evenly as possible ) the cell in this partition .we leave one half of the cell with the index and give the other half the index ( all other indices stay the same ) . taking this new partition ( consisting of cells )we can create a further partition by splitting the cell .continuing in this fashion we will end up with a partition containing cells ( which gives us the mapping ) .we need some additional mechanisms to allow us to update .denote as the set of states in the cell of the partition ( so and ) .the algorithm will store a vector of real values of dimension .this will record the approximate frequency with which certain cells have been visited by the agent .we define a new vector of dimension : where is the indicator function for a logical statement ( such that if is true ) .the resulting mapping from each state to a vector we denote as .we then update in each iteration as follows ( i.e. using a simple stochastic approximation algorithm ) : this is where ] , and , and ( 3 ) is also `` close to '' deterministic ( i.e. the probability of _ not _ taking the most probable action is no greater than for each state ) .we can make the following observation .if and are deterministic , and we pick a starting state , then the agent will create a path through the state space and will eventually revisit a previously visited state , and will then enter a cycle .call the set of states in this cycle and denote as the number of states in the cycle .if we now place the agent in a state ( arbitrarily chosen ) it will either create a new cycle or it will terminate on the path or cycle created from . 
call the states in the second cycle ( and the number of states in the cycle , noting that is possible ) .if we continue in this manner we will have sets .call the union of these sets and denote as the number of states in .we denote as the event that the path created in such a manner terminates on itself , and note that , if this does not occur , then . if ( 2 ) holds then and . ) .the expectation is over the prior distribution for . for a description of the problem and a formal proof see , for example , page 114 of flajolet and sedgewick .the variance can be derived from first principles using similar techniques to those used for the mean in the birthday problem . ] supposing that and are no longer deterministic then , if ( 1 ) and ( 3 ) hold , we can set sufficiently low so that the agent will spend an arbitrarily large proportion of its time in .if ( 2 ) holds we also have the following : [ genmoments ] and .we will have : and for the variance : where we have used the fact that the covariance term must be negative for any pair of lengths and , since if is greater than its mean the expected length of must decrease , and vice versa .[ error ] for all and , there is sufficiently large and sufficiently small such that pasa in conjunction with a suitable rl algorithm will provided for some generate , with probability no less than , a vf estimate with . using chebyshev s inequality , and lemma [ genmoments ], we can choose sufficiently high so that with probability no greater than .since is bounded and then for any , and we can also choose so that summed only over states not in is no greater than . we choose so that this is satisfied , but also so that for all elements of . will be bounded from below for all .this can be verified by more closely examining the geometric distributions which govern the `` jumping '' between distinct cycles . ]now provided that then each state in will eventually be in its own cell .the rl algorithm will have no error for each such state so therefore will be no greater than .the bound on provided represents a significant reduction in complexity when starts to take on a size comparable to many real world problems ( and could make the difference between a problem being tractable and intractable ) .it also seems likely that the bound on in theorem [ error ] can be improved upon , as the one provided is not necessarily as tight as possible .conditions ( 1 ) and ( 3 ) are commonly encountered in practice , in particular ( 3 ) which can be taken to reflect a `` greedy '' policy .condition ( 2 ) can be interpreted as the transition function being `` completely unknown '' ( it seems possible that similar results may hold under other , more general , assumptions regarding the prior distribution ) .note finally that the result can be extended to exact mse if mse is redefined so that it is also weighted by .the key message from our discussion is that there are commonly encountered circumstances where unsupervised methods can be very effective in creating an approximation architecture .however , given their simplicity , they can at the same time avoid the cost ( both in terms of computational complexity , and sampling required ) associated with more complex adaptation methods . 
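Two ingredients of the discussion above lend themselves to short sketches. First, the mechanics of PASA as described: a partition derived by recursively splitting cells named in a vector of split indices, and a stochastic-approximation estimate of cell visit frequencies, v <- v + eta * (x - v). The published splitting rule, step size and update schedule are not fully recoverable from this excerpt, so the details below are assumptions.

```python
# Sketch of the PASA ingredients described above: (i) a partition of the state
# space derived by recursively splitting cells named in rho, and (ii) a
# stochastic-approximation estimate of cell visit frequencies.  The splitting
# rule and constants are assumptions, not the published ones.
import numpy as np

def partition_from_rho(n_states, n_base, rho):
    """Start from n_base roughly equal cells; at step j split cell rho[j]
    as evenly as possible, the new half receiving index n_base + j."""
    cells = [list(chunk) for chunk in np.array_split(np.arange(n_states), n_base)]
    for c in rho:
        half = len(cells[c]) // 2
        cells.append(cells[c][half:])      # new cell
        cells[c] = cells[c][:half]         # old cell keeps its index
    cell_of = np.empty(n_states, dtype=int)
    for idx, members in enumerate(cells):
        cell_of[members] = idx
    return cell_of

class VisitEstimator:
    """Running estimate of cell visit frequencies: v <- v + eta * (x - v)."""
    def __init__(self, n_cells, eta=0.05):
        self.v = np.zeros(n_cells)
        self.eta = eta
    def observe(self, cell):
        x = np.zeros_like(self.v)
        x[cell] = 1.0
        self.v += self.eta * (x - self.v)

# usage: cell_of = partition_from_rho(10_000, 100, rho=[3, 57, 100])
#        est = VisitEstimator(n_cells=100 + 3); est.observe(cell_of[state])
```

Second, the O(sqrt(|S|)) size of the visited set that the error bound leans on, via the birthday-problem argument, can be checked empirically. The snippet below counts the states lying on cycles of a uniformly random deterministic transition map, a proxy for the set constructed in the argument under a near-deterministic policy; the agreement with sqrt(pi*|S|/2) illustrates the scaling but is not a substitute for the proof.

```python
# Monte Carlo illustration of the birthday-problem scaling: the number of
# cyclic states of a uniformly random deterministic map on |S| states grows
# like sqrt(|S|).  Illustrative only, not the formal argument.
import numpy as np

def num_cyclic_states(succ):
    """Count cyclic nodes of a functional graph by peeling in-degree-0 nodes."""
    n = len(succ)
    indeg = np.bincount(succ, minlength=n)
    stack = list(np.where(indeg == 0)[0])
    removed = np.zeros(n, dtype=bool)
    while stack:
        u = int(stack.pop())
        removed[u] = True
        v = int(succ[u])
        indeg[v] -= 1
        if indeg[v] == 0 and not removed[v]:
            stack.append(v)
    return int((~removed).sum())

rng = np.random.default_rng(0)
for n in (1_000, 10_000, 100_000):
    sizes = [num_cyclic_states(rng.integers(0, n, size=n)) for _ in range(50)]
    print(n, np.mean(sizes), np.sqrt(np.pi * n / 2))   # empirical mean vs sqrt(pi*n/2)
```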
in the setting of policy _ improvement _ these advantages have the potential to be particularly important , especially when dealing with large state spaces .some initial experimentation suggests that the pasa algorithm can have a significant impact on rl algorithm performance in both policy evaluation and policy improvement settings .the nature of the vf estimate generated by pasa and its associated rl algorithm is that the vf will be well estimated for states which are visited frequently under the existing policy .this does come at a cost , however , as estimates of the value of deviating from the current policy will be made less accurate .thus , even though or mse may be low , it does not immediately follow that an algorithm can use this to optimise its policy via standard policy iteration ( since the consequences of deviating from the current policy are less clearly represented ) .ultimately , however , the theoretical implications of the improved vf estimate in the context of policy iteration are complex , and would need to be the subject of further research .
when using reinforcement learning ( rl ) algorithms to evaluate a policy it is common , given a large state space , to introduce some form of approximation architecture for the value function ( vf ) . the exact form of this architecture can have a significant effect on the accuracy of the vf estimate , however , and determining a suitable approximation architecture can often be a highly complex task . consequently there is a large amount of interest in the potential for allowing rl algorithms to adaptively generate approximation architectures . we investigate a method of adapting approximation architectures which uses feedback regarding the frequency with which an agent has visited certain states to guide which areas of the state space to approximate with greater detail . this method is `` unsupervised '' in the sense that it makes no direct reference to reward or the vf estimate . we introduce an algorithm based upon this idea which adapts a state aggregation approximation architecture on - line . a common method of scoring a vf estimate is to weight the squared bellman error of each state - action by the probability of that state - action occurring . adopting this scoring method , and assuming states , we demonstrate theoretically that provided ( 1 ) the number of cells in the state aggregation architecture is of order or greater , ( 2 ) the policy and transition function are close to deterministic , and ( 3 ) the prior for the transition function is uniformly distributed our algorithm , used in conjunction with a suitable rl algorithm , can guarantee a score which is arbitrarily close to zero as becomes large . it is able to do this despite having only space complexity and negligible time complexity . the results take advantage of certain properties of the stationary distributions of markov chains .
bifurcations of unstable modes from the continuous spectrum underlie pattern formation in a wide variety of physical systems that can be described by hamiltonian 2 + 1 field theories .these patterns take the form of vortices in phase space , and are referred to as ` bgk modes ' ( bernstein , greene , kruskal ) in plasma physics and ` kelvin cats - eye ' vortices in two - dimensional , inviscid , incompressible fluid mechanics , and possess analogs in condensed matter physics , geophysical fluid dynamics , and astrophysics . the equations that describe these physical systems all share crucial features : a formulation as a noncanonical hamiltonian system , and that stable equilibria possess continuous spectra .before nonlinear patterns form in these systems , unstable modes bifurcate from their continuous spectra , a linear bifurcation we call the continuum hamiltonian hopf ( chh ) bifurcation that is an analog of the usual hamiltonian hopf ( hh ) bifurcation of finite - dimensional systems . in this chapterwe describe some mathematical aspects of the chh , continuing on from the material presented in .perturbation of point spectra in canonical , finite - degree - of - freedom hamiltonian systems is described by kren s theorem , which states that a necessary condition for a hh bifurcation is to have a collision between eigenvalues of opposite signature .a different situation arises in the infinite - dimensional case if the linear hamiltonian system has a continuous spectrum .a representative example of such a hamiltonian system is the vlasov - poisson equation , which when linearized about stable homogeneous equilibria gives rise to a linear hamiltonian system with pure continuous spectrum that can be brought into action - angle normal form .a definition of signature was given in these works for the continuous spectrum .the primary example here will be the vlasov - poisson equation , but the same structure is possessed by a large class of equations , examples being euler s equation for the two - dimensional fluid , where signature for shear flow continuous spectra was defined , and likewise for a model for electron temperature gradient turbulence .modulo technicalities , the behavior treated here is expected to cover a large class of systems . in sec .[ sec : mathchh ] we present the mathematical structure that we use to describe chh bifurcations , in particular we define structural stability and discuss the definition of signature for the continuous spectrum .one of the crucial parts of this framework is the choice of the norm on the perturbations to the time evolution operator , a step that requires selection of a banach space to be the phase space for solutions of the linearized system . 
in sec .[ sec : vlasov ] we apply this framework to the vlasov - poisson equation , presenting without proof results that appeared in .we show that the plasma two - stream instability is a chh bifurcation that can be viewed as a zero - frequency mode interacting with a negative energy continuous spectrum to bifurcate to instability , so that the continuous spectrum provides the ` other ' mode in the chh bifurcation .we show that if in the chosen banach space the of the hilbert transform is an unbounded operator , then equilibria of the vlasov - poisson equation are always structurally unstable .two examples of such banach spaces are and .if we restrict perturbations to those that are dynamically accessible , which precludes the possibility of altering the signature of the continuous spectrum , we prove that equilibria with positive signature only are structurally stable .section [ sec : canonical ] contains a description of the differences between canonical and noncanonical systems ; in particular , comparison to the work of kren on canonical hamiltonian systems is made and a simple demonstration of a bifurcation to instability in such a canonical system is described . in sec .[ sec : singlewave ] we present the idea that a certain mean field hamiltonian system , the single - wave model , is a nonlinear normal form for the chh bifurcation that describes the eventual nonlinear saturation of the resulting instability near criticality . we note , this model is derived by means of matched asymptotic expansions of a hamiltonian 2 + 1 mean field theory near a marginally stable equilibrium , and also by comparison with the results of numerical simulations . finally , in sec .[ sec : conclude ] we summarize and conclude .iii.b.1 of we presented a specific example of the chh bifurcation , the plasma two - stream instability .this theory gives a necessary condition for structural instability : collision of eigenvalues of opposite signature .we present a framework for bifurcations in noncanonical hamiltonian systems with continuous spectra. the key notion will be the generalization of signature to the continuous spectrum , which is prevalent in the linear infinite - dimensional hamiltonian systems that undergo the chh bifurcation .we first set the stage by discussing structural stability .now we consider structural stability of linear noncanonical hamiltonian systems with continuous spectrum .the dynamical variable is assumed to be a member of a function space .we are given a hamiltonian functional ( of ) , which is ( typically ) an unbounded quadratic functional on , and a noncanonical poisson bracket , which will be bilinear , antisymmetric , and satisfy the jacobi identity and which in this chapter will always be of lie - poisson form , see .hamilton s equations are ( see the previous chapter , ) : _ t=\{,h}=. here is the time evolution operator , which by assumption is a linear operator from to itself .this equation can also be written : _t==[eq : noncanon ] where is the cosymplectic operator of the bracket , and is a linear operator derived from using the bracket .care must be taken when using this formulation as the operator is often not defined as an operator on , and only the product takes values in . 
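The evolution equation labelled ([eq:noncanon]) above does not survive in this excerpt; a hedged reconstruction, with generic symbols standing in for the chapter's elided notation, is

\partial_t \zeta \;=\; \{\zeta, H\} \;=\; \mathcal{T}\,\zeta \;=\; \mathcal{J}\,\mathcal{L}\,\zeta , \qquad H[\zeta] \;=\; \tfrac{1}{2}\,\langle \zeta , \mathcal{L}\,\zeta \rangle ,

where \mathcal{J} denotes the (noncanonical) cosymplectic operator of the bracket and \mathcal{L} the linear operator obtained from the quadratic Hamiltonian, so that \mathcal{T} = \mathcal{J}\mathcal{L}; the caveat in the text is then that the individual factors need not be well-defined operators on the phase space even when their product is.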
the process of canonization , which reformulates eq .( [ eq : noncanon ] ) in terms of a canonical cosymplectic operator , which is bounded and invertible , eliminates this difficulty .the operator ( and hence the linear hamiltonian system ) is _ spectrally stable _ if the spectrum of is contained in the imaginary axis , .this is equivalent to the condition that the spectrum is in the closed lower half plane , i.e. , because the spectrum satisfies the property implies , a property that comes from the hamiltonian structure .solutions of spectrally stable systems grow at most sub - exponentially .we consider now a family of hamiltonian systems that depend continuously on some parameter , and look for changes in the stability of the hamiltonian system as the parameter varies .such a family can be generated in many ways .one common scenario is for the linear hamiltonian system to come from the linearization of some nonlinear hamiltonian system about an equilibrium solution . in that case , the bracket and hamiltonian functional come from linearizations of the original hamiltonian system , and both will depend on the equilibrium ( cf .both the bracket and hamiltonian are subject to change , however , and this malleability of the bracket gives the bifurcation theory of noncanonical hamiltonian systems a different character than that of canonical hamiltonian systems .bifurcations to instability occur when a spectrally stable system becomes spectrally unstable .the following definitions depend on our definition of the size of a perturbation , and we will have to choose some set of perturbations and a measure of this size in order to proceed . assuming that we have made this choice , we say that the spectrally stable time evolution operator is _ structurally stable _ if there exists some such that for all perturbations satisfying the operator is spectrally stable , where is our chosen measure for perturbations of .otherwise we say that is _ structurally unstable_. the theory will depend on the choice of allowable perturbations and norm .let the family of hamiltonian systems be parametrized by a parameter so that the time evolution operator for each system is .the continuity properties of the family will typically come from an induced norm from the banach space in which solutions to the equations live .for instance , one choice would be that is a bounded operator .other choices are relatively compact / bounded perturbations ( considered in in the context of canonical systems ) or the more general class of unbounded perturbations based on the _ gap _ , given by . 
some of the most physically interesting families of systems come from linearizing a hamiltonian system about a member of a continuous family of equilibrium states .if this is the context of our physical problem , it makes sense to only consider perturbations that leave this hamiltonian structure unchanged , for instance restricting to perturbations that change the equilibrium state only .our most important example will be the two - stream instability described by the vlasov - poisson system , which is of this type .a further restriction is to choose to perturb to equilibria that are dynamically accessible from the original equilibrium , which restricts to perturbations that can be produced using hamiltonian forces .the kren signature is essential in the description of hh bifurcations , as the collision between positive and negative energy modes is a necessary condition for the existence of a hh bifurcation .it is straightforward to compute the energy signature of modes in the finite - dimensional case as one can simply use the hamiltonian function . in the infinite - dimensional casethis is complicated by the presence of the continuous spectrum .the continuous analogs of discrete eigenmodes , the so - called generalized - eigenfunctions , are distributions whose hamiltonian is not defined , and another approach , based on the the theory of normal forms , is required to attach a signature to the continuous spectrum . the linear theory of finite - dimensional hamiltonian systems is organized around normal forms , and the proof of kren s theorem is based on the theory of normal forms . though the situation can be more complex in the infinite - dimensionsal case , for many important cases it is possible to find the appropriate normal form .the simplest normal forms arise when the time evolution operator of the hamiltonian system is diagonalizable , in which case the hamiltonian can be written in terms of action - angle variables and a canonical poisson bracket , for instance : h= _ du ( u ) ( u ) j(u ) = 12 _ du ( u ) ( u ) ( q(u)^2+p(u)^2 ) , [ eq : oscform ] where and in the second equality we introduce the alternative form in terms of the canonically conjugate oscillator variables and . here for all and defines the signature of the continuous spectrum corresponding to .the ability to define the signature for a given hamiltonian system is directly related to the ability to bring the system into a normal form , i.e. , to canonize and diagonalize it .diagonalization is equivalent to finding a transformation that converts the time - evolution operator of the system into a multiplication operator , viz ., to finding a linear operator such that is a multiplication operator .the systems described in this paper all tend to have non - normal time - evolution operators , so it may be surprising that it is ever possible to define such a transformation since the spectral theorem does not apply , but it turns out that for many important cases the time evolution operators are diagonalizable .operators with this property are called _ spectral operators _ , . by definition ,a spectral operator possesses a family of spectral projection operators defined on borel susbets that commute with the time evolution operator and reduce its spectrum , i.e. , .the signature of the subset is then defined by the sign of the hamiltonian operator restricted to members of , which can be positive , negative , or indefinite . 
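As a finite-dimensional point of comparison for the signature-collision picture invoked above, the textbook two-oscillator example can be checked numerically: a positive-energy and a negative-energy oscillator with Hamiltonian H = (w1/2)(p1^2 + q1^2) - (w2/2)(p2^2 + q2^2) + eps*q1*q2. Below a critical coupling the four eigenvalues of the linearized flow stay on the imaginary axis; above it they collide and leave the axis as a quartet, the usual HH bifurcation. This is a standard illustration, not the specific canonical example treated later in the chapter, and the frequencies are arbitrary.

```python
# Finite-dimensional warm-up for the Krein/HH picture: a positive-energy and a
# negative-energy oscillator coupled through eps*q1*q2.  As eps grows past a
# threshold, the imaginary eigenvalue pairs collide and leave the axis as a
# quartet.  Frequencies and couplings are illustrative.
import numpy as np

def eigenvalues(w1, w2, eps):
    # linearized flow for z = (q1, p1, q2, p2) of the Hamiltonian above
    A = np.array([[0.0,   w1, 0.0, 0.0],
                  [-w1,  0.0, -eps, 0.0],
                  [0.0,  0.0, 0.0, -w2],
                  [-eps, 0.0,  w2, 0.0]])
    return np.linalg.eigvals(A)

w1, w2 = 1.0, 1.1
eps_crit = abs(w1**2 - w2**2) / (2.0 * np.sqrt(w1 * w2))   # from the toy dispersion relation
for eps in (0.5 * eps_crit, eps_crit, 2.0 * eps_crit):
    lam = eigenvalues(w1, w2, eps)
    print(f"eps = {eps:.3f}   max Re(lambda) = {max(lam.real):.4f}")
```

The critical coupling used here, eps_c = |w1^2 - w2^2| / (2*sqrt(w1*w2)), follows from the dispersion relation (omega^2 - w1^2)(omega^2 - w2^2) = -w1*w2*eps^2 of this toy model.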
for a given point ,this is defined by taking limits of sets that contain .if a diagonalization is known , the application of this definition can be straightforward .consider eq .( [ eq : oscform ] ) with , .then is equal to the functions with support on and the energy is clearly positive .an equivalent definition involves the sign of the operator on the spectrum but the definition involving signature is more intuitive physically .now consider the vlasov - poisson system of sec .iii.b.1 of , as an example of the hamiltonian 2 + 1 field theories that exhibits the chh bifurcation .here we are interested in the properties of the equations linearized around a homogeneous equilibrium .these equations , repeated for completeness , are _ k , t= - i kp _k+if_0 k^-1 _ d|p _ k(|p , t)=:-_k _ k. [ fk ] our goal is to understand how the spectrum of changes under changes in . to this end we consider consider perturbations of the equilibrium function , .the time evolution operator of the perturbed system is , where : _ k= - if_0 k^-1 _ d|p _ k(|p , t ) .we use the operator norm induced by the norm on to measure the size of , which requires restriction to function spaces in which is bounded , for instance the sobolev spaces and .the quantity can be bounded by in the norms for and , because the integral operator is a bounded operator from those spaces to : _k. if we were to consider , then would be an unbounded operator and we would have to use some other means of determining its size ( see grillakis or kato ) . as mentioned earlier , the linearized vlasov - poisson system has a continuous spectrum .morrison constructed a transformation that diagonalizes the linearized vlasov - poisson system , converting the time evolution operator to a multiplication operator and determining the signature of the continuous spectrum .( see for a generalization of this method to other 2 + 1 hamiltonian field theories . ) this is based on the -transform , which for stable equilibria with no discrete modes is defined as follows : g[g]=_r g+_i [ f]=f - where ] satisfies : + ikug_k=0 , which gives a representation of the solution upon back - transforming . using this -transform and the theory of generating functions , it is possible to canonize and diagonalize the linearized vlasov - poisson system .canonization proceeds by transforming the poisson bracket of eq .( 1.35 ) of according to _ k=(_k+_-k ) _k=(_k-_-k ) .the canonically conjugate pair are real due to the reality of the distribution function , and the poisson bracket is in terms of them is \{f , g}=_dp ( - ) , where here and often henceforth we suppress the dependence .diagonalization is achieved using a mixed - variable generating function involving the -transform , a transformation that was inspired by van kampen s formal expression for continuum eigenmodes , _k(u , p)=iku ( u , p ) . where ( u , p):=_i(p)pv+_r(p)(u - p ) , which clearly bares the mark of the -transform .diagonalization proceeds from the following mixed - variable generating functional : _du p , which leads to a transformation to the new variables , q== [ ] = = [ p ] . 
under direct substitution into the hamiltonian andmaking use of identities derived in ( see also ) in terms of the new variables the hamiltonian becomes h[q , p]=_du ( u)|ku|(q^2+p^2 ) , [ vpdia ] where is the signature of the continuous spectrum with frequency .the hamiltonian of ( [ vpdia ] ) , that for a continuum of uncoupled harmonic oscillators , is the normal form for the linearized vlasov - poisson system .this transformation can be defined only in reference frames where , which can always be made true by galilean shift .therefore the signature changes only when the sign of changes . to illustrate this signature , consider two special cases , that of a maxwellian distribution , , and that of a bi - maxwellian distribution , ( where normalization is not important ) .the maxwellian distribution has one maximum and therefore it has only positive signature . on the other hand, the bi - maxwellian has three extrema ( see fig . [ bigaus ] ) and two signature changes .the penrose criterion , which was introduced in the previous chapter , clearly demonstrates the role that signature plays in transitions to instability for the linearized vlasov - poisson system .this criterion is that the winding number of the image of the real line under the map is equal to when is stable .suppose that we have a family of equilibria that depend continuously on some parameter , and is stable for some values of the parameter and unstable for others .in order for the stability to change , the penrose plot must increase its winding number from to .for this to happen , the penrose plot must cross the origin at the bifurcation point .we call these crossing points critical states . a simple technique to compute the winding number is to draw a ray from the origin to infinity and to count the number of intersections with the contour , accounting for orientation by adding for a positive orientation and subtracting for a negative orientation .one counts the number of zeros of for which and adds them with a positive sign if is positive , a crossing of the penrose plot from the upper half plane to the lower half plane , a negative sign if is negative , a crossing from the lower half plane to the upper half plane , and zero if is not a crossing of the real axis , a tangency .we begin by choosing the phase space of the linearized vlasov - poisson system to be . in this phase space ,the induced norm on is proportional to the of .this choice puts a restriction on the ability of perturbations to affect the signature of ; at a point a perturbation must have norm at least to induce a signature change at , viz . |f_0|_k .furthermore , the other part of the penrose plot , ] . at such a point ,the addition of a generic function to will cause the penrose plot to intersect the real axis transversely , and such a perturbation can be used to cause instability .if the system is perturbed so that the tangency becomes a pair of transverse intersections , then the winding number of the penrose plot would jump to 1 and the system would be unstable .figure [ gausscrit ] illustrates a critical penrose plot for a bifurcation at .another critical state occurs when =0 ] , and the winding number will be positive . figure [ bigaus2 ] is a critical penrose plot corresponding to the bi - maxwellian distribution with the maximum stable separation . 
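The Penrose-plot construction described above is straightforward to reproduce numerically. The sketch below uses dimensionless units with the plasma frequency and thermal speed set to one, a fixed k, and the convention eps(k, u + i0) = 1 - k^{-2} [ PV int f0'(p)/(p-u) dp + i*pi*f0'(u) ]; the sign and normalization conventions, and the bi-Maxwellian separation and width, are assumptions that may differ from the chapter's. The winding number of the resulting curve about the origin counts the unstable roots.

```python
# Numerical sketch of a Penrose plot for a bi-Maxwellian f0, in dimensionless
# units with an assumed sign/normalization convention and illustrative values
# for the separation, width and wavenumber.
import numpy as np

k = 0.5
p = np.linspace(-12.0, 12.0, 4001)
dp = p[1] - p[0]

def bi_maxwellian(p, sep=2.0, width=1.0):
    f = np.exp(-(p - sep)**2 / (2 * width**2)) + np.exp(-(p + sep)**2 / (2 * width**2))
    return f / (np.sum(f) * dp)            # normalize to unit integral

f0 = bi_maxwellian(p)
df0 = np.gradient(f0, dp)

def epsilon_on_real_axis(u):
    """eps(k, u + i0) = 1 - k^{-2} [ PV int df0/(p-u) dp + i*pi*df0(u) ]."""
    iu = np.argmin(np.abs(p - u))
    integrand = (df0 - df0[iu]) / (p - u)   # subtraction removes the singularity
    integrand[iu] = 0.0                     # true limit ~ f0''(u); negligible here
    pv = np.sum(integrand) * dp + df0[iu] * np.log((p[-1] - u) / (u - p[0]))
    return 1.0 - (pv + 1j * np.pi * df0[iu]) / k**2

u_grid = np.linspace(-10.0, 10.0, 2000)
curve = np.array([epsilon_on_real_axis(u) for u in u_grid])
winding = np.round(np.sum(np.diff(np.unwrap(np.angle(curve)))) / (2 * np.pi))
print("winding number (number of unstable roots):", int(winding))
```

Increasing the separation of the two humps, or decreasing k, pushes the plot across the origin and the winding number jumps to a positive value, which is the transition to instability discussed in the text.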
to understand and interpret these bifurcations we must understand the signature of the embedded modes at the critical state and also of the continuous spectrum .the energy signature of an embedded mode with frequency is given by .consider the bifurcation .assume that the embedded mode is _ not _ a zero frequency mode .then we claim that if then and if then <0 .this is demonstrated in using the analyticity of the plasma dispersion function .suppose the opposite were true , that and , then a small perturbation would generically decrease the winding number rather than increase it .this would imply the existence of poles in the upper half plane , violating the analyticity of the plasma dispersion function .this implies that the signature of the inflection point mode is always the same as the signature of the surrounding continuous spectrum .the fact that this bifurcation occurs when there is only positive signature may seem counterintuitive , but it is due to the fact that negative signature can be added in the neighborhood of the inflection point mode with an infinitesimal perturbation . when we restrict to dynamically accessible perturbations , which we do in a later section , this bifurcation will disappear . in the bifurcation , the signature of the embedded mode and the continuous spectrum surrounding it are always indefinite , either the embedded mode is at frequency , then the embedded mode has zero energy and the signature surrounding it is negative ( the embedded mode is always in a valley of the distribution function , which can be seen again using analyticity and the perturbation introduced in the next section ( cf . ) ) , or there is a change in the signature of the continuous spectrum .there is no reference frame in which the signature of the continuous spectrum _ and _ embedded mode are definite .we will prove that if the perturbation function is some homogeneous and the space is , then every equilibrium distribution function is structurally unstable to an infinitesimal perturbation . underthis choice of , | ] is order one at a zero of .such a perturbation can turn any point where into a point where >0 ] , respectively . in the space function has norm .if we choose and , then the terms that do not involve are all smaller than . 
with these choices , satisfies ( 0)&=&2-(h+e^-1/h)(|h+e^-1/h| )+ & & + ( 3h+e^-1/h)(|3h+e^-1/h| ) + & = & 2+o(hh ) .if we choose and , then for any we can choose an such that and , and for .the perturbation arbitrarily moves the crossings of the real axis of the penrose plot of .if we use this perturbation to move crossings from the positive imaginary axis to the negative real axis , we can increase the winding number of the penrose plot , thus causing instability .therefore the existence of this implies that any equilibrium is structurally unstable in both the spaces and .[ structuralinstability ] a stable equilibrium distribution is structurally unstable under perturbations of the equilibrium in the banach spaces and .thus we emphasize that we can always construct a perturbation to that makes our linearized vlasov - poisson system unstable .for the special case of the maxwellian distribution , fig .[ pert]a shows the perturbed derivative of the distribution function and fig .[ pert]b shows the penrose plot of the unstable perturbed system .observe the two crossings created by the perturbation on the positive axis as well as the negative crossing arising from the unboundedness of the perturbation .theorem [ structuralinstability ] expresses the fact that in the norm , signature changes give rise to unstable modes under infinitesimal perturbations combined with the fact that a signature change can be induced in the neighborhood of any maximum of . in the next sectionwe will demonstrate the role of signature more explicitly by restricting to dynamically accessible perturbations .as we have stated , the signature of the continuous spectrum in the vlasov - poisson system is . in , is bounded by , which means that most points of the continuous spectrum can not change signature under infinitesimal perturbations , the exception being near points where vanishes .all signature changes can be prevented by restricting to perturbations of that are _ dynamically accessible _ from , as we shall explain .the solution of any of the mean - field hamiltonian field theories that have been described here can be written as a composition an initial condition with a symplectic map , where describes the single particle characteristics ( see ) .we say that two functions and are dynamically accessible from each other if there exists some symplectic map such that , i.e. , is a symplectic rearrangement of and vice versa . in this workwe only study perturbations of that preserve homogeneity .it is impossible to construct a dynamically accessible perturbation of in a finite spatial domain that preserves spatial homogeneity . to see this we write a rearrangement as , where is a function of alone .because =1 ] .equation ( [ eq : canonical ] ) is a hamiltonian system with hamiltonian functional ] .the two signs occur because the real axis is a branch cut , which is known to be a consequence of continuous spectra in systems of this type .observe , the same occurs for the caldeira - leggett model in ( [ cldr2 ] ) .after collision , the number of discrete eigenvalues the emerge can be counted in a straightforward way .for example , consider the penrose plot of fig .[ gausscrit ] that depicts a bifurcation at criticality . 
if the used for fig .[ gausscrit ] is replaced by , a one parameter perturbation that matches at , then when instability sets in the point of tangency will move with so that there are two intersections of the real axis giving rise to a winding number of unity , which signals the instability .generically this will give a complex eigenvalue where , with both real and imaginary parts of depending on the bifurcation parameter .a similar penrose argument reveals that there is also a root in the lower half plane , bringing our eigenvalue count to two , with corresponding to decay . in these plots is assumed to be fixed , but associated with a given is canonical pair , , which can be traced back to and .here each labels a degree of freedom , which has two associated eigenvalues : a mode or degree of freedom , determined by its wavenumber , has two dimensions , corresponding to amplitude and phase .replacing by in gives the remaining two eigenvalues , .thus , the chh is a bifurcation to a quartet , , and after bifurcation the structure is identical to that of the ordinary hh bifurcation .tractability often arises in problems because of assumptions of symmetry , e.g. , the homogeneity of the equilibrium simplifies the vlasov problem and the symmetry in the jeans problem of sec .ii.b.3 of allowed an explicit solution of the dispersion relation ( 25 ) .thus , the question arises , what happens if we symmetrize the chh bifurcation discussed above ? if , with the upper portion of fig .[ gausscrit ] unchanged , then we obtain a plot that is reflection symmetric about the -axis with two osculating points . under parameterchange to instability , both curves must cross and using the ray counting procedure discussed in the in sec .[ sec : vlasov ] this causes the winding number to jump by 2 .thus , for symmetric with , bifurcating eigenvalues occur in pairs and after bifurcation we have an octet , characteristic of a degenerate chh .next consider the bifurcation with the imposed symmetry for all control parameter values with criticality at as depicted in fig .[ bigaus2 ] . because of the symmetry for all near .the bifurcation can be instigated either by fixing and varying or by setting to a value for which the crossing of fig .[ bigaus2 ] becomes negative and then varying until .either way , it follows that with the imposition of this symmetry the solution of the dispersion relation , , must have the following form : ^2= g(k , ) [ ssbif ] where the function is real .this is seen by separating the dispersion relation into real and imaginary parts , ( k,)&=&1 + 1k^2 _ dp [ split ] + & = & 1+_dp -i _ dp , where .then , with the assumption that is antisymmetric in its argument and splitting the imaginary part of ( [ split ] ) into symmetric and anti - symmetric parts yields & & u_i_dp = _ dp + & & = 2u_i u_r_dp. [ integral ] this expression must vanish when is a root , which implies that , , or the integral equals zero .if we assume that is nonzero , then either vanishes or the integral of ( [ integral ] ) vanishes . 
in general , even with the assumed symmetry , we do not expect the integral to vanish ; the condition for the existence of an embedded mode does not reference this integral in any way , and the imaginary part of the dispersion relation at criticality for such , only depends on the value of at the frequency of the mode .therefore , at the bifurcation point this integral does not appear .note , the case of the degenerate octet discussed above is not forbidden by this argument , due to the potential vanishing of the integral , which would allow for both and to be nonzero .from such a state , further variation of will lead to a branch of solutions in the upper half plane . also note that for the vlasov - jeans instability , where the sign of the interaction is reversed , the integral ( [ integral ] ) has a positive integrand for the maxwellian distribution andtherefore can not vanish . from the above discussion about symmetry , it is clear that at criticality , say at , implies discrete zero frequency eigenvalues , while as increases implies two pure imaginary eigenvalues , indicating exponential growth and decay .in fact , the situation is precisely like the dispersion relation of ( 22 ) for the multi - fluid example of . upon properly counting eigenvalues as above ,we see that after the bifurcation there are in fact two growing and two decaying eigenvalues .we note that an attempt to use the usual marginality relations for determining the eigenvalues , _r(k,_r)=0 = - , [ margin ] at , will be indeterminant because both the numerator and denominator of vanish . as we have seen in degenerate steady state ( ss ) bifurcations happen in finite systems when symmetry is imposed .we call any ss bifurcation in the presence of a continuous spectrum , a continuum steady state bifurcation ( css ). if one breaks the symmetry , then generically as increases the bifurcation is a chh bifurcation . for this casegenerally does not vanish and equations ( [ margin ] ) apply .counting eigenvalues gives the chh quartet .one might be fooled into thinking a change of frames , a doppler shift , would make the symmetric and non symmetric bifucations identical , but this is not the case .galilean frame shifting the degenerate css , say by a speed , replaces ( [ ssbif ] ) by a dispersion relation of the form ; thus , unlike the nonsymmetric case the real parts of the frequencies do not depend on .a goal of linearized theories is to predict weakly nonlinear behavior .indeed , bifurcation theory in dissipative systems has achieved great success in this regard . in particular , for finite - dimensional systems rigorous center manifold theorems allow one to reliably track bifurcated solutions into the nonlinear regime and , in some instances , obtain saturated values . for infinite - dimensional systems various normal forms , such as the ginzburg landau equationadequately describe pattern formation due to the appearance of a single mode of instability in a wide variety of dissipative problems . 
in hamiltonian systemsthe situation is more complicated ; the lack of dissipation creates a greater challenge because dimensional reduction is not so accommodating .however , for finite - dimensional hamiltonian systems there is a long history of perturbation/ averaging techniques for near integrable systems , systems with adiabatic invariants , etc .techniques that may lead to nonlinear normal forms .similarly , techniques have been developed for infinite - dimensional hamiltonian systems , particularly in the context of single field 1 + 1 models .however , the combination of nonlinearity together with the type of continuous spectrum treated here and in provide a distinctively more difficult challenge .this challenge is met by the single - wave ( sw ) model , an infinite - dimensional hamiltonian system that describes the behavior near threshold and subsequent nonlinear evolution of a discrete mode that emerges from the continuous spectrum .the sw model was originally derived in plasma physics , then ( re)discovered in various fields of inquiry , ranging from fluid mechanics , galactic dynamics , and condensed matter physics .the presence of the continuous spectrum , which is responsible for landau damping on the linear level , causes conventional perturbation analyses to fail because of singularities that occur at all orders of perturbation .however , in , it was shown by a suitable matched asymptotic analysis , how the single - wave model emerges from the large class of 2 + 1 theories of sec .iii of .an essential ingredient for this asymptotic reduction is that these hamiltonian systems have a continuous spectrum in the linear stability problem , arising not from an infinite spatial domain but from singular resonances along curves in phase space ( e.g. , the wave - particle resonances in the plasma problem or critical levels in fluid mechanics ) .thus , the sw model describes nonlinear consequences of the chh and css bifurcations .in particular , the sw model describes a range of universal phenomena , some of which have been rediscovered in different contexts . for a bifurcation to instability , the model features the so - called trapping scaling dictating the saturation amplitude , and the cats - eye or phase space hole structures that characterize the resulting phase - space patterns .an example of this is shown in fig .[ swm ] , which depicts the phase space pattern and temporal fate of the singe - wave ( bifurcated mode ) amplitude .the sw model also gives a description of nonlinear landau damping , i.e. 
, how such damping can be arrested by nonlinearity . an in - depth description of the sw model is beyond the scope of the present contribution , but we comment that in addition to the normal form that aligns with the chh bifurcation there is also a degenerate form associated with the css bifurcation . we refer the reader to ( notably sec . v ) for further details . we presented a mathematical account of chh bifurcations in 2 + 1 hamiltonian continuous media field theories . we presented a mathematical framework in which we describe the structural stability of equilibria of hamiltonian systems , whose most important ingredient was a method for attaching signature to the continuous spectrum . we presented an application of this framework to the vlasov - poisson equation , demonstrating that the two - stream instability can be interpreted as a positive energy mode interacting with a negative energy continuous spectrum , and that all equilibria are structurally unstable in banach spaces that are not strong enough to prevent infinitesimal perturbations from altering the signature of the continuous spectrum . if we restrict to dynamically accessible perturbations , which by construction can not affect the signature , then only those equilibria with both positive and negative signature are structurally unstable . in the last parts of this paper we examined the difference between canonical and noncanonical hamiltonian systems and also the idea that the single - wave model is a normal ( degenerate ) form for the chh ( css ) bifurcation that describes its nonlinear evolution . these processes underlie phase space pattern formation in the 2 + 1 theories that we are interested in , and explaining these patterns and the structure of the phase space of these systems provides strong motivation for this work and further research . * acknowledgements . * hospitality of the gfd summer program , held at the woods hole oceanographic institution , is greatly appreciated . gih and pjm were supported by usdoe grant nos . de - fg02-er53223 and de - fg02 - 04er54742 , respectively .
building on the development of , bifurcation of unstable modes that emerge from continuous spectra in a class of infinite - dimensional noncanonical hamiltonian systems is investigated . of main interest is a bifurcation termed the continuum hamiltonian hopf ( chh ) bifurcation , which is an infinite - dimensional analog of the usual hamiltonian hopf ( hh ) bifurcation . necessary notions pertaining to spectra , structural stability , signature of the continuous spectra , and normal forms are described . the theory developed is applicable to a wide class of 2 + 1 noncanonical hamiltonian matter models , but the specific example of the vlasov - poisson system linearized about homogeneous ( spatially independent ) equilibria is treated in detail . for this example , structural ( in)stability is established in an appropriate functional analytic setting , and two kinds of bifurcations are considered , one at infinite and one at finite wavenumber . after defining and describing the notion of dynamical accessibility , kren - like theorems regarding signature and bifurcation are proven . in addition , a canonical hamiltonian example , composed of a negative energy oscillator coupled to a heat bath , is treated and our development is compared to previous work in this context . a careful counting of eigenvalues , with and without symmetry , is undertaken , leading to the definition of a degenerate continuum steady state ( css ) bifurcation . it is described how the chh and css bifurcations can be viewed as linear normal forms associated with the nonlinear single - wave model described in , which is a companion to the present work and that of .
citations of scientific papers are considered to be an indicator of papers' importance and relevance , and a simple count of the number of citations a paper receives is often used as a gauge of its impact . however , it is also widely believed that citations are affected by factors besides pure scientific content , including the journal a paper appears in , author name recognition , and social effects . one substantial and well - documented effect is the so - called cumulative advantage or preferential attachment bias , under which papers that have received many citations in the past are expected to receive more in future , independent of content . a simple mathematical model of this effect was proposed by price , building on earlier work by yule and simon , in which paper content is ignored completely and citation is determined solely by preferential attachment plus stochastic effects . within this model , the expected number of citations a paper receives is a function only of its date of publication , measured from the start of the topic or body of literature in which the paper falls , and shows a strong `` first - mover effect '' under which the first - published papers are expected to receive many more citations on average than those that come after them . indeed the variation in citation number as a function of publication date is normally far wider than the stochastic variation among papers published at the same time . in a previous paper we compared the predictions of price's model against citation data for papers from several fields and found good agreement in some , though not all , cases . this suggests that pure citation numbers may not be a good indicator of paper impact , since much of their variation can be predicted from publication dates , without reference to paper content . instead , therefore , we proposed an alternative measure of impact : rather than looking for papers with high total citation counts , we should look for papers with counts higher than expected given their date of publication . since publication date is measured from the start of a field or topic , and since different topics have different start dates , one should only use this method to compare papers within topics . the appropriate calculation is to count the citations a paper has received and compare that figure to the counts for other papers on the same topic that were published around the same time . in our work we used a simple z - score to perform the comparison : we calculate the mean number of citations and its standard deviation for papers published in a window close to the date of a paper of interest , then calculate the number of standard deviations by which that paper's citation count differs from the mean . papers with high z - scores we conjecture to be of particular interest within the field . one promising feature of this approach is that the papers it highlights are not necessarily those with the largest numbers of citations . the most highly cited papers are almost always the earliest in a field , in part because of the first - mover effect but also because they have had longer to accumulate citations . more recent papers usually have fewer citations , but they may still have high z - scores if the count of their citations significantly exceeds the average among their peers . thus the method allows us to identify papers that may not yet have received much attention but will do so ( we conjecture ) in the future .
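written out , in symbols that are ours rather than the original paper's , the score assigned to paper i is
....
z_i = \frac{c_i - \mu_i}{\sigma_i}
....
where c_i is the number of citations paper i has received , and \mu_i and \sigma_i are the mean and standard deviation of the citation counts of papers published in a window around the same time .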
in our previous study , we used this method to identify some specific papers that we believed would later turn out to have high impact . here we revisit those predictions to see whether the papers identified have indeed been successful . to quantify success , we look again at citation counts , and to minimize the preferential attachment bias we compare them against randomly drawn control groups of papers that had the same numbers of citations at the time of the original study . as we show , our predictions appear to be borne out : the papers previously identified as potential future successes have received substantially more attention than their peers over the intervening years . in this section we briefly review some of the results from our previous paper , which we will refer to as paper 1 . in paper 1 we examined citation data from several different fields in physics and biology , but made specific predictions for one field in particular , the field of interdisciplinary physics known as network science . this field is an attractive one for study because it has a clear beginning , that is , a clear date before which there was essentially no published work on the topic within the physics literature ( although there was plenty of work in other fields ) , and a clear beginning is crucial for the theory developed in the paper to be correct and applicable . ( it is also the present author's primary field of research , which was another reason for choosing it . ) it is again on papers within network science that we focus here . in paper 1 we assembled a data set of 2407 papers on network science published over a ten year period , starting with the recognized first publications in 1998 and continuing until 2008 . the data set consisted of papers in physics and related areas that cited one or more of five highly - cited early works in the field , but excluding review articles and book chapters ( for which citation patterns are distinctly different from those of research articles ) . we then calculated the mean and standard deviation of the number of citations received by those papers as a function of their time of publication within a moving window . crucially , however , our theoretical studies indicate that `` time '' in this context is most correctly defined not in terms of real time , but in terms of the number of papers published within the field . if n papers have been published in the field in total ( with n = 2407 in this case ) , then the `` time '' of publication of the ith paper is defined to be t_i = i / n , where papers are numbered in the order of their publication . thus t_i runs from a lowest value of 1 / n for the first paper in the field to 1 for the most recent paper . it is in terms of this variable that we perform our averages . armed with these results , we then calculate a z - score for each paper as described in the introduction : we take the count of citations received by a paper , subtract the mean for papers published around the same time , and divide by the standard deviation . figure [ fig : outliers ] reproduces a plot from paper 1 of z - scores for papers in the data set .
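as a concrete illustration , the following python sketch computes windowed z - scores over publication order ; the window size , and the choice to define the window as a fixed number of papers on either side , are illustrative assumptions rather than values taken from paper 1 .
....
import numpy as np

def citation_z_scores(citations, window=100):
    """windowed z - scores for papers sorted by publication order.

    citations : citation counts ordered by publication date, so the normalized
                publication time t_i = i / n is implicit in the index
    window    : number of neighbouring papers on each side used for the local
                mean and standard deviation (an illustrative value)
    """
    c = np.asarray(citations, dtype=float)
    n = len(c)
    z = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        peers = c[lo:hi]
        mu, sigma = peers.mean(), peers.std()
        if sigma > 0:
            z[i] = (c[i] - mu) / sigma
    return z

# papers with z > 2 would be the "papers to watch" plotted in fig. [fig:outliers]
....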
only z - scores that exceed 2 are shown , since these are the ones we are most interested in , corresponding to papers whose citation counts are significantly above the mean for their peers . as the figure shows , there are within this set a small number of papers that stand out as having particularly high z - scores . our suggestion was that these were papers to watch : even if they did not currently have a large number of citations , they would in future attract significant attention . table [ tab : winners ] lists twenty of the top papers by this measure from our 2008 data set along with their citation counts and z - scores . [ figure fig : outliers : z - scores for citations to papers in the data set of ref . each z - score is equal to the number of standard deviations by which the citations to the corresponding paper exceed the mean for papers published around the same time . only z - scores greater than two ( the dashed line ) are plotted . as described in the text , the `` time '' on the horizontal axis is measured by the number of papers published , not real time . ] looking at the papers in these 100 control sets , each drawn at random so that its papers started out with the same numbers of citations in 2008 as the fifty leading papers from our analysis , we now find the number of new citations each received in the last five years , the difference between their citation counts in 2013 and 2008 . calculating the median new citations for each set , we find that the average figure over all sets is lower by a factor of 15 than the median of 238 measured for the fifty leaders from our analysis . alternatively , as before , we can compute the ratio of new to old citations , again taking a median for each set , and we find an average figure not very different from the one we found for the entire data set ( perhaps suggesting that our assumption of linear preferential attachment was a reasonable one ) . as reported above , the equivalent figure for the set of fifty leading papers was more than twice the size , at 5.24 . by these measures it appears that the predictions of paper 1 are quite successful . papers identified using our method outperformed by a wide margin both the field at large and randomly drawn control groups that started out with the same number of citations . encouraged by these results , and capitalizing on the fact that we have a new data set of papers and their citations up to the year 2013 , we now apply our methods to the new data to make predictions about papers that will be highly cited in the next few years .
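for completeness , the control - group comparison used above can be sketched in python as follows ; the matching on 2008 citation counts and the use of 100 draws follow the description in the text , while everything else ( names , handling of papers with no exact match ) is an illustrative assumption .
....
import numpy as np

def control_group_medians(cites_2008, cites_2013, leaders, n_sets=100, seed=0):
    """median new citations (2013 minus 2008) for randomly drawn control sets.

    cites_2008, cites_2013 : dicts mapping paper id -> citation count
    leaders                : the fifty papers singled out by the z - score analysis
    each control set contains, for every leader, one randomly chosen paper
    that had the same number of citations in 2008.
    """
    rng = np.random.default_rng(seed)
    pool = [p for p in cites_2008 if p not in set(leaders)]
    medians = []
    for _ in range(n_sets):
        group = []
        for p in leaders:
            matches = [q for q in pool if cites_2008[q] == cites_2008[p]]
            if matches:
                group.append(matches[rng.integers(len(matches))])
        if group:
            new_cites = [cites_2013[q] - cites_2008[q] for q in group]
            medians.append(np.median(new_cites))
    return float(np.mean(medians))
....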
working with the entirety of our new 6976-paper data set , we again calculate the mean and standard deviation of the numbers of citations received as a function of time and look for the papers with the highest z - scores , those that exceed the mean for their publication date by the largest number of standard deviations . table [ tab : newwinners ] is the equivalent of table [ tab : winners ] for this new analysis , listing twenty of the papers with the highest z - scores within the field of network science . a few observations are worth making . first , note that the numbers of citations received by these papers are substantially greater than those received by the papers in table [ tab : winners ] at the time of our first round of predictions . all but one of them have over 100 citations already , putting them in the top few percent of the data set . this is probably in part a sign of the rapid growth of the field mentioned earlier : a more rapid rate of publication means more citations are being made , and hence more received , particularly by the most prominent papers . it is worth emphasizing , however , that each of the papers in the table earned its place by receiving significantly more citations , by many standard deviations , than other papers in the same field published around the same time . so the citation counts are not merely high , but anomalously so . moreover , the z - scores in table [ tab : newwinners ] are significantly higher than those in table [ tab : winners ] , so the margin by which the top papers outperform expectations has also grown . second , we note that while some of the papers in the list would be unsurprising to those familiar with the field of network science , such as the influential early papers by watts and strogatz and by barabási and albert , there are also some more recent papers listed , such as the papers by buldyrev _ et al . _ on interdependent networks or liu _ et al . _ on controllability , which received two of the highest z - scores in the table . our analysis suggests that these papers will have an outsize impact a few years down the road , relative to what one would expect given only their current numbers of citations . third , we notice the abundance in the list of papers published in the journal _ nature _ , and to a lesser extent _ science _ and _ proceedings of the national academy of sciences _ . the publication venues of the papers in our old list , table [ tab : winners ] , were more diverse . the predominance of these journals could just be coincidence , since the sample size of 20 is small enough to make such a thing plausible . but it is also possible that it is a real effect . perhaps these journals have over the last few years established a special name for themselves as attractive venues for publication of work in this particular field . or perhaps there has been a deliberate change in editorial policy that has resulted in more papers in the field being accepted for publication in these journals . alternatively , these journals may have pulled ahead of their competition in the accuracy of their peer review , so that they are better able to identify ( and hence publish ) papers that will be important in the field . or it may be that papers in these high - profile journals tend to be cited more than papers in other journals because they are more visible , and hence they are more likely to appear in our table .
or the truth might be a combination of all of these factors , and perhaps some others as well .in this paper we have revisited predictions made in 2008 of scientific papers in the field of network science that , according to those predictions , should receive above average numbers of citations , even though they may not yet have had many citations at that time .looking at those predictions five years on , we find that the papers in question have indeed received many more citations than the average paper in the field .indeed they have received substantially more even than comparable control groups of randomly selected papers that were equally well cited in 2008 .because of the so - called preferential attachment effect , one can quite easily identify papers that will do well just by looking for ones that have done well in the past .but the papers identified by our analysis did well even when one controls for this effect , indicating that the analysis is capable of identifying papers that will be successful not merely because they already have many citations .we hope , though we can not prove , that the additional factors driving this success are factors connected with paper quality , such as originality and importance of research and quality of presentation .we have also applied our methods to contemporary data , from 2013 , to make a new round of predictions , identifying papers that , according to the metrics we have developed , are expected to see above average citation in the next few years , when compared to other papers that currently have the same number of citations .it will be interesting to see whether these predictions in turn come true .we have been slightly selective about the papers listed in the table .the actual top 20 papers included a number written by the present author , most of which we omitted from the list out of modesty .the table lists the twenty papers with the highest -scores among those that remain. the full list of highest - scoring papers , including those written by yours truly , can be found online at ` http://www.umich.edu/~mejn/citation ` .
in an article written five years ago , we described a method for predicting which scientific papers will be highly cited in the future , even if they are currently not highly cited . applying the method to real citation data we made predictions about papers we believed would end up being well cited . here we revisit those predictions , five years on , to see how well we did . among the over 2000 papers in our original data set , we examine the fifty that , by the measures of our previous study , were predicted to do best and we find that they have indeed received substantially more citations in the intervening years than other papers , even after controlling for the number of prior citations . on average these top fifty papers have received 23 times as many citations in the last five years as the average paper in the data set as a whole , and 15 times as many as the average paper in a randomly drawn control group that started out with the same number of citations . applying our prediction technique to current data , we also make new predictions of papers that we believe will be well cited in the next few years .
grid has emerged as a modern trend in computing , aiming to support the sharing and coordinated use of diverse resources by virtual organizations ( vos ) in order to solve their common problems . it was originally driven by scientific , and especially high - energy physics ( hep ) , communities . hep experiments are a classic example of large , globally distributed vos whose participants are scattered over many institutions and collaborate on studies of experimental data , primarily on data processing and analysis . our background is specifically in the development of large - scale , globally distributed systems for hep experiments . we apply grid technologies to our systems and develop higher - level , community - specific grid services ( generally defined in ) , currently for the two collider experiments at fermilab , d0 and cdf . these two experiments are actually the largest currently running hep experiments , each having over five hundred users and planning to repeatedly analyze petabyte - scale data . the success of distributed computing for the experiments depends on many factors . in hep computing , which remains a principal application domain for the grid as a whole , jobs are _ data - intensive _ and therefore _ data handling _ is one of the most important factors . for hep experiments such as d0 and cdf , data handling is the center of the meta - computing grid system . the sam data handling system was originally developed for the d0 collaboration and is currently also used by cdf . the system is described in detail elsewhere ( see , for example , and references therein ) . here , we only note some of the advanced features of the system : the ability to coordinate multiple concurrent accesses to storage systems , and global data routing and replication . given the ability to distribute data on demand globally , we face the similar challenge of distributing the processing of the data . generally , for this purpose we need global job scheduling and information management , which is a term we prefer over `` monitoring '' as we strive to include configuration management , resource description , and logging . in recent years , we have been working on the samgrid project , which addresses the grid needs of the experiments ; our current focus is on the jobs and information management ( jim ) part , which is to complement the sam data handling system with services for job submission , brokering and execution as well as distributed monitoring . together , sam and jim form samgrid , a `` vo - specific '' grid system . in this paper , we present some key ideas from our system's design . for job management per se , we collaborate with the condor team to enhance the middleware so as to enable scheduling of data - intensive jobs with flexible resource description . for information , we focus on describing the sites' resources in tree - like structures of xml , with subsequent projections onto the condor classified advertisements ( classad ) framework , monitoring with globus mds , and other tools . the rest of the paper is organized as follows . we discuss the relevant job scheduling design issues and condor enhancements in section [ sec : jobs ] . in section [ sec : info ] , we describe configuration management and monitoring .
in section [ sec : status ] , we present the status of the project and interfaces with the experiments' computing environment and fabric ; we conclude in section [ sec : summary ] . a key area in grid computing is job management , which typically includes planning of a job's dependencies , selection of the execution cluster(s ) for the job , scheduling of the job at the cluster(s ) , and ensuring reliable submission and execution . we base our solution on the condor - g framework , a powerful grid middleware commonly used for distributed computing . thanks to the particle physics data grid ( ppdg ) collaboration , we have been able to work with the condor team to enhance the framework and then implement higher - level functions on top of it . in this section , we first summarize the general enhancements and then proceed to describing how we schedule data - intensive jobs for d0 and cdf . we have designed three principal enhancements for condor - g . these have all been successfully implemented by the condor team :
* originally , condor - g required users to either specify which grid site would run a job , or to use condor - g's glidein technology . we have enabled condor - g to use a matchmaking service to automatically select sites for users .
* we have extended the classad language , used by the matchmaking framework to describe resources , to include externally supplied functions to be evaluated at match time . this allows the matchmaker to base its decision not only on explicitly advertised properties but also on opaque logic that is not statically expressible in a classad . other uses include incorporation of information that is prohibitively expensive to publish in a classad , such as local storage contents or lists of site - authorized grid users .
* we removed the restriction that the job submission client had to be on the same machine as the queuing system , and enabled the client to securely communicate with the queue across a network , thus creating a multi - tiered job submission architecture .
fundamentally , these changes are sufficient to form a multi - user , multi - site job scheduling system for generic jobs . thus , a novelty of our design is that we use standard grid technologies to create a highly reusable framework for job scheduling , as opposed to writing our own resource broker , which would be specific to our experiments . in the remainder of this section we describe higher - level features for job management that are particularly important for data - intensive applications . the classic matchmaking service ( mms ) gathers information about the resources in the form of published classads . this allows for a general and flexible framework for resource management ( e.g. matching of jobs and resources ) , see . there is one important limitation in that scheme , however , which has to do with the fact that the entities ( jobs and resources ) have to be able to express all their relevant properties upfront and _ irrespective of the other party _ . recall that our primary goal was to enable co - scheduling of jobs and data . in data - intensive computing , jobs are associated with long lists of data items ( such as files ) to be processed by the job . similarly , resources are associated with long lists of data items located , in the network sense , near them . for example , jobs requesting thousands of files and sites holding hundreds of thousands of files are not uncommon in production in the sam system . therefore , it would not be scalable to explicitly publish all the properties of jobs and resources in the classads .
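before turning to how samgrid augments the ranking step , a toy matchmaking loop helps fix ideas ; this is a python sketch of the general mms cycle , not condor's actual classad implementation , and the attribute names are our own .
....
def match_job(job, resource_ads, rank_fn):
    """minimal matchmaking sketch: pick the advertised resource with the best rank.

    job          : dict describing the request, e.g. {'dataset': 'example-dataset'}
    resource_ads : list of dicts, one per advertised cluster
    rank_fn      : externally supplied ranking function evaluated at match time;
                   in samgrid it consults the site's data - handling station
    """
    feasible = [ad for ad in resource_ads if ad.get('gatekeeper') is not None]
    if not feasible:
        return None
    return max(feasible, key=lambda ad: rank_fn(job, ad))
....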
furthermore , in order to rank jobs at a resource ( or resources for a job ) , we wish to include additional information that could not be expressed in the classads at the time of publishing , i.e. , before the match . rather , we can analyze such information during the match , _ in the context of the job request _ . for example , a site may prefer a job based on similar , _ already scheduled _ , data handling requests . another example of useful additional information , not specific to data - intensive computing , is the pre - authorization of the job's owner with the participating site , by means of e.g. looking up the user's grid subject in the site's * gridmapfile * . such a pre - authorization is not a replacement for security , but rather a means of protecting the matchmaker from some blunders that otherwise tend to occur in practice . the original mms scheme allowed for such additional information to be incorporated only in the _ claiming _ phase , i.e. , after the match , when the job's scheduler actually contacts the machine . in the samgrid design , we augment information processing by the mms with the ability _ to query _ the resources with a job in the context . this is pictured in figure [ fig : jim_architecture ] by arrows extending from the resource selector to the resources , specifically to the local data handling agents , in the course of matching . it is implemented by means of an externally supplied ranking function whose evaluation involves remote call invocation at the resources' sites . specifically , the resource classads in our design contain pointers to additional information providers ( data handling servers called stations ) :
....
station_id = foo
....
and the body of the mms ranking function called from the job classad ,
....
rank = fun(job_dataset, other.station_id)
....
includes logic similar to this pseudo - code :
....
station = resolve(station_id, ...)
return station->get_preference(job_dataset, ...)
....
in the next subsection , we discuss how we believe this will improve the co - scheduling of jobs with the data . the co - scheduling of jobs and data has always been critical for the sam system , where at least a subset of hep analysis jobs ( as of the time of writing , the dominating class ) have their latencies dominated by data access . please note that the sam system already implemented the advanced feature of retrieving multi - file datasets asynchronously with respect to the user jobs ; this was done initially at the cluster level rather than at the grid level . generally , with data - intensive jobs we attempt to minimize the time to retrieve any missing data and the time to store output data , as these times propagate into the job's overall latency . as we try to minimize the grid job latency , we ensure that the design of our system , figure [ fig : jim_architecture ] , is such that the data handling latencies will be taken into account in the process of job matching . this is a principal point of the present paper : while we do not yet possess sufficient real statistics that would justify certain design decisions , we stress that our system design enables the various strategies and supports the considerations listed below . in the minimally intelligent implementation , we prefer sites that contain most of the job's data .
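the externally supplied ranking function can be sketched in python as follows ; it mirrors the pseudo - code above , with `resolve` standing in for whatever lookup maps a station_id from the resource classad to a client stub ( an assumption of this sketch , not a documented samgrid api ) .
....
def rank(job_dataset, resource_ad, resolve):
    """rank a (job, resource) pair by querying the site's sam station.

    instead of relying only on attributes published in the resource classad,
    the matchmaker calls back to the data - handling station at the site and
    asks how strongly it prefers this dataset, e.g. based on how much of the
    data is already local or on similar requests already scheduled there.
    """
    station = resolve(resource_ad['station_id'])     # remote call invocation
    return station.get_preference(job_dataset)       # site - computed score
....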
our design does not rely on a replica catalogue because in the general case , we need _ local _ metrics computed by and available from the data handling system : * the network speeds for connections to the sources of any missing data ; * the depths of the queues of data requests for both input and output ; * the network speeds for connections to the nearest destination of the output files .it is important that network speeds be provided by a high - level service in the data handling rather than by a low - level network sensor , for reasons similar to those why having e.g. a 56kbps connection to the isp does not necessarily enable one to actually download files from the internet with that speed .naturally , being able to submit jobs and schedule them more or less efficiently is necessary but not sufficient for a grid system .one has to understand how resources can be described for such decision making , as well as provide a framework for monitoring of participating clusters and user jobs .we consider these and other aspects of _ information management _ to be closely related to issues of grid configuration . in the jim project, we have developed a uniform configuration management framework that allows for generic grid services instantiation , which in turn gives flexibility in the design of the grid as well as inter - operability on the grid ( see below ) . in this section ,we briefly introduce the main configuration framework and then project it onto the various samgrid services having to do with information .our main proposal is that grid sites be configured using a site - oriented schema , which describes both resources and services , and that grid instantiation at the sites be derived from these site configurations .we are not proposing any particular site schema at this time , although we hope for the grid community as a whole to arrive at a common schema in the future which will allow reasonable variations such that various grids are still instantiatable .figure [ fig : fw ] shows configuration derivation in the course of instantiation of a grid at a site .the site configuration is created using a meta - configurator similar to one we propose below . in our framework, we create site and _ all _ other configurations by a universal tool which we call a _ meta - configurator _ , or configurator of configurators .the idea is to separate the process of querying the user for values of attributes from the _ schema _ that describes what those attributes are , how they should be queried , how to guess the default values , and how to derive values of attributes from those of other attributes .any concrete configurator uses a concrete schema to ask the relevant questions to the end user ( site administrator ) in order to produce that site s configuration .any particular schema is in turn derived from a meta - schema .thus , the end configuration can be represented as : where is a particular configuration , is the configuration operation , is a particular schema reflecting certain design , is the meta - schema , and are the inputs of the designer and the user , respectively . in our framework ,configurations and schemas are structures of the same type , which we choose to be trees of nodes each containing a set of distinct attributes .our choice has been influenced by the successes of the xml technologies and , naturally , we use xml for representing these objects . 
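written schematically , with symbols that are ours rather than the paper's , the derivation just described reads
....
c = conf( s , i_user ) ,    s = conf( m , i_designer )
....
where c is the end configuration , conf is the configuration operation , s is a particular schema reflecting a certain design , m is the meta - schema , and i_designer and i_user are the inputs of the designer and the user , respectively .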
to exemplify , assume that in our _ present _ design , a grid * site * consists of one or more * clusters * , each having a name and a ( homogeneous ) architecture , as well as exactly one * gatekeeper * for grid access . an example configuration is :
....
<?xml version='1.0' ?>
<site name='fnal' schema_version='v0_3'>
  <cluster name='samadams' architecture='linux'>
    <gatekeeper ... >
  </cluster>
</site>
....
this configuration was produced by the following schema
....
<?xml version='1.0' ?>
<site cardinalitymin='1' cardinalitymax='1' name='inquire-default,fnal'>
  <cluster cardinalitymin='1' name='set,clustername,inquire'
           architecture='inquire-default,exec,uname'/>
</site>
....
in an interactive session with the site administrator as follows :
....
what is the name of the site ? [fnal]: <return>
what is the name of cluster at the site 'fnal' ? samadams
what is the architecture of cluster 'samadams' [linux] ? ...
....
when the schema changes or a new cluster is created at the site , the administrator merely needs to re - run the tool and answer the simple questions again . in the jim project , we have designed the grid job management as follows . we advertise the participating grid clusters to an information collector , and grid jobs are matched with clusters ( resources ) based on certain criteria , primarily having to do with the data availability at the sites . we have implemented this job management using condor - g with extensions that we have designed together with the condor team . for the job management to work as described in section [ sec : jobs ] , we need to advertise the clusters together with the * gatekeepers * as the means for condor to actually schedule and execute the grid job at the remote site . thus , our _ present _ design requires that each advertisement contain a cluster , a gatekeeper , a sam station ( for jobs actually intending to process data ) and a few other attributes that we omit here . our advertisement software then selects from the configuration tree all patterns containing these attributes and applies a classad generation algorithm to each pattern . the selection of the subtrees that are classad candidates is based on the xquery language . our queries are generic enough to allow for _ design evolution _ , i.e. to be resilient to some modifications in the schema . when new attributes are added to an element in the schema , or when the very structure of the tree changes due to insertion of a new element , our advertisement service will continue to advertise these clusters with or without the new information ( depending on how the advertiser itself is configured ) , but the important factor is that the site will continue to be available to our grid . for example , assume that one cluster at the site from subsection [ sec : metaconf ] now has a new grid gatekeeper mechanism from globus toolkit 3 , in addition to the old one :
....
<?xml version='1.0' ?>
<site name='fnal' schema_version='v0_3'>
  <cluster name='samadams' architecture='linux'>
    <grid_accesses>
      <gatekeeper ... >
      <gatekeeper-gtk3 ... >
    </grid_accesses>
    ...
....
assume further that our particular grid is not yet capable of taking advantage of the new middleware and we continue to be interested in the old * gatekeeper * from each cluster .our pattern was such that a * gatekeeper * is a descendant of the * cluster * so we continue to generate meaningful classads and match jobs with this site s cluster(s ) .in addition to advertising ( pushing ) of resource information for the purpose of job matching , we deploy globus mds-2 for pull - based retrieval of information about the clusters and activities ( jobs and more , such as data access requests ) associated with them .this allows us to enable web - based monitoring , primarily by humans , for performance and troubleshooting .we introduce ( or redefine in the context of our project ) concepts of * cluster * , * station * etc , and map them onto the ldap attributes in the oid space assigned to our project ( the fnal organization , to be exact ) by the iana .we also create additional branches for the mds information tree as to represent our concepts and their relations .we derive the values of the _ dn _s on the information tree from the site configuration . in this framework , it is truly straightforward to use xslt ( or a straight xml - parsing library ) to select the names and other attributes of the relevant pieces of configuration .for example , if the site has two clusters defined in the configuration file , our software will automatically instantiate two branches for the information tree .note that the resulting tree may of course be distributed within the site as we decide e.g. to run an mds server at each cluster , which is a separate degree of freedom .we have been mentioning that there are in fact several other grid projects developing high - level grid solutions ; some of the most noteworthy include the european datagrid , the crossgrid , and the nordugrid .inter - operability of grids ( or of solutions on the grid if you prefer ) is a well - recognized issue in the community .the high energy and nuclear physics intergrid and grid inter - operability projects are some of the most prominent efforts in this area .as we have pointed out in the introduction , we believe that inter - operability must include _ the ability to instantiate and maintain multiple grid service suites at sites_. a good example of inter - operability in this sense is given by various cooperating web browsers which all understand the user s bookmarks , mail preferences etc .. of course , each browser may give a different look and feel to its `` bookmarks '' menu , and otherwise treat them in entirely different ways , yet most browsers tend to save the bookmarks in the common html format , which has _ de facto _ become the standard for bookmarks . our framework , proposed and described in this section , is a concrete means to facilitate this aspect of inter - operability .multiple grid solutions can be instantiated using a grid - neutral , site - oriented configuration in an xml - based format .we can go one step further and envisage that the various grids instantiated at a site have additional , separate configuration spaces that can easily be conglomerated into a _ grid instantiation database_. in practice , this will allow the administrators e.g. 
, to list all the globus gatekeepers with one simple query .to provide a complete computing solution for the experiments , one must integrate grid - level services with those on the fabric .ideally , grid - level scheduling complements , rather than interferes with , that of local batch systems . likewise , grid - level monitoring should provide services that are additional ( orthogonal ) to those developed at the fabric s facilities ( i.e. , monitoring of clusters batch systems , storage systems etc . ) .our experiments have customized local environments .cdf has been successfully using cluster analysis facility ( caf ) , see .d0 has been using mcrunjob , a workflow manager which is also part of the cms computing insfrastructure .an important part of the samgrid project is to integrate its job and information services with these environments . for job management ,we have implemented gram - compliant job managers which pass control from grid - gram to each of these two systems ( which in turn are on top of the various batch systems ) .likewise , for the purposes of ( job ) monitoring , these systems supply information about their jobs to the xml databases which we deploy on the boundary between the grid and the fabric .( for resource monitoring , these advertise their various properties using the frameworks described above ) .we delivered a complete , integrated prototype of samgrid in the fall of 2002 .our initial testbed linked 11 sites ( 5 d0 and 6 cdf ) and the basic services of grid job submission , brokering and monitoring .our near future plans include further work on the grid - fabric interface and more features for troubleshooting and error recovery .we have presented the two key components of the samgrid , a sam - based datagrid being used by the run ii experiments at fnal . to the data handling capabilities of sam ,we add grid job scheduling and brokering , as well as information processing and monitoring .we use the standard middleware so as to maximize the reusability of our design . as to the information management, we have developed a unified framework for configuration management in xml , from where we explore resource advertisement , monitoring and other directions such as service instantiation .we are deploying samgrid at the time of writing this paper and learning from the new experiences .this work is sponsored in part by doe contract no .de - ac02 - 76ch03000 .our collaboration takes place as part of the doc particle physics data grid ( ppdg ) , collaboratory scidac project .we thank the many participants from the fermilab computing division , the d0 and cdf experiments , and the members of the condor team for fruitful discussions as well as the development of our software and other software that we rely upon . v. white _ et al ._ , `` d0 data handling '' , plenary talk at the international symposium on computing in high energy physics ( chep ) 2001 , september 2001 , beijing china , in proceedings . l. carpenter __ , `` sam overview and operational experience at the dzero experiment '' , ibidem .l. lueking _et al . _ , ` resource management in sam and the d0 particle physics data grid' , ibidem .i. terekhov _, `` distributed data access and resource management in the d0 sam system '' in proceedings of 10-th international symposium on high performance distributed computing ( hpdc-10 ) , ieee press , july 2001 ,san - fransisco , ca v. white and i. 
terekhov , `` sam for d0 - a fully distributed data access system '' , talk given at vii international workshop on advanced computing and analysis techniques in physics research ( acat-2000 ) , october , 2000 , batavia , il .g. garzoglio , `` the sam - grid project : architecture and plan '' , in proceedings of _ the viii international workshop on advanced computing and analysis techniques in physics research ( acat-2002 ) _ , june 2002 , moscow , russia .r. raman , m. livny and m. solomon , `` matchmaking : distributed resource management for high throughput computing '' , in proceedings of the seventh ieee international symposium on high performance distributed computing , july 28 - 31 , 1998 , chicago , il .i. terekhov and v. white , `` distributed data access in the sequential access model in the d0 run ii data handling at fermilab '' , in proceedings of the 9-th international symposium on high performance distributed computing ( hpdc-9 ) , august 2000 , pittsburgh , pa
we describe some of the key aspects of the samgrid system , used by the d0 and cdf experiments at fermilab . building on the sustained success of the data handling part of samgrid , we have developed new services for job and information management . our job management is rooted in condor - g and uses enhancements that are of general applicability for hep grids . our information system is based on a uniform framework for configuration management that uses xml data representation and processing .
the development of methods for reconstructing the topologies of real networks from the observable data , is of great interest in modern network science .topology , in combination with the inter - node interactions , determine the function of complex networks .reconstruction methods are often developed within the contexts of particular fields , relying on domain - specific approaches .these include gene regulations , metabolic networks , neuroscience , or social networks . on the other hand ,theoretical reconstruction concepts are based on paradigmatic dynamical models such as phase oscillators , some of which have been experimentally tested . in a similar context ,techniques for detecting hidden nodes in networks are being investigated .a class of general reconstruction methods exploit the time series obtained by quantifying the network behaviour .some of them assume the knowledge of the internal interaction functions , while others do not .network couplings can be examined via information - theoretic approach .advantage of these methods is that they are _ non - invasive _ , i.e. require no interfering with the on - going network dynamics . reconstruction methods are often based on examining the inter - node correlations . on the other hand ,universal network models such as eq.[eq-1 ] , are based on expressing the time derivative of a node as a combination of a local and a coupling term .inspired by this , we propose a non - invasive reconstruction method , departing from the concept of _ derivative - variable correlation_. our method assumes the dynamical time series to be available as measured observables , and the interaction functions to be known .we present our theory in a general form , extending our initial results .as we show , our approach allows for the reconstruction precision to be estimated , indicating the level of noise in the data , or possible mismatches in the knowledge of the interaction functions .we consider a network of nodes , described by their dynamical states .its time evolution is governed by : where the function represents the local dynamics for each node , and models the action of the node on other nodes .the network topology is encoded in the adjacency matrix , specifying the strength with which the node acts on the node .we assume that : ( _ i _ ) the interaction functions and are precisely known , and ( _ ii _ ) a discrete trajectory consisting of values is known for each node .the measurements of are separated by the uniform observation interval defining the time series resolution .we seek to reconstruct the unknown adjacency matrix under these two assumptions .the starting point is to define the following correlation matrices , using the observable whose role will be explained later : where denotes time - averaging . inserting into the eq.[eq-1 ] , we obtain the following linear relation between the correlation matrices : which is our main reconstruction equation , applicable to any network with dynamics given by eq.[eq-1 ] .time series are to be understood as the available observables , allowing for matrices in eq.[eq-2 ] to be computed for any . for the infinitely long dynamical data ,reconstruction is always correct for any generic . for short timeseries , representing experimentally realistic scenarios , the reconstruction is always approximate , and its precision crucially depends on the choice of , for which the subtraction of averages is not needed . ] . 
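a minimal python sketch of the reconstruction step , assuming the additive form of eq.[eq-1 ] described above ( the time derivative of each node given by its local term plus adjacency - weighted coupling terms ) and writing the tunable observable as h ; the symbol names are ours and may differ from those of eq.[eq-1]-eq.[eq-3 ] .
....
import numpy as np

def reconstruct_adjacency(x, dt, f, g, h):
    """derivative - variable correlation reconstruction (sketch).

    x : array of shape (m, N), time series of N nodes at m equally spaced times
    f : local dynamics, g : coupling function, h : tunable observable,
        all applied elementwise to x
    assumes dx_i/dt = f(x_i) + sum_j a_ij g(x_j); time - averaging against h gives
    <xdot_i h(x_j)> = <f(x_i) h(x_j)> + sum_k a_ik <g(x_k) h(x_j)>,
    a linear matrix relation that is solved for the adjacency matrix a.
    """
    m = len(x)
    xdot = np.gradient(x, dt, axis=0)      # numerical time derivative
    B = xdot.T @ h(x) / m                  # <xdot_i h(x_j)>
    F = f(x).T @ h(x) / m                  # <f(x_i) h(x_j)>
    G = g(x).T @ h(x) / m                  # <g(x_k) h(x_j)>
    return (B - F) @ np.linalg.inv(G)      # a = (B - F) G^{-1}
....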
to be able to quantify the reconstruction precision , we need to equip ourselves with the adequate measures . to differentiate from the original adjacency matrix , we term the reconstructed matrix , and express the matrix error as : of course , each is computed according to eq.[eq-3 ] in correspondence with the chosen .however , since the matrix is unknown , we have to introduce another precision measure , based only on the available data . a natural test for each to quantify how well does it reproduce the original data .we apply the following procedure : start the dynamics from and run it using until ; denote thus obtained values ; re - start the run from and run until , accordingly obtaining , and so on .the discrepancy between the reconstructed time series and the original is an explicit measure of the reconstruction precision , based solely on the available data .we name it trajectory error , and define it as follows : different choices of the observable lead to different , with different precisions expressed through errors and . as we show below, these two error measures are related , meaning that small suggests small .the function hence plays the role of a tunable parameter , which can be used to optimize the reconstruction . by considering many -s obtained through varying , we can single out -s with the minimal to obtain the best reconstruction . to illustrate the implementation of our method ,we begin by constructing a network with nodes by putting 17 directed links between randomly chosen node pairs . as our first example , we consider the hansel - sompolinsky model , describing the firing rates in neural populations .it is defined by the interaction functions and which are fixed for all nodes .the adjacency matrix is specified by assigning positive and negative weights to the networks links , randomly chosen from ] with the log - uniform probability .this is implemented by selecting each fourier coefficient via , where is a random number between 0 and 1 .a typical function thus constructed for each choice of and will have all 10 fourier components , but one ( or at most few ) will be well pronounced .functions are then normalized to the range of time series values . given relatively smooth timeseries , lower harmonics are expected to generally extract more features from data , which is why we limit ourselves to the first 10 harmonics .to improve the stability of the derivative estimates , we base our calculations on the set of time points . for each , the matrix is obtained via eq.[eq-2 ] and eq.[eq-3 ] , with the invertibility of each checked by virtue of the singular value decomposition . the errors and are then calculated for each , and reported as a scatter plot in fig.[figure-3]a . and , in relation with the first and second example , in ( a ) and ( b ) , respectively .best 1% of -s with the minimal , are represented by the dots left of the vertical dashed line.,scaledwidth=80.0% ] the main result of this analysis is a clear correlation between and , particularly pronounced for smaller values of errors .this confirms that the best are among those that display minimal .in order to identify the best reconstruction and estimate its precision , we focus on the 1% of matrices with the minimal , as illustrated in fig.[figure-3]a by the dashed vertical line . the variability of within this group can be viewed as the reconstruction precision .small variability indicates the invariance of to the choice of , which suggests a good reconstruction. 
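to make the trajectory error concrete , a sketch follows ; simple euler steps are used purely for illustration , and the per - point normalization is our choice rather than the paper's .
....
import numpy as np

def trajectory_error(x, dt, a_hat, f, g, substeps=20):
    """discrepancy between measured data and data re - generated with a_hat.

    for every measured point x(t_k), integrate the reconstructed model over one
    observation interval and compare with the next measured point x(t_{k+1});
    the candidate adjacency matrices with the smallest error are the ones kept.
    """
    m, _ = x.shape
    err = 0.0
    h_step = dt / substeps
    for k in range(m - 1):
        y = x[k].copy()
        for _ in range(substeps):
            y = y + h_step * (f(y) + a_hat @ g(y))   # euler step of eq.[eq-1]
        err += np.linalg.norm(y - x[k + 1])
    return err / (m - 1)
....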
large variability of implies its drastic dependence on , indicating a bad precision .we quantify this by computing the mean and the standard deviation for each matrix element of within this group , and identify them , respectively , with the best reconstruction value and its precision . in fig.[figure-4]awe report the original and the best reconstruction , along with the respective errorbar for each matrix element , describing the reconstruction precision .the reconstruction is indeed very good for both zero and non - zero weights ( i.e. for non - linked and linked node pairs in the network ) .( circles ) , and the best reconstruction ( crosses ) , with the corresponding errorbars . first and second example in ( a ) and ( b ) , respectively.,scaledwidth=70.0% ] as our second example , we consider a dynamical model describing gene interactions , with the coupling functions of two types : activation and repression .local interaction are again modeled via .the adjacency matrix is based on the same network , and defined by assigning a random weight from $ ] for each link , as shown in fig.[figure-1]b .the nodes 1 - 3 ( respectively , 4 - 6 ) act activatorily ( repressively ) on all nodes that they act upon .again , we run the dynamics from to , obtaining another set of time series with 20 points ( not shown ) .the same reconstruction procedure is applied , yielding the vs. scatter plot shown in fig.[figure-3]b .using the same procedure , we obtain the best reconstruction and show it in fig.[figure-4]b .again , the precision is very good .note that our method thus applies also in cases of strongly non - linear interaction functions , which capture most real dynamical scenarios . in order to model the real applicability of our method ,we test its robustness to possible violations of the initial assumptions , focusing on the first example ( fig.[figure-1]a ) .we start with the scenario when the interaction functions are not precisely known we assume a small mismatch in their mathematical form ( _ model error _ ) . instead of the original and , we take and .the measurements of now can not be expected to converge to zero .nevertheless , we apply the same procedure , and find ( a weaker ) correlation between and , as shown ( by black dots ) in fig.[figure-5]a . to see the worsening of the precision clearly , grey dots show the original non - perturbed scatter plot from fig.[figure-3]a . dashedvertical line shows the part of the error which is unavoidable due to the presence of the perturbation . 
we compute it as the difference between the original and the perturbed interaction functions , averaged over the range of time series .we isolate this part of the trajectory error , since it is not due to the properties of our method .its size indicates that the remaining part of the is similar to the occurring in the non - perturbed case .this demonstrates that our method works optimally even under perturbed conditions .the worsening of the reconstruction precision is what expected from the nature of the perturbation , meaning that our method makes no additional `` unexpected '' errors in the perturbed conditions .the best reconstruction and the corresponding errorbars are computed as before and shown in fig.[figure-6]a .the errorbars are larger and the reconstruction precision worsens .still , the essential fraction of elements of are within the respective errorbars .the decline of precision is controllable , since it is clearly signalized by the size of the errorbars .this could be used to generalize the method in the direction of detecting the interaction functions as well .each best would be accompanied by the best guesses for and , meaning that different network topologies , reproducing the data equally well , would come in combination with different and . and ( black dots ) , for the model error scenario in ( a ) and the observation error scenario in ( b ) .original non - perturbed scatter plot from fig.[figure-3]a is shown in gray for comparison .vertical dashed lines depicts the part of the error which is unavoidable in the presence of the perturbation ( see text).,scaledwidth=80.0% ] to test the second assumption of our theory , we take the time series to be not precisely known due to _observation errors_. uncorrelated white noise of intensity is added , perturbing each value of the time series . instead of the original data , we now consider one realization of the noisy data , as illustrated in fig.[figure-2 ] ( interaction functions are the original ones ) .the central problem now is the computation of the derivatives , which are extremely sensitive to the noise .we employ the savitzky - golay smoothing filter as a standard technique of data de - noising , which allows for a good derivative estimation .since the time series are short , we apply the smallest smoothing parameters . the reconstruction procedure is applied as before , using smoothed derivatives to compute matrix in eq.[eq-2 ] .the scatter plot of vs is shown in fig.[figure-5]b , again compared with the original plot , and with the perturbation - induced unavoidable error indicated by the vertical line .the worsening of the precision is of a similar magnitude as in the model error scenario .the best reconstruction and the corresponding errorbars are reported in fig.[figure-6]b .note that the precision is again correctly reflected by the size of the errorbars . in two cases from fig.[figure-6 ], the precision does not decline uniformly for all links .the analysis above shows that our reconstruction method is reasonably robust to both model and observation error .we found this robustness to be qualitatively independent of the realization of both these errors .( circles ) , and the best reconstruction ( crosses ) , with the corresponding errorbars ( first example ) .model error and observation error scenarios in ( a ) and ( b ) , respectively.,scaledwidth=70.0% ]we presented a method of reconstructing the topology of a general network from the dynamical time series with known interaction functions . 
throughconceptually novel approach , our method is formulated as an inverse problem using linear systems formalism . rather than relying on the correlations between the observed variables , it is based on the correlations between the variables and their time derivatives .our method involves two important factors : it applies to the data that is relatively short , i.e. of the length comparable to the network size and to the characteristic time scale ; and , it yields the errorbars as a by - product , correctly reflecting the reconstruction precision .on the other hand , our theory relies on knowing ( at least approximately ) both the dynamical model eq.[eq-1 ] and the interaction functions .while these assumptions might limit the immediate applicability of our method , our idea presents a conceptual novelty , potentially leading towards a more general and applicable reconstruction method .for example , we expect applicability in studies of interacting neurons in slices or cultures , where the properties of the individual neurons ( i.e. functions and ) can be relatively well established , while the adjacency matrix is unknown .in contrast , the application to problems such as brain fmri activity patterns , where even the existence of a dynamical model like eq.[eq-1 ] is questionable , appears at present not possible .our theory includes choosing the tunable observables , which allow for the reconstruction to be optimized within the constrains of any given data .the question of constructing the optimal which extracts the _ maximal _extractable information , remains open .our algorithm can be reiterated : once the 1% of the best -s are found , one can examine the functions leading to those 1% , and repeat the procedure , sampling only the neighboring portion of the functional space .alternatively , various evolutionary optimization algorithms could be used . an important factor for the methods applicability is the dynamical regime behind the time series , which could be regular ( periodic ) or chaotic ( transiental ) .the former case is less reconstructible , because of a poor coverage of the phase space . in particular , the synchronized dynamics , being essentially non - sensitive to the variations of the coupling coefficients , offers very little insight into the structure of the underlying network .increasing the time series length is obviously of no help .in contrast , the latter case contains more network information , and is potentially more reconstructible .another issue is the applicability to large networks , and in particular , the dependence of precision on relationship between and .this relates to the possibility of quantifying the network information content of the available data .relevant here is also the performance of our method for varying types of network topologies ( random , scalefree etc . ) .this is a matter of ongoing research to be reported elsewhere .another limitation of our theory comes from the form of eq.[eq-1 ] .a similar theory could be developed for alternative scenarios , such as specified by both source and target nodes .the real challenge here are the networks with non - additive inter - node coupling ( i.e. 
(i.e. the dynamical contribution to a node is not a mere sum of the neighbours' inputs). the key practical problem is that the mathematical forms of the local dynamics and the interaction functions are not (precisely) known for many real networks, although for certain systems they can be inferred with reasonable confidence. noise always hinders the reconstruction, especially via the derivative estimates. however, longer time series not only bring more information, but also allow for a better usage of smoothing. finally, we note that the network reconstruction problem is the opposite of the network design problem. our method could be employed to design a network that displays given dynamics. however, while any network reproducing the prescribed dynamics solves the design problem, in the reconstruction theory this creates the permanent issue of isolating the true network among all those that exhibit the same dynamics.

*acknowledgments.* work supported by the dfg via project for868, by arrs via program p1-0383 "complex networks" and via project j1-5454 "unravelling biological networks". work also supported by creative core fisnm-3330-13-500033 "simulations" funded by the european union, the european regional development fund. the operation is carried out within the framework of the operational programme for strengthening regional development potentials for the period 2007-2013, development priority 1: competitiveness and research excellence, priority guideline 1.1: improving the competitive skills and research excellence. we thank colleagues michael rosenblum and mogens h. jensen for useful discussions.

*competing financial interests.* the authors declare no competing financial interests.

*figure legends:*
figure 1: adjacency matrices of the two examined dynamical networks
figure 2: example of time series produced by the network of fig. [figure-1]a, including potential observation noise
figure 3: scatter plots of reconstruction errors for the two studied cases
figure 4: network reconstruction with error bars for the two cases
figure 5: scatter plots of reconstruction errors for the model error and observation error scenarios
figure 6: network reconstruction with error bars for the model error and observation error scenarios
a method of network reconstruction from dynamical time series is introduced, relying on the concept of derivative-variable correlation. using a tunable observable as a parameter, the reconstruction of any network with known interaction functions is formulated via a simple matrix equation. we suggest a procedure aimed at optimizing the reconstruction from time series of length comparable to the characteristic dynamical time scale. our method also provides a reliable precision estimate. we illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model and observation errors.
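to make the reconstruction idea above more concrete, the following is a minimal, self-contained sketch of a derivative-variable correlation estimate for an assumed additive model dx_i/dt = f(x_i) + sum_j a_ij h(x_j). the functions f, h and g, the network size and all numerical constants are illustrative assumptions, not the paper's settings, and the paper's exact estimator and optimization over the observable may differ.

```python
import numpy as np

# minimal sketch: reconstruct the adjacency matrix a of an assumed additive model
#   dx_i/dt = f(x_i) + sum_j a_ij h(x_j)
# from correlations between an observable g of the variables and the time
# derivatives. all choices below are illustrative assumptions.

rng = np.random.default_rng(0)
n, steps, dt = 6, 50000, 0.01
a_true = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(a_true, 0.0)

f = lambda x: -x          # local dynamics (assumed known)
h = np.tanh               # interaction function (assumed known)
g = lambda x: x           # tunable observable; here simply the identity

# generate a time series; a small stochastic drive keeps the phase space covered
# (regular or synchronized dynamics carries little network information)
x = np.empty((steps, n))
x[0] = rng.standard_normal(n)
for t in range(steps - 1):
    drift = f(x[t]) + h(x[t]) @ a_true.T
    x[t + 1] = x[t] + dt * drift + np.sqrt(dt) * 0.2 * rng.standard_normal(n)

# forward-difference derivative estimate, paired with the state at the earlier
# time so that the estimation noise stays uncorrelated with the observable
dx = (x[1:] - x[:-1]) / dt
xs = x[:-1]

# correlating (dx - f(x)) with g(x) gives  b = a @ c , a simple matrix equation
b = (dx - f(xs)).T @ g(xs) / len(xs)
c = h(xs).T @ g(xs) / len(xs)
a_est = b @ np.linalg.pinv(c)

print("largest absolute entry error:", np.max(np.abs(a_est - a_true)))
print("links recovered by thresholding at 0.5:",
      np.mean((a_est > 0.5) == (a_true > 0.5)))
```

a richer, tunable choice of g plays the role of the optimization parameter discussed above; the error-bar machinery of the paper is not reproduced in this sketch.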
the general perception is that kernel methods are not scalable . when it comes to large - scale nonlinear learning problems , the methods of choice so far are neural nets where theoretical understanding remains incomplete .are kernel methods really not scalable ? or is it simply because we have not tried hard enough , while neural nets have exploited sophisticated design of feature architectures , virtual example generation for dealing with invariance , stochastic gradient descent for efficient training , and gpus for further speedup ? a bottleneck in scaling up kernel methods is the storage and computation of the kernel matrix , , which is usually dense . storing the matrix requires space , and computing it takes operations , where is the number of data points and is the dimension .there have been many great attempts to scale up kernel methods , including efforts from numerical linear algebra , functional analysis , and numerical optimization perspectives .a common numerical linear algebra approach is to approximate the kernel matrix using low - rank factors , , with and rank .this low - rank approximation usually requires operations , and then subsequent kernel algorithms can directly operate on .many works , such as greedy basis selection techniques , nystrm approximation and incomplete cholesky decomposition , all followed this strategy . in practice, one observes that kernel methods with approximated kernel matrices often result in a few percentage of losses in performance .in fact , without further assumption on the regularity of the kernel matrix , the generalization ability after low - rank approximation is typically of the order , which implies that the rank needs to be nearly linear in the number of data points ! thus , in order for kernel methods to achieve the best generalization ability , the low - rank approximation based approaches quickly become impractical for big datasets due to their preprocessing time and memory requirement .random feature approximation is another popular approach for scaling up kernel methods .instead of approximating the kernel matrix , the method directly approximates the kernel function using explicit feature maps .the advantage of this approach is that the random feature matrix for data points can be computed in time using memory , where is the number of random features .subsequent algorithms then only operate on an matrix .similar to low - rank kernel matrix approximation approach , the generalization ability of random feature approach is of the order , which implies that the number of random features also needs to be .another common drawback of these two approaches is that it is not easy to adapt the solution from a small to a large .often one is interested in increasing the kernel matrix approximation rank or the number of random features to obtain a better generalization ability .then special procedures need to be designed to reuse the solution obtained from a small , which is not straightforward .another approach that addresses the scalability issue rises from optimization perspective .one general strategy is to solve the dual forms of kernel methods using coordinate or block - coordinate descent ( , ) . 
by doing so, each iteration of the algorithm only incurs computation and memory , where is the size of the parameter block .a second strategy is to perform functional gradient descent by looking at a batch of data points at a time ( , ) .thus , the computation and memory requirements are also and respectively in each iteration , where is the batch size .these approaches can easily change to a different without restarting the optimization and has no loss in generalization ability since they do not approximate the kernel matrix or function .however , a serious drawback of these approaches is that , without further approximation , all support vectors need to be kept for testing , which can be as big as the entire training set ! ( , kernel ridge regression and non - separable nonlinear classification problems . ) in summary , there exists a delicate trade - off between computation , memory and statistics if one wants to scale up kernel methods . inspired by various previous efforts , we propose a simple yet general strategy to scale up many kernel methods using a novel concept called `` _ _ doubly stochastic functional gradients _ _ '' .our method relies on the fact that most kernel methods can be expressed as convex optimization problems over functions in reproducing kernel hilbert spaces ( rkhs ) and solved via functional gradient descent .our algorithm proceeds by making _two unbiased _ stochastic approximations to the functional gradient , one using random training points and the other one using random features associated with the kernel , and then descending using this noisy functional gradient .the key intuitions behind our algorithm originate from * the property of stochastic gradient descent algorithm that as long as the stochastic gradient is unbiased , the convergence of the algorithm is guaranteed ; and * the property of pseudo - random number generators that the random samples can in fact be completely determined by an initial value ( a seed ) .we exploit these properties and enable kernel methods to achieve better balances between computation , memory and statistics .our method interestingly combines kernel methods , functional analysis , stochastic optimization and algorithmic trick , and it possesses a number of desiderata : + * generality and simplicity . *our approach applies to many kernel methods , such as kernel ridge regression , support vector machines , logistic regression , two - sample test , and many different types of kernels , such as shift - invariant kernels , polynomial kernels , general inner product kernels , and so on .the algorithm can be summarized in just a few lines of code ( algorithm 1 and 2 ) . for a different problem and kernel ,we just need to adapt the loss function and the random feature generator . +* flexibility .* different from previous uses of random features which typically prefix the number of features and then optimize over the feature weightings , our approach allows the number of random features , and hence the flexibility of the function class , to grow with the number of data points .this allows our method to be applicable to data streaming setting , which is not possible for previous random feature approach , and achieve the full potential of nonparametric methods . 
+* efficient computation .* the key computation of our method is evaluating the doubly stochastic functional gradient , which involves the generation of the random features with specific random seeds and the evaluation of these random features on the small batch of data points .for iteration , the computational complexity is . +* small memory . *the doubly stochasticity also allows us to avoid keeping the support vectors which becomes prohibitive in large - scale streaming setting .instead , we just need to keep a small program for regenerating the random features , and sample previously used random feature according to pre - specified random seeds . for iteration , the memory needed is independent of the dimension of the data .+ * theoretical guarantees . *we provide a novel and nontrivial analysis involving hilbert space martingale and a newly proved recurrence relation , and show that the estimator produced by our algorithm , which might be outside of the rkhs , converges to the optimal rkhs function . more specifically , both in expectation and with high probability , our algorithm can estimate the optimal function in the rkhs in the rate of , which are indeed optimal , and achieve a generalization bound of .the variance of the random features , introduced during our second approximation to the functional gradient , only contributes additively to the constant in the final convergence rate .these results are the first of the kind in kernel method literature , which can be of independent interest .+ * strong empirical performance .* our algorithm can readily scale kernel methods up to the regimes which are previously dominated by neural nets .we show that our method compares favorably to other scalable kernel methods in medium scale datasets , and to neural nets in big datasets such as 8 million handwritten digits from mnist , 2.3 million materials from molecularspace , and 1 million photos from imagenet using convolution features .our results suggest that kernel methods , theoretically well - grounded methods , can potentially replace neural nets in many large scale real - world problems where nonparametric estimation are needed .+ in the remainder , we will first introduce preliminaries on kernel methods and functional gradients .we will then describe our algorithm and provide both theoretical and empirical supports .kernel methods owe their name to the use of kernel functions , , which are symmetric positive definite ( pd ) , meaning that for all , and , and , we have .there is an intriguing duality between kernels and stochastic processes which will play a crucial role in our later algorithm design .more specifically , if is a pd kernel , then there exists a set , a measure on , and random feature from , such that essentially , the above integral representation relates the kernel function to a random process with measure .note that the integral representation may not be unique .for instance , the random process can be a gaussian process on with the sample function , and is simply the covariance function between two point and .if the kernel is also continuous and shift invariant , , for , then the integral representation specializes into a form characterized by inverse fourier transformation ( , ( * ? ? 
?* theorem 6.6 ) ) , + a continuous , real - valued , symmetric and shift - invariant function on is a pd kernel if and only if there is a finite non - negative measure on , such that } 2 \ , \cos(\omega^\top x + b)\ , \cos(\omega^\top x ' + b)\ , d \rbr{\pp(\omega ) \times \pp(b ) } , ] , and .+ for gaussian rbf kernel , , this yields a gaussian distribution with density proportional to ; for the laplace kernel , this yields a cauchy distribution ; and for the martern kernel , this yields the convolutions of the unit ball . similar representation where the explicit form of and are known can also be derived for rotation invariant kernel , , using fourier transformation on sphere . for polynomial kernels , , a random tensor sketching approachcan also be used .explicit random features have been designed for many other kernels , such as dot product kernel , additive / multiplicative class of homogeneous kernels , , hellinger s , , jensen - shannon s and intersection kernel , as well as kernels on abelian semigroups .we summarized these kernels with their explicit features and associated densities in table [ table : explicit_features ] .ll|c|c|c & kernel & & & + & gaussian & & & + & laplacian & & & + & cauchy & & & + & matrn & & & + & dot product & & & = \frac{1}{p^{n+1}} ] & + & intersection & & {j=1}^d ] & + & & & & + & & & & + & & & & + & & & & + & arc - cosine & & & + + + instead of finding the random process and function given a kernel , one can go the reverse direction , and construct kernels from random processes and functions ( , ) .[ thm : inverse_dual ] if for a nonnegative measure on and , each component from , then is a pd kernel . for instance , , where can be a random convolution of the input parametrized by , or ] with respect to is which is essentially a single data point approximation to the true functional gradient . furthermore , for any , we have .inspired by the duality between kernel functions and random processes , we can make an additional approximation to the stochastic functional gradient using a random feature sampled according to .more specifically , * doubly stochastic functional gradient .* let , then the doubly stochastic gradient of ] , and therefore , the corresponding ] , where is a convex function with , proposed a nonparametric estimator for the logarithm of the density ratio , , which is the solution of following convex optimization , + \ee_{p}[r^*(-\exp(f ) ) ] + \frac{\nu}{2}\|f\|_\hcal^2\end{aligned}\ ] ] where denotes the fenchel - legendre dual of , . in kullback - leibler ( kl )divergence , the .its fenchel - legendre dual is specifically , the optimization becomes - \ee_{x\sim p}[f(x ) ] + \frac{\nu}{2}\|f\|_\hcal^2 \\ & = & 2\ee_{z , x , y}\bigg[\delta_1(z)\exp(f(y ) ) - \delta_{0}(z)f(x)\bigg ] + \frac{\nu}{2}\|f\|_\hcal^2 .\end{aligned}\ ] ] where .denote , we have and the the step 5 in algorithm . 1 .becomes in particular , the and are not sampled in pair , they are sampled independently from and respectively . proposed another convex optimization based on whose solution is a nonparametric estimator for the density ratio . designed for novelty detection .similarly , the doubly stochastic gradients algorithm is also applicable to these loss functions .* gaussian process regression . 
* the doubly stochastic gradients can be used for approximating the posterior of gaussian process regression by reformulating the mean and variance of the predictive distribution as the solutions to the convex optimizations with particular loss functions .let where and , given the dataset , the posterior distribution of the function at the test point can be derived as where , , ^\top ] , then where the second equation based on identity .therefore , we just need to estimate the operator : + we can express as the solution to the following convex optimization problem where is the hilbert - schmidt norm of the operator .we can achieve the optimum by , which is equivalent to eq .[ eq : variance_operator ] .+ based on this optimization , we approximate the using by doubly stochastic functional gradients .the update rule for is please refer to appendix [ appendix : gp_update_rule ] for the details of the derivation .2 . assume that the testing points , , are given beforehand , instead of approximating the operator , we target on functions ^\top ] and ^\top ] .then for any , the error can be decomposed as two terms for the error term due to random features , is constructed such that is a martingale , and the stepsizes are chosen such that , which allows us to bound the martingale .in other words , the choices of the stepsizes keep close to the rkhs . for the error term due to random data , since , we can now apply the standard arguments for stochastic approximation in the rkhs . due to the additional randomness ,the recursion is slightly more complicated , where ] 2 . forany , with probability at least over , let . since is a function of and we have that is a martingal difference sequence .further note that then by azuma s inequality , for any , which is equivalent as moreover , since , we immediately obtain the two parts of the lemma . [ lem : coeff ] suppose and .then we have 1 .consequently , 2 . . follows by induction on . is trivially true .we have when , for any , so . when , if , then ; if , then . for , when , when and [ lem : random_data ] assume is -lipschitz continous in terms of be the optimal solution to our target problem. then 1 . if we set with such that , then where particularly , if , we have .2 . if we set with such that and , then with probability at least over , where particularly , if , we have . for the sake of simple notations , let us first denote the following three different gradient terms , which are note that by our previous definition , we have .+ denote .then we have because of the strongly convexity of ( [ eq : primal ] ) and optimality condition , we have hence , we have _ proof for _ : let us denote , , .we first show that are bounded . specifically , we have for , 1 . , where for and ; 2 .=0 ] , where for and ; we prove these results separately in lemma [ lem : bounds ] below .let us denote ] ; + this is because , &=&\ee_{\dcal^{t-1},\omegab^t}\sbr{\ee_{d_t}\sbr{\langle h_t - f_\ast , \bar g_t-\hat g_t\rangle_\hcal|\dcal^{t-1},\omegab^t}}\\ & = & \ee_{\dcal^{t-1},\omegab^t}\sbr{\langle h_t - f_\ast , \ee_{d_t}\sbr{\barg_t-\hat g_t}\rangle_\hcal}\\ & = & 0.\end{aligned}\ ] ] + 3 . \leqslant \kappa^{1/2}lb_{1,t}\sqrt{\ee_{\dcal^{t-1},\omegab^{t-1}}[a_t]} ] .the corollary then follows from the fact that =\ee_{\dcal^t,\omegab^t}[\hat h_{t+1}] ] .[ lem : rec ] suppose the sequence satisfies , and where .then , the proof follows by induction .when , it always holds true by the definition of .assume the conclusion holds true for with , i.e. 
, , then we have where the last step can be verified as follows . where the last step follows from the defintion of .[ lem : rec2 ] suppose the sequence satisfies where and . then , the proof follows by induction . when it is trivial. let us assume it holds true for , therefore , since , we have .hence , .as we show in section [ sec : doubly_sgd ] , the estimation of the variance of the predictive distribution of gaussian process for regression problem could be recast as estimating the operator defined in ( [ eq : variance_operator ] ) .we first demonstrate that the operator is the solution to the following optimization problem where is the hilbert - schmidt norm of the operator .the gradient of with respect to is set , we could obtain the optimal solution , , exactly the same as ( [ eq : variance_operator ] ) . to derive the doubly stochastic gradient update for , we start with stochastic functional gradient of . given ,the stochastic functional gradient of is where which leads to update recall \otimes \ee_{\omega'}[\phi_{\omega'}(x_t)\phi_{\omega'}(\cdot ) ] = \ee_{\omega , \omega'}[\phi_\omega(x_t)\phi_{\omega'}(x_t)\phi_\omega(\cdot)\otimes \phi_{\omega'}(\cdot)],\end{aligned}\ ] ] where are independently sampled from , we could approximate the with random features , . plug random feature approximation into ( [ eq : gp_posterior_variance_update ] ) leads to therefore , inductively , we could approximate by
the general perception is that kernel methods are not scalable , and neural nets are the methods of choice for large - scale nonlinear learning problems . or have we simply not tried hard enough for kernel methods ? here we propose an approach that scales up kernel methods using a novel concept called `` _ _ doubly stochastic functional gradients _ _ '' . our approach relies on the fact that many kernel methods can be expressed as convex optimization problems , and we solve the problems by making _ two unbiased _ stochastic approximations to the functional gradient , one using random training points and another using random features associated with the kernel , and then descending using this noisy functional gradient . our algorithm is simple , does _ not _ need to commit to a preset number of random features , and allows the flexibility of the function class to grow as we see more incoming data in the streaming setting . we show that a function learned by this procedure after iterations converges to the optimal function in the reproducing kernel hilbert space in rate , and achieves a generalization performance of . our approach can readily scale kernel methods up to the regimes which are dominated by neural nets . we show that our method can achieve competitive performance to neural nets in datasets such as 2.3 million energy materials from molecularspace , 8 million handwritten digits from mnist , and 1 million photos from imagenet using convolution features .
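referring back to the remark above that the procedure "can be summarized in just a few lines of code", below is a minimal, hedged sketch of doubly stochastic functional gradient descent for kernel ridge regression with a gaussian rbf kernel: at each step one random training point and one random cosine (fourier) feature are drawn, the new feature receives a coefficient proportional to the pointwise error, and older coefficients are shrunk by the regularizer; features are regenerated from integer seeds rather than stored. the step-size schedule, regularizer, toy data and all constants are illustrative choices, and the paper's algorithms 1 and 2 may differ in details such as mini-batching and the exact step-size rule.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, steps = 5, 500, 500
sigma, nu = 1.0, 1e-4

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)       # toy regression target

def feature(seed, x):
    """random cosine feature for a gaussian rbf kernel, regenerated from its seed."""
    r = np.random.default_rng(seed)
    w = r.standard_normal(d) / sigma
    b = r.uniform(0.0, 2.0 * np.pi)
    return np.sqrt(2.0) * np.cos(x @ w + b)

alpha = np.zeros(steps)                                   # one coefficient per used feature

def predict(x, t):
    # f_t(x) = sum_{i < t} alpha_i * phi_i(x); features are regenerated from
    # their seeds instead of being stored (the "small memory" idea).
    return sum(alpha[i] * feature(i, x) for i in range(t))

for t in range(steps):
    i = int(rng.integers(n))                              # first randomness: a data point
    gamma = 1.0 / (1.0 + t)                               # decaying step size (illustrative)
    err = predict(X[i], t) - y[i]                         # squared-loss derivative
    alpha[:t] *= (1.0 - gamma * nu)                       # shrink: regularization part
    alpha[t] = -gamma * err * feature(t, X[i])            # second randomness: a new feature

mse = np.mean([(predict(X[i], steps) - y[i]) ** 2 for i in range(0, n, 10)])
print("sampled training mse:", mse)
```

the cosine features used here follow the standard bochner-type construction for shift-invariant kernels, so 2*cos(w.x+b)*cos(w.y+b) is, in expectation over w and b, the gaussian kernel value; only the loss derivative and the feature generator would need to change for a different problem or kernel.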
the financial quakes model ( fqm ) that we introduce in this paper is defined on a small - world ( sw ) network of agents ( `` traders '' ) .the total number of traders considered is always .note that the purpose of this model is to generate volatility clustering and power - law distributed avalanche sizes in a simple way , while we are not interested in formulating a realistic micro - model of financial markets or to reproduce the exact power - law exponents .the purpose of our study is to explore possible ways to destroy dangerous herding effects by simple and effective means . in our fqm model ,each agent carries a given quantity of information about the financial market considered .the sw network is obtained from a square 2-dimensional lattice with open boundary conditions , by randomly rewiring the nearest neighbors links with a probability of ( see fig.1 ) .the resulting network topology allows the information to spread over the lattice through long - range links , but also preserves the clustering properties of the network and its average degree ( ) .the information spreading is simulated by associating to each trader a real variable , representing the information possessed at time , which initially ( at ) is set to a random value in the interval . is a threshold value that is assumed to be the same for all agents . at each discrete time step , due to public external information sources , all these variablesare simultaneously increased by a quantity , which is different for each agent and randomly extracted within the interval $ ] , where is the maximum value of the agents information at time .( a ) s 500 index data used in this paper , with daily index entries ( from september 11 , 1989 , to june 29 , 2012 ) .( b ) corresponding ( relative ) returns time series , where returns are defined as the ratio .,width=326 ] if , at a given time step , the information of one or moreagents exceeds the threshold value , these agents become `` active '' and take the decision of investing a given quantity of money by betting on the bullish ( increasing ) or bearish ( decreasing ) behavior of the market compared to the day before . as mentioned before, we consider here as a typical example the s index .the time period ranges from september 11 , 1989 , to june 29 , 2012 , over a total of daily index values ( see fig .2 ) . notice that the use of this particular series has no special reasons , it just serves to ensure only a realistic market dynamics as input .other indexes have also been tested with similar results . in order to make their prediction ( positive or negative ) about the sign of the index difference at time , active agents are assumed to follow the standard relative strength index ( rsi ) trading strategy , based on the ratio between the sum of positive returns and the sum of negative returns experienced during the last days ( here we choose ; see ref . for further details of the rsi algorithm ) . as for the time seriesconsidered , this strategy has nothing special and has been chosen just because it is a commonly used technical strategy in the trading community . 
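for readers unfamiliar with the indicator, a minimal sketch of a standard rsi computation is shown below. the 14-day window and the simple-average form are common textbook choices and are assumptions here, since the exact variant and window used by the agents are not restated in this excerpt.

```python
import numpy as np

def rsi(prices, window=14):
    """standard relative strength index: 100 - 100 / (1 + avg_gain / avg_loss),
    computed over the last `window` price changes (simple averages)."""
    prices = np.asarray(prices, dtype=float)
    changes = np.diff(prices)
    out = np.full(prices.shape, np.nan)
    for t in range(window, len(prices)):
        recent = changes[t - window:t]
        avg_gain = recent[recent > 0].sum() / window
        avg_loss = -recent[recent < 0].sum() / window
        if avg_loss == 0:
            out[t] = 100.0
        else:
            out[t] = 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
    return out

# toy usage on a synthetic price path; a common reading is to bet on a decrease
# when rsi > 70 ("overbought") and on an increase when rsi < 30 ("oversold").
prices = 100 + np.cumsum(np.random.default_rng(2).standard_normal(250))
print(rsi(prices)[-5:])
```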
for financial traders, it is often beneficial to be followed by others , as this increases the likelihood that their investments will be profitable or because they are friends / colleagues and it would be considered appropriate to share part of their own information .therefore , we assume that the agents , once activated , will transfer some information to the neighbors according the following herding mechanism : here `` nn '' denotes the set of nearest - neighbors of the active agent . is the number of direct neighbors , and the parameter controls the level of dissipation of the information during the dynamics ( corresponds to the conservative case ) . in analogy with the ofc model for earthquakes , we set here ( non - conservative case ) , i.e. we consider some information loss during the herding process .this value of has been chosen here to drive the system in a critical state and to obtain large avalanches , since our goal is to study how these avalanches can be reduced by the introduction of random traders .size of avalanches ( `` financial quakes '' ) occurring in our artificial financial market . both positive cascades ( `` bubbles '' ) and negative avalanches ( `` crashes '' ) are found .see text for further details.,width=340 ] of course , the herding rule ( [ av_dyn ] ) can activate other agents , thereby producing a chain reaction .the resulting information avalanche may be called a `` financial quake '' : all the agents that are above the threshold become active and invest simultaneously according to eq .( [ av_dyn ] ) , such that the agent bets with the same prediction as the agent from which they have received the information .the financial quake is over , when there are no more active agents in the system ( i.e. when ) .then , the prediction is finally compared with the sign of : if they are in agreement , all the agents who have contributed to the avalanche win , otherwise they lose . in any case , the process of information cascades build up again due to the random public `` information pressure '' acting on the system . the number of investments ( i.e. the number of active agents ) during a single financial quake define the avalanche size . in the next section we present several simulation results obtained by running this model many times , starting each time from a new random initial distribution of the information shared among the agents .we assume that the avalanche process within our sample trading community does not influence the whole market , i.e. does not have any effect on the behavior of the financial series considered , even if we imagine that the market does exert some influence on our community through `` the information pressure '' .on the other hand , this pressure , being random and different for each agent , is also independent from the financial time series considered ( here the ) . in a way, this scenario could be considered analogous to the physical situation of a small closed thermodynamical system in contact with a very large energy _ reservoir _ ( environment ) , typical for statistical mechanics in the canonical ensemble : even if the system can exchange energy with the environment , it is too small to have any influence on the reservoir itself . cumulative distribution of the sizes ( absolute values ) of herding avalanches occurring in the community of investors , with and without random traders . 
in the absence of random traders (open circles) the distribution obeys a power law with an exponent equal to -1.87 (a fit is also reported as a straight line). considering increasing amounts of random traders, i.e. (diamonds), (squares), (full circles) and (triangles), the distribution tends to become exponential for a sufficiently large percentage of random investors. an exponential fit is also reported for the latter case. in these simulations the random traders are uniformly distributed (at random) over the network, as in fig.1. for further details see the main text. in fig. 3 we plot the time sequence of the "financial quake" sizes during a single simulation run. a positive sign means that all the involved agents win, and a negative sign that they lose. each avalanche corresponds to an entry of the s&p index series, since each initial investment ("bet") on the market coincides with the occurrence of a financial quake (this means that the series in fig. 2 and fig. 3 have the same length). in analogy with the soc behavior characterizing the ofc model, we observe a sequence of quakes that increases in size over time. in other words, the financial system is progressively driven into a critical-like state, where herding-related avalanches of any size can occur: most of them will be quite small, but sometimes a very big financial quake appears, involving a herding cascade of bets, which can be either profitable (positive) or loss-making (negative). notice that the daily data of the s&p series only affect the sign of the avalanches in fig. 3, while their sizes strictly depend on the internal dynamics of the small trading community considered. therefore, as we verified with several simulations not reported here, reshuffling the data or removing extreme events in the s&p series would not produce any change in the sizes of the financial quakes. the soc-like nature of this dynamics is well shown in fig. 4, where we report the probability distribution of the financial quake sizes, measured by their absolute values and cumulated over 10 simulations (open circles). the resulting distribution can be very well fitted by a power law, with a slope consistent with the one obtained for earthquakes in the ofc model on a sw topology. behavior of the maximum size of avalanches as a function of an increasing number of random traders, for three different ways of spreading them over the network: randomly distributed (circles), grouped in one community (squares) and grouped in four communities (diamonds). the results were averaged over 10 different realizations, each one with a different initial random position of traders and communities within the network. let us now discuss what happens if a certain number of agents in the network adopt a random trading strategy, i.e. if they invest in a completely random way instead of following the standard rsi strategy. we have already shown that an individual random trading strategy, if played along the whole s&p 500 series (and also along other european financial indices), performs as well as various standard trading strategies (such as rsi, macd or momentum), but it is less risky than other strategies.
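to make the herding mechanism and the resulting avalanche statistics concrete, here is a stripped-down, ofc-style sketch of the information dynamics described above: agents on a rewired 2-d lattice accumulate information, topple when they cross the threshold, and pass a reduced share to their neighbours; the avalanche size is the number of activations. the redistribution factor, lattice size, rewiring probability and driving rule are illustrative guesses (the exact constants are not reproduced in this excerpt), and the market series / rsi betting layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# small-world network built from an l x l square lattice with open boundaries,
# rewiring each nearest-neighbour link with probability p_rewire
l, p_rewire = 20, 0.02
n = l * l
neighbours = [set() for _ in range(n)]
for i in range(l):
    for j in range(l):
        a = i * l + j
        if j + 1 < l:
            neighbours[a].add(a + 1); neighbours[a + 1].add(a)
        if i + 1 < l:
            neighbours[a].add(a + l); neighbours[a + l].add(a)
for a in range(n):
    for b in list(neighbours[a]):
        if a < b and rng.random() < p_rewire:
            c = int(rng.integers(n))
            if c != a and c != b and c not in neighbours[a]:
                neighbours[a].discard(b); neighbours[b].discard(a)
                neighbours[a].add(c); neighbours[c].add(a)

# ofc-style threshold dynamics for the agents' information variables
threshold, dissipation = 1.0, 0.4            # dissipation value is an assumption
info = rng.uniform(0.0, threshold, n)
sizes = []
for step in range(10000):
    info += rng.uniform(0.0, 0.01, n)        # external "information pressure"
    stack = [i for i in range(n) if info[i] >= threshold]
    size = 0
    while stack:
        i = stack.pop()
        if info[i] < threshold:
            continue                          # stale entry
        size += 1
        share = (1.0 - dissipation) * info[i] / max(len(neighbours[i]), 1)
        info[i] = 0.0
        for j in neighbours[i]:
            info[j] += share
            if info[j] >= threshold:
                stack.append(j)
    if size > 0:
        sizes.append(size)

sizes = np.array(sizes)
print("quakes recorded:", len(sizes), " largest quake size:", sizes.max())
```

making a fraction of the agents "random traders" would amount to excluding them from the redistribution step, which is what flattens the large-avalanche tail in the results discussed below.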
here , we study whether a widespread adoption of such a random investment strategy would also have a beneficial ( collective ) effect at the macro - level ( where other important phenomena like herding , asymmetric information or rational bubbles may matter ) .would random investment strategies reduce the level of volatility and induce a greater stability of financial markets ? in the following , we test this hypothesis by introducing a certain percentage of random traders ( colored agents in fig.1 ) , uniformly distributed at random among the investors .we assume that all agents are aware of the trading strategy ( rsi or random ) adopted by their respective neighbors . in this respectit is worthwhile to stress again that , in our model , traders behave according to a bounded rationality framework with no feedback mechanism on the market .note that , in contrast to rsi traders , random traders are not activated by their neighbors , since they invest at random .we also assume that they do not activate their neighbors , since a random trader has no specific information to transfer .in other words , random traders only receive information from external sources , but do not exchange individual information with other agents apart from the fact that they bet at random .two different examples of small - world networks , with agents as in fig . 1 , but where random traders ( colored agents , of the total ) are grouped in one community ( left panel ) or four communities ( right panel ) , respectively.,title="fig:",width=158 ] two different examples of small - world networks , with agents as in fig . 1 , but where random traders ( colored agents , of the total ) are grouped in one community ( left panel ) or four communities ( right panel ) , respectively.,title="fig:",width=158 ] in order to simulate random investors within our model , we simply set for them in eq .( [ av_dyn ] ) .this means that random traders ( when they overcome their information threshold ) can invest their capital exactly in the same way as other agents , but they do not take part in any herding - related activation avalanche .coming back again to fig . 4 , one can see the effect of an increasing percentage of random traders on the size distribution of financial quakes . besides the power - law curve already discussed ,corresponding to , we report also the results obtained considering different percentages of random traders , when namely ( diamonds ) , ( squares ) , ( full circles ) and ( triangles ) .the data show that the original power - law distribution evolves towards an exponential one .an exponential fit with an exponent equal to -0.2 ( dashed - dotted curve ) is also reported for the maximum number of random traders considered by us , i.e. . 
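a quick way to quantify the change from a power-law to an exponential tail in simulated avalanche sizes is to inspect the empirical complementary cumulative distribution and a maximum-likelihood tail exponent. the sketch below uses the standard continuous power-law mle (as in clauset-style analyses) on synthetic data; it is a generic diagnostic, not the fitting procedure used in the paper.

```python
import numpy as np

def ccdf(samples):
    """empirical complementary cumulative distribution p(size >= s)."""
    s = np.sort(np.asarray(samples, dtype=float))
    p = 1.0 - np.arange(len(s)) / len(s)
    return s, p

def powerlaw_alpha(samples, s_min=1.0):
    """continuous mle of a power-law exponent for samples >= s_min:
    alpha = 1 + n / sum(log(s_i / s_min))."""
    s = np.asarray(samples, dtype=float)
    s = s[s >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# toy usage with synthetic avalanche sizes drawn from a pareto law p(s) ~ s^(-1.87),
# standing in for the power-law regime discussed above
rng = np.random.default_rng(5)
pareto_sizes = (1.0 - rng.random(5000)) ** (-1.0 / 0.87)
s, p = ccdf(pareto_sizes)
print("estimated tail exponent:", powerlaw_alpha(pareto_sizes))   # close to 1.87
print("ccdf at s = 10 and s = 100:", np.interp([10.0, 100.0], s, p))
```

the fit is only meaningful when the tail is genuinely power-law; for the exponential-like distributions obtained with many random traders, the ccdf bends down sharply on a log-log plot instead of following a straight line.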
one can also investigate how the size of the avalanche changes with the increase of the amount of random traders considered , if they are uniformly distributed over the network .this is shown in fig.5 ( full circles ) , where one can see that the maximum size of the avalanches observed drops by a factor of in the presence of only of random traders , reaching almost its final saturation level of when .these results indicate that even a relatively small number of random investors distributed at random within the market is able to suppress dangerous herding - related avalanches .but what would happen if these random traders , instead of being uniformly distributed at random over the population , were grouped together in one or more communities ?capital / wealth distribution of all agents at the end of the simulation , cumulated over 10 realizations , for a network with various percentages of uniformly distributed random traders . in any casewe had a pareto - like power law with an exponent of , independently of the amount of random traders considered.,width=345 ] in fig.6 we show two examples of small - world networks with random traders ( colored agents ) grouped in one or four communities , respectively ( for clarity , we use the same sample network as in fig.1 , with ) .if one repeats the previous simulations for our network of agents with an increasing percentage of random traders , but now grouping these traders together in either one community or four communities , respectively , the result is that the original power law distribution of avalanches is less affected by random investments for any percentage .this , in turn , implies a slower decrease of the maximum avalanche sizes as increases , as shown again in fig.5 ( squares and diamonds , respectively ) .this means that the uniform random distribution of random investors over the whole network is quite crucial in order to significantly dampen avalanche formation ( notice that a percentage between and of uniformly distributed random traders is enough to reduce the financial quakes size as much as grouped random investors would do ) . in this respect ,the effect of random traders is to increase the frequency of small financial quakes and , consequently , avoid the occurrence of large ones .it is also interesting to study the capital gain or loss , i.e. the change in wealth , of the agents involved in the trading process during the whole period considered ( in the following we will use the terms `` capital '' and `` wealth '' synonymously ) . at the beginning of each simulation , we assign to each trader ( rsi or random ) an initial capital according to a normal distribution with an average of credits and a standard deviation equal to . then we let them invest in the market according to the following rules : - if an agent wins thanks to a given bet ( for example after being involved in a given , big or small , positive financial quake ) , in the next investment he will bet a quantity of money equal to one half of his / her total capital , i.e. ; - if an agent loses due to an unsuccessful investment ( for example after a negative financial quake ) , the next time he will invest only ten percent of his / her total capital , i.e. .we have checked , however , that our result are quite robust to adopting a number of different investment criteria . 
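the asymmetric betting rule just described (bet half of the capital after a win, ten percent after a loss) can be isolated and simulated on its own. the sketch below replaces the herding avalanches by independent fair coin flips for each bet outcome, so it only illustrates how the rule broadens the wealth distribution; it does not reproduce the pareto exponent reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_traders, n_rounds = 10000, 2000

capital = rng.normal(1000.0, 10.0, n_traders)    # initial wealth as in the text
won_last = np.zeros(n_traders, dtype=bool)       # start with the cautious bet

for _ in range(n_rounds):
    bet = np.where(won_last, 0.5, 0.1) * capital  # 50% after a win, 10% after a loss
    outcome = rng.choice([-1.0, 1.0], n_traders)  # stand-in for the quake sign
    capital += outcome * bet
    won_last = outcome > 0

print("mean wealth:", capital.mean())
print("median and 99th percentile:",
      np.percentile(capital, 50), np.percentile(capital, 99))
```

because the bets are fair, the mean wealth stays close to its initial value, while the multiplicative nature of the updates spreads the distribution strongly, which is the mechanism behind the broad wealth distributions discussed next.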
after a financial quake , the capital of each agent involved in the herding - related avalanche will increase or decrease by the quantity .of course random traders , who do not take part in avalanches , can invest .their wealth changes only when they overcome their information threshold due to the external information sources . in fig.7we show the distribution of the total wealth cumulated by all agents during the whole sequence of financial quakes , i.e. over the whole s&p 500 series ( cumulated over 10 different runs ) , for three different trading networks with increasing percentages of random investors ( namely , , and ) .interestingly , a pareto power law with an exponent equal to ( see the fit reported as dashed line ) emerges spontaneously from the dynamics of the asymmetric investments , independently of the number of random traders .we have checked that this result is quite robust and does not substantially change if we modify the quantity that agents choose to invest in case of a win or a loss .it is also interesting to study the wealth distribution of the random investors in case of a network with random traders ( results are cumulated over realizations ) .this is reported in fig.8 , in comparison with the corresponding distribution already shown in the previous figure for the whole trading community .as one can see from the plot , random traders have a final wealth distribution very different from a power law , which can be fitted very well with an exponential curve , represented by a dashed line , with an exponent equal to ( the fact that the random traders component is not changing the global power law distribution of fig.7 is evidently due to its small size , i.e. agents as compared to ) .in addition , we compare average final wealth of all the traders , 767 credits , with the average wealth of random investors only , 923 credits .this should be compared with the initial value of the capital which is 1000 credits .therefore of rsi traders have more wealth in the end than in the beginning , whereas the analogous percentage for random traders is .comparison between the final capital / wealth distribution for the whole trading network with of random traders shown in fig.7 ( reported again as square symbols ) and the same distribution for the random traders component only ( full circles ) .the latter can be fitted with an exponential distribution , also reported as dashed line , with an exponent equal to .we also report the average capital calculated over all the traders ( 767 credits ) and over the random traders only ( 923 credits).see text for details.,width=345 ] these findings allow us to extend our previous results for single traders to collective effects in a community of traders .in fact , they suggest that the adoption of random strategies would diminish the probability of extreme events , in this case large increases or losses of wealth , but also ensure almost the same average wealth over a long time period , at variance with technical strategies .in particular , looking at the details of the two distribution shown in fig.8 , we also find that of rsi traders have a final capital smaller than the worst random trader , whereas only of rsi traders perform better than the best random trader .this means that , for technical traders , the risk of losses is much greater than the probability of gains , compared to those of random investors .random trading seems therefore , after all , a very good combination of low risk and high performance . 
before closing this sectionwe note that all the numerical simulations presented are quite robust and that similar results were obtained adopting other historical time series , such as , for example , ftse uk or ftse mib .even though our results were obtained for a `` toy model '' of financial trading , we think that they have potentially interesting policy implications . according to the conventional assumption , neither bubbles or crashes should occur when all agents are provided with the same complete and credible set of information about monetary and asset values traded in the market .this is the basis of the well - known efficient market hypothesis , based on the paradigm of rational expectations .bubbles and crashes should also not occur according to the wide - spread dynamic stochastic general equilbrium ( dsge ) models .however , the wisdom of crowds effect , which induces the equilibrium price , can be undermined by information feedbacks and social influence .such social influence may lead to bubbles and crashes . to account for herding effects ,researchers have started to propose concepts such as `` rational bubbles '' , recognizing that it can be profitable to follow a trend . however , on average , trend - following is not more successful than random investments it is rather more risky . from human psychology , we know that people tend to follow others , i.e. to show herding behavior , if it is not clear what is the right thing to do .this fact establishes that many traders will be susceptible to the trading decisions of others .it seems realistic to assume that each agent is endowed with a different quantity and quality of information , coming from private information sources with different reputation , depending on the agents position in the network ( see e.g. refs .however , such information feedbacks may be harmful , particularly when the market is flooded with volatile and self - referential information .our paper supports the hypothesis that introducing `` noisy trading '' ( i.e. random investors ) in financial markets can destroy bubbles and crashes before they become large , and thereby avoid dangerous avalanches . by preventing extreme price variations, random investments also help to identify the equilibrium price .it seems that already a small number of random investors ( relative to the total number of agents ) would be enough to have a beneficial effect on the financial market , particularly if distributed at random .such investors could be central banks , but also large investors , including pension funds or hedge funds with an interest in reducing the risks of their investments .we are aware that further studies with more sophisticated and realistic models of financial markets should be performed to explore the full potentials and limitations of random investment strategies .however , our results suggest that random investments will always reduce both the size and frequency of bubbles / crashes . further research will be devoted to understanding the most opportune timing for the introduction of such random investments and whether this innovative policy instrument can enable a smooth control of financial markets , thereby reducing their fragility .dh acknowledges partial support by the fet flagship pilot project futurict ( grant number 284709 ) , the eth project `` systemic risks , systemic solutions '' ( chirp ii project eth 4812 - 1 ) and the erc advanced investigator grant `` momentum '' ( grant no . 324247 ). s. bikhchandani , d. hirshleifer , and i. 
welch , _ informational cascades and rational herding : an annotated bibliography and resource reference , working paper _ ( anderson school of management , ucla , available at www.info-cascades.info , 2008 )
building on similarities between earthquakes and extreme financial events , we use a self - organized criticality - generating model to study herding and avalanche dynamics in financial markets . we consider a community of interacting investors , distributed on a small - world network , who bet on the bullish ( increasing ) or bearish ( decreasing ) behavior of the market which has been specified according to the s&p500 historical time series . remarkably , we find that the size of herding - related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders , randomly distributed inside the network , who adopt a random investment strategy . our findings suggest a promising strategy to limit the size of financial bubbles and crashes . we also obtain that the resulting wealth distribution of all traders corresponds to the well - known pareto power law , while the one of random traders is exponential . in other words , for technical traders , the risk of losses is much greater than the probability of gains compared to those of random traders . financial markets often experience extremes , called `` bubbles '' and `` crashes '' . the underlying dynamics is related to avalanches , the size of which is distributed according to power laws . power laws imply that crashes may reach any size a circumstance that may threaten the functionality of the entire financial system . many scientists see `` herding behavior '' as the origin of such dangerous avalanches . in our paper , we explore whether there is a mechanism that could stop or reduce them . to generate a power - law dynamics similar to the volatility clustering in financial markets , we use an agent - based model that produces the phenomenon of self - organized criticality ( soc ) . specifically , we adapt the olami - feder - chrstensen ( ofc ) model that has been proposed to describe the dynamics of earthquakes . in this context , we assume information cascades between agents as the underlying mechanism of financial avalanches . we assume that agents interact within a small - world ( sw ) network of financial trading and that there is social influence among them . such kinds of models are recently becoming popular in economics and herding effects are also beginning to be observed in lab experiments . in this connection , it is interesting to consider that humans tend to orient themselves at decisions and behaviors of others , particularly in situations where it is not clear what is the right thing to do . such conditions are typical for financial markets , in particular during volatile periods . in fact , in situations of high uncertainty , personal information exchange may reach market - wide impacts , as the examples of bank runs and speculative attacks on national currencies show . our study explores how huge herding avalanches in financial trading might be reduced by introducing a certain percentage of traders who adopt a random investment strategy . actually , several analogies between socio - economic and physical or biological systems have recently been discussed , where noise and randomness can have beneficial effects , improving the performance of the system . more specifically , in a recent series of papers , it has been explored whether the adoption of random strategies may be advantageous in financial trading from an individual point of view . 
scientific evidence suggests that , in the long term , random trading strategies are less risky for a single trader , but provide , on average , gains comparable to those obtained by technical strategies . therefore , one might expect that a certain percentage of investors would consider the possibility of adopting a random trading strategy . assuming this and using real data sets of the s&p500 index , we here extend our previous analysis to a sample community of interacting investors . we investigate whether the presence of randomly distributed agents performing random investments influences the formation of herding - related avalanches and how the wealth of the traders is distributed . we show that the presence of random traders is able to reduce financial avalanches , which we call - in analogy with earthquakes - financial quakes. furthermore , we find that the wealth distribution , even if normally distributed in the beginning , spontaneously evolves towards the well - known pareto power law . finally , we address possible policy implications . small - world network of traders , as used in our numerical simulations ( here as an example , but in the simulations we always considered agents ) . short - range and long - range links are simultaneously present . white agents are rsi traders ( active agents assumed to follow the standard relative strength index ( rsi ) trading strategy ) ; colored agents ( of the total ) are random traders , here uniformly distributed at random among the population . see text for further details.,width=288 ]
in this paper we shall survey a particular class of problems , which we like to refer to as `` evolutionary equations '' ( to distinguish it from the class of explicit first order ordinary differential equations with operator coefficients predominantly considered under the heading of _ evolution equations _ ) .this problem class is spacious enough to include not only classical evolution equations but also partial differential algebraic systems , functional differential equations and integro - differential equations .indeed , by thinking of elliptic systems as time dependent , for example as constant with respect to time on the connected components of , they also can be embedded into this class .the setting is in its present state largely limited to a hilbert space framework . as a matter of convenience the discussion will indeed be set in a complex hilbert space framework . for the concept of monotonicity it is , however , more appropriate to consider complex hilbert spaces as real hilbert spaces , which can canonically be achieved by reducing scalar multiplication to real numbers and replacing the inner product by its real part .so , a binary relation in a complex hilbert space with inner product would be called strictly monotone if for all holds and is some positive real number . in case the relation would be called monotone . the importance of strict monotonicity , which in the linear operator case reduces to strict positive definiteness in a real or complex hilbert space in the sense naturally induced by the classification of the corresponding quadratic form given by on its domain .so , if is non - negative ( mostly called positive semi - definite ) , positive definite , strictly positive definite , then the operator will be called non - negative ( usually called positive ) , positive definite , strictly positive definite , respectively . if is a complex hilbert space it follows that must be hermitean .note that we do _ not _ restrict the definition of non - negativity , positive definiteness , strict positive definiteness to hermitean or symmetric linear operators .] , is of course well - known from the elliptic case . by a suitable choice of space - time normthis key to solving elliptic partial differential equation problems also allows to establish well - posedness for dynamic problems in exactly the same fashion .the crucial point for this extension is the observation that the one dimensional derivative itself , acting as the time derivative ( on the full time line ) , can be realized as a maximal strictly positive definite operator in an appropriately exponentially weighted -type hilbert space over the real time - line .it is in fact this strict positive definiteness of which opens access to the problem class we shall describe later . indeed , _ _ simply turns out to be a _normal _ operator with being just multiplication by a positive constant .moreover , this time - derivative is continuously invertible and , as a normal operator , admits a straightforwardly defined functional calculus , which can canonically be extended to operator - valued functions . indeed , since we have control over the positivity constant via the choice of the weight , the norm of can be made as small as wanted .this observation is the hilbert space analogue to the technical usage of the exponentially weighted sup - norm as introduced by d. 
morgenstern , , and allows for the convenient inclusion of a variety of perturbation terms .having established time - differentiation as a normal operator , we are led to consider evolutionary problems as operator equations in a space - time setting , rather than as an ordinary differential equation in a spatial function space .the space - time operator equation perspective implies to deal with sums of unbounded operators , which , however , in our particular context is due to the limitation of remaining in a hilbert space setting and considering only sums , where one of the terms is a function of the normal operator not so deep an issue . for more general operator sums or fora banach space setting more sophisticated and powerful tools from the abstract theory of operator sums initiated by the influential papers by da prato and grisvard , , and brezis and haraux , , may have to be employed . in these papersoperator sums typically occurring in the context of explicit first order differential equations in banach spaces are considered as applications of the abstract theory , compare also e.g. ( * ? ? ?* chapter 2 , section 7 ) .the obvious overlap with the framework presented in this paper would be the hilbert space situation in the case .we shall , however , not pursue to explore how the strategies developed in this context may be expanded to include more complicated material laws , which indeed has been done extensively in the wake of these ideas , but rather stay with our limited problem class , which covers a variety of diverse problems in a highly unified setting .naturally the results available for specialized cases are likely to be stronger and more general . for introductory purposes let us consider the typical linear case of such a space - time operator equation where are given data, is a usually purely spatial prototypically _ _ skew - selfadjoint__note that in our canonical reference situation is _skew_-selfadjoint rather than selfadjoint and so we have for all and coercitivity of is out of the question . to make this concrete: let denote the weak -derivative .then our paradigmatic reference example on this elementary level would be the transport operator rather than the heat conduction operator . ] operator and the quantities are linked by a so - called material law solving such an equation would involve establishing the bounded invertibility of .as a matter of `` philosophy '' we shall think of the here linear material law operator as encoding the complexity of the physical material whereas is kept simple and usually only contains spatial derivatives .if commutes with we shall speak of an autonomous system , otherwise we say the system is non - autonomous .another more peripheral observation with regards to the classical problems of mathematical physics is that they are predominantly of first order not only with respect to the time derivative , which is assumed in the above , but frequently even in _ both _ the temporal and spatial derivatives . indeed ,acoustic waves , heat transport , visco - elastic and electro - magnetic waves etc .are governed by first order systems of partial differential operators , i.e. is a first order differential operator in spatial derivatives , which only after some elimination of unknowns turn into the more common second order equations , i.e. 
the wave equation for the pressure field , the heat equation for the temperature distribution , the visco - elastic wave equation for the displacement field and the vectorial wave equation for the electric ( or magnetic ) field .it is , however , only in the direct investigation of the first order system that , as we shall see , the unifying feature of monotonicity becomes easily visible .moreover , the first order formulation reveals that the spatial derivative operator is of a hamiltonian type structure and consequently , by imposing suitable boundary conditions , turn out in the standard cases to lead to skew - selfadjoint in a suitable hilbert space .so , from this perspective there is also undoubtedly a flavor of the concept of _ symmetric hyperbolic systems _ as introduced by k. o. friedrichs , , and of petrovskii well - posedness , , at the roots of this approach . for illustrational purposes let us consider from a purely heuristic point of view the -dimensional system where and is simply the weak -derivative , compare footnote [ fn : paradigm ] .assuming , , in ( [ eq : ex - system ] ) clearly results in a ( symmetric ) hyperbolic system and eliminating the unknown yields the wave equation in the form for , we obtain a differential algebraic system , which represents the parabolic case in the sense that after eliminating we obtain the heat equation finally , if both parameters vanish , we obtain a 1-dimensional elliptic system and as expected after eliminating the unknown a 1-dimensional elliptic equation for results : allowing now to be -multiplication operators with values in , which would allow the resulting equations to jump in space between elliptic , parabolic and hyperbolic `` material properties '' , could be a possible scenario we envision for our framework . as will become clear , the basic idea of this simple `` toy '' example can be carried over to general evolutionary equations .also in this connotation there are stronger and more general results for specialized cases .a problem of this flavor of `` degeneracy '' has been for example discussed for a non - autonomous , degenerate integro - differential equation of parabolic / elliptic type in .a prominent feature distinguishing general operator equations from those describing dynamic processes is the specific role of time , which is not just another space variable , but characterizes dynamic processes via the property of causality .requiring causality for the solution operator results in very specific types of material law operators , which are causal and compatible with causality of .this leads to deeper insights into the structural properties of mathematically viable models of physical phenomena .the solution theory can be extended canonically to temporal distributions with values in a hilbert space . in this perspectiveinitial value problems , i.e. prescribing in ( [ eq : evo_1 ] ) , amount to allowing a source term of the form defined by for in the space of continuous -valued functions with compact support .this source term encodes the classical initial condition . for the constant coefficient case say , it is a standard approach to establish the existence of a fundamental solution ( or more generally , e.g. 
in the non - autonomous case , a green s functions ) and to represent general solutions as convolution with the fundamental solution .this is of course nothing but a description of the continuous one - parameter semi - group approach .indeed , such a semi - group is , if extended by zero to the whole real time line , nothing but the fundamental solution with -\infty,0 \right[}. \end{cases} ] and \coloneqq\left\ { x\in x\,|\,\bigvee_{y\in n}(x , y)\in p\right\ } ] of the whole space under is then the domain of the mapping . ]\subseteq y\to x\ ] ] performing the association of `` data '' to `` solutions '' .( `` existence '' for every given data ) we have that =y,\ ] ] i.e. is defined on the whole data space .( `` continuous dependence '' of the solution on the data ) the mapping is continuous . in case of being a mappingthen = p^{-1}\left[y\right] ] .recall that a relation is called _ monotone _ if _ _ for all .such a relation is called _ maximal _ if there exists no proper monotone extension in .in other words , if is such that for all , then .[ thm : minty ] let be a maximal monotone relation for some ,\infty \right[} ] a well - defined mapping. moreover , ] .this is the part we will omit and refer to instead . to establish lipschitz continuity of we observe that holds , from which the desired continuity estimate follows . for many problems ,the strict monotonicity is easy to obtain .the maximality , however , needs a deeper understanding of the operators involved . in the linear case , writing now for , there is a convenient set - up to establish maximality by noting that ^{*}\right)^{\perp}=\overline{a\left[x\right]}\ ] ] according to the projection theorem .here we denote by the _ adjoint of , _ given as the binary relation thus , maximality for the strictly monotone linear mapping ( i.e. strictly accretive ) is characterized has closed range . ] by ^{*}=\left\ { 0\right\ } , \label{eq : max}\ ] ] i.e. the uniqueness for the adjoint problem. characterization ( [ eq : max ] ) can be established in many ways , a particularly convenient one being to require that is also strictly monotone . with thiswe arrive at the following result .[ lin - theo]let and be closed linear strictly monotone relations in a hilbert space .then for every there is a unique such that indeed , the solution depends continuously on the data in the sense that we have a ( lipschitz- ) continuous linear operator with of course , the case that is a closed , densely defined linear operator is a common case in applications .[ lin : op : theo]let a be a closed , densely defined , linear operator and strictly accretive in a hilbert space .then for every there is a unique such that indeed , solutions depend continuously on the data in the sense that we have a ( lipschitz ) continuous linear operator with in the case that and are linear operators with the situation simplifies , since then strict accretivity of implies strict accretivity of due to for all .[ lin : op : d(a):theo]let be a closed , densely defined , linear strictly accretive operator in a hilbert space with . 
then for every there is a unique such that indeed , the solution depends continuously on the data in the sense that we have a continuous linear operator with the domain assumption of the last corollary is obviously satisfied if is a continuous linear operator .this observation leads to the following simple consequence .[ cont - lin - op - theo]let be a strictly accretive , continuous , linear operator in the hilbert space .then for every there is a unique such that indeed , the solution depends continuously on the data in the sense that we have a continuous linear operator with note that since continuous linear operators and continuous sesqui - linear forms are equivalent , the last corollary is nothing but the so - called lax - milgram theorem . indeed ,if is in the space of a continuous linear operators then is in turn a continuous sesqui - linear form on , i.e. an element of the space of continuous sesqui - linear forms on , and conversely if then and utilizing the unitary accordingly as for every and every continuous linear functional on . ] we get via the riesz representation theorem where defines indeed a continuous linear operator on .moreover , is not only a bijection but also an isometry . indeed , strict accretivity for the corresponding operator results in the so - called _ _ coercitivity _ _ ) can be weakened to requiring merely , which yields in an analogous way a corresponding well - posedness result .this option is used in some applications .] of the sesqui - linear form : for some ,\infty \right[} ] .then , the operator , where ] and set \right)\ ] ] for some .we denote the corresponding multiplication - operator on \right) ] by then \right)\to r(\partial_{1,c}) ] to show that is invertible , we have to solve the problem for given the latter can be written as as is continuously invertible ( since ) we derive since we obtain yielding providing that the latter holds if and only if thus , assuming that we get which clearly defines a lipschitz - continuous mapping . summarizing , if and then for each there exists a unique satisfying ( [ eq : elliptic ] ) .\(a ) if ( [ eq : elliptic ] ) is replaced by the problem with homogeneous neumann - boundary conditions , then the constraint can be dropped , since in this case \right) ] .we emphasize here that has a natural meaning as a function of a normal operator .we refer to ( * ? ? ?* subsection 2.1 ) for a comparison with classical notions of fractional derivatives ( see e.g. for an introduction to fractional derivatives ) . as in the case we get that is boundedly invertible for each ,1] ] is finite and for some hilbert space and each one imposes the following conditions on the operators in order to get an estimate of the form ( [ eq : pos_material_law ] ) for the material law ( [ eq : material_law_fractional ] ) : [ thm : frac]let be a monotonically increasing enumeration of assume that the operators and are selfadjoint for each moreover , let be three orthogonal projectors satisfying and assume , that and commute with and for every if and and are strictly positive definite on the ranges of and respectively , then the material law ( [ eq : material_law_fractional ] ) satisfies the solvability condition ( [ eq : pos_material_law ] ) . 
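Since the exponentially weighted space and the solvability condition ([eq:pos_material_law]) are invoked throughout but their displayed formulas did not survive extraction, the following is a hedged reconstruction for orientation, written with the symbol $\partial_{0,\nu}$ that appears later in the text; the constants $c$, $r$ and the label $H_{\nu,0}(\mathbb{R};H)$ are ours.
\[
H_{\nu,0}(\mathbb{R};H):=\Big\{u:\mathbb{R}\to H\ \text{measurable}\ \Big|\ \int_{\mathbb{R}}|u(t)|_{H}^{2}\,e^{-2\nu t}\,dt<\infty\Big\},\qquad
\Re\langle\partial_{0,\nu}u,u\rangle_{\nu,0}=\nu\,\|u\|_{\nu,0}^{2},
\]
so that $\partial_{0,\nu}$ is normal, strictly positive definite, and $\|\partial_{0,\nu}^{-1}\|\le 1/\nu$. A linear material law, i.e. a bounded analytic operator-valued function $z\mapsto M(z)$ on a ball $B(r,r)\subseteq\mathbb{C}$, is then required to satisfy, for some $c>0$,
\[
\Re\, z^{-1}M(z)\ \ge\ c\qquad\text{for all } z\in B(r,r),
\]
in the sense of positive definiteness of operators on $H$. For the basic case $M(z)=M_{0}+zM_{1}$ a commonly used sufficient condition is that $M_{0}=M_{0}^{*}\ge0$ and $\nu M_{0}+\Re M_{1}\ge c$ for all sufficiently large $\nu$.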
with this theoremwe end our tour through different kinds of evolutionary equations , which are all covered by the solution theory stated in theorem [ thm : sol_theo_skew ] .besides the well - posedness of evolutionary equations , it is also possible to derive a criterion for ( exponential ) stability in the abstract setting of theorem [ thm : sol_theo_skew ] .since the systems under consideration do not have any regularizing property , we are not able to define exponential stability as it is done classically , since our solutions do not have to be continuous .so , point - wise evaluation of does not have any meaning .indeed , the problem class discussed in theorem [ thm : sol_theo_skew ] covers also purely algebraic systems , where definitely no regularity of the solutions is to be expected unless the given data is regular .thus , we are led to define a weaker notion of exponential stability as follows .[ def : exp_stab]let be skew - selfadjoint to be skew - selfadjoint .however , in was assumed to be a linear maximal monotone operator .we will give a solution theory for this type of problem later on .one then might replace the condition of skew - selfadjointness in this definition and the subsequent theorem by the condition of being linear and maximal monotone . ] and satisfying ( [ eq : pos_material_law ] ) for some .let .we call the operator _ exponentially stable _ with stability rate if for each and we have which in particular implies that . as it turns out , this notion of exponential stability yields the exponential decay of the solutions , provided the solution is regular enough .for instance , this can be achieved by assuming more regularity on the given right - hand side ( see ( * ? ? ? * remark 3.2 ( a ) ) ) .the result for exponential stability reads as follows .[ thm : criterion - stab]let be a skew - selfadjoint operator and be a mapping satisfying the following assumptions for some : \(a ) is analytic ; \(b ) for every there exists such that for all we have then for each the solution operator is exponentially stable with stability rate .[ ex : para - hypo - stab]let be an open subset and measurable , disjoint , non - empty , . then the solution operator for the equation for suitable is exponentially stable with stability rate . as in (* initial value problems ) , the stability of the corresponding initial value problems can be discussed similarly .let and assume , in addition , that .then the solution operator for the equation is exponentially stable with stability rate such that we note that the exponential stability of integro - differential equations can be treated in the same way , see ( * ? ? ?* section 4.3 ) . in this sectionwe discuss the closedness of the problem class under consideration with respect to perturbations in the material law .we will treat perturbations in the weak operator topology , which will also have strong connections to issues stemming from homogenization theory . for illustrational purposes we discuss the one dimensional case of an elliptic type equation first .[ ex : simple_dim1 ] let be a bounded , uniformly strictly positive , measurable , -periodic function .we denote the multiplication operator on ,1 \right[}) ] with homogeneous dirichlet boundary conditions by ( see also definition [ def : div_grad ] ) and for its skew - adjoint , we consider the problem of finding such that for given ,1 \right[})]-periodic _ , _ _i.e. , for all and a.e . we have . then ,1 \right[}{}^{n}}a(x)dx\quad({\varepsilon}\to0)\ ] ] in the weak--topology of . 
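Spelled out, the convergence just asserted is the classical weak-* convergence of periodic oscillations to their mean value; in clean notation (the scaling $a(\cdot/\varepsilon)$ is our reading of the garbled display, and in the matrix-valued case the statement is to be read componentwise):
\[
a\in L^{\infty}(\mathbb{R}^{n})\ \text{bounded and } {]0,1[}^{n}\text{-periodic}
\quad\Longrightarrow\quad
a\Big(\frac{\cdot}{\varepsilon}\Big)\ \rightharpoonup\ \int_{{]0,1[}^{n}}a(x)\,dx\qquad(\varepsilon\to0)
\]
in the weak-* topology of $L^{\infty}(\mathbb{R}^{n})=\big(L^{1}(\mathbb{R}^{n})\big)'$, i.e. $\int_{\mathbb{R}^{n}}a(x/\varepsilon)\,g(x)\,dx\to\big(\int_{{]0,1[}^{n}}a\big)\big(\int_{\mathbb{R}^{n}}g\big)$ for every $g\in L^{1}(\mathbb{R}^{n})$.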
for , we define .it is easy to see that is bounded in ,1 \right[}) ] .the arzela - ascoli theorem implies that has a convergent subsequence ( again labelled with ) , which converges in ,1 \right[}) ] as .hence , weakly converges in ,1 \right[}) ] . denoting the respective limit by , we infer now , unique solvability of the latter equation together with a subsequence argument imply convergence of without choosing subsequences .note that examples in dimension or higher are far more complicated .in particular , the computation of the limit ( if there is one ) is more involved . to see this, we refer to ( * ? ? ?* sections 5.4 and 6.2 ) , where the case of so - called laminated materials and general periodic materials is discussed . in the former the limitmay be expressed as certain integral means , whereas in the latter so - called local problems have to be solved to determine the effective equation .having these issues in mind , we will only give structural ( i.e. compactness ) results on homogenization problems of ( evolutionary ) partial differential equations .in consequence , the compactness properties of the differential operators as well as the ones of the coefficients play a crucial role in homogenization theory .regarding proposition [ prop : per_impl_conv ] , the right topology for the operators under consideration is the weak operator topology . indeed , with the examples given in the previous section in mind and modeling local oscillations as in example [ ex : simple_dim1 ], we shall consider the weak--topology of an appropriate -space . now ,if we identify any -function with the corresponding multiplication operator on , we see that convergence in the weak--topology of the functions is equivalent to convergence of the associated multiplication operator in the weak operator topology of .this general perspective also enables us to treat problems with singular perturbations and problems of mixed type . before stating a first theorem concerning the issues mentioned, we need to introduce a topology tailored for the case of autonomous and causal material laws .[ def : topo ] for hilbert spaces , and an open set we define to be the initial topology on induced by the mappings for , where is the set of holomorphic functions endowed with the compact open topology , i.e. , the topology of uniform convergence on compact sets .we write for the topological space and re - use the notation for the underlying set .we note the following remarkable fact .let be hilbert spaces , open .then is compact .if , in addition , and are separable , then is metrizable . for introduce the set . the proof is based on the following equality which itself follows from a dunford type theorem ensuring the holomorphy ( with values in the space ) of the elements on the right - hand side and the riesz - frchet representation theorem for sesqui - linear formsnow , invoking montel s theorem , we deduce that is compact for every .thus , tikhonov s theorem applies to deduce the compactness of .the proof for metrizability is standard .recall for and a hilbert space , we set in accordance to definition [ def : topo ] , we will also write for the set endowed with .the compactness properties from are carried over to .the latter follows from the following proposition : let . then the set is closed .we are now ready to discuss a first theorem on the continuous dependence on the coefficients for autonomous and causal material laws , which particularly covers a class of homogenization problems in the sense mentioned above . 
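The identification of weak-* convergent coefficients with multiplication operators converging in the weak operator topology, described above in words, can be recorded as follows (a routine sketch; the symbols $\Omega$ and $\mathrm{M}_{a}$ are ours):
\[
a_{k}\ \rightharpoonup\ a\ \text{ in the weak-* topology of }L^{\infty}(\Omega)
\quad\Longleftrightarrow\quad
\langle \mathrm{M}_{a_{k}}u,v\rangle_{L^{2}(\Omega)}\to\langle \mathrm{M}_{a}u,v\rangle_{L^{2}(\Omega)}\quad\text{for all }u,v\in L^{2}(\Omega),
\]
where $(\mathrm{M}_{a}u)(x):=a(x)u(x)$. Indeed, $\langle\mathrm{M}_{a_{k}}u,v\rangle=\int_{\Omega}a_{k}\,u\overline{v}$ with $u\overline{v}\in L^{1}(\Omega)$, and conversely every $g\in L^{1}(\Omega)$ factors as $g=u\overline{v}$ with $u,v\in L^{2}(\Omega)$.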
for a linear operator in some hilbert space , we denote endowed with the graph norm of by .if a hilbert space is compactly embedded in , we write .a subset is called _ bounded _ , if there is such that .the result reads as follows .[ thm : hom_1_auto_comp ] let , be a bounded sequence in , skew - selfadjoint .assume that .then there exists a subsequence of such that converges in to some and in the weak operator topology .we first apply this theorem to an elliptic type equation .[ ex : hom_dirich]let be open and bounded .let and be the operators introduced in definition [ def : div_grad ] .let be a sequence of uniformly strictly positive bounded linear operators in . for consider for the problem of finding such that the equation holds .observe that if denotes the canonical embedding , this equation is the same as indeed , by poincar s inequality is closed , the projection theorem ensures that is the orthogonal projection on .moreover , yields that now , we realize that due to the positive definiteness of so is .consequently , the latter operator is continuously invertible . introducing for we rewrite the equation ( [ eq : mod_ell ] ) as follows now , let and lift the above problem to the space by interpreting as .then this equation fits into the solution theory stated in theorem [ thm : sol_theo_skew ] with note that the skew - selfadjointness of is easily obtained from . in order to conclude the applicability of theorem [ thm : hom_1_auto_comp ] , we need the following observation .[ prop : null_away_comp ] let be hilbert spaces , densely defined , closed , linear .assume that . then . with the help of the theorem of rellich - kondrachov and proposition [prop : null_away_comp ] , we deduce that has compact resolvent . thus , theorem [ thm : hom_1_auto_comp ] is applicable .we find a subsequence such that exists , where we denoted by the weak operator topology .therefore , weakly converges to some , which itself is the solution of in fact it is possible to show that coincides with the usual homogenized matrix ( if the possibly additional assumptions on the sequence permit the computation of a limit in the sense of - or -convergence , see e.g. ( * ? ? ?* chapter 13 ) and the references therein ) . as a next examplelet us consider the heat equation .[ ex : heat_hom]recall the heat equation introduced in example [ ex : heat_eq ] : to warrant the compactness condition in theorem [ thm : hom_1_auto_comp ] , we again assume that the underlying domain is bounded .similarly to example [ ex : hom_dirich ] , we assume that we are given , a bounded sequence of uniformly strictly monotone linear operators in .consider the sequence of equations now , focussing _ only _ on the behavior of the temperature , we can proceed as in the previous example . assuming more regularity of , e.g., the segment property and finitely many connected components , we can apply theorem [ thm : hom_1_auto_comp ] also to the corresponding homogeneous neumann problems of examples [ ex : hom_dirich ] and [ ex : heat_hom ] .moreover , the aforementioned theorem can also be applied to the homogenization of ( visco-)elastic problems ( see also example [ ex : visco_elastic ] ) . for thiswe need criteria ensuring the compactness condition ( or ) .the latter is warranted for a bounded for the homogeneous dirichlet case or an satisfying suitable geometric requirements ( see e.g. 
) .an example of a different type of nature is that of maxwell s equations : [ ex : max_hom]recall maxwell s equation as introduced in example [ ex : maxwell ] : in this case , we also want to consider sequences and corresponding solutions . in any case the nullspaces of both and are infinite - dimensional .thus , the projection mechanism introduced above for the heat and the elliptic equation can not apply in the same manner .moreover , considering the maxwell s equations on the nullspace of , we realize that the equation amounts to be an _ordinary _ differential equation in an _ infinite - dimensional _ state space . for the latter we have not stated any homogenization or continuous dependence result yet .thus , before dealing with maxwell s equations in full generality , we focus on ordinary ( integro-)differential equations next .[ thm : ode_degen]let , , in bounded , separable hilbert space .assume that implies that is selfadjoint .] for all .then there exists a subsequence of and some such that in the weak operator topology .note that in the latter theorem , in general , the sequence does _ not _ converge to .the reason for that is that the computation of the inverse is not continuous in the weak operator topology .so , even if one chose a further subsequence of such that converges in the weak operator topology , then , in general , in .indeed , the latter can be seen by considering the periodic extensions of the mappings to all of with ).\ ] ] we let for odd and if is even .then , by proposition [ prop : per_impl_conv ] , we conclude that , and as in . in a way complementary to the latter theoremis the following .the latter theorem assumes analyticity of the s at .but the zeroth order term in the power series expansion of the s may be non - invertible . in the next theorem ,the analyticity at is not assumed any more .the ( uniform ) positive definiteness condition , however , is more restrictive .[ thm : ode_non_degen]let , in bounded , separable hilbert space .assume that for all .then there exists a subsequence of and some such that in the weak operator topology .now , we turn to more concrete examples . with the methods developed, we can characterize the convergence of a particular ordinary equation . in a slightly more restrictive context these types of equations have been discussed by tartar in 1989 ( see ) using the notion of young - measures , see also the discussion in ( * ? ? ?* remark 3.8 ) .let in be bounded , a separable hilbert space , .then converges in the weak operator topology if and only if for all converges in the weak operator topology to some . in the latter case to in the weak operator topology .the if-part is a straightforward application of a neumann series expansion of , see e.g. ( * ? ? ?* theorem 2.1 ) .the only - if-part follows from the representation the application of the fourier - laplace transform and cauchy s integral formulas for the derivatives of holomorphic functions . for the latter argument note that is a bounded sequence in for and ,thus , contains a -convergent subsequence , whose limit satisfies . 
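For the "if" part just mentioned, the Neumann series argument can be sketched as follows (our notation; we write $a_{k}$ for the coefficient sequence and use that these time-independent coefficients commute with $\partial_{0,\nu}^{-1}$): for $\nu>\sup_{k}\|a_{k}\|$ one has $\|\partial_{0,\nu}^{-1}a_{k}\|\le\nu^{-1}\sup_{k}\|a_{k}\|<1$, hence
\[
(\partial_{0,\nu}+a_{k})^{-1}
=\big(1+\partial_{0,\nu}^{-1}a_{k}\big)^{-1}\partial_{0,\nu}^{-1}
=\sum_{j=0}^{\infty}(-1)^{j}\,\partial_{0,\nu}^{-(j+1)}\,a_{k}^{\,j},
\]
with the series converging in operator norm uniformly in $k$. If $a_{k}^{\,j}$ converges in the weak operator topology to some $b_{j}$ for every $j$, one may pass to the limit term by term and obtains convergence of $(\partial_{0,\nu}+a_{k})^{-1}$ in the weak operator topology to $\sum_{j\ge0}(-1)^{j}\partial_{0,\nu}^{-(j+1)}b_{j}$.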
one might wonder under which circumstances the conditions in the latter theorem happen to be satisfied .we discuss the following example initially studied by tartar .let .if is -periodic then converges to in the -topology .regard as a multiplication operator on .now , we have the explicit formula we should remark here that the classical approach to this problem uses the theory of young - measures to express the limit equation .this is not needed in our approach . with the latter example in mind, we now turn to the discussion of a general theorem also working for maxwell s equation .as mentioned above , these equations can be reduced to the cases of theorem [ thm : hom_1_auto_comp ] and [ thm : ode_degen ] .consequently , the limit equations become more involved . for sake of this presentation, we do not state the explicit formulae for the limit expressions and instead refer to ( * ? ? ?* corollary 4.7 ) .[ thm : hom_gen_inf_null]let , , in bounded , skew - selfadjoint , separable .assume that and , in addition , for all , where , denote the canonical embeddings .then there exists a subsequence of such that converges in the weak operator topology .it should be noted that , similarly to the case of ordinary differential equations , in general , we do _ not _ have in the weak operator topology . before we discuss possible generalizations of the above results to the non - autonomous case , we illustrate the applicability of theorem [ thm : hom_gen_inf_null ] to maxwell s equations : consider for bounded sequences of bounded linear operators . assuming suitable geometric requirements on the underlying domain ,see e.g. , we realize that the compactness condition is satisfied .thus , we only need to guarantee the compatibility conditions : essentially , there are two complementary cases . on the one hand , one assumes uniform strict positive definiteness of the ( selfadjoint ) operators . on the other hand, we may also consider the eddy current problem , which results in .then , in order to apply theorem [ thm : hom_gen_inf_null ] , we have to assume selfadjointness of and the existence of some such that for all in this respect our homogenization theorem only works under additional assumptions on the material laws apart from ( uniform ) well - posedness conditions .we also remark that the limit equation is of integro - differential type , see ( * ? ? ?* corollary 4.7 ) or .the non - autonomous case is characterized by the fact that the operators and in ( [ eq : gen_evo ] ) does not have to commute with the translation operators .a rather general abstract result concerning well - posedness reads as follows : [ thm : non - auto2]let and .assume that there exists such that let be densely defined , closed , linear and such that .assume there exists such that the positivity conditions and hold for all , , .then is continuously invertible , , and the operator is causal in . in order to capture the main idea of this general abstract result , we consider the following special non - autonomous problem of the form where denotes the time - derivative as introduced in subsection [ sub : the - time - derivative ] , and denotes a skew - selfadjoint operator on some hilbert space ( and its canonical extension to the space ) .moreover , are assumed to be strongly measurable and bounded ( in symbols ) and therefore , they give rise to multiplication operators on by setting for , where and of course , the so defined multiplication operators are bounded with for and . 
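In symbols, the special non-autonomous problem just described reads, in our reading of the garbled display (the names $M_{0}$, $M_{1}$ and the argument $\mathrm{m}_{0}$ for the temporal multiplication variable are our labels):
\[
\big(\partial_{0,\nu}M_{0}(\mathrm{m}_{0})+M_{1}(\mathrm{m}_{0})+A\big)u=f,
\qquad
\big(M_{i}(\mathrm{m}_{0})u\big)(t):=M_{i}(t)\,u(t)\quad(\text{a.e. }t\in\mathbb{R},\ i\in\{0,1\}),
\]
with $M_{0},M_{1}:\mathbb{R}\to L(H)$ strongly measurable and bounded, so that the induced multiplication operators on $H_{\nu,0}(\mathbb{R};H)$ satisfy $\|M_{i}(\mathrm{m}_{0})\|\le\sup_{t\in\mathbb{R}}\|M_{i}(t)\|$, and with $A$ skew-selfadjoint on $H$, extended canonically to $H_{\nu,0}(\mathbb{R};H)$.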
in order to formulate the theorem in a less cluttered way , we introduce the following hypotheses .[ hyp : nonauto_linear]we say that satisfies the property a. [ selfadjoint ] if is selfadjoint , b. [ non - negative ] if is non - negative , c. [ lipschitz ] if the mapping is lipschitz - continuous , where we denote the smallest lipschitz - constant of by , and d. [ differentiable ] if there exists a set of measure zero such that for each the function is differentiable is separable , then the strong differentiability of on for some set of measure zero already follows from the lipschitz - continuity of by rademachers theorem . ] . if satisfies the hypotheses above, then for each the operator becomes a selfadjoint linear operator satisfying for every and consequently .we are now able to state the well - posedness result for non - autonomous problems of the form ( [ eq : non_auto_skew ] ) .[ thm : solutiontheory ] let be skew - selfadjoint and furthermore , assume that satisfies the hypotheses ( [ selfadjoint])-([differentiable ] ) and that there exists a set of measure zero with such that then the operator is continuously invertible in for each .a norm bound for the inverse is .moreover , we get that the result can easily be established , when observing that and using theorem [ thm : non - auto2 ] . independently of theorem[ thm : non - auto2 ] , note that condition ( [ eq : pos_def ] ) is an appropriate non - autonomous analogue of the positive definiteness constraint ( [ eq : pos_material_law ] ) in the autonomous case . with the help of ( [ eq : pos_def ] )one can prove that the operator is strictly monotone and after establishing the equality ( [ eq : adjoint ] ) , the same argumentation works for the adjoint .hence , the well - posedness result may also be regarded as a consequence of corollary [ lin : op : theo ] .[ ex : illustrating_non_autp]as an illustrating example for the applicability of theorem [ thm : solutiontheory ] we consider a non - autonomous evolutionary problem , which changes its type in time and space .let .consider the -dimensional wave equation : as usual we rewrite this equation as a first order system of the form in this case we can compute the solution by duhamel s formula in terms of the unitary group generated by the skew - selfadjoint operator let us now , based on this , consider a slightly more complicated situation , which is , however , still autonomous : -\varepsilon,0[}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { { \chi}}}_{_{\mathbb{r}\setminus\,]-\varepsilon,\varepsilon[}}(\mathrm{m}_{1 } ) \end{array}\right)+\left(\begin{array}{cc } { { { \chi}}}_{_{]-\varepsilon,0[}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { { \chi}}}_{_{]-\varepsilon,\varepsilon[}}(\mathrm{m}_{1 } ) \end{array}\right)+\left(\begin{array}{cc } 0 & -\partial_{1}\\ -\partial_{1 } & 0 \end{array}\right)\right)\left(\begin{array}{c } u\\ v \end{array}\right)\nonumber \\ & = \left(\begin{array}{c } \partial_{0,\nu}^{-1}f\\ 0 \end{array}\right),\label{eq : auto_ex}\end{aligned}\ ] ] where denotes the spatial multiplication operator with the cut - off function , given by for almost every , every and .hence , ( [ eq : auto_ex ] ) is an equation of the form ( [ eq : non_auto_skew ] ) with -\varepsilon,0[}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { { \chi}}}_{_{\mathbb{r}\setminus\,]-\varepsilon,\varepsilon[}}(\mathrm{m}_{1 } ) \end{array}\right)\ ] ] and -\varepsilon,0[}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { { \chi}}}_{_{]-\varepsilon,\varepsilon[}}(\mathrm{m}_{1 } ) \end{array}\right)\ ] ] and both are obviously not time - 
dependent .note that our solution condition ( [ eq : pos_def ] ) is satisfied and hence , problem ( [ eq : auto_ex ] ) is well - posed in the sense of theorem [ thm : solutiontheory ] ., since is autonomous and satisfies ( [ eq : pos_material_law ] ) .] by the dependence of the operators and on the spatial parameter , we see that ( [ eq : auto_ex ] ) changes its type from hyperbolic to elliptic to parabolic and back to hyperbolic and so standard semi - group techniques are not at hand to solve the equation .indeed , in the subregion -\varepsilon,0[ ] we get which yields a parabolic equation for of the form in the remaining sub - domain -\varepsilon,\varepsilon[ ] is bounded in .moreover , let be linear and maximal monotone commuting with and assume that is causal for each . moreover , assume the positive definiteness conditions for all , , and some .assume that there exists a hilbert space such that and and that converges in the weak operator topology to some .then is continuously invertible in and in the weak operator topology of as . as in , we illustrate the latter theorem by the following example , being an adapted version of example [ ex : illustrating_non_autp ] . recalling the definition of on ) ] .now , \cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\\ + \partial_{0,\nu}^{-1}\left.\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\\ \to\begin{pmatrix}\frac{1}{2 } & 0\\ 0 & \frac{1}{2 } \end{pmatrix}+\partial_{0,\nu}^{-1}\begin{pmatrix}\frac{1}{2 } & 0\\ 0 & \frac{1}{2 } \end{pmatrix}\end{gathered}\ ] ] in the weak operator topology due to periodicity .theorem [ thm : general_cont_depend ] asserts that the sequence weakly converges to the solution of the problem it is interesting to note that the latter system does not coincide with any of the equations discussed above .theorem [ thm : general_cont_depend ] deals with coefficients that live in space - time . going a step further instead of treating ( [ eq : m_n ] ) , we let in be a -convergent sequence of weakly differentiable -functions with limit and support on the positive reals . 
then it is easy to see that the associated convolution operators converge in to .moreover , using young s inequality , we deduce that uniformly in .thus , the strict positive definiteness of \cup[\frac{1}{2},\frac{3}{4}]}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(\mathrm{m}_{1 } ) \end{pmatrix}+\partial_{0,\nu}^{-1}\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(\mathrm{m}_{1 } ) \end{pmatrix}\right)\ ] ] in the truncated form as in ( [ eq : truncated_pos_def ] ) in theorem [ thm : general_cont_depend ] above follows from the respective inequality for \cup[\frac{1}{2},\frac{3}{4}]}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(\mathrm{m}_{1 } ) \end{pmatrix}+\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(\mathrm{m}_{1 } ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(\mathrm{m}_{1 } ) \end{pmatrix}.\ ] ] now , the product of a sequence converging in the weak operator topology and a sequence converging in the norm topology converges in the weak operator topology .hence , the solutions of \cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\right.\\ + \left.\left.\partial_{0,\nu}^{-1}\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\right)\\ \left.+\begin{pmatrix}0 & \partial_{1}\\ \partial_{1,c } & 0 \end{pmatrix}\right)\begin{pmatrix}u_{n}\\ v_{n } \end{pmatrix}=\begin{pmatrix}f\\ g \end{pmatrix}\end{gathered}\ ] ] converge weakly to the solution of the latter considerations dealt with time - translation invariant coefficients .we shall also treat another example , where time - translation invariance is not warranted .for this take a sequence of lipschitz continuous functions with uniformly bounded lipschitz semi - norm and such that converges point - wise almost everywhere to some function .moreover , assume that there exists such that for all .then , by lebesgue s dominated convergence theorem in the strong operator topology , where we anticipated that acts as a multiplication operator with respect to the temporal variable .the strict monotonicity in the above truncated sense of \cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\\ + \left.\partial_{0,\nu}^{-1}\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right)\end{gathered}\ ] ] is easily seen using integration by parts , see e.g. ( * ? ? 
?* lemma 2.6 ) .our main convergence theorem now yields that the solutions of \cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[0,\frac{1}{4}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\right.\\ + \left.\left.\partial_{0,\nu}^{-1}\begin{pmatrix}{{\chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{3}{4},1]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) & 0\\ 0 & { { \chi}}_{_{[\frac{1}{4},\frac{1}{2}]\cup[\frac{1}{2},\frac{3}{4}]}}(n\cdot\mathrm{m}_{1}\!\!\!\mod1 ) \end{pmatrix}\right.\right)\\ \left.+\begin{pmatrix}0 & \partial_{1}\\ \partial_{1,c } & 0\end{pmatrix}\right)\begin{pmatrix}u_{n}\\ v_{n } \end{pmatrix}=\begin{pmatrix}f\\ g \end{pmatrix}\end{gathered}\ ] ] converge weakly to the solution of last section is devoted to the generalization of the well - posedness results of the previous sections to a particular case of non - linear problems . instead of considering differential equationswe turn our attention to the study of differential inclusions . as in the previous section, we begin to consider the autonomous case and present the well - posedness result .let .the problem class under consideration is given as follows where is again a linear material law , arising from an analytic and bounded function for some , is a given right - hand side and is to be determined .in contrast to the above problems , is now a maximal monotone relation , which in particular need not to be linear . by this lack of linearitywe can not argue as in the previous section , where the maximal monotonicity of the operators were shown by proving the strict monotonicity of their adjoints ( in other words , we can not apply corollary [ lin : op : theo ] ) .thus , the maximal monotonicity has to be shown by employing other techniques and the key tools are perturbation results for maximal monotone operators .in the autonomous case , our hypotheses read as follows : we say that satisfies the hypotheses ( h1 ) and ( h2 ) respectively , if 1 . is maximal monotone and _ translation - invariant _ , i.e. for every and we have 2 . for all the estimate holds .assuming the standard assumption ( [ eq : pos_material_law ] ) for the function , the operator is maximal monotone on thus , the well - posedness of ( [ eq : evol_incl ] ) just relies on the maximal monotonicity of the sum of and .since is assumed to be maximal monotone , we can apply well - known perturbation results in the theory of maximal monotone operators to prove that is indeed maximal monotone , which in particular yields that is a lipschitz - continuous mapping on ( see theorem [ thm : minty ] ) .moreover , using hypothesis ( h2 ) we can prove the causality of the corresponding solution operator the well - posedness result reads as follows : [ thm : well_posedness ] let be a hilbert space , a linear material law for some satisfying ( [ eq : pos_material_law ] ) .let and a relation satisfying ( h1 ) .then for each there exists a unique such that moreover , is lipschitz - continuous with a lipschitz constant less than or equal to if in addition satisfies ( h2 ) , then the solution operator is causal .a typical example for a maximal monotone relation satisfying ( h1 ) and ( h2 ) is an extension of a maximal monotone relation satisfying indeed , if is maximal monotone and , we find that is maximal monotone ( see e.g. ) .moreover , obviously satisfies ( h1 ) and ( h2 ) .it is possible to drop the assumption if one considers the differential inclusion on the half - line instead of . 
in this case ,an analogous definition of the time derivative on the space can be given and the well - posedness of initial value problems of the form where is given as the extension of a maximal monotone relation and satisfy a suitable monotonicity constraint , can be shown similarly ( see ) . the general coupling mechanism as illustrated e.g. in also works for the non - linear situation .this is illustrated in the following example .we consider the equations of thermo - plasticity in a domain , given by the functions and are the unknowns , standing for the displacement field of the medium and its temperature , respectively . and given source terms .the stress tensor is related to the strain tensor and the temperature by the following constitutive relation , generalizing hooke s law , where and is a linear , selfadjoint and strictly positive definite operator ( the elasticity tensor ) .the operator is the usual trace for matrices and its adjoint can be computed by .the function describes the mass density and is assumed to be real - valued and uniformly strictly positive , are assumed to be uniformly strictly positive definite and selfadjoint and is a real numerical parameter .the additional term models the inelastic strain and is related to by where is a maximal monotone relation satisfying \right]=\{0\} ] is bounded . for more advanced perturbation results we refer to (* ff . ) . ]we are also able to treat non - autonomous differential inclusions .consider the following problem where and is the canonical extension of a maximal monotone relation with as defined in ( [ eq : extension ] ) .as in subsection [ sub : the - non - autonomous - case ] we assume that satisfies hypotheses [ hyp : nonauto_linear ] ( [ selfadjoint])-([differentiable ] ) .our well - posedness result reads as follows : [ thm : sol - theory ] let , where satisfies hypotheses [ hyp : nonauto_linear ] ( [ selfadjoint])-([differentiable ] ) .moreover , we assume that for every and and the canonical embeddings into of and , respectively . ] let be a maximal monotone relation with then there exists such that for every is a lipschitz - continuous , causal mapping .moreover , the mapping is independent of in the sense that , for and we have that note that in subsection [ sub : the - non - autonomous - case ] we do not require that is -independent .however , in order to apply perturbation results , which are the key tools for proving the well - posedness of ( [ eq : non_auto_incl ] ) , we need to impose this additional constraint ( compare ( * ? ? ? * theorem 2.19 ) ) .as we have seen in subsection [ sub : incl_auto ] the maximal monotonicity of the relation plays a crucial role for the well - posedness of the corresponding evolutionary problem ( [ eq : evol_incl ] ) .motivated by several examples from mathematical physics , we might restrict our attention to ( possibly non - linear ) operators of a certain block structure . as a motivating example , we consider the wave equation with impedance - type boundary conditions , which was originally treated in . [ ex : impedance]let be open and consider the following boundary value problem where denotes the outward normal vector field on and such that . formulating ( [ eq : wave ] ) as a first order system we obtain where and the boundary condition ( [ eq : impedance_bd ] ) then reads as the latter condition can be reformulated as where is defined as in definition [ def : div_grad ] .thus , we end up with a problem of the form where with . 
in order to apply the solution theory, we have to ensure that the operator , defined in that way , is maximal monotone as an operator in . in more abstract version of example [ ex : impedance ] was studied , where the vector field was replaced by a suitable material law operator as it is defined in subsection [ sub : evo_eq_auto ] . following this guiding example ,we are led to consider restrictions of block operator matrices where and are densely defined closed linear operators satisfying and consequently .we set and and obtain densely defined closed linear restrictions of and , respectively . regarding the example above , and , whereas and .having this guiding example in mind , we interpret and as the operators with vanishing boundary conditions and and as the operators with maximal domains .this leads to the following definition of so - called abstract boundary data spaces .let and as above .we define where and are interpreted as closed subspaces of the hilbert spaces and respectively , equipped with their corresponding graph norms .consequently , we have the following orthogonal decompositions the decomposition ( [ eq : orth_decomp_bd ] ) could be interpreted as follows :each element in the domain of can be uniquely decomposed into two elements , one with vanishing boundary values ( the component lying in ) and one carrying the information of the boundary value of ( the component lying in ) . in the particular case of comparison of and the classical trace space can be found in ( * ? ? ?* section 4 ) .let and denote the canonical embeddings .an easy computation shows that \subseteq bd(d) ] and thus , we may define these two operators share a surprising property . the operators and are unitary and coming back to our original question , when defines a maximal monotone operator , we find the following characterization .[ thm : char_bd_cond ] let and be as above .a restriction is maximal monotone , if and only if there exists a maximal monotone relation such that + a. in example [ ex : impedance ] , the operators and are and , respectively and the relation is given by indeed , by the definition of the operator in example [ ex : impedance ] , a pair belongs to if and only if thus , if we show that is maximal monotone , we get the maximal monotonicity of by theorem [ thm : char_bd_cond ] .for doing so , we have to assume that the vector field satisfies a positivity condition of the form for all in case of a smooth boundary , the latter can be interpreted as a constraint on the angle between the vector field and the outward normal vector field . indeed , condition ( [ eq : impedance_bd ] ) implies the monotonicity of and also of the adjoint of ( note that here , is a linear relation ) .both facts imply the maximal monotonicity of ( the proof can be found in ( * ? ? ?* section 4.2 ) ) .b. in the theory of contact problems in elasticity we find so - called frictional boundary conditions at the contact surfaces .these conditions can be modeled for instance by sub - gradients of lower semi - continuous convex functions ( see e.g. ( * ? ? 
?* section 5 ) ) , which are the classical examples of maximal monotone relations .+ let be a bounded domain .we recall the equations of elasticity from example [ ex : visco_elastic ] and assume that the following frictional boundary condition should hold on the boundary ( for a treatment of boundary conditions just holding on different parts of the boundary , we refer to ) : where denotes the unit outward normal vector field and is a maximal monotone relation , which , for simplicity , we assume to be bounded . we note that in case of a smooth boundary , there exists a continuous injection ( see ) and we may assume that \cap[l^{2}(\partial\omega)^{n}]g\ne\emptyset.$ ] then , according to ( * ? ? ?* proposition 2.6 ) , the relation is maximal monotone as a relation on and the boundary condition ( [ eq : frictional ] ) can be written as thus , by theorem [ thm : char_bd_cond ] , the operator is maximal monotone and hence , theorem [ thm : well_posedness ] is applicable and yields the well - posedness of ( [ eq : elastic-1 ] ) subject to the boundary condition ( [ eq : frictional ] ) .we have illustrated that many ( initial , boundary value ) problems of mathematical physics fit into the class of so - called evolutionary problems .having identified the particular role of the time - derivative , we realize that many equations ( or inclusions ) of mathematical physics share the same type of solution theory in an appropriate hilbert space setting . the class of problems accessible is widespread and goes from standard initial boundary value problems as for the heat equation , the wave equation or maxwell s equations etc . to problems of mixed type and to integro - differential - algebraic equations .we also demonstrated first steps towards a discussion of issues like exponential stability and continuous dependence on the coefficients in this framework .the methods and results presented provide a general , unified approach to numerous problems of mathematical physics .we thank the organizers , wolfgang arendt , ralph chill and yuri tomilov , of the conference `` operator semigroups meet complex analysis , harmonic analysis and mathematical physics '' held in herrnhut in 2013 for organizing a wonderful conference dedicated to charles batty s 60th birthday .we also thank charles batty for his manifold , inspiring contributions to mathematics and of course for thus providing an excellent reason to meet experts from all over the world in evolution equations and related subjects .r. picard . a class of evolutionary problems with an application to acoustic waves with impedance type boundary conditions . in _ spectral theory , mathematical system theory , evolution equations , differential and difference equations _, volume 221 of _ operator theory : advances and applications _ , pages 533548 .springer basel , 2012 .
the idea of monotonicity is shown to be the central theme of the solution theories associated with problems of mathematical physics. a ``grand unified'' setting is surveyed covering a comprehensive class of such problems. we illustrate the applicability of this setting with a number of examples. a brief discussion of stability and homogenization issues within this framework is also included.
if the universe approaches homogeneity as one looks on ever-larger scales, then we expect the rms peculiar velocities to approach zero as we average over larger and larger spheres. with this basic notion in mind, astronomers since the path-breaking work of vera rubin and co-workers in the mid-1970s have been trying to measure the scale on which the bulk flow approaches zero, or equivalently, on which the velocity field measured in the local group frame simply reflects the 600 km s$^{-1}$ motion of the local group with respect to the cmb. a definitive measurement of this scale would both confirm a fundamental prediction of the cosmological principle and the gravitational stability paradigm, and test our interpretation of the dipole moment in the cmb. moreover, the measurement of the amplitude of the flow as a function of scale is a strong probe of the matter power spectrum on large scales; it is unaffected by bias, is less sensitive to non-linear effects, and is more sensitive to the longest-wavelength modes than is the measurement of galaxy fluctuations on equivalent scales (cf. strauss 1997). with that in mind, table 1 qualitatively lists some of the important bulk flow measurements in the literature up to the time of this meeting; apologies to any surveys I may have left out. see strauss & willick (1995) for references to the older literature.
\begin{tabular}{ll}
reference & result \\ \hline
aaronson et al. (1986) & convergence to hubble flow at 6000 km s$^{-1}$ \\
lynden-bell et al. (1988) & flow towards the great attractor \\
courteau et al. (1993) & bulk flow of 300 km s$^{-1}$ at 6000 km s$^{-1}$ \\
lauer \& postman (1994) & bulk flow of 600 km s$^{-1}$ at 15,000 km s$^{-1}$ \\
riess et al. (1996) & sne are at rest with respect to the cmb \\
giovanelli et al. (1998) & no bulk flow at 6000 km s$^{-1}$ \\
saglia et al. (1998; efar) & no bulk flow at 10,000 km s$^{-1}$ \\
hudson et al. (1999; smac) & bulk flow of 600 km s$^{-1}$ at 15,000 km s$^{-1}$, in a different direction from lp \\
willick (1999; lp10k) & bulk flow of 600 km s$^{-1}$ at 15,000 km s$^{-1}$, in a different direction from lp \\
dale et al. (1999) & no bulk flow out to 20,000 km s$^{-1}$ \\
\end{tabular}
theoretical prejudice would suggest that on large scales the bulk flows should be of low amplitude, and a number of the analyses listed in table 1 indicate this. the most direct challenge to this was that of lauer & postman (1994), who found a 600 km s$^{-1}$ flow within a volume of radius 15,000 km s$^{-1}$, which was completely unexpected. many of the more recent surveys listed above, and discussed in this conference, were carried out to test this startling result. unfortunately, none have confirmed the lp bulk flow, but the smac and lp10k results of hudson et al. (1999) and willick (1999), respectively, seem to find a bulk flow of similar amplitude, on a similar scale, in a _different_ direction. it is worth emphasizing that despite many efforts, no-one has found any serious errors in the lp data or any methodological problem with their analysis. the results in this table are presented graphically in figure 1, based on a similar figure in postman (1995), which shows the dependence of these results on scale. given the scatter in this diagram, we cannot yet claim as a community to have come to an agreement as to the nature of the flows on large scales.
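To make the connection to the power spectrum explicit, it may help to record the standard linear-theory expression for the expected bulk flow of a sphere of radius $R$ (quoted here for orientation; $\widetilde{W}$ denotes the Fourier transform of the window function defining the sphere):
\[
\sigma_{B}^{2}(R)=\frac{H_{0}^{2}\,f^{2}(\Omega)}{2\pi^{2}}\int_{0}^{\infty}P(k)\,\widetilde{W}^{2}(kR)\,dk,
\qquad f(\Omega)\simeq\Omega^{0.6},
\]
so that the longest-wavelength (smallest $k$) part of the mass power spectrum $P(k)$ is weighted heavily, and the galaxy bias factor does not enter; this is the sense in which bulk flows probe scales and quantities that galaxy clustering measurements on the same scales do not.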
but keep in mind that figure 1 ( and its variants , which were presented throughout this meeting ) is tremendously misleading .first , no single survey probes a given scale ; the value of the scale assigned to each data point is usually some `` typical '' , or weighted mean distance .second , the errors in the bulk flow are really characterized by a covariance matrix , so the error bars given here leave out a substantial amount of information .third , and most important , because the velocity field is more complex than just a bulk flow , any survey which is not perfectly uniformly sampled across the sky with homogeneous errors will necessarily alias power from smaller wavelengths in its bulk flow determination , a point made most directly by watkins & feldman ( 1995 ) .this means that one has to take into account this aliasing in asking whether the bulk flow results of two different surveys are in contradiction ( see hudson s contribution to these proceedings ) .another warning , made by marc davis several times in this conference , was that the error bars in such analyses often only reflect _ statistical _ measurement errors ( e.g. , due to the known scatter of the distance indicator used ) , and ignore _ systematic _ errors , due , e.g. , to the uncertainty in the calibration of the distance indicator relation itself .these systematic errors need to be properly quantified if we are to decide whether there are indeed serious discrepancies between different measurements , as figure [ fig : postman ] appears to show .linear perturbation theory gives a linear relation between the peculiar velocity and gravity fields ; alternatively , it can be expressed as a linear relationship between the density field and the _ divergence _ of the velocity field . in the linear biasing paradigm ( see the discussion in [ sec : bias ] ) , the proportionality factor is the quantity .there is a very extensive history of attempts to measure using a variety of techniques comparing peculiar velocity and redshift survey data ( see , e.g. , strauss & willick 1995 for a review of the earlier literature ) .i tabulate a rather incomplete list of such attempts in table 2 , with one representative entry per method used , apologizing to those whose results are left out . to keep things simple ,i ve restricted myself to recent analyses of _ iras _ galaxies , to avoid questions of the relative bias of _ iras _ and optical galaxies ( e.g. , baker _et al._1998 ) .i have translated the results from the measured cluster abundance to using the observed of _ iras_galaxies ( fisher _ et al . _1994 ) . ll method / reference&result + versus & + sigad _ et al ._ ( 1998 ) + maximum likelihood fit of velocity field model to tf data & + willick _ et al ._ ( 1997,1998 ) + expansion of velocity field in spherical harmonics & + da costa _ et al ._ ( 1998 ) + maximum likelihood fit of to tf data & + freudling _ et al . _ ( 1999 ) + anisotropy of redshift - space clustering & + hamilton ( 1998 ) + anisotropy of spherical harmonic expansion of & + tadros _ et al . _ ( 1999 ) + dipole moment of redshift surveys & + strauss _ et al . _ ( 1992 ) + cluster abundance & + eke _ et al ._ ( 1996 ) + depth of voids & + dekel & rees ( 1994 ) + gaussianity of velocity field & + nusser & dekel ( 1993 ) + figure [ fig : beta ] plots each of these determinations of as a gaussian with mean and standard deviation as given in the table. 
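For reference, the linear-theory relations underlying all of the entries in Table 2 can be written out as follows (a standard sketch, with conventions as in Strauss & Willick 1995):
\[
\mathbf{v}(\mathbf{r})=\frac{H_{0}\,\beta}{4\pi}\int d^{3}r'\,\delta_{g}(\mathbf{r}')\,
\frac{\mathbf{r}'-\mathbf{r}}{|\mathbf{r}'-\mathbf{r}|^{3}},
\qquad
\nabla\cdot\mathbf{v}=-H_{0}\,\beta\,\delta_{g},
\qquad
\beta\equiv\frac{\Omega^{0.6}}{b},
\]
where $\delta_{g}$ is the galaxy density contrast of the sample in question (here IRAS galaxies) and $b$ its linear bias factor. The methods in Table 2 differ in whether they compare predicted and observed velocities, densities, or redshift-space distortions, but in linear theory with linear bias they all constrain this one combination.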
a qualitative sense of whether the community is in agreement is whether the sum of these curves ( shown , rescaled , as the heavy curve ) yields something sensible .the answer is not very convincing ; the sum is far from a gaussian , and is wide enough to include values to make cosmologists of all persuasions happy .it is interesting to compare this with the equivalent plot in strauss & willick ( 1995 ) , based on the literature up to that point ; they find a similarly broad sum , with the same mean ( although not quite as bumpy ) , although only a few of the datapoints are shared between the two analyses . needless to say ,this tells us that serious systematic effects are influencing at least some of these analyses , especially considering that many of them use the _ same _ underlying dataset , which means that the error bars are very far from independent . among the likely important effectsare : * the fact that the effective smoothing scales of these different analyses are different ; * nonlinear effects in the gravity ; * a bias relation more complicated than linear deterministic bias ( see 5 ) ; * possible correlated and systematic errors in the data . these issues are discussed in more detail in the panel discussion on the value of ( willick _ et al ._ , these proceedings ) .many workers over the past 15 years have remarked on the quietness of the velocity field on small scales .this result is tremendously important for constraining cosmological models .it was the smallness of the small - scale velocity dispersion that provided a first inkling to davis _et al . _ ( 1985 ) that the standard cdm model was in trouble , and motivated them to consider a model with ( which we now know to be in dramatic contradiction to cobe normalization ) . more generally , the cosmic virial theorem tells us that a small velocity dispersion indicates a small value of .there are a number of ways in which the quietness of the velocity field can be quantified : * measuring the small - scale velocity dispersion by the anisotropy of the galaxy correlation function in redshift space ; * dividing up the observed bulk flow into large - scale and small - scale components ; * quantifying the residuals that remain after fitting the large - scale flows to a smooth model .the small - scale velocity dispersion as measured from the redshift - space correlation function is a pair - weighted statistic , and is therefore heavily weighted by the clusters in the survey volume .however , the small - scale velocity dispersion is a strong function of local density , being quite a bit higher in the cores of clusters than in the field ( this is why redshift pie diagrams show dramatic fingers of god in dense clusters , and thin filaments elsewhere ) , which means that this statistic depends crucially on exactly how many clusters happen to enter your survey volume ( e.g. , mo _ et al . _ 1993 ) .a number of related statistics have been suggested : working in -space ( szalay _ et al._1998 ) , measuring the small - scale velocity dispersion without the pair - weighting ( davis _ et al . _ 1997; see also baker , this conference ) , and measuring the small - scale velocity dispersion as a function of local density ( strauss _ et al . _the last two of these , at least , indicate that the small - scale velocity dispersion in the field is _ very _ small , less than 150 , but not enough work has yet been done to confirm whether this is consistent with , e.g. 
, currently popular cosmological models with .measuring the coldness of the velocity field from a redshift survey is rather indirect ; better is to work from a peculiar velocity survey directly .sandage ( 1986 ) , brown & peebles ( 1987 ) , burstein ( 1990 ) and others have remarked on how quiet the directly observed velocity field is ; there seems to be little structure on scales smaller than the large - scale bulk flows .et al . _ ( 1989 ) quantified this when they subtracted a bulk flow from the observed peculiar velocity field , and found essentially vanishing residuals .more recently , the velmod technique directly measures the amplitude of velocity field residuals , after accounting for peculiar velocity measurement errors and the _ iras _ predicted velocity field model ; willick _ et al ._ ( 1997 ) and willick & strauss ( 1998 ) also find a small - scale noise below 150 , and evidence that it is an increasing function of density .et al . _ ( 1999 and this conference ) find similar results from their modeling of the flow field of the surface brightness fluctuation data .again , it is not yet clear to me that the models are doing an adequate job of matching this statistic , and and if not , what physical effects might we be missing ? one important complicating effect is the nature of the galaxy bias , which is not going to be simple on small scales , and to which we now turn .galaxy bias exists . we know this from a number of observations : the relative density of elliptical and spiral galaxies is a strong function of environment ( which tells us that it can not be true that _ both _ are unbiased relative to the dark matter ) , and the strong clustering of lyman - break galaxies at high redshift can only be understood if they are strongly biased . once upon a time , we parameterized our ignorance about the relative distribution of dark matter and galaxies with the linear bias model : where was implicitly assumed to be independent of the scale on which was defined . on small scales , where the characteristic fluctuations in are appreciably larger than unity ,equation ( [ eq : linear - bias ] ) can not strictly hold true for , as both and are bounded from below by .more generally , we now believe that bias is quite a bit more complicated a beast than equation ( [ eq : linear - bias ] ) would imply . on very large scales , there are compelling analytic arguments ( e.g. , coles 1993 ; scherrer & weinberg 1998 ) that should be independent of scale . however ,on smaller scales , things can be quite a bit more complicated .in particular , * the galaxy density field can be a non - linear function of the galaxy density field .* there can be physical quantities other than the local density which affect the formation of galaxies , meaning that there will necessarily be some scatter ( sometimes confusingly called `` stochasticity '' ) around any mean relation between the galaxy and mass density field .* the bias function can depend on scale , especially on small scales .* the bias function is already known to depend on cosmological epoch .the continuity equation generically predicts that bias should approach unity with time ( in the absence of galaxy formation and merging ) . 
*the bias function is already known to depend on the sample of galaxies ; clustering strength depends on surface brightness , luminosity ( both optical and infrared ) , star - formation rate , and morphology , although we are still far from sorting out the dependencies .simulations are starting to give us some sense of the complexities involved ( e.g. , blanton _ et al . _ 1999a , b ; frenk , these proceedings ; klypin , these proceedings ), although they are not yet agreeing among themselves on the important physical mechanisms .the controversies are discussed further in the panel discussion on bias ( strauss _ et al . _ in these proceedings ) .i will simply bring up two points : first , we as a community have not thought enough about how the details of the bias relation in all its glory affect the various statistics we are interested in measuring .i ve already hinted above that this may lie at the heart of both the current confusion about the value of , and the small - scale velocity dispersion .dekel & lahav ( 1999 ) have given us a formalism which incorporates the complexities of bias in large - scale structure statistics , but we need analyses of how large these complicating effects are likely to be in practice .second , when one has one s eyes on basic cosmological parameters , the bias function is a nuisance parameter , which we wish to marginalize . butthe bias relation itself encodes a large amount of information about galaxy formation itself , and i imagine that the study of large - scale flows and large - scale structure , even at zero redshift , will expand more and more to learn about how galaxies formed .to measure a peculiar velocity requires the use of distance indicators .there is a large literature on the searches for extra parameters on our two workhorse di s : tully - fisher and fundamental plane/ ; and there are somewhat controversial hints , e.g. , that the tf relation has a small surface - brightness dependence , or that the fundamental plane has a small star formation history or environmental dependence . as we look on larger and larger scales , the signal - to - noise ratio per galaxy peculiar velocity drops , and we become susceptible to ever more subtle ( and therefore difficult to discern ) systematic effects. we must therefore continue to be vigilant in our search for problems in our data , and astrophysical effects , which can mimic the flows we are trying to measure .this meeting will discuss the next generation of substantially more accurate distance indicators , especially the use of type ia supernovae and the surface brightness fluctuation method , which should allow us to trace the velocity field in quite a bit more detail , which will presumably raise a whole new set of exciting questions .as this happens , we have to think rather hard about the questions we actually want to ask .the best surveys are those that are specifically designed to ask pointed questions ( even if , as is so often in astronomy , it ends up making completely unexpected discoveries that lead the observers in unplanned directions ) .one debate that started at this conference , but which we certainly have not seen the end of , is , what problems will we try to solve with the next generation of surveys ? 
until we have the answer to this question clearly in mind , we can not effectively design these surveys and expect them to be scientifically relevant when they are completed .i have unfortunately seen in recent conferences or general reviews on cosmology that the constraints that come from the large - scale peculiar velocity field and from redshift surveys are being de - emphasized . this is perhaps partly due to the success of our cmb colleagues in convincing the community that map and planck are going to measure all relevant parameters to the upteenth decimal place ( yes , i know they are not _ quite _ saying that ) , and partly due to the very real controversies on such basic issues as the value of , the convergence of the velocity field , and the nature of bias , that give the impression that our field is in disarray . however , as avishai dekel pointed out during the meeting , this is the mark of a field at the cutting edge .not everything need be clearcut at the frontline of our research . as we are discovering that different results are not as in perfect agreement with one another as we had hoped they would be , we are learning fundamentally new things about our distance indicators , the nature of bias , the effects of non - linearities , and so on .it may be frustrating that the large - scale flows community is not finding a clean set of results on all quantities we measure , but this should inspire us to work harder and learn more , not to give up in frustration .after my talk , tod lauer told me about a conference on dark matter held in princeton in the mid-1980 s .martin schwarzschild gave the introductory lecture , which was apparently quite brief .he listed , and succinctly described , a series of fundamental questions about the nature of dark matter and how it can be measured .he then said , in his familiar german accent : `` these are the questions .you have the next five days to answer them ! ''the rest of this volume indicates how successful we as a community have been in answering the questions i ve laid out in this introductory talk .baker , j.e . ,davis , m. , strauss , m.a ., lahav , o. , & santiago , b.x .1998 , , 508 , 6 blanton , m. , cen , r. , ostriker , j.p ., & strauss , m.a .1999 , , 523 , september 20 issue blanton , m. , cen , r. , ostriker , j.p . , strauss , m.a . , & tegmark , m. 1999 , preprint ( astro - ph/9903165 ) brown , m. e. , & peebles , p. j. e. 1987 , , 317 , 588 burstein , d. 1990 , rep .phys . , 53 , 421 coles , p. 1993, , 262 , 1065 courteau , s. , faber , s. m. , dressler , a. , & willick , j. a. 1993 , , 412 , l51 da costa , l. n. , nusser , a. , freudling , w. , giovanelli , r. , haynes , m.p . , salzer , j.j . , & wegner , g. 1998 , mnras , 299 , 425 dale , d.a . ,giovanelli , r. , haynes , m.p . ,campusano , l.e . , hardy , e. , & borgani , s. 1999 , , 510 , l11 davis , m. , efstathiou , g. , frenk , c. s. , & white , s. d. m. 1985 , , 292 , 371 davis , m. , miller , a. , & white , s.d.m .1997 , , 490 , 63 dekel , a. & lahav , o. 1999 , , 520 , 24 dekel , a. , & rees , m. j. 1994 , , 422 , l1 eke , v.r . , cole , s. , & frenk , c.s .1996 , , 282 , 263 fisher , k. b. , davis , m. , strauss , m. a. , yahil , a. , & huchra , j. p. 1994 , mnras , 266 , 50 freudling , w. , zehavi , i. , da costa , l.n ., dekel , a. , eldar , a. , giovanelli , r. , haynes , m.p . ,salzer , j.j . ,wegner , g. , & zaroubi , s. 1999 , apj , in press ( astro - ph/9904118 ) giovanelli , r. , haynes , m.p ., freudling , w. , da costa , l.s . , salzer , j.j . 
, & wegner , g. 1998 , , 505 , l91 groth , e. j. , juszkiewicz , r. , & ostriker , j. p. 1989, , 346 , 558 hamilton , a.j.s .1998 , in _ringberg workshop on large - scale structure _ , ed .d. hamilton ( kluwer , amsterdam ) , 185 hudson , m.j . ,smith , r.j ., lucey , j.r . , schlegel , d.j . , & davies , r.l .1999 , , 512 , l79 lauer , t. r. , & postman , m. 1994 , , 425 , 418 lynden - bell , d. , faber , s. m. , burstein , d. , davies , r. l. , dressler , a. , terlevich , r. j. , & wegner , g. 1988 , , 326 , 19 mo , h. j. , jing , y. p. , & brner , g. 1993 , mnras , 264 , 825 nusser , a. , & dekel , a. 1993 , , 405 , 437 postman , m. 1995 , in _ dark matter _ , proceedings of the fifth maryland astrophysics conference , aip conference series , 336 , 371 riess , a. , press , w. , & kirshner , r.p .1995 , , 445 , l91 saglia , r.p ._ et al . _1998 , talk presented at _ evolution of large - scale structure : from recombination to garching _ , garching , august sandage , a. 1986 , , 307 , 1 scherrer .r.j . , & weinberg , d.h .1998 , , 504 , 607 sigad , y. , eldar , a. , dekel , a. , strauss , m.a . ,& yahil , a. 1998 , , 495 , 516 strauss , m.a .1997 , in _critical dialogues in cosmology _ , edited by neil turok ( singapore : world scientific ) , 423 strauss , m.a . , ostriker , j.p . , & cen , r. 1998 , , 494 , 20 strauss , m.a . , & willick , j.a .1995 , physics reports , 261 , 271 strauss , m. a. , yahil , a. , davis , m. , huchra , j. p. , & fisher , k. b. 1992 , , 397 , 395 szalay , a.s . , matsubara , t. , & landy , s.d .1998 , , 498 , l1 tadros , h. _ et al . _ 1999 , ,305 , 527 tonry , j.l . ,blakeslee , j.p . ,ajhar , e.a . , & dressler , a. 1999 , preprint ( astro - ph/9907062 ) watkins , r. , & feldman , h. a. 1995 , , 453 , l73 willick , j.a .1999 , , in press ( astro - ph/9812470 ) willick , j.a . , & strauss , m.a .1998 , , 507 , 64 willick , j.a ., strauss , m.a . ,dekel , a. , & kolatt , t. 1997 , , 486 , 629
this introductory talk to the 1999 victoria conference on large - scale flows will present the `` big questions '' which will be discussed in the conference : * does the velocity field converge on the largest scales ? * why can't we agree on the value of ? * how can we properly measure the small - scale velocity dispersion ? * just how complicated can biasing be ? * how universal are the distance indicators we are using ? * how do we design our next generation of surveys to answer the above questions ? one of the great advantages of giving a review talk at the beginning of a conference is that i get to ask all the questions ; it is up to the rest of the participants to come up with the answers . that being said , i will interject my own opinions here and there , occasionally colored by what i learned at the conference itself ( a benefit of hindsight i of course did not have at the time i gave the talk itself ! ) . however , i will resist the temptation to steal too much from avishai dekel's conference summary .
the decomposition of complex dynamical systems into simpler subsystems using different rates of change ( multiple time scales ) for different subsystems is common in physical and engineering models . the main difficulty in applications is a `` hidden '' , implicit form of the decomposition of the system evolving at different time scales . namely , there is no explicit representation of the system in terms of relatively fast and slow subsystems available . a formal mathematical basis to cope with this problem is based on the notion of singularly perturbed vector fields . let us briefly introduce some key ideas of singularly perturbed vector fields ( spvfs ) that can be used to treat the decomposition problem in general . roughly speaking , a singularly perturbed vector field ( spvf ) is a vector field defined in a domain of euclidean space that depends on a small parameter such that for any point , belongs to an a priori fixed fast subspace of smaller dimension . moreover , the dimension of does not depend on the choice of the point . thus , in this case the vector field can be decomposed into a fast sub - field that belongs to the fast subspace and its complement representing a slow sub - field . of course this is not a formal description , which is more sophisticated . additionally , if does not depend on then the vector field represents ( by definition ) a linearly decomposed singularly perturbed vector field . accordingly , the notion of the linearly decomposed singularly perturbed vector field is a geometrical analog of a singularly perturbed system . a formal concept ( a theory of spvfs ) can be useful for practical applications only if it is supported by an identification algorithm for these fast sub - fields . in a number of previous papers an algorithm for linearly decomposed singularly perturbed vector fields has been constructed . this algorithm is based on a global linear interpolation procedure for the original vector field that we call a global quasi - linearization ( gql ) ( see e.g. ) . the theory of singularly perturbed vector fields is a coordinate - free version of singularly perturbed systems of odes . it can not be used in its original form to study the influence of transport processes in reaction - diffusion systems . thus , the main formal object of our current study has to be modified , as it combines a singularly perturbed vector field ( reaction term ) and a linear operator , typically of second order ( diffusion term ) . here belongs to a set in euclidean space ; typically it is a segment . is a fast subspace that contains . our main assumption here is simplicity of the slow invariant manifold . it means that the equation defines a zero approximation of a stable invariant slow manifold . it also means that any fast subspace has a one - point intersection with the slow invariant manifold , which is an attractive singular point of the fast sub - system . our next assumption concerns simplicity of the fast dynamics . namely , the length of the fast trajectory that joins the point and the fast singular point is less than . for any , introduce the open set . outside of the slow neighborhood of the slow manifold the fast component of the vector field satisfies the inequality . the fast trajectory with the initial point is a curve and . under our assumptions , its length satisfies for any . it means that for any the point belongs to the slow neighborhood . therefore the fast motion time is less than .
after this time the solution of the fast subsystem belongs to the slow neighborhood . the influence of the slow sub - system on this estimate is negligible . consider the system of pdes with 1d spatial transport / diffusion terms . under the main assumptions ( 1 ) and ( 2 ) above , it can be cast in the fast time as , where and are elliptic differential operators of second order . the transport term is treated as slow compared to the fast component of the vector field , i.e. , outside of the slow neighborhood . here is a constant that typically does not exceed . an additional assumption on the fast subsystem is that for any . we check the length of the fast trajectory that belongs to the fast subspace ; its starting point is and its final point is . the fast trajectory with the initial point is a curve where and . under our assumptions its length satisfies . it means that for any the point belongs to the slow neighborhood . therefore , the fast motion time is less than . after this time the solution of the fast subsystem belongs to the slow neighborhood . the influence of the slow sub - system on this estimate is negligible , similarly to the previous subsection . in this section the redim method is discussed as a method to construct the manifold approximating the relatively slow evolution of the detailed system solution profiles . recall the definition of singularly perturbed profiles . accordingly , the following representation of the system eq . ( [ eq : main ] ) can be obtained . the slow system evolution is then controlled by , which are assumed to change slowly compared to the fast variables . we suppose that , are smooth functions . initial data for the system eq . ( [ eq : spp1 ] ) are . recall that the functions are of the same order . then , while . suppose also that the operators ( see assumption ( 2 ) above ) have the same order as the terms . recall that the zero approximation of the slow invariant manifold in the phase space ( the space of species ) is represented in the implicit form . the initial profile is . denote by a profile that is the solution of ( [ eq : spp1 ] ) at time with the initial profile ( initial data ) . for a system in the general form ( [ eq : main ] ) this information is absent ; thus , the question of how to access , e.g. , the zero order approximation of , which belongs to for all , represents the main problem of model reduction for a reaction - diffusion system . the set is called the reaction - diffusion manifold ( redim ) and is its zero approximation ( for ) . note that if the dimension of the profile is equal to ( ) , then . in the framework of the redim , the manifold of the relatively slow profile evolution is constructed / approximated by using the so - called _ invariance condition _ ( see e.g. for more details ) . the construction of an explicit representation of a low - dimensional manifold starts from an initial solution , which is then integrated with the vector field of the reaction - diffusion pdes system : , where the evolution of the manifold along its tangential space is forbidden by restricting it to the normal ( or transverse ) subspace . this is achieved by the local projector : , where is the identity matrix , denotes the tangential subspace , and is the moore - penrose pseudo - inverse of the local coordinates jacobi matrix .
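before moving on to the construction itself , the assumption made above , that the transport term is negligible on the fast time scale , can be illustrated numerically . the toy model below is not the system of the paper : it is a generic fast - slow reaction - diffusion pair with an invented slow manifold , used only to show that , off the slow manifold , a stiff reaction term dwarfs a finite - difference diffusion term .

```python
import numpy as np

# toy fast-slow reaction-diffusion pair (invented for illustration only):
#   u_t = -(u - g(v))/eps + d * u_xx     (fast relaxation toward u = g(v))
#   v_t =  f(u, v)        + d * v_xx     (slow)
eps, d = 1.0e-3, 1.0e-2
nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]

g = lambda v: v / (1.0 + v)            # assumed slow manifold u = g(v)
v = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
u = g(v) + 0.3 * np.cos(np.pi * x)     # profile displaced off the manifold

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return lap

reaction = np.abs(-(u - g(v)) / eps)
diffusion = np.abs(d * laplacian(u))

# off the slow manifold the stiff reaction term dwarfs the transport term,
# so the fast sub-system can be analysed with diffusion neglected
print("max |reaction term|  =", reaction.max())
print("max |diffusion term| =", diffusion.max())
print("ratio (diffusion/reaction) =", diffusion.max() / reaction.max())
```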
in this special case ( with the evolution restricted by the projector defined above ) the evolution of the manifold eq . ( [ eq : m_redim ] ) is computed in the normal direction until the stationary solution is reached . now , if the main assumption of the study is valid , the manifold will evolve within the fast manifolds of the vector field eq . ( [ eq : main ] ) and will converge asymptotically to an invariant system manifold approximating the slow profile evolution . [ figure : the manifold is represented by a mesh ; the stationary solution profile of the system ( [ eq : m_mentenpde ] ) ( black solid curve ) in the 1d case corresponds to the 1d redim due to dimensional considerations ; the approximation of the fast part of the homogeneous system ( [ eq : m_mentenode ] ) solution trajectory starting from the boundary state ( [ eq : for 19 ] ) is shown by the dashed line . ] the 3d michaelis - menten model is considered here as an illustrative example of the redim approach . the original mathematical model of the enzyme biochemical system consists of three odes . the system parameters are taken as , , , , ( see e.g. for details and references ) . by taking 1d diffusion into account we obtain the following pdes system , with the constant diffusion coefficient taken as : . the system ( [ eq : m_mentenpde ] ) is considered with the following initial and boundary conditions : . here are the coordinates of the equilibrium point and is the spatial variable . the initial conditions are chosen to be straight lines ; they satisfy the general assumption and join initial and equilibrium values on the boundaries . first , several numerical experiments were performed ( see fig . [ fig : f1 ] ) . a 2d slow manifold for the homogeneous system ( [ eq : m_mentenode ] ) was found by the global quasi - linearisation ( gql ) method ( see the appendix for a short description of gql ) . the stationary solution profile of the system ( [ eq : m_mentenpde ] ) was also computed . figure [ fig : f1 ] shows the connection between the zero approximation of the slow manifold and the profile of the stationary system solution of the pde in the original coordinates . in fig . [ fig : f1 ] the system solution profile can be roughly subdivided into two parts : the slow part of the stationary solution , which is very close to the slow manifold of the homogeneous system , and a second one , which is influenced by the diffusion term . the dashed line in this figure represents an approximation of the linear fast sub - field ( 1d in this case ) . as in the previous section , the main assumption remains that the transport term is slow compared with the fast vector field . by applying the redim approach , the stationary solution of the following system should represent the one - dimensional redim , where the following notations have been used : is the 3x3 identity matrix , is the system state vector , the projection matrix onto the manifold's tangent space is given by , and the vector fields of the reaction and diffusion terms are ; here is the manifold parameter and is the gradient of the manifold parameter in . now , by using as a local manifold parameter , the system ( [ eq : m_mentenredim ] ) can be simplified to only two equations , for and for . they were integrated and the stationary solution has been found for the 1d redim , which completely coincides with the stationary profile of the system ( see fig . [ fig : f2 ] ) . the stationary solution of the system ( [ eq : m_mentenredim ] ) with represents the 2d redim . here , are the two manifold parameters .
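as a reproducibility aside , the qualitative picture of fig . [ fig : f1 ] ( fast collapse onto a lower - dimensional surface followed by slow drift toward equilibrium ) can be checked with the homogeneous kinetics alone . the sketch below integrates a standard three - variable michaelis - menten mechanism ; because the rate constants quoted in the text are not legible in this copy , the values used here are generic placeholders , and this standard formulation is not necessarily identical to the paper's .

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder rate constants; the paper's actual parameter values are not
# reproduced here, and this standard formulation may differ from the paper's.
k1, km1, k2, e0 = 1.0e2, 1.0, 1.0, 1.0

def menten_rhs(t, y):
    """three-variable michaelis-menten kinetics: substrate s, complex c, product p."""
    s, c, p = y
    ds = -k1 * (e0 - c) * s + km1 * c
    dc = k1 * (e0 - c) * s - (km1 + k2) * c
    dp = k2 * c
    return [ds, dc, dp]

sol = solve_ivp(menten_rhs, (0.0, 20.0), [1.0, 0.0, 0.0], method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-10)

# fast stage: the complex c races onto its quasi-steady surface c*(s);
# slow stage: s and p drift while c keeps tracking that surface.
km = (km1 + k2) / k1                      # michaelis constant of the toy model
for t in (0.0, 0.05, 5.0):
    s, c, p = sol.sol(t)
    print(f"t={t:5.2f}  c={c:.4f}  quasi-steady c*(s)={e0 * s / (s + km):.4f}")
```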
in this case projection matrix of the manifold tangent spaceis considered by where and is the moore - penrose pseudo - inverse of : .the components of the diffusion term are the following : by using , as a local coordinates on the manifold of the system ( [ eq : m_mentenredim ] ) can be simplified to only one equation for .it was integrated and the stationary solution has been found for 2d redim .figure [ fig : f3 ] shows a connection between the 2d redim , initial solution for the redim and/or slow homogeneous system manifold as in figs.[fig : f1 ] and [ fig : f2 ] .the stationary solution profile of the system illustrates the implementation and quality of the the redim approach to approximate the low- dimensional invariant manifold of relatively slow evolution of the reacting - diffusion system .figure [ fig : f3 ] shows the stages of the redim construction . on the left the zero order approximation for a homogeneous system eq .( [ eq : m_mentenode ] ) . in the middle one can see the stationary solution profile of the pdes system eq .( [ eq : m_mentenpde ] ) , and on the right the converged stationary redim equation eq .( [ eq : m_mentenredim ] ) solution is shown together with the stationary systems solution profile .one can see that 2d redim manifold approximates the relatively slow 2d system profile evolution .it means that when the system solution profile evolves eq .( [ eq : m_mentenredim ] ) far form this surface it will evolve relatively fast ( see subsection 4.1 ) towards 2d redim along the fast direction of the fast subspace ( see fig .[ fig : f1 ] ) and then finally attains the stationary system solution profile . in this way ,relative fast system dynamics cab be decoupled and the model is reduced to 2d model as a profile evolving within 2d redim .fast sub - fields and fast manifolds play a pivotal role in the theory and applications of the spvf .the fast manifolds approximation is crucial for practical realization of the suggested spvfs framework .a procedure for evaluation of the dimension and structure of fast sub - fields is proposed in this section . in the case when fast manifolds and the system decomposition have linear structure they can be identified by a gap between the eigenvalues of an appropriate global linear approximation of the right hand side ( rhs ) - vector function of a homogeneous system ( see for detailed discussion ) note that we did not use a hidden small parameter in , because its existence is not known a priori and has to be validated in a course of application of the gql .now , if has two groups of eigenvalues : so - called small eigenvalues and large eigenvalues that have sufficiently different order of magnitude , then the vector field is regarded as linearly decomposed asymptotic singularly perturbed vector field . accordingly , fast and slow invariant sub - spaces given by columns of the matrices corresponding define the slow and variables .namely , now , if we denote then , new coordinates suitable for an explicit decomposition ( and coordinates transformation ) are given by : the decomposed form and corresponding fast and slow subsystems becomes the small system parameter controlling the characteristic time scales in ( [ eq:14_ode_decomposed ] ) can be estimated by the gap between the smallest eigenvalue of the slow group and the largest eigenvalue of the fast group of eigenvalues in principle , the idea of the linear transformation is not new , see e.g. 
, but the principal point of the developed algorithm concerns the evaluation of this transformation . we have developed an efficient and robust method that produces the best possible ( to leading order ) decomposition with respect to the existing multiple - scales hierarchy ( see the attachment and for more details ) . a framework for manifold - based model reduction of reaction - diffusion systems has been established in the current work . this follows the original ideas of singularly perturbed vector fields developed earlier . within the suggested concept , the problem of model reduction is treated as the restriction of the original system to a low - dimensional manifold embedded in the system's state space . the manifold encounters the stationary states of the degenerate fast sub - field of the vector field defined by the reaction - diffusion system . the main assumption of weak dependence of the fast system sub - field of the reaction - diffusion pdes vector field on the diffusion has been formulated . under this assumption the theory of singularly perturbed vector fields was extended to systems with molecular diffusion included . the developed framework can be used to justify the so - called redim method developed for reacting flow systems . for illustration , the michaelis - menten chemical kinetics model is extended to describe a reaction - diffusion process . this example is used as an application that illustrates the method and the suggested framework . it was found that the relatively fast 1d sub - field can be decoupled and the system can be reduced to and represented by a 2d reduced system . financial support by the dfg within the german - israeli foundation under grant gif ( no : 1162 - 148.6/2011 ) is gratefully acknowledged .
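the gql identification step sketched above ( fit a global linear approximation to the right - hand side , then look for a gap in the magnitudes of its eigenvalues ) can be prototyped in a few lines . the version below uses an ordinary least - squares fit on sampled states , which is only one plausible reading of the procedure and not the authors' implementation ; the gap threshold and the toy stiff system are arbitrary choices .

```python
import numpy as np

def gql_decomposition(states, rhs_values, gap_factor=10.0):
    """fit rhs ~ T @ state globally, then split eigenvectors into slow/fast
    groups at the largest gap in |eigenvalue| (sketch, not the paper's algorithm)."""
    # least-squares global linear approximation T of the vector field
    T, *_ = np.linalg.lstsq(states, rhs_values, rcond=None)
    T = T.T                                  # so that rhs ≈ T @ state
    eigvals, eigvecs = np.linalg.eig(T)
    order = np.argsort(np.abs(eigvals))      # small (slow) ... large (fast)
    lam = np.abs(eigvals[order])
    # locate the largest multiplicative gap between consecutive eigenvalues
    ratios = lam[1:] / np.maximum(lam[:-1], 1e-30)
    split = np.argmax(ratios) + 1
    if ratios.max() < gap_factor:
        print("warning: no clear spectral gap; decomposition is doubtful")
    slow = eigvecs[:, order[:split]]         # columns span the slow subspace
    fast = eigvecs[:, order[split:]]         # columns span the fast subspace
    return T, slow, fast, lam

# toy usage with an explicitly stiff linear field (for illustration only)
rng = np.random.default_rng(0)
A = np.diag([-1.0, -2.0, -500.0])            # one fast direction
states = rng.normal(size=(200, 3))
rhs_values = states @ A.T
_, slow, fast, lam = gql_decomposition(states, rhs_values)
print("sorted |eigenvalues|:", lam)
print("fast subspace dimension:", fast.shape[1])
```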
a geometrically invariant concept of fast - slow vector fields perturbed by transport terms ( describing molecular diffusion ) is proposed in this paper . it is an extension of our concept of singularly perturbed vector fields to reaction - diffusion systems . this paper is motivated by an algorithm of reaction - diffusion manifolds ( redim ) . it can be considered as its theoretical justification extending it from a practical algorithm to a robust computational method . fast - slow vector fields can be represented locally as `` singularly perturbed systems of pde '' . the paper focuses on development of the decomposition to a fast and slow subsystems . it is demonstrated that transport terms can be neglected ( under reasonable physical assumptions ) for the fast subsystem . a simple practical application example of the proposed algorithm for numerical treatment of reaction - diffusion systems is demonstrated .
_ in situ _ measurements of solar wind turbulence consistently show one dimensional magnetic energy spectra that obey a broken power law , typically having spectral indices in the inertial range and steepening in the dissipation range .the inertial and dissipation ranges correspond to scales and respectively , where is the wavenumber and is the relevant ion kinetic scale , typically either the ion gyroradius or inertial length .although measurements of the magnetic energy spectrum are common , the spectrum alone does not provide direct insight into the nature of solar wind fluctuations . since the solar wind turbulence is electromagnetic, the expectation is that the solar wind fluctuations will exhibit characteristics of the three basic electromagnetic plasma wave modes at the large scales of the inertial range : alfvn , fast magnetosonic , and slow magnetosonic .similarly , the dissipation range is expected to be populated by the kinetic scale counterparts of the three wave modes . based upon a variety of metrics, the alfvn mode appears to be the dominant wave mode in the inertial range .the composition of solar wind fluctuations in the dissipation range is less well constrained due to the dearth of high frequency measurements in the free solar wind necessary to probe this region , but recent observations suggest the kinetic alfvn wave ( kaw ) is the dominant mode in the free solar wind the kaw is the dissipation range extension of the inertial range alfvn wave in the region of wavenumber space , where parallel and perpendicular are with respect to the local mean magnetic field , .one of the commonly used metrics is the magnetic variance anisotropy ( ) , first introduced by .the is a measure of the fluctuation anisotropy and has come to be defined as , where angle brackets indicate averages , the are fluctuating quantities about the local mean magnetic field , and is the total energy in the plane perpendicular to the local mean magnetic field .it is important to not confuse this quantity with the wavevector anisotropy inherent to and often discussed in plasma turbulence : the wavevector anisotropy and the are not directly related quantities .physically , the can be viewed as a measure of the magnetic compressibility of the plasma , , since .the version of the first introduced by has been expanded upon in recent papers using the defined above , e.g. , .we attempt here to provide a discussion that includes a theoretical basis for the interpretation of measurements in the solar wind . in [ sec : waves ] , we describe in detail the expected behaviour of the of the three constituent electromagnetic wave modes in the solar wind inertial range and discuss their transition into the dissipation range . in [sec : combos ] , we discuss the effect of superposing the three wave modes . [ sec : measure ] explores alternative procedures for constructing the , and compares the linear theory prediction from [ sec : waves ] to fully nonlinear turbulence simulations as a means establish the validity of linear theory to nonlinear turbulence . [ sec : solar ] reviews some of the recent uses of the to quantify the composition of solar wind fluctuations and presents new measurements of the from the stereo a spacecraft .we begin by enumerating the properties of the three linear wave modes which are the collisionless counterparts to the alfvn and fast and slow magnetosonic modes in compressible mhd . 
since the solar wind is a weakly collisional plasma , we focus here on the roots provided by the collisionless vlasov - maxwell ( vm ) system of equations , which are a function of , , and , where is the ion ( protons only ) thermal speed . and unless otherwise stated .figure [ fig : waves ] presents a schematic diagram summarizing the nomenclature of the three wave branches in different regions of wavenumber space . at scales and in the free solar wind , the alfvn wave is the dominantly observed wave mode , where is the ion ( proton ) gyroradius , is the proton gyrofrequency , is the ion inertial length , and is the ion plasma frequency . the turbulent energy cascade at these scales has been thoroughly explored in the literature , where the one dimensional perpendicular magnetic energy is observed to scale as with a wavevector anisotropy .when the turbulence is in critical balance i.e , the nonlinear cascade rate is of order the linear frequency the theoretical values for and are expected to be and for the model of goldreich - sridhar or and for the dynamic alignment model .which model is correct is not completely settled .solar wind observations typically show a magnetic field spectrum with and total ( kinetic plus magnetic ) energy spectrum with , while mhd turbulence simulations suggest the dynamic alignment model may be more correct . for the purpose of modelling the energy cascade ,we assume the goldreich - sridhar model for simplicity .the choice of model does not significantly affect the behaviour of the .the properties of the mhd alfvn root are well understood .however , the mhd solution of the alfvn root is incompressible since , suggesting the alfvnic is unbounded .the full collisionless vm solution of the alfvn root in the inertial range has a small but non - vanishing which increases with , thereby causing the to decrease with .the vm solution for the alfvn root ( black ) and dispersion relation are plotted against in figures [ fig : betas ] and [ fig : betas_freq ] for , and ( dash - dotted , dotted , solid , and dashed respectively ) , and is the alfvn speed .the solution assumes an inertial range spectral anisotropy as described by the goldreich - sridhar model , , where is the isotropic outer - scale of the turbulence and is taken to be , consistent with _ in situ _ solar wind measurements at au .continuing along the critical balance cascade to kinetic scales naturally produces spectral anisotropy with .so , we next consider the alfvn root with and . 
at these scales , the alfvn wave transitions into the kinetic alfvn wave ( kaw ) .the kaw is dispersive , damped , and much more compressible than the alfvn wave .the dispersive nature of the root steepens the magnetic energy spectrum to in the undamped case , while the inclusion of damping steepens the spectrum further .damping here refers to collisionless wave - particle interactions , primarily ion transit time damping on the parallel magnetic field that peaks at ion scales for and electron landau damping on the parallel electric field that peaks at electron scales .the spectral anisotropy of the kaw is assumed to scale as .the increased compressibility of the kaw can be seen at scales in figure [ fig : betas ] , where the kaw is seen to have a dependent plateau .the kaw in the dissipation range also depends upon the ion - to - electron temperature ratio , .an analytical form for the kaw can be derived ( see appendix [ app : ermhd ] ) within the framework of electron reduced mhd ( ermhd ) developed in , the kaw has no wavenumber dependence and thus plateaus at a value determined by and .the and temperature ratio dependence of the kaw derived from vm theory is plotted in figure [ fig : kaw_beta_dep ] , where the value for the is averaged across the plateau at ] . ]the majority of _ in situ _ solar wind observations of the inertial range suggest the component responsible for most of the measured free solar wind compressibility are pressure balanced structures ( pbss ) ; however , linear pbss are degenerate with the , non - propagating limit of slow waves . for simplicity , we classify linear pbss as slow modes .the association of the compressible portion of the solar wind to pbss is due to the measured anti - correlation of the thermal and magnetic pressure or anti - correlation of density and magnetic field magnitude at inertial range scales .recent analyses exploring the more telling anti - correlation of density and parallel magnetic field suggest that the compressible portion of the solar wind is primarily composed of propagating slow modes rather than pbss .these analyses suggest that on average alfvn modes comprise of the energy and slow modes of the energy .the notion that the warm , , propagating slow mode exists in the solar wind defies conventional wisdom that the slow mode is strongly damped in such plasmas , so we here elucidate this point .theory and numerical simulations suggest the slow mode does not have its own active turbulent cascade ; rather , the slow mode is passively cascaded by the alfvnic turbulence .the slow mode damping rate is proportional to the parallel wavenumber , , and the strong damping of the slow mode suggests , where , , and are the parallel wavenumber , linear damping rate , and frequency of the slow mode .however , the passive cascade of the slow mode implies that the slow modes are cascaded by the alfvn cascade on the alfvn timescale , .therefore , at a given , the parallel wavenumber of the slow mode compared to the parallel wavenumber of the alfvn mode determines the strength of the damping relative to the cascade rate : at a particular scale , , a slow mode with can be passively cascade before being damped since the slow mode damping rate will be smaller than the alfvn cascade time , . 
for slow waves in the mhd limit , ( see appendix [ app : mhd ] for the derivation of this equation ) , and the vm solution does not deviate significantly from the mhd solution until finite larmor radius effects become dominant at .the vm solution for the slow mode ( blue ) is plotted in figure [ fig : betas ] assuming a passive cascade of slow waves with more anisotropic cascade decreases the below that in the figure but does not alter the qualitative behaviour .the for a pbs , i.e. , slow mode , is identically zero since for these modes .since the slow mode is strongly damped at large unless , we will exclude a discussion of the behaviour of this root for . at inertial range scales ,the fast mode is well described by mhd , whose solution provides an identical to that of slow waves : . however , the distribution of fast waves in wavenumber space is not as well constrained as that of slow modes since the fast mode is not strongly damped for parallel propagation .further , compressible mhd turbulence simulations indicate that the fast mode is cascaded isotropically in wavenumber space .therefore , a turbulent cascade of fast modes has a that can take on all possible values , and the measured average value of the will depend sensitively on the distribution of fast modes in wavenumber space . to represent the fast mode turbulence , we plot in figure [ fig : betas ] the for a fast wave ( red ) with .we choose this value as representative because the average of an isotropic distribution of fast modes will be dominated by those modes with and the largest measured in a large ensemble of solar wind data is . in the dissipation range at scales ,the fast mode transitions to a parallel whistler ( ) or an oblique whistler ( ) wave with .the whistler mode is well described by the electron mhd ( emhd ) equations , which describes phenomena on scales .note , one must take care when applying the emhd equations , because emhd is only valid for . for scales and , emhd describes the cold ion limit of kaws and not whistler waves . solving the emhd equations provides ( see appendix [ app : emhd ] ) , which is a good approximation of the vm solution and can again attain all values between and infinity and is sensitively dependent upon the wavenumber distribution .therefore , if the propagation angle or whistler wavenumber distribution does not change from that of the inertial range fast modes , the average will approximately double in the dissipation range . however , theory and simulation suggest the cascade of whistler waves is highly anisotropic in the same sense as critically balanced kaws , .the anisotropic nature of the dissipation range cascade suggests the average of a distribution of whistler waves will decrease rapidly to a independent value as the cascade progresses to smaller scales .the whistler portion of the fast modes plotted in figure [ fig : betas ] follows the critical balance prediction described above , with up to and then .note that the of whistler waves has a very weak and ion - to - electron temperature ratio dependence .the apparent dependence of the whistler branch in figure [ fig : betas ] is due to the fast - whistler break point being at ; therefore , the dependence in the figure would vanish if the x - axis were normalized to the ion inertial length . for completeness, the fast mode to whistler transition is shown for several different propagation angles , , in figure [ fig : beta1_whistler_var ] .the agrees well with the predictions above , except in the case . 
for such highly oblique angles , the fast mode instead transitions to an ibw : at scales and , the fast mode becomes the electrostatic ibw with frequency approximately equal to an integer multiple of the ion cyclotron frequency . the transition to the ibw is clear in the fast / whistler wave dispersion relations in figure [ fig : beta1_whistler_freq ] , corresponding to the same angles as presented in [ fig : beta1_whistler_var ] . although the ibw is electromagnetic during the transition from the electromagnetic fast or alfvn modes , ibws are dominantly electrostatic . the properties of the transition electromagnetic ibw have not been fully explored in the literature , so we will not consider the potential contribution of ibws to the dissipation range . the wavenumber distribution of whistler waves generated in a local instability , such as the parallel firehose and whistler anisotropy instabilities , is not easily determined and depends upon the instability , so we will not consider them further . assuming the solar wind consists of combinations of the three wave modes described in the preceding section , both in the inertial and dissipation ranges , we need to consider the effect of superposing the modes . the three linear modes can be combined into a total measure \[ \frac{ ac~ ( 1 - c_{\parallel a } ) + ( 1-ac ) \left [ fs~ ( 1 - c_{\parallel f } ) + ( 1-fs ) ( 1 - c_{\parallel s } ) \right ] }{ ac~c_{\parallel a } + ( 1-ac ) \left [ fs~c_{\parallel f } + ( 1-fs ) c_{\parallel s } \right ] } , \] where is the fraction of alfvn to total energy , is the fraction of fast to total compressible ( fast plus slow ) energy , , and subscripts a , f , and s refer to alfvn , fast , and slow modes . note that . and for each mode can both be written in terms of the as , and the assumptions that only slow modes with are weakly damped and that approximately parallel fast modes dominate the inertial range fast mode lead to asymptotically small and large values of the slow and fast mode , respectively . due to the asymptotically large and small values of the component , we can estimate the value of in the inertial range provided and by using the of each mode from figure [ fig : betas ] to estimate the compressibilities , which have only a very weak dependence : , , , , , and . therefore , in the inertial range , note that whether the slow modes are in fact propagating slow modes or pbss does not affect the estimate , because is asymptotically small in either case . therefore , the inertial range depends only on the alfvn - to - total energy ratio and the fast - to - total compressible energy ratio , and is unable to differentiate between propagating slow modes and pbss . note that for observed average solar wind values of and , the can be further reduced to . we take as fiducial solar wind values and , or equivalently alfvn , slow , and fast wave energy . although we take , there is very little quantitative difference between and ( see figure [ mva_contour ] ) . plotted in green in figure [ fig : betas ] is a single mixture of the three modes representing fiducial solar wind values of and . clearly , the has virtually no or wavenumber dependence in the inertial range , as expected from the above analysis . to explore the dependence of the on fluctuation composition , we plot in figure [ mva_contour ] the given by equation . to confirm the quality of the estimate given by equation , we plot in figure [ fig : inertial_sum ] the given by equation ( thick ) and the estimate given by equation ( thin ) . the figure demonstrates that equation provides a good estimate for the total in the inertial range .
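the mixture expression above is easy to tabulate once the parallel compressibilities of the three modes are specified . the numerical compressibility values quoted in the text are not legible in this copy , so the sketch below uses placeholder values chosen only to respect the qualitative ordering ( alfvn mode nearly incompressible , slow mode / pbs almost entirely parallel , quasi - parallel fast mode almost entirely perpendicular ) ; they are assumptions , not the paper's values .

```python
import numpy as np

# placeholder parallel compressibilities C_par = <dB_par^2>/<dB^2> per mode;
# the paper's actual inertial-range numbers are not reproduced here
c_par_alfven = 1.0e-3    # nearly incompressible
c_par_fast   = 1.0e-2    # quasi-parallel fast mode: mostly perpendicular dB
c_par_slow   = 0.99      # slow mode / pressure-balanced structure: mostly dB_par

def mixture_anisotropy(ac, fs):
    """total <dB_perp^2>/<dB_par^2> of an alfven/fast/slow superposition,
    with ac = alfven/total energy and fs = fast/(fast+slow) energy."""
    num = (ac * (1 - c_par_alfven)
           + (1 - ac) * (fs * (1 - c_par_fast) + (1 - fs) * (1 - c_par_slow)))
    den = (ac * c_par_alfven
           + (1 - ac) * (fs * c_par_fast + (1 - fs) * c_par_slow))
    return num / den

# coarse scan over composition, in the spirit of the contour plot in the text
for ac in (0.6, 0.8, 0.9, 0.95):
    for fs in (0.0, 0.1, 0.5):
        print(f"ac={ac:.2f} fs={fs:.1f}  anisotropy={mixture_anisotropy(ac, fs):7.1f}")
```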
for different mixtures of fluctuations determined by the fractions of alfvn to total energy ( ) and fast to total compressible energy ( ) .] ( thick ) and the inertial range estimate given by equation ( thin ) for different combinations of the three roots .the different line types represent alfvn to total energy ( ) and fast to total compressible energy ( ) fractions : , , , , , and are solid , dotted , short dashed , long dashed , short dash - dotted , and long dash - dotted . ]also highlighted by the figure is the degeneracy of the , since different mixtures of modes can replicate nearly identical in the inertial range .this implies that the when used alone is not a good metric for differentiating modes in the inertial range ; however , the can be a useful secondary metric .for instance , when used together with density - parallel magnetic field correlations to identify the proportion of fast to slow wave energy , the can provide an estimate of the alfvn to compressible energy proportion , thereby quantifying the total population of the solar wind .the behaviour of the in the transition between the inertial and dissipation ranges and in the dissipation range is markedly different from the inertial range . unlike the inertial range , where the asymptotically large and small values of the alfvn , fast , and slow waves leads to an that is controlled by the mode fractions and , the kaw dominates the dissipation range for typical solar wind parameters ( see figure [ fig : betas ] for , where the for the mixture closely follows the for kaws ) .the strong and dependence of the kaw implies that for certain values , namely , the whistler and kaw dissipation range become approximately degenerate ( see figure [ fig : betas ] for ) .however , the behaviour through the transition range for the alfvn to kaw and fast to whistler mode transitions differs considerably .for example , the curves in figure [ fig : inertial_sum ] with ( solid black ) and ( short dash - dotted black ) are nearly identical in the inertial and dissipation ranges , but exhibit very different behaviour across the transition range .for this reason , the is best presented as a function of wavenumber rather than as a quantity averaged over a band of wavenumbers , as has often been done in the literature .the conventional method for calculating the is as defined in [ sec : intro ] ; however , different methods of averaging can be performed .we here consider two physically motivated alternatives for measuring the and compare them to the linear prediction .we also explore the validity of using linear theory to describe the in a fully nonlinear turbulent situation . in the solar wind, the mean magnetic field direction is constantly changing and can sweep through a wide range of angles over a period of tens of minutes . 
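presenting the anisotropy as a function of wavenumber , as advocated above , only requires binning perpendicular and parallel fluctuation energy by wavenumber . the sketch below does this for a periodic 1d series with a fixed global mean field ; it is a simplified stand - in for the wavelet , local - mean - field analysis used later in the paper , and the synthetic input is white noise chosen only to exercise the code .

```python
import numpy as np

def anisotropy_spectrum(b, b0, nbins=16):
    """perpendicular-to-parallel magnetic energy ratio per wavenumber bin,
    for a 1d periodic series b[n, 3] and a fixed mean field b0 (sketch only)."""
    b0_hat = b0 / np.linalg.norm(b0)
    db = b - b.mean(axis=0)
    db_par = db @ b0_hat
    db_perp = db - np.outer(db_par, b0_hat)

    k = np.fft.rfftfreq(len(db))
    e_par = np.abs(np.fft.rfft(db_par))**2
    e_perp = np.sum(np.abs(np.fft.rfft(db_perp, axis=0))**2, axis=1)

    bins = np.logspace(np.log10(k[1]), np.log10(k[-1]), nbins + 1)
    idx = np.digitize(k, bins)
    kmid, ratio = [], []
    for i in range(1, nbins + 1):
        sel = idx == i
        if sel.any() and e_par[sel].sum() > 0:
            kmid.append(np.sqrt(bins[i - 1] * bins[i]))
            ratio.append(e_perp[sel].sum() / e_par[sel].sum())
    return np.array(kmid), np.array(ratio)

# minimal usage with white-noise fluctuations about a mean field along z
rng = np.random.default_rng(4)
b0 = np.array([0.0, 0.0, 1.0])
b = b0 + np.column_stack([rng.normal(0, 1.0, 2048),
                          rng.normal(0, 1.0, 2048),
                          rng.normal(0, 0.3, 2048)])
kmid, ratio = anisotropy_spectrum(b, b0)
print(np.round(ratio, 1))   # roughly flat near (1 + 1) / 0.09 ~ 22 for white noise
```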
as such , defining becomes questionable since is averaged over long , global , periods relative to the rapidly fluctuating field this is especially problematic when measuring in the dissipation range when is even smaller and more rapidly fluctuating .the poor definition of parallel and perpendicular in the global analysis will pollute the parallel energy with perpendicular energy , thereby decreasing the measured solar wind .therefore , a local analysis employing a local mean magnetic field must be employed to accurately measure .the local analysis has the added advantage of being able to differentiate between different measurements , where is the angle between the mean magnetic field and the solar wind flow velocity .this can be helpful because the will have different inertial / dissipation range breakpoints for kaws and whistlers when plotted against or .the conventional is constructed by calculating separately the perpendicular and parallel fluctuating magnetic energies and finding their quotient , however , a similar measure could be constructed from the normalized perpendicular and parallel energies , and , this measure has the physically motivated advantage of avoiding possible small denominators due to small .another possible measure of the is this method has the conceptual advantage of maximizing any local effect of the mean magnetic field since it averages the at each point rather than separately averaging the energies .however , the expression will be dominated by those terms with small denominators , which could make it an unphysical measure of the to explore the effect different averaging procedures have on the , we employ synthetic spacecraft data .the details for constructing the synthetic spacecraft data are discussed in , so we here only detail the relevant parameters .a three - dimensional box is populated with a spectrum of all three wave modes , where the proportion of each mode is given by and , corresponding to alfvn , slow , and fast wave energy .the alfvn and slow modes satisfy critical balance , with all modes less than the critical balance envelope equally populated .the fast modes are isotropically populated from , with equal energy at each angle . to represent solar wind turbulence ,the phase of the wave modes is randomized .the data is sampled by advecting the turbulence past a stationary `` spacecraft '' with velocity at a fixed angle with respect to the mean magnetic field and fixed sampling rate .figure [ fig : synth ] presents the three different methods for calculating the .the is averaged across the interval ] .the three different lines represent different averaging procedures for calculating the . ] for and , the predicted from linear theory is . from the figure ,it is clear that the conventional definition of the , equation , ( solid ) computed from a spectrum of wave modes agrees best with linear theory and is the one we will continue to employ .the averaged anisotropy , equation , ( dotted ) approach is a poor measure of the because at any given point in space , the superposition of a collection of wave modes can lead to anomalously small values of due to cancellation .while the definition based upon normalized energies , equation , ( dashed ) differs from the linear prediction , it captures the correct qualitative behaviour and may be a safer measure to use in some circumstances . 
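the three averaging conventions compared above differ only in where the average is taken . a minimal synthetic comparison is sketched below : a superposition of random - phase fluctuations is generated , and the three estimators are evaluated on the same series . this is far simpler than the full synthetic - spacecraft machinery of the paper , and the field model is an assumption made purely for illustration .

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8192
t = np.arange(n)

# synthetic fluctuations about a mean field along z: a superposition of
# random-phase "waves" with mostly perpendicular polarization (illustrative only)
db = np.zeros((n, 3))
for _ in range(50):
    phase = rng.uniform(0, 2 * np.pi)
    freq = rng.uniform(0.001, 0.1)
    amp_perp, amp_par = rng.uniform(0.5, 1.0), rng.uniform(0.02, 0.1)
    db[:, 0] += amp_perp * np.cos(2 * np.pi * freq * t + phase)
    db[:, 1] += amp_perp * np.sin(2 * np.pi * freq * t + phase)
    db[:, 2] += amp_par * np.cos(2 * np.pi * freq * t + 3 * phase)

db_par = db[:, 2]
db_perp2 = db[:, 0]**2 + db[:, 1]**2
db_tot2 = db_perp2 + db_par**2

ratio_of_means = db_perp2.mean() / (db_par**2).mean()                  # conventional estimator
normalized = (db_perp2 / db_tot2).mean() / ((db_par**2) / db_tot2).mean()
mean_of_ratios = (db_perp2 / db_par**2).mean()                         # dominated by small dB_par

print("ratio of mean energies  :", ratio_of_means)
print("normalized energies     :", normalized)
print("mean of pointwise ratios:", mean_of_ratios)
```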
herewe present results from a collection of fully nonlinear gyrokinetic turbulence simulations performed with and , , and a realistic mass ratio , .the simulations were performed using the astrophysical gyrokinetics code , astrogk .all runs presented herein are driven with an oscillating langevin antenna coupled to the parallel vector potential at the simulation domain scale .relevant parameters for the simulations are given in table [ tab : agk ] , where is the gyrokinetic expansion parameter , is the antenna amplitude , and is the collision frequency of species .the expansion parameter sets the parallel simulation domain elongation and is determined by assuming critical balance with an outer - scale : , where and subscript naught indicates simulation domain scale quantities .the antenna amplitude is chosen to satisfy critical balance at the domain scale so that the simulations all represent critically balanced , strong turbulence . [ cols="^,^,^,^,^,^,^",options="header " , ] an example instantaneous one - dimensional perpendicular magnetic energy spectrum composed by overlaying the four simulations is plotted in figure [ fig : beta1_spec]the spectrum is produced via conventional fourier analysis techniques .the inertial range simulation ( red ) has a spectral index , which steepens ( green ) to ( blue ) before rolling off exponentially ( cyan ) as the electron collisionless dissipation becomes increasingly strong .the dissipation range simulations and have been explored in detail .the full spectrum agrees well with recent solar wind observations .astrogk turbulence simulations spanning from the inertial range to deep into the dissipation range . ]figures [ fig : b01_agk_all ] and [ fig : b1_agk_all ] overlay the for the and astrogk simulations .the is produced by computing , via conventional fourier techniques , the perpendicular and parallel energy with respect to the global mean magnetic field . due to the restrictions of gyrokinetics andthe method of driving , the simulations are almost purely populated by alfvn / kaws .therefore , we plot as a dashed line in each figure the vm linear solution for the alfvn root .the agreement between the linear prediction and the nonlinear simulation is excellent up to the point of strong damping at , where agreement with linear theory is expected to breakdown .the good agreement supports the validity of using linear theory to describe the of fully nonlinear turbulence .note that in constructing figures [ fig : b01_agk_all ] and [ fig : b1_agk_all ] a global mean magnetic field has been used .this is justified , because in typical numerical simulations , the global mean field direction is fixed and .although a local mean field can be defined in the same sense as in the solar wind , the global mean field provides a relatively accurate measure of the in numerical simulations of turbulence with a fixed global mean field direction and .we here discuss some of the recent solar wind analyses of the , present new solar wind measurements of the , and compare the new measurements to our predictions . 
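the spectral indices quoted above are obtained by fitting power laws to band - limited segments of spectra like the one in figure [ fig : beta1_spec ] . a generic least - squares fit in log - log space is sketched below on synthetic data ; the break location and slopes are placeholders , not the simulation values .

```python
import numpy as np

def fit_spectral_index(k, e, kmin, kmax):
    """least-squares power-law index of e(k) over kmin <= k <= kmax."""
    sel = (k >= kmin) & (k <= kmax)
    slope, _ = np.polyfit(np.log10(k[sel]), np.log10(e[sel]), 1)
    return slope

# synthetic broken power-law spectrum with a break at k*rho_i = 1 (placeholder slopes)
k = np.logspace(-2, 1, 200)                       # k_perp * rho_i
e = np.where(k < 1.0, k**(-5.0 / 3.0), k**(-2.8))  # continuous at k = 1
e *= 1.0 + 0.05 * np.random.default_rng(5).normal(size=k.size)   # mock noise

print("inertial-range index   :", round(fit_spectral_index(k, e, 0.02, 0.5), 2))
print("dissipation-range index:", round(fit_spectral_index(k, e, 2.0, 8.0), 2))
```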
presents the most comprehensive collection of the solar wind inertial range made to date , incorporating 960 data intervals recorded by the ace spacecraft .the data is divided into periods of open field lines and magnetic clouds .they find that the is proportional to a power of or but are unable to identify which is the source of the functional relationship since and are themselves positively correlated in the solar wind .they follow the identification of the relationship with a discussion of possible sources for the relationship based on a variety of considerations , some of which contradict the discussion in [ sec : waves ] of this paper . do not consider the superposition of modes that has been shown to exist in the solar wind , neglect the contribution of slow modes due to the belief that they are completely damped outside of local excitation , consider alfvn waves to be completely incompressible ( ) , consider pbss but assert that the dependence of of pbss could contribute the the dependent despite for all .the findings of seem to be in direct contradiction to the conclusions drawn in [ sec : mva_inertial ] ; however , there are possible reasons for the apparent discrepancy . as noted in , the proton temperature and fluctuating magnetic fieldare positively correlated .it has also been shown that the proton temperature and bulk solar wind speed are positively correlated .therefore , the measured might depend on any or a combination of , , or .if the underlying dependence is actually on , then the observed variation of the could be from different types of solar wind launched from different regions of the sun having a variable population of alfvn to compressible components .for instance , slower wind may have evolved to a more alfvnic state , because fast modes would have had time to be dissipated in shocks and slow modes could have collisionlessly damped , leaving a primarily alfvnic and pbs population . since no measure other than binned by or was used in , it is difficult to draw a firm conclusion . extend the study of the same dataset employed in to the dissipation range , defined to be hz hz .they find that dissipation range fluctuations are less anisotropic than those in the inertial range .although not discussed in , this result is consistent with the mixtures of wave modes discussed in this paper .the linear predictions of kaw presented in figure [ fig : kaw_beta_dep ] for pass through the core of the measured for open magnetic field data for the measured in the dissipation range presented in the lower panel of figure 8 of .further , the kaw for and bound the core of the low magnetic cloud data for the measured dissipation range in .the temperature ratio is not provided for the dataset in ; however , magnetic clouds are typically characterized by low proton temperatures , so for the cloud data is expected . also noted that the portion of the same dataset is well fit by the kaw solution ; however , the temperature ratio dependence was not explored . suggest the shallow slope at low might be suggestive of a whistler component .another possible explanation for some of the very low values of the dissipation range in is the use of a global measurement of the magnetic field , which will tend to decrease the measured , especially in the dissipation range .also , due to the limited high frequency information available from ace measurements hz , the region defined as the dissipation range corresponds to the transition region between the inertial and dissipation ranges . 
return again to the same ace dataset but focus on intervals and augment the set with 29 additional high ace measurements .as such , this dataset suffers from the same high frequency limitations noted above for the analysis . suggest that the measured dissipation range is consistent with kaws , but the inertial range can not be explained by a purely alfvnic population .this observation is partially used to conclude that the alfvn / kaw model can not explain solar wind measurements . however, their measurements agree well with a solar wind population dominated by alfvn / kaws with a small component of slow waves .therefore , we find no contradiction between the ace measurements and the alfvn mode dominant model constructed herein .using stereo measurements , employ a variant of the , , as a secondary metric to the magnetic helicity to differentiate between kaws and whistlers , which both have right - handed helicity for oblique propagation .however , they consider fast modes propagating at fixed , , and , and only the root connects to an oblique whistler .the other two oblique roots connect to ibws , which is made clear in the upper right panel of figure 3 of , where the dispersion relation for these two roots do not extend above . due to this mischaracterization of the roots , they mistakenly find the oblique whistler root . as noted in [ sec : fastmode ] , the whistler ( fast mode for ) for all propagation angles and and is thus of the same orientation as kaw and does not provide a useful secondary metric to the helicity .using cluster measurements , use magnetic compressibility , which is related to the in equation , as a secondary metric to the ratio of the electric to magnetic field , , which is degenerate between kaws and whistlers .although the dissipation range asymptotic behaviour of the fixed propagation angle kaws and the whistlers at certain angles is similar , the transition from the inertial range to dissipation range for the two modes is different , and it is the transition region that is used to differentiate the two modes . find the cluster data to be most consistent with a spectrum of kaws .although much of the work does not directly address the , some recent solar wind analyses of the magnetic field have moved toward three dimensions . presenting the full three dimensional structure of the magnetic fieldis of obvious potential benefit ; however , there are complications due to geometrical and sampling effects of the fluctuations being advected past the spacecraft . to elucidate the complications, we follow and choose a coordinate system with in the direction and in the direction . under taylors hypothesis , the measured components of magnetic energy in the perpendicular plane are then if a scaling between and is assumed , then where is the two - dimensional spectrum , is the one - dimensional scaling exponent , and . find that , implying a nonaxisymmetric energy distribution in the perpendicular plane .this complication is obviated by measuring in the standard way , since avoids the angular sampling issue altogether . 
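The magnetic compressibility used above as a secondary diagnostic is simple to form from a field time series. The following minimal sketch uses a synthetic series and the interval-averaged (global) mean field; all numbers are illustrative.

import numpy as np

# minimal sketch: magnetic compressibility <|dB_par|^2> / <|dB|^2> from a field time
# series, using the interval-averaged mean field; synthetic data, values illustrative.
rng = np.random.default_rng(1)
b = rng.standard_normal((4096, 3)) * 0.1
b[:, 2] += 5.0                                   # strong mean component, so B0 ~ z

b0 = b.mean(axis=0)
e_par = b0 / np.linalg.norm(b0)                  # unit vector along the mean field
db = b - b0
db_par = db @ e_par
c_par = np.mean(db_par ** 2) / np.mean(np.sum(db ** 2, axis=1))
print(f"magnetic compressibility ~ {c_par:.3f}")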
to compare the analysis method outlined in [ sec : waves ] and [ sec : combos ] to solar wind measurements , we choose a 5 day interval , 2008 feb 12 06:00 to feb 17 06:00 , of magnetic field data captured by the stereo a spacecraft .this interval is one of 20 similar intervals analysed by , where details concerning the data selection and analysis can be found .so , we here summarize only the most pertinent aspects of the measurement and analysis .the interval was chosen because it represents an interval of high - speed wind , which typically satisfies the assumption that the solar wind flow direction is in the heliocentric radial direction , . also , the interval is sufficiently long to include a statistically large sample of periods during which the mean magnetic field , , is nearly perpendicular to the radial direction , which in practice includes the range .focusing on these orthogonal periods is helpful because it facilitates easier comparison to theory since the measured wavenumber in the solar wind corresponds more closely to the perpendicular wavenumber used in linear theory .the magnetic field data was analysed using wavelet techniques to determine the local mean magnetic field , , at a given scale .the parallel and perpendicular magnetic energies are then given by and , where .the proton plasma parameters for the interval were supplied by the plastic instrument .the relevant proton plasma parameters are , , , , where all quantities are averaged over the entire interval except whose median is taken because of its high variability .electron thermal data is unavailable due to problems with the impact solar wind electron analyzer aboard stereo , so we assume the electron temperature to be the average for fast wind streams found by , .this assumption for the electrons implies for the data interval . to convert spacecraft - frame frequency , , to wavenumber , we employ taylor s hypothesis , which states that , where is the plasma rest - frame frequency . for oblique alfvnic and kaw fluctuations , , and we can assume . to compare to linear theory , we convert spacecraft frame frequency to wavenumber using , where is computed from the average solar wind proton quantities for the measured interval . in figure [fig : stereo ] , we compare the measured for this interval ( blue ) to the from linear theory with and for two different mixtures of wave modes that reproduce the solar wind in the inertial range : a mixture that is dominantly alfvnic ( black ) with and a mixture that is dominantly fast mode ( red dashed ) with . although both mixtures of linear wave modes reproduce well the behaviour in the inertial range , the fast mode dominant construction displays markedly different transition and dissipation range behaviour ; whereas , the alfvn dominant mixture fits well the _ slope _ of the transition range and the asymptotic value in the dissipation range . from this , we can conclude that the measured interval is most likely dominated by alfvn wave fluctuations at and kaw fluctuations for .we speculate that the positive slope in the measured inertial range may be attributable to a decreasing fraction of slow wave energy due to collisionless damping as the turbulent cascade proceeds to kinetic scales . 
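A rough numerical stand-in for this local analysis is sketched below: a boxcar average over about one period at each frequency replaces the wavelet estimate of the local mean field, fluctuations are split into components parallel and perpendicular to that local field, and Taylor's hypothesis converts spacecraft-frame frequency to wavenumber; a decomposition against the interval-averaged ("global") field is included for contrast with the discussion that follows. The sampling rate, wind speed and smoothing window are assumptions for illustration, not the parameters of the actual STEREO analysis.

import numpy as np

# rough stand-in for the local, scale-dependent decomposition: a boxcar average over
# about one period at each frequency replaces the wavelet estimate of the local mean
# field; Taylor's hypothesis k = 2*pi*f / V_sw maps frequency to wavenumber.  a
# decomposition against the interval-averaged ("global") field is included for
# contrast.  sampling rate, wind speed and window are assumptions, data synthetic.
rng = np.random.default_rng(2)
fs, vsw = 8.0, 650.0                       # sample rate [Hz], wind speed [km/s] (assumed)
b = rng.standard_normal((2 ** 15, 3)) * 0.2
b[:, 2] += 5.0                             # mean field roughly along z

def anisotropy(b, b_mean):
    db = b - b_mean
    e_par = b_mean / np.linalg.norm(b_mean, axis=-1, keepdims=True)
    db_par = np.sum(db * e_par, axis=-1)
    db_perp2 = np.sum(db ** 2, axis=-1) - db_par ** 2
    return np.mean(db_perp2) / np.mean(db_par ** 2)

def local_mean(b, f):
    win = max(3, int(fs / f) | 1)          # roughly one period, odd number of samples
    kern = np.ones(win) / win
    return np.column_stack([np.convolve(b[:, i], kern, mode="same") for i in range(3)])

b_global = np.broadcast_to(b.mean(axis=0), b.shape)
for f in (0.01, 0.1, 1.0):
    k = 2 * np.pi * f / vsw                # Taylor's hypothesis [rad/km]
    print(f"f={f:5.2f} Hz  k~{k:.2e} rad/km  "
          f"local {anisotropy(b, local_mean(b, f)):.2f}  global {anisotropy(b, b_global):.2f}")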
also , the approximate factor of two difference between the predicted from linear theory and the solar wind values in the transition and dissipation ranges are likely due to averaging and over the entire 5 day interval .the of kaws has a moderately strong dependence ( see figures [ fig : betas ] and [ fig : kaw_beta_dep ] ) . for the purposes of comparison, we use a median .however , the of kaws increases with decreasing , so periods of low will tend to dominate the average and shift it upward from the prediction based on a median value of .similarly , we use a value for based on quantities averaged over 5 days of data to determine the relationship between frequency and wavenumber . the variance associated with the averaged cause an apparent horizontal shift of the data in the transition and dissipation ranges . for comparison ,we also plot in figure [ fig : stereo ] the measured for the stereo a data computed via a more conventional global analysis ( green ) .the day interval of data was segmented into hr intervals . in each segment , the mean magnetic field was calculated , the data rotated into mean field coordinates , and a welch windowed fft was applied to compute the parallel and perpendicular magnetic energies .the parallel and perpendicular magnetic energies were then averaged over the hr segments and binned into logarithmic wavenumber bins to smooth the spectra .the global ( green ) was then calculated from the binned and averaged magnetic field data . as expected , this method is inappropriate to compare to linear theory and produces a significantly reduced compared to the wavelet analysis because the parallel magnetic energy is polluted with perpendicular energy , as discussed in [ sec : sw_model ] .this analysis of solar wind data highlights the importance of performing a local analysis and the weaknesses and strengths of the for determining the solar wind fluctuation composition : the is a poor tool to use in the inertial range due to degeneracies of different wave mode mixtures .however , if the inertial range is augmented with a second measure that identifies the the fast to total compressible energy fraction ( ) , such as the density - parallel magnetic field correlation , the can provide a measure of the alfvn to total energy fraction ( ) .also , the slope of the transition range and the asymptotic value in the dissipation range are useful for identifying the solar wind fluctuation composition .data computed via a local wavelet analysis ( blue ) in the bin , the measured stereo a computed with a global , hr averaged , mean magnetic field ( green ) , and the summed vlasov - maxwell with and employing ( black ) and ( red dashed ) . ]we have developed a framework for interpreting the measured solar wind magnetic variance anisotropy ( ) and comparing the measurements to linear theory . in [ sec : waves ] we reviewed the linear properties of the collisionless counterparts to the three mhd wave modes , which are expected to be the primary constituents of the solar wind in those relevant regions of wavenumber space spanning the inertial and dissipation ranges where collisionless damping rate is small compared to the rate nonlinear energy transfer .the of each wave mode for a range of plasma betas , , is shown in figure [ fig : betas ] versus perpendicular wavenumber . in [ sec : combos ] we examined how superpositions of the three wave modes affects the . 
due to the asymptotically large of the alfvn andfast modes and asymptotically small of the slow mode in the inertial range , the inertial range is dictated by the fraction of each mode rather than the individual behaviour of each mode and has little dependence .note that linear pressure balanced structures ( pbss ) are equivalent to , non - propagating slow modes , and are thus classified as slow modes .an estimate of the inertial range is given by equation , where the is determined by the fraction of alfvn to total ( alfvn plus fast plus slow ) energy and the fraction of fast to total compressible ( fast plus slow ) energy . for , the dissipation range value of the for kinetic alfvn waves ( kaws ) and whistler waves is approximately degenerate ; however , the behaviour of the through the transition to the dissipation range differs in form for kaws and whistlers .the degeneracy in the dissipation range highlights the value of displaying the as a function of wavenumber . in [ sec : sim ] we employ a suite of fully nonlinear gyrokinetic turbulence simulations that span from the inertial range to deep into the dissipation range for and to demonstrate that the predictions of the from linear theory are applicable to nonlinear turbulence .the comparison between the linear prediction for the of alfvn waves and the measured in the nonlinear turbulence simulation are presented in figure 9 , where excellent agreement is seen up to kinetic scales where electron dissipation becomes significant , .some of the recent solar wind measurements of the and their interpretations are presented in [ sec : previous ] .since the of previous solar wind studies typically bins the over a band of wavenumbers and computes the via a global analysis , the data is difficult to interpret and compare to linear theory ; however , the observed in previous studies is mostly consistent with solar wind composed primarily of alfvnic fluctuations and conforms well to the theory outlined in this paper . [ sec : new ] presented a measurement of the solar wind from stereo a performed using the methods outlined in [ sec : sw_model ] , computed locally via wavelet analysis techniques , and plotted as a function of perpendicular wavenumber .we find excellent agreement between linear theory and the local solar wind measurement , which suggests that this interval of solar wind data is composed of approximately alfvn , slow , and negligible fast wave energy across the sampled range of scales spanning the inertial range to the beginning of the dissipation range . here, slow wave energy encompasses both pbss and propagating slow modes because the alone can not differentiate between the two .this analysis highlights the advantages of computing the locally with a wavelet analysis and plotting the as a function of wavenumber rather than binning across wavenumber bands because the transition range breaks the degeneracy of the inertial range .alternatively , a second measure such as the density - parallel magnetic field correlation can be used to break the degeneracy and ascertain a value for fast to total compressible energy fraction ( ) .the can then be used to measure the alfvn to total energy energy fraction ( ) . 
in summary, we have outlined the salient features of linear theory as they relate to the magnetic variance anisotropy , provided a procedure for producing and comparing solar wind measurements to predictions from linear theory , demonstrated the validity of this technique via both nonlinear turbulence simulations and solar wind measurements , and found previous studies of the solar wind magnetic variance anisotropy and new solar wind data to be consistent with a dominantly alfvnic population in the inertial _ and _ dissipation ranges .this work was supported by nasa grant nnx10ac91 g and nsf career grant ags-1054061 .this research used resources of the oak ridge leadership computing facility at the oak ridge national laboratory , supported by the office of science of the u.s .department of energy under contract no .de - ac05 - 00or22725 .this research was supported by an allocation of advanced computing resources provided by the national science foundation , partly performed on kraken at the national institute for computational sciences .here we provide analytical derivations for the from fluid theories . without loss of generality, we will assume an equilibrium magnetic field and for all of the systems considered .for the purposes of constructing the , we need only consider the eigenfunctions of the magnetic field .the magnetohydrodynamic ( mhd ) equations can be written as the linear dispersion relation for this system is = \\ & 0 , \end{split}\ ] ] where the first term corresponds to the alfvn root and the two remaining roots correspond to the fast and slow compressible roots .finally , the eigenfunctions for the system can be expressed as the fast and slow compressible roots correspond to , which implies .therefore , and after manipulation equation reduces to note , the for the fast and slow roots can be trivially derived from equation since .electron mhd ( emhd ) is valid for ( ) and scales and is most useful for describing the whistler mode . at these scales ,the ions decouple from the magnetic field and are treated as being stationary within the framework of emhd .therefore , ohm s law reduces to . inserting this form of ohm s law into faraday s law together with , we obtain the emhd equation = 0.\ ] ] electron reduced mhd ( ermhd ) was introduced in and is the , limit of gyrokinetics .the system generalizes the emhd equations for low - frequency , anisotropic ( ) fluctuations without assuming incompressibility , and kinetic alfvn waves are described well by ermhd .the equations of ermhd are most simply expressed in terms of scalar flux functions and the ermhd equations are where .note , the anisotropy of the system together with implies , so we need only determine the eigenfunction for .the linear dispersion relation for this system is and the eigenfunction is } \phi .\end{split}\ ] ] combining equations and yields the for the kinetic alfvn wave .
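The dispersion relation quoted in the appendix factors into the Alfvén root and the fast and slow compressible roots; their textbook MHD phase speeds can be evaluated numerically as in the sketch below, written in terms of the ratio (c_s/v_A)^2. This reproduces only the standard MHD result and is not a reconstruction of the eigenfunctions derived above; the beta and gamma values are illustrative.

import numpy as np

# sketch: textbook mhd phase speeds of the alfven, slow and fast roots, normalised
# to the alfven speed, as functions of propagation angle theta and cs2 = (c_s/v_A)^2
# (for an ideal gas, cs2 = gamma*beta/2).  standard mhd result only, illustrative.
def mhd_phase_speeds(theta, cs2):
    ct2 = np.cos(theta) ** 2
    s = 1.0 + cs2
    disc = np.sqrt(s ** 2 - 4.0 * cs2 * ct2)
    return np.sqrt(ct2), np.sqrt(0.5 * (s - disc)), np.sqrt(0.5 * (s + disc))

beta, gamma = 1.0, 5.0 / 3.0
for deg in (10, 45, 80):
    a, slow, fast = mhd_phase_speeds(np.radians(deg), gamma * beta / 2.0)
    print(f"theta={deg:2d} deg:  alfven {a:.3f}  slow {slow:.3f}  fast {fast:.3f}")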
the magnetic variance anisotropy ( ) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations ; however , a thorough discussion of the meaning and interpretation of the has not appeared in the literature . this paper explores the implications and limitations of using the as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data . the paper also compares predictions of the from linear theory to nonlinear turbulence simulations and solar wind measurements . in both cases , linear theory compares well and suggests the solar wind for the interval studied is dominantly alfvnic in the inertial and dissipation ranges to scales .
the no - cloning theorem states that no quantum - mechanical evolution exists which would transform an unknown quantum state according to .this is provided by the linearity of quantum mechanics .the no - cloning theorem guaranties , e.g. , the security ( privacy ) of quantum - communication protocols including quantum key distribution .exact cloning is impossible ; however , imperfect ( optimal ) cloning is possible as it was first shown by buek and hillery by designing a cloning machine , referred to as the _ universal cloner _ ( uc ) .the cloning machine prepares two approximate copies of an unknown pure qubit state .the uc generates two qubit states with the same fidelity .fidelities of the clones to the initial pure state are equal .therefore , the uc is a state - independent symmetric cloner .it was later shown that for the uc , a relation between the optimum fidelity of each copy and the number of copies is given by .setting corresponds to a classical copying process with , which is the best fidelity that one can achieve with only classical operations .further works have extended the concept to include cloning of qudits , cloning of continuous - variable systems or state - dependent cloning ( non - universal cloning ) , which can produce clones of a specific set of qubits with much higher fidelity than the uc ( see also reviews and references therein ) .the study of the state - dependent cloning machines is important because it is often the case that we have some _ a priori _ information on a given quantum state that we want to clone , but we do not know it exactly . then , by employing the available _ a priori _ information , we can design a cloning machine which outperforms the uc for some specific set of qubits .for example , if it is known that the qubit is chosen from the equator of the bloch sphere then the so - called _ phase - covariant cloners _ ( pccs ) have been designed , and it was shown to be optimal providing a higher fidelity than the uc .phase - covariant cloning of qubits has been further explored .for example , fiurek studied the pccs with known expectation value of pauli s operator and provided two optimal symmetric cloners : one for the states in the lower and the other for those in the upper hemisphere of the bloch sphere .et al . _ studied phase - independent cloning of qubits uniformly distributed on a belt of the bloch sphere .et al . _ provided an optimal cloning transformation , referred to as the _ mirror phase - covariant cloning _( mpcc ) , for qubits of known modulus of expectation value of pauli s operator .optimal cloning plays a crucial role in , e.g. 
, quantum cryptography .security analyses of the quantum key distribution protocols against coherent and incoherent attacks using quantum cloners can be found in refs .optimal phase - independent cloners seem to play a special role there .one example of the optimal phase - independent cloning machines is the pcc , which can be used in an optimal attack on the bb84 quantum key distribution protocol .another example is the uc , which enables an optimal incoherent attack on the six - state protocol .our paper is devoted to _ phase - independent cloning _ , which refers to cloning of qubits assuming that their distribution ( symmetrical around the bloch vector ) is _ a priori _ known .it is related to the optimal state estimation and to phase - independent telecloning and telemapping .phase - independent cloning includes the majority of all known optimal cloning machines as special cases .one of the exceptions is the phase - dependent cloning machine recently described in ref . . in this paper, we find that phase - independent cloning can be implemented analogously to the mpcc , e.g. , in linear - optical systems or quantum dots ( see also ref . ) .phase - independent cloning is an example of the optimal cloning problem being invariant with respect to the discrete weyl - heisenberg group ( see , e.g. , ref . ) .we also show here that the phase - independent cloning exhibits sudden change in average single - copy fidelity , when the symmetry of the system is reduced from to .optimal cloning also limits the capacity of quantum channels .an example is the pauli channel and pauli cloning machines analyzed by cerf .our results could be used in a similar analysis for channels that undergo phase - independent damping .the paper is organized as follows . in sec .ii , we present a general transformation describing an optimal symmetric cloning of a qubit . in sec .iii , we describe the optimal symmetric cloning of a qubit knowing _ a priori _ its axisymmetric distribution on the bloch sphere .this cloning is referred to as the optimal phase - independent cloning . in sec .iv , we analyze two examples of such cloning of qubits described by the von mises - fisher and brosseau distributions . in sec .v , we present , probably the most important result of the paper , the optimality proof for the phase - independent cloning transformation . in sec .vi , we describe a quantum circuit implementing the optimal phase - independent cloning .we conclude in sec .suppose we want to clone a set of qubits , for which some characteristic point is in the following _ a priori _ known pure state : which is parametrized by polar and azimuthal angles on the bloch sphere .we can express all other qubit states as where is a state orthogonal to , i.e. , .note that is equal to the scalar product of the bloch vectors ] + mirror phase - covariant cloner ( mpcc ) & & + cloner of bloch - sphere belt & & eq .( [ fidelity ] ) with eq .( 15)form ref . + cloner of von mises - fisher distribution & & eq .( [ fidelity ] ) with eq .( [ af ] ) + cloner of brosseau distribution & ^{\frac{3}{2}}} ] .therefore , it can be proved that for any distribution for we have . moreover , for we have , where is the single - copy fidelity of the pcc averaged over inputs characterized by .the pcc is derived only from and . 
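Before continuing with the optimality argument, it may help to see how one of the axisymmetric input distributions used as examples in this paper can be handled numerically. The sketch below samples cos(theta) for a von Mises-Fisher distribution on the Bloch sphere (density proportional to exp(kappa*cos(theta)), axisymmetric about the z axis) by inverting its CDF, and reports the first two moments of cos(theta), the kind of averages that enter an axisymmetric cloner; the concentration value kappa = 3 is an arbitrary example.

import numpy as np

# sketch: draw cos(theta) for bloch vectors distributed according to a von
# mises-fisher density ~ exp(kappa*cos(theta)), axisymmetric about the z axis, by
# inverting the cdf of w = cos(theta); the azimuth is uniform and drops out of
# axisymmetric averages.  kappa = 3 is an arbitrary example value.
rng = np.random.default_rng(4)
kappa = 3.0
u = rng.random(200_000)
w = np.log(np.exp(-kappa) + u * (np.exp(kappa) - np.exp(-kappa))) / kappa

print("<cos theta>   =", w.mean())
print("<cos^2 theta> =", (w ** 2).mean())
print("exact <cos theta> =", 1 / np.tanh(kappa) - 1 / kappa)   # langevin function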
hence , in order to maximize the fidelity we have to put .this implies that , and , .now , the cptp map depends only on four parameters and can be described as a direct sum ( denoted by ) of two matrices : \oplus \left [ \begin{array}{cc } \xi_1 & \sqrt{2}\zeta_2\\ \sqrt{2}\zeta_2 & 1-\eta_1 \end{array } \right],\end{aligned}\ ] ] where the first matrix acts in a subspace spanned by and the second by .the map can be parametrized without the loss of generality in the following way : \oplus \left [ \begin{array}{cc } \cos^2\alpha_- & \sqrt{2}\zeta_2\\ \sqrt{2}\zeta_2 & \sin^2\alpha_+ \end{array } \right ] , \ ] ] however , for the extremal cptp maps , we have that . from this condition, we find that and .finally , any cptp map that maximizes the average single - copy fidelity for an arbitrary axisymmetric distribution can be written in the basis as \!\!,\end{aligned}\ ] ] where and .the above map can be decomposed into unitary transformations , given by eq .( [ n04 ] ) ( for ) , by means of the kraus decomposition .this completes the proof .the analyzed cloning problem can be also expressed in the logical basis .the optimal cloning transformation can be now written as where .the quantum circuit , shown in fig .[ fig5 ] , performs the following transformation : where the superscripts indicate qubits for which the corresponding gate is applied .the basic elements of the circuit are the rotation \label{n101c}\end{aligned}\ ] ] about the axis by angle and the controlled rotation by angle .in addition , this circuit is composed of the controlled not ( cnot ) gates , , and the controlled hadamard gate , , which can be decomposed as , where .\label{n101d}\ ] ] the optimal phase - independent cloning can be implemented with the use of a quantum circuit , the mpcc setting for ( see fig . [ fig6 ] ) .therefore , with minor modifications ( concerning the state preparation of the third qubit ) , the optimal phase - independent cloning machine can be realized with , e.g. , linear optics or quantum dots as described for the mpcc in refs . .we analyzed optimal state - dependent cloning of qubit states , which are described by _ a priori _ known arbitrary phase - independent ( axisymmetric ) distribution on the bloch sphere .this optimal cloning reduces in special cases to the universal cloning of buek and hillery , the phase - covariant cloning of bru _ et al . _ and its generalization by fiurek , the mirror phase covariant cloning ( mpcc ) of bartkiewicz _ et al . _ , or cloning of an uniform belt of the bloch sphere of hu_et al ._ . as an example of the state - dependent cloning, we studied the cloning transformations of qubits described on the bloch sphere by the von mises - fisher and brosseau distributions , where the first is an analog of normal distribution on a sphere and the latter describes statistics of the stokes parameters . whereas the first example is more abstract and describes gaussian - like dispersion, the second example can be used directly to estimate the upper bound for the capacity of a depolarizing channel for photons .our results can be also applied in security analysis of various quantum communication protocols , including quantum teleportation and quantum key distribution .recently , it was shown that phase independent - cloning can be parameterized by four parameters . 
here, we proved that only two parameters are sufficient to describe the optimal phase - independent cloning .moreover , we showed that the phase - independent cloning is a simple generalization of the mpcc and , thus , can be implemented analogously to the mpcc using photon - polarization qubits and quantum - dot spins .
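The gate decompositions used in the circuit description above can be checked numerically. The sketch below verifies the textbook decomposition of the controlled-Hadamard gate, CH = (I ⊗ R_y(-pi/4)) · CNOT · (I ⊗ R_y(pi/4)); it is meant only to illustrate such a verification and is not claimed to be the exact decomposition of fig. 5.

import numpy as np

# sketch: numerically verify the textbook decomposition of the controlled-hadamard
# gate, CH = (I tensor Ry(-pi/4)) . CNOT . (I tensor Ry(pi/4)), with
# Ry(t) = [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]] and the control on the
# first qubit.  illustrative only; not claimed to be the exact circuit of fig. 5.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

Z2 = np.zeros((2, 2))
cnot = np.block([[I2, Z2], [Z2, X]])
ch = np.block([[I2, Z2], [Z2, H]])
built = np.kron(I2, ry(-np.pi / 4)) @ cnot @ np.kron(I2, ry(np.pi / 4))
print("max |CH - built| =", np.max(np.abs(ch - built)))          # ~ 1e-16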
we find an optimal quantum cloning machine , which clones qubits of arbitrary symmetrical distribution around the bloch vector with the highest fidelity . the process is referred to as phase - independent cloning in contrast to the standard phase - covariant cloning for which an input qubit state is _ a priori _ better known . we assume that the information about the input state is encoded in an arbitrary axisymmetric distribution ( phase function ) on the bloch sphere of the cloned qubits . we find analytical expressions describing the optimal cloning transformation and fidelity of the clones . as an illustration , we analyze cloning of qubit state described by the von mises - fisher and brosseau distributions . moreover , we show that the optimal phase - independent cloning machine can be implemented by modifying the mirror phase - covariant cloning machine for which quantum circuits are known .
series - parallel networks are two - terminal graphs , i.e. , they have two distinguished vertices called the source and the sink , that can be constructed recursively by applying two simple composition operations , namely the parallel composition ( where the sources and the sinks of two series - parallel networks are merged ) and the series composition ( where the sink of one series - parallel network is merged with the source of another series - parallel network ) . herewe will always consider series - parallel networks as digraphs with edges oriented in direction from the north - pole , the source , towards the south - pole , the sink .such graphs can be used to model the flow in a bipolar network , e.g. , of current in an electric circuit or goods from the producer to a market . furthermore series - parallel networks and series - parallel graphs ( i.e. , graphs which are series - parallel networks when some two of its vertices are regarded as source and sink ; see , e.g. , for exact definitions and alternative characterizations ) are of interest in computational complexity theory , since some in general np - complete graph problems are solvable in linear time on series - parallel graphs ( e.g. , finding a maximum independent set ) .recently there occurred several studies concerning the typical behaviour of structural quantities ( as , e.g. , node - degrees , see ) in series - parallel graphs and networks under a uniform model of randomness , i.e. , where all series - parallel graphs of a certain size ( counted by the number of edges ) are equally likely .in contrast to these uniform models , mahmoud introduced two interesting growth models for series - parallel networks , which are generated by starting with a single directed arc from the source to the sink and iteratively carrying out serial and parallel edge - duplications according to a stochastic growth rule ; we call them uniform bernoulli edge - duplication rule ( `` bernoulli model '' for short ) and uniform binary saturation edge - duplication rule ( `` binary model '' for short ). a formal description of these models is given in section [ sec : growth_models_recursive_trees ] .using the defining stochastic growth rules and a description via plya - eggenberger urn models ( see , e.g. 
, ) , several quantities for series - parallel networks ( as the number of nodes of small degree and the degree of the source for the bernoulli model , and the length of a random source - to - sink path for the binary model ) are treated in .the aim of this work is to give an alternative description of these growth models for series - parallel networks by encoding the growth of them via recursive tree structures , to be precise , via edge - coloured recursive trees and so - called bucket - recursive trees ( see and references therein ) .the advantage of such a modelling is that these objects allow not only a stochastic description ( the tree evolution process which reflects the growth rule of the series - parallel network ) , but also a combinatorial one ( as certain increasingly labelled trees or bucket trees ) , which gives rise to a top - down decomposition of the structure .an important observation is that indeed various interesting quantities for series - parallel networks can be studied by considering certain parameters in the corresponding recursive tree model and making use of the combinatorial decomposition .we focus here on the quantities degree of the source and/or sink , length of a random source - to - sink path and the number of source - to - sink paths in a random series - parallel network of size , but mention that also other quantities ( as , e.g. , the number of ancestors , node - degrees , or the number of paths through a random or the -th edge ) could be treated in a similar way .we obtain limiting distribution results for and ( thus answering questions left open in ) , whereas for the r.v . ( whose distributional treatment seems to be considerably more involved ) we are able to give asymptotic results for the expectation . mathematically , an analytic combinatorics treatment of the quantities of interest leads to studies of first and second order non - linear differential equations . in this contextwe want to mention that another model of series - parallel networks called increasing diamonds has been introduced recently in .a treatment of quantities in such networks inherently also yields a study of second order non - linear differential equations ; however , the definition as well as the structure of increasing diamonds is quite different from the models treated here as can be seen also by comparing the behaviour of typical graph parameters ( e.g. , the number of source - to - sink paths in increasing diamonds is trivially bounded by , whereas in the models studied here the expected number of paths grows exponentially ) .we mention that the analysis of the structures considered here has further relations to other objects ; e.g. , it holds that the mittag - leffler limiting distributions occurring in theorem [ thm : bernoulli_degree_limit ] & [ thm : bernoulli_length_path ] also appear in other combinatorial contexts as in certain triangular balanced urn models ( see ) or implicitly in the recent study of an extra clustering model for animal grouping ( after scaling , as continuous part of the characterization given in ( * ? ? 
?* theorem 2 ) , since it is possible to simplify some of the representations given there ) .also the characterizations of the limiting distribution for and of binary series - parallel networks via the sequence of -th integer moments satisfies a recurrence relation of `` convolution type '' similar to ones occurring in , for which asymptotic studies have been carried out .furthermore , the described top - down decomposition of the combinatorial objects makes these structures amenable to other methods , in particular , it seems that the contraction method , see , e.g. , , allows an alternative characterization of limiting distributions occurring in the analysis of binary series - parallel networks .moreover , the combinatorial approach presented is flexible enough to allow also a study of series - parallel networks generated by modifications of the edge - duplication rules , in particular , one could treat also a bernoulli model with a `` preferential edge - duplication rule '' , or a -ary saturation model by encoding the growth process via other recursive tree structures ( edge - coloured plane increasing trees and bucket - recursive trees with bucket size , respectively ) ; the authors plan to comment on that in a journal version of this work .in the bernoulli model in step one starts with a single edge labelled connecting the source and the sink , and in step , with , one of the edges of the already generated series - parallel network is chosen uniformly at random , let us assume it is edge ; then either with probability , , this edge is doubled in a parallel way and are switched , but we find it catchier to use for the probability of a parallel doubling .] , i.e. , an additional edge labelled is inserted into the graph ( let us say , right to edge ) , or otherwise , thus with probability , this edge is doubled in a serial way , i.e. , edge is replaced by the series of edges and , with a new node , where gets the label and will be labelled by .the growth of series - parallel networks corresponds to the growth of random recursive trees , where one starts in step with a node labelled , and in step one of the nodes is chosen uniformly at random and node is attached to it as a new child .thus , a doubling of edge in step when generating the series - parallel network corresponds in the recursive tree to an attachment of node to node . additionally , in order to keep the information about the kind of duplication of the chosen edge , the edge incident to is coloured either blue encoding a parallel doubling , or coloured red encoding a serial doubling .such combinatorial objects of edge - coloured recursive trees can be described via the formal equation with and markers ( see ) .of course , one has to keep track of the number of blue and red edges to get the correct probability model according to where and . 
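A direct simulation of the Bernoulli edge-duplication rule just described, keeping the network as an explicit edge list, makes the quantities studied in this paper easy to read off. The sketch below grows a network of size n with parallel-doubling probability p and reports the degree of the source, the length of the leftmost source-to-sink path and the number of source-to-sink paths; the node numbering (0 = source, 1 = sink) and parameter values are ours, and the code is an illustrative rendering of the growth rule, not the authors' implementation.

import numpy as np

# illustrative simulation of the bernoulli edge-duplication rule on an explicit
# edge list (node 0 = source, node 1 = sink).  p is the probability of a parallel
# doubling.  printed: degree of the source, length of the leftmost source-to-sink
# path, number of source-to-sink paths.
rng = np.random.default_rng(5)

def grow_bernoulli(n, p):
    edges = [[0, 1]]                     # edge stored as [tail, head]
    out = {0: [0], 1: []}                # ordered outgoing edge ids, left to right
    nxt = 2
    for _ in range(n - 1):
        e = int(rng.integers(len(edges)))
        u, v = edges[e]
        if rng.random() < p:             # parallel doubling: new edge right of e
            edges.append([u, v])
            out[u].insert(out[u].index(e) + 1, len(edges) - 1)
        else:                            # serial doubling: split e at a new node
            w, nxt = nxt, nxt + 1
            edges[e][1] = w
            edges.append([w, v])
            out[w] = [len(edges) - 1]
    return edges, out

def leftmost_length(edges, out):
    v, length = 0, 0
    while v != 1:
        v, length = edges[out[v][0]][1], length + 1
    return length

def count_paths(edges, out):
    memo, stack = {1: 1}, [0]
    while stack:
        v = stack[-1]
        if v in memo:
            stack.pop()
            continue
        heads = [edges[e][1] for e in out[v]]
        todo = [h for h in heads if h not in memo]
        if todo:
            stack.extend(todo)
        else:
            memo[v] = sum(memo[h] for h in heads)
            stack.pop()
    return memo[0]

edges, out = grow_bernoulli(n=200, p=0.5)
print("degree of the source :", len(out[0]))
print("leftmost path length :", leftmost_length(edges, out))
print("number of s-t paths  :", count_paths(edges, out))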
throughout this work the term order of a tree shall denote the number of labels contained in it, which, of course, for recursive trees coincides with the number of nodes. then, each edge-coloured recursive tree of order n and the corresponding series-parallel network of size n occur with the same probability. an example of a series-parallel network grown via the bernoulli model and the corresponding edge-coloured recursive tree is given in figure [fig:growth_bernoulli], whose caption notes the length of the leftmost source-to-sink path and the number of different source-to-sink paths of the depicted example. in the binary model one starts in step 1 with a single edge labelled 1 connecting the source and the sink, and in step n, with n >= 2, one of the edges of the already generated series-parallel network is chosen uniformly at random; let us assume it is edge j. but now whether edge j is doubled in a parallel or in a serial way is already determined by the out-degree of its tail node: if this node has out-degree 1, then we carry out a parallel doubling by inserting an additional edge labelled n into the graph right to edge j, but otherwise, i.e., if this node has out-degree 2 and is thus already saturated, then we carry out a serial doubling by replacing edge j by a series of two edges with a new node in between, where one of the two edges keeps the label j and the other one is labelled by n. it turns out that the growth model for binary series-parallel networks corresponds with the growth model for bucket-recursive trees of maximal bucket size 2, i.e.
, where nodes in the tree can hold up to two labels: in step 1 one starts with the root node containing label 1, and in step n one of the labels in the tree is chosen uniformly at random, let us assume it is label j, which attracts the new label n. if the node containing label j is saturated, i.e., it already contains two labels, then a new node containing label n is attached to it as a new child; otherwise, label n is inserted into the node containing label j, which then contains the labels j and n. as has been pointed out before, such random bucket-recursive trees can also be described in a combinatorial way by extending the notion of increasing trees: namely, a bucket-recursive tree is either a single node carrying the smallest label, or it consists of the root node, where two (possibly empty) forests of (suitably relabelled) bucket-recursive trees are attached to the root as a left forest and a right forest. from this formal description of the family of bucket-recursive trees (of bucket size at most 2) one obtains the number of different bucket-recursive trees of order n, and furthermore it has been shown that this combinatorial description (assuming the uniform model, where each of these trees occurs with the same probability) indeed corresponds to the stochastic description of random bucket-recursive trees of order n given before. an example of a binary series-parallel network and the corresponding bucket-recursive tree is given in figure [fig:growth_binary], whose caption notes the length of the leftmost source-to-sink path and the number of different source-to-sink paths of the depicted example. in our analysis of binary series-parallel networks the following link between the decomposition of a bucket-recursive tree into its root and the left forest, and the subblock structure of the corresponding binary network is important: the network consists of a left half and a right half (which share the source and the sink), and the subblocks of each half correspond to the trees of the respective forest attached to the root; see figure [fig:link_decomposition_buckettree_network] for an example. we now consider the r.v. measuring the degree of the source in a random series-parallel network of size n for the bernoulli model. a first analysis of this quantity has been given in earlier work, where the exact distribution as well as exact and asymptotic results for the expectation could be obtained. however, questions concerning the limiting behaviour and the asymptotic behaviour of higher moments have not been touched; in this context we remark that the explicit results for the probabilities obtained there are not easily amenable to asymptotic studies, because of large cancellations of the alternating summands in the corresponding formula. we will reconsider this problem by applying the combinatorial approach introduced in section [sec:growth_models_recursive_trees], and in order to get limiting distribution results we apply methods from analytic combinatorics. as has already been remarked, the degree of the sink is equally distributed as the degree of the source due to symmetry reasons, although a simple justification of this fact via direct "symmetry arguments" does not seem to be completely trivial (the insertion process itself is a priori not symmetric w.r.t. the poles, since edges are always inserted towards the sink); however, it is not difficult to show this equality by establishing and treating a recurrence for the distribution of the degree of the sink, which is omitted here. when considering the description of the growth process of these series-parallel networks via edge-coloured recursive trees, it is apparent that the degree of the source in such a graph corresponds to the order of the maximal subtree containing the root node and only blue edges, i.e., we have to count the number of nodes in the recursive tree that can be reached from the root node by taking only blue edges; for simplicity we call this maximal subtree the "blue subtree". thus, in the recursive tree model, the quantity of interest measures the order of the blue subtree in a random edge-coloured recursive tree of order n. to treat it we introduce an auxiliary r.v. whose distribution is given as the corresponding conditional distribution, as well as a trivariate generating function whose coefficients count the number of edge-coloured recursive trees of order n with a given number of blue edges and a given order of the blue subtree. additionally we introduce an auxiliary function, namely the exponential generating function of the number of edge-coloured recursive trees of order n with a given number of blue edges. the decomposition of a recursive tree into its root node and the set of branches attached to it can immediately be translated into a differential equation for this generating function, where we only have to take into account that the order of the blue subtree in the whole tree is one (due to the root node) plus the orders of the blue subtrees of the branches which are connected to the root node by a blue edge (i.e.
, only branches which are connected to the root node by a blue edge will contribute ) .namely , with and , we get the first order separable differential equation with initial condition . throughout this work ,the notation for ( multivariate ) functions shall always denote the derivative w.r.t .the variable .the exact solution of can be obtained by standard means and is given as follows : since we are only interested in the distribution of we will actually consider the generating function according to the definition of the conditional r.v . it holds that , which , after simple computations , gives the relation . thus we obtain the following explicit formula for , which has been obtained already in by using a description of via urn models : extracting coefficients from immediately yields the explicit result for the probability distribution of , with , stated in : f'(z , v ) = \sum_{j=0}^{m-1 } \binom{m-1}{j } ( -1)^{n+j-1 } \binom{p(j+1)-1}{n-1}.\ ] ] in order to describe the limiting distribution behaviour of we first study the integer moments . to do this we introduce , since we get for its derivative the relation , with the -th factorial moment of . plugging into , extracting coefficients and applying stirling s formula for the factorialseasily gives the following explicit and asymptotic result for the -th factorial moments of , with : from which we further deduce thus , the -th integer moments of the suitably scaled r.v . converge to the integer moments of a so - called mittag - leffler distribution with parameter ( see , e.g. , ) , which , by an application of the theorem of frchet and shohat , indeed characterizes the limiting distribution of . from the explicit formula it is also possible to characterize the density function of ( we remark that alternatively we can obtain from the moment generating function and applying the inverse laplace transform . ) .namely , it holds f'(z , v ) = \frac{1}{2 \pi i } \oint \frac{(1-(1-z)^{p})^{m-1}}{z^{n } ( 1-z)^{1-p } } dz,\ ] ] where we have to choose as contour a positively oriented simple closed curve around the origin , which lies in the domain of analyticity of the integrand . to evaluate the integral asymptotically ( and uniformly ) for , and can adapt the considerations done in for the particular instance .after straightforward computations one obtains the following asymptotic equivalent of these probabilities , which determines the density function of the limiting distribution : with a hankel contour starting from , passing around and terminating at .the results are collected in the following theorem .[ thm : bernoulli_degree_limit ] the degree of the source or the sink in a randomly chosen series - parallel network of size generated by the bernoulli model converges after scaling , for , in distribution to a mittag - leffler distribution with parameter : , where is characterized by the sequence of its -th integer moments : as well as by its density function ( with a hankel contour ) : we remark that after simple manipulations we can also write as the following real integral : we further remark that for the particular instance one can evaluate the hankel integral above and obtains that the limiting distribution is characterized by the density function , .thus , is the density function of a so - called half - normal distribution with parameter .we consider the length ( measured by the number of edges ) of a random path from the source to the sink in a randomly chosen series - parallel network of size for the bernoulli model . 
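Before turning to path lengths, the source-degree result above admits a quick numerical check via the blue-subtree correspondence. The sketch below simulates the edge-coloured recursive tree directly and compares the Monte Carlo mean of the blue-subtree order with the exact mean prod_{k=1}^{n-1}(1 + p/k), which follows from the growth rule (a new node joins the blue subtree with probability p times the current blue-subtree order divided by the current tree order), and with the n^p-type growth discussed above; the sizes chosen are purely illustrative.

import math
import numpy as np

# quick numerical check of the blue-subtree correspondence: simulate the
# edge-coloured recursive tree, record the order of the blue subtree (= degree of
# the source), and compare the monte-carlo mean with the exact mean
# prod_{k=1}^{n-1} (1 + p/k), which follows from the growth rule, and with the
# n^p-type growth.  sizes kept small for illustration.
rng = np.random.default_rng(6)

def blue_subtree_order(n, p):
    in_blue = [True]                          # the root belongs to the blue subtree
    for k in range(1, n):
        parent = int(rng.integers(k))         # uniform among the k existing nodes
        in_blue.append(in_blue[parent] and rng.random() < p)
    return sum(in_blue)

n, p, runs = 1000, 0.5, 400
sim = np.mean([blue_subtree_order(n, p) for _ in range(runs)])
exact = np.prod(1.0 + p / np.arange(1, n))
print(f"simulated {sim:.2f}   exact {exact:.2f}   n^p / Gamma(1+p) = {n ** p / math.gamma(1 + p):.2f}")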
in this context, the following definition of a random source - to - sink path seems natural : we start at the source and walk along outgoing edges , such that whenever we reach a node of out - degree , , we choose one of these outgoing edges uniformly at random , until we arrive at the sink .the following two observations are very helpful in the analysis of this parameter .first it holds that the length of a random path is distributed as the length } ] is indeed the solution of the recurrence for .these computations will be given in the journal version of this work , here we have to omit them .second we use that the length of the leftmost source - to - sink path in a series - parallel network has the following simple description in the corresponding edge - coloured recursive tree : namely , an edge is lying on the leftmost source - to - sink path if and only if the corresponding node in the recursive tree can be reached from the root by using only red edges ( i.e. , edges that correspond to serial edges ) .this means that the length of the leftmost source - to - sink path corresponds in the edge - coloured recursive tree model to the order of the maximal subtree containing the root node and only red edges .if we switch the colours red and blue in the tree we obtain an edge - coloured recursive tree where the maximal blue subtree has the same order , i.e. , where the source - degree of the corresponding series - parallel network is . butswitching colours in the tree model corresponds to switching the probabilities and for generating a parallel and a serial edge , respectively , in the series - parallel network .thus it simply holds }(p ) \stackrel{(d)}{= } d_{n}(1-p) ] , which , however , due to alternating signs of the summands are not easily amenable for asymptotic considerations . instead , in order to obtain the asymptotic behaviour of we consider the formul for the generating function stated in anddescribe the structure of the singularities : for the dominant singularity at is annihilating the denominator ; there has a simple pole , which due to singularity analysis yields the main term of , i.e. , the asymptotically exponential growth behaviour ; the ( algebraic or logarithmic ) singularity at determines the second and higher order terms in the asymptotic behaviour of , which differ according to the ranges , , and .this yields the following theorem .the expectation of the number of paths from source to sink in a random series - parallel network of size generated by the bernoulli model is given by the following explicit formula : where denotes the -th complete bell polynomial and where denote the -th order harmonic numbers ( see ) .the asymptotic behaviour of is , for , given as follows : are interested in the length of a typical source - to - sink path in a series - parallel network of size .again , it is natural to start at the source of the graph and move along outgoing edges , in a way that whenever we have the choice of two outgoing edges we use one of them uniformly at random to enter a new node , until we finally end at the sink .let us denote by the length of such a random source - to - sink path in a random series - parallel network of size for the binary model .due to symmetry reasons it holds that } ] denotes the length of the leftmost source - to - sink path in a random series - parallel network of size , i.e. , the source - to - sink path , where in each node we choose the left outgoing edge to enter the next node . 
in order to analyse }$ ]we use the description of the growth of series - parallel networks via bucket - recursive trees : the length of the left path is equal to ( coming from the root node of the tree , i.e. , stemming from the edge in the graph ) plus the sum of the lengths of the left paths in the subtrees contained in the left forest ( which correspond to the blocks of the left part of the graph ) . when we introduce the generating function then the above description yields the following differential equation : where is the exponential generating function of the number of bucket - recursive trees of order . in order to compute the expectation we consider , which satisfies the following linear second order differential equation of eulerian type : the explicit solution can be obtained by a standard technique andis given as follows : extracting coefficients and applying stirling s formula immediately yields the following explicit and asymptotic result for the expectation : in order to characterize the limiting distribution of we will compute iteratively the asymptotic behaviour of all its integer moments .to this aim it is advantageous to consider . differentiating shows that satisfies the following differential equation : introducing , differentiating times w.r.t . and evaluating at yields with .thus satisfies for each an eulerian differential equation , where the inhomogeneous part depends on the functions , with .the asymptotic behaviour of around the dominant singularity can be established inductively , namely it holds : with , and where the constants satisfy a certain recurrence of `` convolution type '' .singularity analysis and an application of the theorem of frchet and shohat shows then the following limiting distribution result .the length of a random path from the source to the sink in a random series - parallel network of size generated by the binary model satisfies , for , the following limiting distribution behaviour , with : where the limiting distribution is characterized by its sequence of -th integer moments via and where the sequence satisfies the recurrence , for , with and . whereas the ( out-)degree of the source of a binary series - parallel network is two ( if the graph has at least two edges ) , typically the ( in-)degree of the sink is quite large , as will follow from our treatments .let us denote by the degree of the sink in a random series - parallel network of size for the binary model .for a binary series - parallel network , the value of this parameter can be determined recursively by adding the degrees of the sinks in the last block of each half of the graph ; in the case that a half only consists of one edge then the contribution of this half is of course .when considering the corresponding bucket - recursive tree this means that the degree of the sink can be computed recursively by adding the contributions of the left and the right forest attached to the root , where the contribution of a forest is either given by in case that the forest is empty ( then the corresponding root node contributes to the degree of the sink ) or it is the contribution of the first tree in the forest ( which corresponds to the last block ) , see figure [ fig : link_decomposition_buckettree_network ] . 
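This recursive description is easy to simulate directly. The sketch below grows a binary-model network as an explicit edge list (an edge whose tail has out-degree 1 is doubled in parallel, an edge whose tail is already saturated is doubled serially) and reports the in-degree of the sink and the length of the leftmost source-to-sink path; the node numbering and size are ours, and the code is illustrative only, not the authors' implementation.

import numpy as np

# illustrative simulation of the binary (saturation) edge-duplication rule:
# tail with out-degree 1 -> parallel doubling, saturated tail -> serial doubling.
# printed: in-degree of the sink and length of the leftmost source-to-sink path.
rng = np.random.default_rng(7)

def grow_binary(n):
    edges = [[0, 1]]                     # node 0 = source, node 1 = sink
    out = {0: [0], 1: []}
    nxt = 2
    for _ in range(n - 1):
        e = int(rng.integers(len(edges)))
        u, v = edges[e]
        if len(out[u]) == 1:             # tail not saturated -> parallel doubling
            edges.append([u, v])
            out[u].append(len(edges) - 1)
        else:                            # tail saturated -> serial doubling
            w, nxt = nxt, nxt + 1
            edges[e][1] = w
            edges.append([w, v])
            out[w] = [len(edges) - 1]
    return edges, out

def leftmost_length(edges, out):
    v, length = 0, 0
    while v != 1:
        v, length = edges[out[v][0]][1], length + 1
    return length

edges, out = grow_binary(n=10_000)
print("in-degree of the sink :", sum(head == 1 for _, head in edges))
print("leftmost path length  :", leftmost_length(edges, out))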
introducing the generating functions with denoting the corresponding quantity for the left or right forest and counting the number of forests of order , the combinatorial decomposition of bucket - recursive trees yields the following system of differential equations : from system the following non - linear differential equation for be obtained : which , by considering and solving an eulerian differential equation , allows to compute an exact and asymptotic expression for the expectation ; namely it holds however , for asymptotic studies of higher moments it seems to be advantageous to consider the following second order non - linear differential equation for , which follows immediately from : introducing the functions and differentiating times , one obtains that satisfies for the following second order eulerian differential equation : with . from one can inductively show that the local behaviour of the functions around the ( unique ) dominant singularity is given as follows : where the constants are determined recursively .actually we are interested in the functions , which are , due to , related to via .singularity analysis as well as the theorem of frchet and shohat show then the following limiting distribution result .the degree of the sink in a random series - parallel network of size generated by the binary model satisfies , for , the following limiting distribution behaviour : where the limiting distribution is characterized by its sequence of -th integer moments via where the sequence satisfies the recurrence , for , with and . as forthe bernoulli model we are interested in results concerning the number of different paths from the source to the sink in a series - parallel network and denote by the number of source - to - sink paths in a random series - parallel network of size for the binary model . in order to study seems advantageous to start with a stochastic recurrence for this random variable obtained by decomposing the bucket - recursive tree into the root node and the left and right forest ( of bucket - recursive trees ) attached to the root node . as auxiliary r.v .we introduce , which denotes the number of source - to - sink paths in the series - parallel network corresponding to a forest ( i.e. , a set ) of bucket - recursive trees , where each tree in the forest corresponds to a subblock in the left or right half of the graph . by decomposing the forest into its leftmost tree and the remaining set of trees and taking into account that the number of source - to - sink paths in the forestis the product of the number of source - to - sink paths in the leftmost tree and the corresponding paths in the remaining forest , we obtain the following system of stochastic recurrences : with , , , and where the r.v . and are independent of each other and independent of , , , , and .furthermore , they are distributed as follows : introducing and , the stochastic recurrence above yields the following system of equations for and ( with , and ) : introducing and one obtains that satisfies the following non - linear second order differential equation : differential equation is not explicitly solvable ; furthermore , the so - called frobenius method to determine a singular expansion fails for .however , it is possible to apply the so - called psi - series method in the setting introduced in , i.e. 
, assuming a logarithmic psi - series expansion of when lies near the ( unique ) dominant singularity on the positive real axis . this yields the following result . the expectation of the number of paths from source to sink in a random series - parallel network of size generated by the binary model has , for , the following asymptotic behaviour , with : h .- h . chern , m .- fernández - camacho , h .- k . hwang , and c. martínez , psi - series method for equality of random trees and quadratic convolution recurrences , _ random structures & algorithms _ 44 , 67 - 108 , 2014 .
we give combinatorial descriptions of two stochastic growth models for series - parallel networks introduced by hosam mahmoud by encoding the growth process via recursive tree structures . using decompositions of the tree structures and applying methods from analytic combinatorics allows a study of quantities in the corresponding series - parallel networks . for both models we obtain limiting distribution results for the degree of the poles and the length of a random source - to - sink path , and furthermore we get asymptotic results for the expected number of source - to - sink paths .
co - operative diversity has proved to be an efficient means of achieving spatial diversity in wireless networks without the need of multiple antennas at the individual nodes . in comparison with single user colocated multiple antenna transmission , co - operative communication is based on the relay channel model where a set of distributed antennas belonging to multiple users in the network co - operate to encode the signal transmitted from the source and forward it to the destination so that the required diversity order is achieved . in , the idea of space - time coding devised for point to point co - located multiple antenna systems is applied to a wireless relay network with single antenna nodes , and the pep ( pairwise error probability ) of such a scheme was derived . it is shown that in a relay network with a single source , a single destination and single antenna relays , distributed space time coding ( dstc ) achieves the diversity of a colocated multiple antenna system with transmit antennas and one receive antenna , asymptotically . subsequently , in , the idea is extended to relay networks where the source , the destination and the relays have multiple antennas . but , co - operation between the multiple antennas of each relay is not used , i.e. , the colocatedness of the antennas is not exploited . hence , a total of relays each with a single antenna is assumed in the network instead of a total of antennas in a smaller number of relays . with this setup , for a network with antennas at the source , antennas at the destination and a total of antennas at relays , for large values of , the pep of the network varies with as . in particular , the pep of the scheme in for large , when specialized to the case with 2 antennas at the relays , is upper - bounded by an expression of the form $[\,\cdot\,]^{2r}\left[\frac{(\log_{e}(p))^{2r}}{p^{2r}}\right]$ , where is the minimum singular value of , where and are the two distinct codewords of a distributed space time block code and is the total power per channel use used by all the relays for transmitting an information vector . following the work of , constructions of distributed space time block codes for networks with multiple antenna nodes are presented in , . we refer to cooperative diversity schemes in which the multiple antennas of a relay do not co - operate , i.e. , when the transmitted vector from every antenna is a function of only the received vector at that antenna , or when every relay has only one antenna , as regular distributed space - time coding ( rdstc ) . the key idea in the proposed scheme is the notion of vector coordinate interleaving defined below : [ def1 ] given two complex vectors we define a coordinate interleaved vector pair of , denoted as , to be the pair of complex vectors given by , or equivalently . the notion of coordinate interleaving of two complex variables has been used in to obtain single - symbol decodable stbcs with higher rate than the well known complex orthogonal designs . definition [ def1 ] is an extension of the above technique to two complex vectors . the idea of vector co - ordinate interleaving has been used in in order to obtain better diversity results in fast fading mimo channels . in this paper , we show that multiple antennas at the relays can be exploited to improve the performance of the network .
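definition [ def1 ] can be made concrete with the following small sketch . the component assignment used here ( real parts kept , imaginary parts swapped between the two vectors ) is the usual ciod - style convention and is our assumption , so it should be checked against the exact formulas of definition [ def1 ] .

import numpy as np

def coordinate_interleave(x1, x2):
    """coordinate interleaved vector pair of (x1, x2).
    assumed convention: in-phase parts stay, quadrature parts are swapped,
    i.e. x1~ = re(x1) + 1j*im(x2) and x2~ = re(x2) + 1j*im(x1)."""
    x1 = np.asarray(x1, dtype=complex)
    x2 = np.asarray(x2, dtype=complex)
    x1_tilde = x1.real + 1j * x2.imag
    x2_tilde = x2.real + 1j * x1.imag
    return x1_tilde, x2_tilde

# usage example: interleaving twice restores the original pair (involution)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    a_t, b_t = coordinate_interleave(a, b)
    a_back, b_back = coordinate_interleave(a_t, b_t)
    assert np.allclose(a, a_back) and np.allclose(b, b_back)
    print("interleaving is an involution on the pair:", True)

note that , under this convention , no extra power is spent at the relays : the interleaving only re - routes the quadrature components between the two colocated antennas .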
towards this end , a single antenna source and a single antenna destination with two antennas at each of the relays is considered . also , the two phase protocol as in is assumed , where the first phase consists of transmission of a length complex vector from the source to all the relays ( not to the destination ) and the second phase consists of transmission of a length complex vector from each of the antennas of the relays to the destination , as shown in fig.[model ] . the modification in the protocol we introduce is that the two received vectors at the two antennas of a relay during the first phase are coordinate interleaved as defined in definition [ def1 ] . then , multiplying the coordinate interleaved vector with the predecided antenna specific unitary matrices , each antenna produces a length vector that is transmitted to the destination in the second phase . the collection of all such vectors , as columns of a matrix , constitutes a codeword matrix , and the collection of all such codeword matrices is referred to as a coordinate interleaved distributed space time code ( cidstc ) . the contributions of this paper may be summarized as follows in more specific terms : * for , an upper bound on the pep of our scheme with a fully diverse cidstc , at large values of the total power , is derived . * for , the pep of the rdstc scheme in with a fully diverse dstbc is upper bounded by the expression given in . comparing this bound with ours , for an equal number of antennas , a gain by a term of the form $[\,\cdot\,]^{r}$ is obtained . this improvement in the pep comes just by vector co - ordinate interleaving at every relay , the complexity of which is negligible . * it is shown that the cidstc scheme provides asymptotic coding gain compared to the corresponding rdstc scheme . * a cidstc in variables is shown not to provide full diversity if the variables take values from any 2-dimensional signal set . * multi - dimensional signal sets are shown to provide full diversity for cidstcs , whose choice depends on the design in use . * the number of channel uses needed in the proposed scheme is at least , whereas only is needed in an rdstc scheme . with for both the schemes , through simulation , it is shown that cidstc gives improved ber ( bit error rate ) performance over that of the rdstc scheme . _ notations : _ throughout the paper , boldface letters and capital boldface letters are used to represent vectors and matrices respectively . for a complex matrix * x * , the matrices , , , are defined in the usual way . is used to denote the expectation of the random variable . a circularly symmetric complex gaussian random vector with mean and covariance matrix is denoted by . the set of all integers and complex numbers are denoted by and respectively , and is used to denote . throughout the paper , refers to . the remaining content of the paper is organized as follows : in section [ sec2 ] , the signal model and a formal definition of cidstc are given along with an illustrative example . the pairwise error probability ( pep ) expression for a cidstc is obtained in section [ sec3 ] , using which it is shown that ( i ) the cidstc scheme gives an asymptotic diversity gain equal to the total number of antennas in the relays and ( ii ) offers asymptotic coding gain compared to the corresponding rdstcs . constructions of cidstcs along with conditions on the full diversity of cidstcs are provided in section [ sec4 ] .
in section [ sec5 ] ,simulation results are presented to illustrate the superiority of cidstc schemes .concluding remarks and possible directions for further work constitute section [ sec6 ] .the channel from the source node to the -th antenna of the -th relay is denoted as and the channel from the -th antenna of the -th relay to the destination node is represented by for and as shown in fig.[model ] .the following assumptions are made in our system model : * all the nodes are subjected to half duplex constraint . *fading coefficients are i.i.d with coherence time interval , . *all the nodes are synchronized at the symbol level .* destination knows all the fading coefficients . in the first phase the source transmits a length complex vector from the codebook = consisting of information vectors such that ] each relay is equipped with a pair of fixed unitary matrices and , one for each antenna and process the above civp as follows : the and the antennas of the -th relay are scheduled to transmit respectively .the average power transmitted by each antenna of a relay per channel use is .the vector received at the destination is given by where is the additive noise at the destination . using in , * y * can be written as where * the additive noise , in the above equivalent mimo channel is given by , + \textbf{w}\ ] ] with . since , we have + * the equivalent channel * h * is given by ^t \in \mathbb{c}^{4r}\\\ ] ] where ] and to be explicit , with ^{t} ] = and ] and = ( 1 + \frac{p_{2}}{(1 + p_{1})}\sum_{j = 1}^{r}(|g_{1j}|^{2 } + |g_{2j}|^{2}))\textbf{i}_{t}. ] and take values from some 2-dimensional signal set . using the above rdstcs, we can construct a cidstc for relays each having two antennas by assigning the relay matrices of rdstc to every antenna of our setup .therefore , cidstc is of the form , {4r \times 4r}\ ] ] where ^{t} ] .+ from , every codeword of is of the form ] .+ the following proposition shows that is not full rank even if and are full rank .[ 2_d_non_full_diversity ] if variables s take value from a 2-dimensional signal set , then cidstc is not fully diverse .suppose complex variables s take value from any 2-dimensional signal set , then can possibly take values such that . since , some of the rows of are linearly dependent .also , identical rows of will also be linearly dependent .therefore , and together make the corresponding rows of linearly dependent . for the cidstc in example [ exple1 ] ,if , then is given by , \ ] ] it can be observed that the first and the third row of are linearly dependent and hence cidstc in example [ exple1 ] is not fully diverse . from the results of the proposition [ 2_d_non_full_diversity ] , full diversity of cidstc can be obtained by making the real variables for take values from an appropriate multi - dimensional signal set . in particular , the signal set needs to be chosen such that such that is full rank for every pair of codewords .the determinant of will be a polynomial in variables for .therefore , a signal set has to be chosen to make determinant of non - zero for every pair of codewords .a particular choice of the signal set depends on the design in use . however , it is to be noted that , more than one multi dimensional signal set can provide full diversity for a given design . in the rest of this section ,we provide a multi - dimensional signal set , for the cidstc , in example [ exple1 ] such that , when the variables take values from , the cidstc is fully diverse . 
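as a generic illustration of the rank criterion behind the full diversity discussion above , the following self - contained sketch enumerates codeword pairs of a small space - time design and reports the minimum rank and minimum determinant of the codeword difference matrices ; the alamouti design and a qpsk alphabet are used purely as stand - ins , not as the cidstc of example [ exple1 ] .

import itertools
import numpy as np

def alamouti(x1, x2):
    """2x2 alamouti codeword; a stand-in design for illustrating the check."""
    return np.array([[x1, -np.conj(x2)],
                     [x2,  np.conj(x1)]])

def diversity_check(constellation, design):
    """minimum rank and minimum |det| of codeword differences over all
    distinct symbol tuples; full diversity <=> min rank equals the matrix size."""
    symbols = list(itertools.product(constellation, repeat=2))
    min_rank, min_det = np.inf, np.inf
    for s, t in itertools.combinations(symbols, 2):
        d = design(*s) - design(*t)
        min_rank = min(min_rank, np.linalg.matrix_rank(d))
        min_det = min(min_det, abs(np.linalg.det(d)))
    return int(min_rank), float(min_det)

if __name__ == "__main__":
    qpsk = [(1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2),
            (-1 + 1j) / np.sqrt(2), (-1 - 1j) / np.sqrt(2)]
    rank, det = diversity_check(qpsk, alamouti)
    print("minimum rank of difference matrices:", rank)   # 2 => fully diverse
    print("minimum coding-gain determinant:", round(det, 4))

the same kind of exhaustive check , applied to a cidstc design with variables drawn from a candidate rotated lattice constellation , is one way to verify in practice that the chosen multi - dimensional signal set indeed makes the determinant of the difference matrix non zero for every pair of codewords .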
towards that direction , real and imaginary components of ] is non zero for any pair of codewords . in general, the variable s of the vector ^{t } \in \mathbb{z}^{8} ] as using which a complex vector ] takes value from the above signal set , then the determinant of is non zero for any pair of codewords of and hence is fully diverse . in general , for cidstcs of any dimension , appropriate signal sets needs to be designed so as to make the code fully diverse .in this section , we provide simulation results for the performance comparison of cidstc and rdstc for a wireless network with two relay nodes ( figure [ fig_comp1 ] ) and a single relay node ( figure [ fig_comp2 ] ) .optimal power allocation strategy discussed in subsection [ subsec3 ] has been used in our simulation setup though the strategy is not optimal for smaller values of r. even though the power allocation used is not optimal , cidstcs are found to perform better than their corresponding rdstcs .we have used the bit error rate ( ber ) which corresponds to errors in decoding every bit as error events of interest . for the network with 2 relays ,since we need , for cidstc , we use the channel coherence time of channel use for both the schemes .the real and imaginary parts of information symbols are chosen equiprobably from a 2- pam signal set and are appropriately scaled to maintain the unit norm condition .simulations are carried out using the linear designs in variables , as given in and for rdstc and cidstc respectively .it can be verified that design in is of the required form given in .+ the design in is four group decodable , i.e. , the variables can be partitioned into four groups and the ml decoding can be carried out for each group of variables independently of the variables in the groups and the variables of each groups need to be jointly decoded .the corresponding four groups of real variables are , . we use sphere decoding algorithm for ml decoding . though the design in is four group decodable , the design in is not four group ml decodable . full diversity is obtained by making every group of real variables choose values from a rotated lattice constellation whose generator given by , .\ ] ] \ ] ] \\\ ] ] ber comparison of the two schemes using the above designs is shown in figure [ fig_comp1 ] .the plot shows that cidstc in performs better than the rdstc in by close to 1.5 to 2 db .+ simulation results comparing the ber performance of cidstc in example [ exple1 ] with its corresponding rdstc is shown in figure [ fig_comp2 ] .full diversity is obtained by choosing a rotated lattice constellation ( section [ sec4 ] ) whose generator is given by .the plot shows the superiority of the design in example [ exple1 ] over its rdstc counterpart by 1 db . simulation results comparing the ber performance of cidstc in example [ exple1 ] with rdstc from random coding is also shown in figure [ fig_comp2 ] which shows the superiority of cidstc by 1.75 - 2 db at larger values of p.the technique of co - ordinate interleaved distributed space - time coding at the relays was introduced for wireless relay networks having relays each having two antennas . 
for , we have shown that cidstc provides coding gain compared to the scheme when transmit and receive signals at different antennas of the same relay are processed independently .this improvement is at the cost of only a marginal additional complexity in processing at the relays .condition on the full diversity of cidstcs is also presented .some of the possible directions for future work is to extended the above technique to relay networks where the source and the destination nodes have multiple antennas . also , if the relays have more than two antennas then a general linear processing need to be employed in the place of civp and new performance bounds need to be derived .the channel * h * as given in can be written as the product of and where and ^t.\ ] ] since is hermitian and positive definite , we can write , where is diagonal matrix containing the eigen values of in . since , is of full rank , the right hand side of the resulting following pep expression + can be upper - bounded by replacing by where is the minimum singular value of then , we have , ] and hence leading to ^{2}.$ ] this completes the proof .from we have let by change of variables in the integral in as we have ^{r}\left [ \int_{2}^{\infty } \frac{1}{t}e^{-\frac{t}{\alpha}}dt - \int_{2}^{\infty } \frac{1}{t^{2}}e^{-\frac{t}{\alpha}}dt \right]^{r}\\\ ] ] and further , changing to ^{r } \left [ \int_{-\frac{2}{\alpha}}^{-\infty}\frac{1}{t}e^{t}dt + \frac{2}{\alpha}\int_{-\frac{2}{\alpha}}^{-\infty}\frac{1}{t^{2}}e^{t } dt \right]^{r}.\\\ ] ] using the chain rule of integration , for any integer we can write the recursive relation , we know that is the exponential integral function , . + as in , for large , and hence , can be written as ^{2r}\left [ \mbox{log}(p ) + \frac{64r}{pt\rho^{2}}\mbox{log}(p ) - 1\right ] ^{r}\ ] ] harshan j and b. sundar rajan , " co - ordinate interleaved distributed space - time coding for two - antenna relay networks , in the proceedings of _ ieee globecom 2007 _ , washington d.c ., usa , nov .26 - 30 , 2007 .r. u. nabar , h. bolcskei and f. w. kneubuhler , `` fading relay channels : performance limits and space time signal design , '' _ ieee journal on selected areas in commun_. , vol . 22 , no . 6 , pp . 1099 - 1109 , aug .2004 .s. yang and j .- c . belfiore , " diversity of mimo multihop relay channels parti : amplify - and - forward , submitted to _ ieee transactions on information theory _ , april 2007 . also available on arxiv cs.it/07043969 .g. susinder rajan and b. sundar rajan , `` a non - orthogonal distributed space - time protocol , part - i : signal model and design criteria and part - ii : code construction and dm - g tradeoff , '' proceedings of itw 2006 , chengdu , china , oct .22 - 26 , pp .385 - 389 and pp .488 - 492 , 2006 .harshan j was born in karnataka , india .he received the b.e .degree from visvesvaraya technological university , karnataka in 2004 .he was working with robert bosch ( india ) ltd , india till december 2005 .he is currently a ph.d .student in the department of electrical communication engineering , indian institute of science , bangalore , india .his research interests include wireless communication , information theory , space - time coding and coding for multiple access channels and relay channels .+ b. 
sundar rajan ( s84-m91-sm98 ) was born in tamil nadu , india .he received the b.sc .degree in mathematics from madras university , madras , india , the b.tech degree in electronics from madras institute of technology , madras , and the m.tech and ph.d .degrees in electrical engineering from the indian institute of technology , kanpur , india , in 1979 , 1982 , 1984 , and 1989 respectively .he was a faculty member with the department of electrical engineering at the indian institute of technology in delhi , india , from 1990 to 1997 .since 1998 , he has been a professor in the department of electrical communication engineering at the indian institute of science , bangalore , india .his primary research interests include space - time coding for mimo channels , distributed space - time coding and cooperative communication , coding for multiple - access and relay channels , with emphasis on algebraic techniques .rajan is an associate editor of the ieee transactions on information theory , an editor of the ieee transactions on wireless communications , and an editorial board member of international journal of information and coding theory .he served as technical program co - chair of the ieee information theory workshop ( itw02 ) , held in bangalore , in 2002 .he is a fellow of indian national academy of engineering and recipient of the iete pune center s s.v.c aiya award for telecom education in 2004 .also , dr . rajan is a member of the american mathematical society .
distributed space time coding for wireless relay networks when the source , the destination and the relays have multiple antennas have been studied by jing and hassibi . in this set - up , the transmit and the receive signals at different antennas of the same relay are processed and designed independently , even though the antennas are colocated . in this paper , a wireless relay network with single antenna at the source and the destination and two antennas at each of the relays is considered . a new class of distributed space time block codes called co - ordinate interleaved distributed space - time codes ( cidstc ) are introduced where , in the first phase , the source transmits a -length complex vector to all the relays and in the second phase , at each relay , the in - phase and quadrature component vectors of the received complex vectors at the two antennas are interleaved and processed before forwarding them to the destination . compared to the scheme proposed by jing - hassibi , for , while providing the same asymptotic diversity order of , cidstc scheme is shown to provide asymptotic coding gain with the cost of negligible increase in the processing complexity at the relays . however , for moderate and large values of , cidstc scheme is shown to provide more diversity than that of the scheme proposed by jing - hassibi . cidstcs are shown to be fully diverse provided the information symbols take value from an appropriate multi - dimensional signal set . cooperative communication , distributed space - time coding , co - ordinate interleaving , coding gain .
in science and nature we study and utilize collections of objects by organizing them by relations and patterns in such a way that some structure emerges .objects are bound together to form new objects .this process may be iterated in order to obtain higher order collections .evolution works along these lines .when things are being made or constructed it is via binding processes of some kind .this seems to be a very general and useful principle worthy of analyzing more closely . in other wordswe are asking for a general framework in which to study general many - body systems and their binding patterns as organizing principles .let us look at some examples of what we have in mind .[ ex : links ] a link is a disjoint union of embedded circles ( or rings ) in three dimensional space : they may be linked in many ways .linking is a kind of geometrical or topological binding as we see in the following examples .( figures in colour are available at : http://arxiv.org/pdf/1201.6228.pdf ) + in the linking bonds have been extended to higher order links like : in order to iterate this process and study higher order links , it is preferable to study embeddings of tori : a second order brunnian ring binds circles ( rings ) together in a very subtle way , figure [ fig:2ndborromean ] .higher order links ( links of links of ) provide a very good guiding example of what a general framework should cover . for more details , see . in and we discuss possible ways to synthesize such binding structures as molecules .[ ex : many ] efimov ( borromean , brunnian ) states in cold gases are bound states of three particles which are not bound two by two .hence these states are analogous to borromean and brunnian rings . inwe have suggested that this analogy may be extended to higher order links and hence suggests higher order versions of efimov states .for example the second order brunnian rings , see figure [ fig:2ndborromean ] , suggest that there should exist bound states of particles , bound by in a brunnian sense , and that these clusters bound together again in a higher order sense as in : see also for intermediate bound states between a trimer and a dimer and a trimer and a two - singleton . for a discussion of higher order brunnian states ,in general , clustering and higher order clustering of many - body systems represent a binding mechanism of particle systems parametrizing the particles , in a way .one may ask for a general method to describe the binding of particles into higher clusters . [ ex : clust ] as pointed out in the previous example clusters of objects or data represent a binding mechanism between the objects in the cluster .the cluster of course depends on various defining criteria .clusters of clusters represent higher order versions .similarly when we decompose a set or collection we may say that elements in the same part of a decomposition are bound together . in this casewe get higher bonds as decompositions of decompositions [ ex : mathstruc ] we will give a few mathematical examples of how sets are organized into structures .* topological spaces .we organize the `` points '' of a set into open sets in such a way that they satisfy the axioms for a topology .* groups , algebras , vector spaces .the elements are organized by certain operations which satisfy the structure axioms . *organize the points into open sets and glue them together in a prescribed structured way .gluing is an important example of a geometric binding mechanism . 
in order to form higher order versions of these structuresthere may be many choices , but one way is through higher categories . in a higher category of order oneis given objects ( e.g. groups , topological spaces, ) which are organized by morphisms between them .furthermore , there exist morphisms between morphisms -morphisms : \(x ) at ( 0,0) ; ( y ) at ( 2,0) ; ( x.north east ) edge[bend right=-30 ] node[above] ( y.north west ) ( x.south east ) edge[bend right=30 ] node[below] ( y.south west ) ; ( 1,0.3 ) node[midway , right] ( 1,-0.3 ) ; and this continues up to -morphisms between -morphisms satisfying certain conditions .another type of higher order mathematical binding structure is a moduli space .these are spaces of structures for example the space of all surfaces of a given genus .we will now introduce some new binding mechanisms of general collections of objects : physical , chemical , biological , sociological , abstract and mental .this organization may bring to light some new and useful structure on the collections .we will discuss this in the following , extending the points of view of especially in the appendix .the main concept we will use in order to do this is that of a _ hyperstructure _ as introduced and studied in .let us recall the basic construction from and .we start with a set of objects our basic units . to each subset ( or families of elements ) we assign a set of properties or states , , so where the set of subsets the power set , and denotes a suitable set of sets .( in the language of category theory would be considered a category of subsets , as some category of sets . ) in our notation here we include properties and states of elements and subsets of in . then we want to assign a set of bonds , relations , relationships or interactions of each subset depending on properties and states . herewe will just call them bonds .let us define in our previous notion of hyperstructures the set represents the systems or agents the observables ( obs ) , the interactions ( int ) and a specific choice of represents the resultant `` bond '' system giving rise to the next level of objects called in previous papers , like , obs , int , see .we will often implicitly assume in the following that given a bond we know what it binds .we may require that the set of all bonds of satisfies the following condition : + + ( in other situations this is too strong a condition .for example if we want to consider colimits as bonds , then the in the following are not well - defined .the ( ) condition ensures that the bonds `` know '' what they bind . )+ [ ex : geom_ex ] a finite number of manifolds , property of being smooth , put the set of all smooth manifolds with boundary equal ( isomorphic ) to the disjoint union of the manifolds in . see figure [ fig : surfaces ] .we now just formalize in a general setting the procedure we described in .let us form the next level and define : by definition the image set of , and ( x1 ) at ( 0,1.25) ; ( calp ) at ( 0,0) ; ( x1 ) node[midway , right , font=] ( calp ) ; given by .if is injective we have a factorization : ( x1 ) at ( 0,2.5) ; ( gamma ) at ( 1.5,1.25) ; ( calp ) at ( 0,0) ; ( x1 ) node[left] ( calp ) ; ( x1 ) node[above right] ( gamma ) ; ( gamma ) node[below right]projection ( calp ) ; depending on the actual situation we may consider and as boundary maps . 
represents the bonds of collections of elements or interactions in a dynamical context .but the bonds come along with the collection they bind just as morphisms in mathematics come along with sources and targets . similarly at this levelwe introduce properties and state spaces and sets of bonds as follows : ( then represents the emergent properties as in ) satisfying the corresponding condition .+ then we form the next level set : and ( x2 ) at ( 0,1.25) ; ( calp ) at ( 0,0) ; ( x2 ) node[midway , right , font=] ( calp ) ; .we now iterate this procedure up to a general level : satisfying the corresponding condition this is not a recursive procedure since at each level new assignments take place .the higher order bonds extend the notion of higher morphisms in higher categories .let us write where ( xi+1 ) at ( 0,1.25) ; ( calp ) at ( 0,0) ; ( xi+1 ) node[midway , right , font=] ( calp ) ; the s generalize the source and target maps in the category theoretical setting , and we think of them as generalized boundary maps .an observer mechanism is implicit in the s .sometimes one may also want to require maps or generalizing the identity such that . as for onemay also for consider .further mathematical properties to be satisfied will be discussed elsewhere , for example composition of bonds .we will then also discuss how to associate a topological space to a hyperstructure a generalized nerve construction .bonds may also have internal structures like topological spaces , manifolds , algebras , vector spaces , wave functions , fields , etc .the intuition behind this is : [ def : hyper ] the system where the elements are related as described , we call _ a hyperstructure of order _ . sometimes one may want to organize a set of agents for a specific purpose .one way to do this is to put a hyperstructure on it organizing the agents to fulfill a given goal .this applies to both concrete physical and abstract situations .an example of this is the procedure whereby we organize molecules ( or abstract topological bonds ) into rings , -rings, , -rings representing new topological structures . in many cases it is natural to view the bonds as geometric or topological spaces .for example if a surface has three circles as boundary components , see figure [ fig : surfaces ] , , we may say that is a geometric bond of the circles .clearly there may be many .this is in analogy with chemical bonds .furthermore , if we have a manifold such that its boundary is see figure [ fig : boundary ] , we may say that is a bond of .even more general is the following situation : given a topological space of some kind and let be subspaces of .then we say that is a bond of , see figure [ fig : slices ] , thinking of as `` the boundary '' of . in a hyperstructure ofhigher order we may let represent a top level bond , then the s will represent bonds of other spaces : etc ., see figure [ fig : towers ] .this slightly extends the pictures of and represent what we could call a _ geometric hyperstructure_. 
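the level - by - level construction above can be made a little more tangible with the following toy data - structure sketch ( an illustration of ours , not part of the formalism ) : each level consists of bonds , every bond records the collection of lower - level bonds it binds ( playing the role of the boundary maps , so a bond `` knows '' what it binds ) , and an optional state may be attached to any bond .

class Bond:
    """a bond at some level of a hyperstructure; it records the lower-level
    bonds it binds (its 'boundary') and an optional state."""
    def __init__(self, binds, state=None):
        self.binds = frozenset(binds)   # boundary: what this bond binds
        self.state = state

class Hyperstructure:
    """levels X0, X1, ..., Xn; level 0 holds the basic objects (wrapped as
    trivial bonds for simplicity), level k+1 holds bonds of level-k bonds."""
    def __init__(self, base_objects):
        self.levels = [[Bond([obj]) for obj in base_objects]]

    def add_level(self, groupings, states=None):
        """groupings: list of collections of indices into the current top
        level; each collection is bound into one new higher-order bond."""
        top = self.levels[-1]
        new_level = []
        for k, group in enumerate(groupings):
            members = [top[i] for i in group]
            state = None if states is None else states[k]
            new_level.append(Bond(members, state))
        self.levels.append(new_level)
        return new_level

    def order(self):
        return len(self.levels) - 1

# usage: six basic objects, bound into two triples, then into one 2nd order bond
if __name__ == "__main__":
    H = Hyperstructure(["a", "b", "c", "d", "e", "f"])
    H.add_level([[0, 1, 2], [3, 4, 5]], states=["trimer", "trimer"])
    H.add_level([[0, 1]], states=["bond of trimers"])
    print("order of the hyperstructure:", H.order())   # 2

in this toy version composition of bonds , the condition on the boundary maps and richer internal structures on the bonds are all suppressed ; it only records who binds whom , level by level .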
this concept relates to topological quantum field theory and will be studied in a separate paper , see section [ sec : multi ] for some further remarks . hyperstructures and higher order bonds may be viewed as a huge extension of cobordisms ( manifolds with boundary ) and chemical bonds and interlockings . furthermore , the whole scheme of thinking may be applied to interactions and ways of connecting and interlocking people , groups of people , social and economic systems and organizations , organisms , genes , data , etc . for some of these aspects , see . by describing all such types of systems by means of hyperstructures one may create entirely new structures which may be useful both in old and new contexts .

[ figures omitted : surfaces and higher - dimensional manifolds as bonds of their boundary pieces ; cf . figures [ fig : surfaces ] , [ fig : boundary ] , [ fig : slices ] and [ fig : towers ] ]

we could also call a hyperstructure a binding structure since it really binds the elements of a collection . to make the notion simpler we will suppress the states in the following , and we will express the associated binding mechanisms of collections as simply as possible . at the end of we offer a categorical version which is more restrictive . further examples of hyperstructures : * _ higher order links _ as in example [ ex : links ] . here the starting set is a collection of rings , the observed state is circularity and the bonds are the links . then one observes `` circularity '' ( or embedding in a torus ) of the brunnian links and continues the process by forming rings of rings as described in . * _ hierarchical clustering _ and multilevel decompositions are typical examples of hyperstructures . the bonds are given by level - wise membership as in example [ ex : many ] . we should point out that hyperstructures encompass and are far more sophisticated and subtle than hierarchies . * _ higher categories _ are examples of hyperstructures as well where the higher morphisms are interpreted as higher bonds . * _ compositions_. often collections of objects and data may be organized into a composition of mappings where we may think of as a bond of the elements in , and is again a bond of the elements in , etc . see and references therein . similarly one may say that a subset ( or space if we have more structure ) is a bond of subsets in . composition models of hyperstructures are particularly interesting when the and mappings have more structure , for example being smooth manifolds and smooth mappings . in that case there is an interesting stability theory , see and references therein . * _ higher order cobordisms_. in geometry and topology we consider kinds of generalized surfaces in arbitrary dimensions called manifolds . these may be smooth and have various additional structures . amongst manifolds there is a very important notion of cobordism , and we will illustrate how cobordisms of manifolds with boundaries and corners are important as bonds .
two manifolds and ( with or without boundary ) of dimension are cobordant if and only if there exists an dimensional manifold such that , where stands for boundary and means glued together along the common boundary , see figure [ fig : cob1 ] . in this paper we are interested in studies of structures etc . so let us see what cobordisms of cobordisms or more generally bonds of bonds mean in this geometric setting . let be a bond between and and let be a bond between and . then is a bond ( cobordism ) between and if and only if , where means glued together along the common boundary : is of dimension and of dimension , see figure [ fig : cob2 ] . furthermore , a third order bond between and will be given by an manifold such that etc . , see figure [ fig : cube ] . in this formal description we have just considered two `` components '' in the boundary , hence a cobordism is then considered as a bond between two parts like a morphism in a category . but clearly the geometry extends to any finite number of components , hence we consider a cobordism as the prototype of a geometric bond between several objects . mathematically this requires that we study manifolds with decomposed boundaries , whose boundary components again are decomposed , etc . ( as introduced and studied in ) or manifolds with higher order corners ( corners of corners etc . ) , see figure [ fig : cob3 ] . hyperstructures seem like the correct mathematical structure to describe this situation .

[ figures omitted : cobordisms as geometric bonds , cobordisms of cobordisms , and manifolds with decomposed boundaries or corners ; cf . figures [ fig : cob1 ] , [ fig : cob2 ] , [ fig : cube ] and [ fig : cob3 ] ]

in the framework we have introduced the geometric examples in the figures correspond to : no states , . means that is a disjoint union of circles . is then given by a surface having the circles of as its boundary ; is a disjoint union of such surfaces . is then given by a dimensional manifold having the surfaces of as parts of its boundary , but possibly glued together along common boundaries with additional parts the s . for more details on hyperstructured gluing and decomposition processes , see and . in this way it goes on up to a desired dimension . if in addition we add states in the form of letting the s take vector spaces ( hilbert spaces , or some other algebraic structure ) as values , we enter the situation of topological quantum field theory which we will not pursue here , see section [ sec : multi ] . * _ limits_. in category theory we form limits and colimits of a collection of objects ; more precisely , given a functor we form the colimit : the colimit binds the collection or pattern of objects into one simple object reflecting the complexity of the pattern . in this sense it is a bond in the hyperstructure framework if we drop the condition giving rise to the `` boundary '' maps . if we require the s to exist , then the bond knows which objects it binds . in the colimit this is not the case . hence we consider hyperstructures with and without s . colimits may also be iterated . for example we may consider situations where each already is a colimit of other colimits etc . expressed in a different way , we consider a multivariable functor and form the iterated colimit ; this is clearly an iterated binding structure in the hyperstructure framework which we will discuss in the next section . the colimits over the various indexing categories represent the bonds of the various levels , see for a general context . for a categorical discussion of hyperstructures , see appendix b in . let us illustrate using a metaphor what we mean by putting a hyperstructure on an already existing structure , system or situation . suppose we are given a society or organization of agents , and we want to act upon it in the manner of wielding political power , governing a society or nation . a possible procedure is : create a kind of `` political party '' organization . a structural design of the organization is needed , rules of action ( `` ideology '' ) and incentives ( `` goals '' ) . the fundamental task is to create an organization a `` party '' starting with `` convinced '' individuals , then suitable groups of individuals , groups of groups , basically this is putting a hyperstructure on the society of agents which may act as an ideological amplifier from individuals to the society . this can be done independently of an existing societal organization that one wants to act upon . in such a hyperstructure the bonds may depend on a goal ( ideology ) and incentives like solving common problems , infrastructure , healthcare , poverty , etc . in physics it could be minimizing or releasing energy to obtain stability . all such factors will play a role in the build up of the hyperstructure in the form of choosing bonds , states , etc . such that they support the goals or `` ideology '' . having established a hyperstructure , then let it act by the `` ideology '' in the sense that it should be instructible like a superdemocracy .
hence it may be instructed to maintain , replace or improve the existing structure of society or change it to achieve certain goals . this is what political parties and other organizations do . a hyperstructure on a society ( or space ) will facilitate the achievement of desired dynamics for agents or other objects through the bonds , which may act dynamically like fusion of old bonds to new bonds . if the hyperstructure is given as , then one may initiate a dynamics at the lowest level which may require relatively few resources or little energy . then these changes of actions and states will propagate through the higher bonds , leading to a major desired change at the top level , depending on the nature of . this is how a political organization works . these aspects are elaborated in section [ sec : multi ] . the degree of detail of the hyperstructure will depend on the situation and the information available like how rich mathematically the hyperstructure can be . one may also ask : what is the sociology of a space ? represents the `` organization of '' or `` the society of x '' . what can be obtained by a political structure on depends on and . this shows through the metaphor how general the idea is and how it may usefully apply in many situations . another interesting idea is how to use this metaphor in the study of the genome as a society of genes . one would like to put on a hyperstructure whose ideology should be to maintain the structure ( homeostasis ) , avoiding and discarding unwanted growth like cancer . how could one possibly create and represent such a `` genomic political party '' ? possibly by an organized collection ( hyperstructure ) of drugs ( or external fields ) that acts on bonds of genes . the protein p53 may already have such a role at a high level in an existing hyperstructure . we will now discuss the main issue in this paper , namely how to organize a collection of objects ( possibly with an existing structure ) into a new structure . let us assume that we are given a basic collection of objects finite , countable or uncountable . may be the elements of a set or a space . let us also assume that we are given a hyperstructure of order in the sense of the previous section . in order to simplify the notation we do not write the s and s . how to put an -structure on ? this means describing how to bind the objects in together into new higher order objects following the pattern given by the bond structure in . the idea is as follows . we represent the collection as a collection of elements in the basic set on which is built .

[ figure omitted : the collection represented by elements in the base set of the hyperstructure ]

hence we get a new collection of objects ( or elements ) in . this is similar to the correspondence or analogy in examples [ ex : links ] and [ ex : many ] where particles are represented as rings . on we have bonds and can apply these to the new collection in . therefore we put a bond structure on as follows . [ def : b0 ] this means that we pull back the bonds from the hyperstructure on to . since , we get an induced hyperstructure on . this means that with the , , coming along . if we already have a good hyperstructure on , we just keep it via the identity representation and use it in the binding process we will describe . with a hyperstructure on the given collection we can introduce new higher order clusters and patterns of interactions . [ def : clustering ] an -binding ( or clustering ) structure on , denoted by , is given as follows : let and , is an element of a -cluster ( ) if . furthermore , let and , then is an element of a -cluster ( ) if . if , and , then we have a second order clustering and write . in the same way we proceed to -th order clustering by requiring in an obvious extension of the notation . this describes the general -binding ( or clustering ) principle . the same principle applies to the extended representation picture : [ diagram omitted : representations into level 0 and level n of the hyperstructure ] interpreted in the natural way : giving a binding structure and inducing a `` parametrization '' or decomposition by taking inverse images . the figure indicates that we may represent or induce at any level , but most of the time we use level . one may construct a decomposition of via a hyperstructure , by starting with the top bonds ( reverse the direction of the binding process ) . furthermore , one may then bind the lowest level ( smallest ) elements of the decomposition ( -bonds ) to a new hyperstructure . the situation may also be extended to the s and s being of relational character . the -binding principle that we have described is in a way also a transfer principle of organization showing how to transfer structure and organization from one universe to another ( this is more general than functors between categories ) . for example one may use it to transfer deep geometrical bonds to other interacting systems , like particle systems as described in . the idea may be easier to grasp in the case that the hyperstructure is given by a composition : in this case a given collection should be represented in . for simplicity let us consider the identity representation . the elements of represent , hence if . similarly and order clusters are formed . hence gives an -th order clustering scheme of . this discussion shows what we mean by putting a hyperstructure on a collection of objects .
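the -binding ( clustering ) principle of definition [ def : clustering ] may be illustrated by the following toy sketch ( our own illustration , with the hyperstructure given simply as nested groupings of base elements ) : a representation maps each object of the given collection into the base set , and the induced clusters of every order are obtained by pulling the groupings back along the representation .

def induced_clusters(objects, representation, nested_grouping):
    """pull back a nested grouping (a hyperstructure given as lists of lists
    of ... of base elements) along a representation map, giving the induced
    clusters of every order on the original objects."""
    def members(group):
        # base elements reachable inside a (possibly nested) group
        if isinstance(group, (list, tuple)):
            out = set()
            for g in group:
                out |= members(g)
            return out
        return {group}

    def pull_back(group):
        base = members(group)
        cluster = {o for o in objects if representation[o] in base}
        if isinstance(group, (list, tuple)) and any(
                isinstance(g, (list, tuple)) for g in group):
            return [pull_back(g) for g in group]   # keep the nesting
        return cluster

    return [pull_back(g) for g in nested_grouping]

if __name__ == "__main__":
    # six particles represented by six rings r1..r6 of a second order
    # brunnian link: two triples of rings, bound together at the top
    objects = ["p1", "p2", "p3", "p4", "p5", "p6"]
    representation = {f"p{i}": f"r{i}" for i in range(1, 7)}
    link_hyperstructure = [[["r1", "r2", "r3"], ["r4", "r5", "r6"]]]
    print(induced_clusters(objects, representation, link_hyperstructure))
    # -> [[{'p1','p2','p3'}, {'p4','p5','p6'}]] : two first order trimers
    #    nested inside one second order cluster

the nesting of the output mirrors the binding pattern of the hyperstructure , which is exactly the transfer of organization from the link universe to the particle collection discussed above .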
in a waythe hyperstructure acts as a parametrization of the new objects formed by higher bonds .an important point here is the choice of representations , and how to choose them in an interesting and relevant way . may be any collection of elements , subsets , subspaces , elements of a decomposition , etc . organizes into a `` society '' of objects .putting a hyperstructure on a `` situation '' is not meant to be restricted to a set or a space , but could be a non - mathematical `` situation '' , a category of some kind or in general an already existing hyperstructure ( or families of such ) .the transfer of binding structures may be considered as a kind of generalized `` functoriality '' .in fact one may think of a given situation in representing level , and then always look for a higher order associated situation in the form of a hyperstructure , which may give a better understanding of the given situation ( or object ) .for example : this idea is a bit similar to the idea of associating complexes and resolutions to groups , modules , algebras , categories , etc . derived objects ( or situations ) . sometimes , a representation is given in a natural way by the nature of the object .for example molecules being represented by their own geometric form . on the other hand one may choose more abstract representations embedding the collection in a universe rich with structures and possibilities for interesting bindings and interactions .in this way we put a hyperstructure on a situation by somehow inducing one from a known one .if a good model does not exist , one may have to create a new suitable hyperstructure for further use . in either casethe hyperstructure enables us to form organized abstract matter in the sense of in order to handle a collection of objects or a situation and achieve certain goals .binding structures represent the _ principle of how we make things_. as pointed out in one may study the collection in a selected universe and then ask whether the `` abstract '' binding structure may be realized in the original universe .hence we may use an -structure to synthesize new bond states .this is exactly the situation we study in where is a collection of particles in a cold gas .we represent the particles by rings as in examples [ ex : links ] and [ ex : many ] .the hyperstructure of higher order links pulls back to a hyperstructure of the particle collection .it is a verified fact , see and references therein , that to brunnian ( or borromean ) rings there corresponds quantum mechanical states efimov states with the same binding patterns . 
on this basis it is tempting to suggest that there is a similar correspondence between higher order links and states corresponding to the higher order clusters of bound particles ( higher order brunnian states ) . this is the main suggestion of . for example the binding pattern of a second order brunnian ring suggests a bond or particle state of particles bound to trimers and trimers bound to a single state . this is the key idea in our guiding examples of the general setting of binding structures we have introduced . we may summarize our discussion in the following figure : ( figure [ fig : basic_prin ] : three levelled boxes connected by arrows , summarizing the binding and transfer principles ) illustrating the two basic principles : + * the hyperstructure principle which is an organizing principle and a guiding principle for structural architecture and engineering . + * the transfer principle which is a way to transfer a hyperstructure . + for the situation in example [ ex : many ] the figure looks like : ( figure : the state hyperstructure and the corresponding link hyperstructure for the situation in example [ ex : many ] ) the binding structures and diagrams in figure [ fig : basic_prin ] described here may be considered as extensions of pasting diagrams in higher categories . a sheaf type formulation has been given in appendix b in . using the principles described here one may induce an action on a totality , by acting on individual elements and letting the action propagate through to the top level as in political processes and in social and business organizations . in this way many small actions may lead to major global actions and change of state .
hence acts as an amplifier . how do we synthesize new objects or structures from old ones ? a common procedure both in nature and science is to bind objects together to form new objects with new properties , then use these properties in forming new bonds and new higher order objects . this is precisely what a hyperstructure does ! let be a collection of objects . by putting a hyperstructure on we have a binding scheme of new higher order synthetic objects . if is pulled back from a given structure , the problem may be to tune the environment of in such a way that the binding pattern of may be realized . on the other hand be considered as a global object that we want to analyze by decomposing it into smaller and smaller pieces . by putting a hyperstructure on we have seen in the previous section how it gives rise to a higher order clustering decomposition of . it is interesting to notice that if we take the smallest pieces in the decomposition ( lowest level elements ) as our basic set , we may put a new hyperstructure on it and recapture as the top level of . this shows that hyperstructures are useful in the _ synthesis _ of new collections of objects from given ones and in the _ analysis _ of them as well . it is very useful to put a hyperstructure on a collection of objects in order to manipulate the collection towards certain goals . we will discuss various applications in the following . many interactions in science and nature may be described and handled as organized and structured collections of objects in certain contexts called many - body systems . in the following we would like to point out that hyperstructures of bindings may be interesting and useful in many areas . * _ physics_. -structures of many body systems ( particles ) may as we have already discussed give rise to new and exotic states of matter ( -states ) , see examples [ ex : links ] and [ ex : many ] . + * _ chemistry_. hyperstructures like higher order links are interesting models for the synthesis of new molecules and materials like higher order brunnian rings , see and . + more generally we may consider a hyperstructure where the bonds are spaces like manifolds or cw - complexes built up of cells . + let us think of a collection of molecules each represented by a point in space . then they may form chains ( or a ring if we prefer ) , which may be represented by an interval , a one - dimensional geometric object ( figure ) . + this representation increases the dimension by one .
similarly : chains bound together may be represented by a rectangle , and a collection of rectangles forming new chains may be represented by a three - dimensional box ( figures ) . + then we form chains of dimensional boxes again to be represented by a dimensional box , etc . we may also introduce holes and we may continue up to a desired dimension . + then we may glue the molecular cells together following the topological patterns ( for example homotopy type ) of the bond spaces in each dimension . + in this way we get organized molecules in three dimensions with the structure induced from a higher dimensional binding structure in . clearly many other similar representations are possible . this is a very useful and important principle . + we could also have molecules representing the bonds as in the following figures : ( figures : molecules acting as bonds within and between chains of molecules ) describing the process : ( diagram : a collection of molecules mapped to an -structured bond collection ) * _ social and economic systems_. we may here consider populations of individuals , social or economic units . then it may be useful to consider them as many body systems in the physical examples and introduce higher order binding structures , see examples in and section [ sec : meta ] . + one may for example discuss brunnian investments of agents and continuation to higher order which may be interesting in certain contexts . + * _ biology_. here we may consider collections of genes , cells , pathways , neurons , etc . as many body systems and bind them together in new ways according to a given -structure . for example within tissue engineering one may make -type tissues for various purposes . + * _ logic_. we may introduce -type bindings of logical types and data - structures . new `` laws of thought '' are possible based on a logic of -type bindings as `` deductions '' and states / observations in representing the semantics . + * _ networks_. in we argued that in many situations networks are inadequate and should be replaced by hyperstructures . pairwise binding or interactions would then be replaced by -bindings . look at the brunnian hyperstructure of links as in the introduction . + * _ brain systems_.
extend natural and artificial neural networks to -structures of neurons as follows . + let be a collection of real or abstract neurons . then the binding structure represent new interaction patterns `` parametrized '' by , possibly representing new types of higher order cognitive functions and properties . + * _ correlations_. one may think of correlations as relations and bindings of variables . an interesting possibility would be to extend pair correlations to -type correlations of higher clusters . they could possibly have brunnian properties as follows : this is in analogy with cup products and massey products in the study of brunnian links . to detect higher order brunnian linking one introduces higher order massey products . in the correlation language this would mean putting and finding a such that represents a second order correlation . + * _ mathematics_. as already indicated in some of the examples collections of mathematical objects may also be bound together in new ways modelled or parametrized by a hyperstructure , for example collections of spaces like manifolds and cell - complexes along with gluing bonds . + assume that we have a collection of spaces organized in a well - defined hyperstructure . if we have another collection of spaces or objects we may then induce an -structure on and use it to study the collection . + ( figure : two collections of spaces , one carrying a hyperstructure that induces a structure on the other )
interesting properties for may be asked for like brunnian properties in a categorical setting , see . + we may take a family of spaces , for example simplicial complexes : and represent them in a family of manifolds on which there is a hyperstructure with given bonds such that and + ( figure : a family of spaces represented in a family of manifolds on which bond spaces are given ) + being a bond for , for example by . then we may say that the binds by a pullback bond . similar for higher bonds . in this way one may introduce geometric structures for each level which otherwise may have been difficult . in the sense that hyperstructures extend the various notions of higher categories , one may also introduce -type bundles and stacks with transition and gluing morphisms replaced by appropriate bonds . then one may hope to extend bundle notions like connections , curvature and holonomy in suitable contexts . let us follow up the examples and discussion of hyperstructures in section [ sec : hyper ] . + how to produce hyperstructures ? + we have seen that compositions of maps naturally lead to hyperstructures on . this may also be extended to a situation of compositions of functors and sub - categories . geometric bonds are basically constructed by binding families of spaces or sub - spaces of an ambient space : where are successive bonds . here the s may be general spaces , manifolds , etc . generalized link and knot theory may be viewed as the study of embeddings of topological spaces in other topological spaces , but hyperstructures encompass much more . still for geometric hyperstructures one may consider using and extending the mathematical theory of links and knots ( quantum versions , quandles , etc .
,see ) to the study of geometric hyperstructures .manifolds with singularities as introduced in are also represented by such bond systems .so is also the brunnian link hyperstructure this applies to structures in general for example algebraic , topological , geometric , logical and categorical presented as follows where structure binds the structures for example as substructures like in higher order links and many - body systems .this may be viewed as a kind of many - body problem of general structures , and represent a simple organizing principle for them .all this shows that there is a plethora of possible applications of hyperstructured binding both in abstract and natural systems .hyperstructures apply to all kinds of universes : mathematical , physical , chemical , biological , economic and social .furthermore , the transfer principle makes it possible to connect them .detailed applications will be the subject of future papers .the main point in this paper has been to illustrate the transfer of higher order binding structure as given in a hyperstructure .ultimately one may also consider bindings coming from hyperstructures of hyperstructures as in the case of higher order links .all the examples here and in section [ sec : hyper ] may be used to put a hyperstructure on sets , spaces , structures and situations by the methods described in section [ sec : binding ] .this may be useful to obtain actions like geometrical and physical fusion of objects in various situations similar to the `` political / sociological '' metaphor .after all we `` make things '' through a hyperstructure principle as in modern engineering .this may be so since it is the way nature works through evolution , and after all we are ourselves products of such a process .we have here discussed the organization of many - body systems or general systems of collections of objects . the systems may be finite , infinite or even uncountable .we have advocated hyperstructures as the guiding organizational principle . in this sectionwe will discuss in more detail possible organization of the states of the system through level connections .we use the terminology in section [ sec : hyper ] .when we put a hyperstructure on a situation in order to obtain a certain goal or action often a dynamics is required on : which essentially changes the states . in order to do thisit is advantageous to be able to have as rich structures as possible as states : sets , spaces ( manifolds ) , algebras , ( higher ) vector spaces , ( higher ) categories , etc .for this reason and in order to cover as many interactions as possible we introduce the following extension . instead of letting the states ( ) in take values in we extend this to a family of prescribed _ hyperstructures of states _ : where is a hyperstructure such that : we want these level state structures to be connected in some way .therefore we require that is organized into a hyperstructure itself with as the levels or actually sets of bonds of states , with being the top level ( dual to itself ) .we furthermore assume that we have level connecting assignments or boundary maps which we will for short write as : ( s0 ) at ( 0,0) ; ( s1 ) at ( 1.5,0) ; ( sdots ) at ( 3,0) ; ( sn ) at ( 4.5,0) . 
often the order of the hyperstructure will decrease moving from the bottom to the top `` integrating away complexity '' . the s may be of assignment ( functional ) or relational type . this shows how to form hyperstructures of hyperstructures . in such cases one may actually use existing hyperstructures to form bonds and states in new hyperstructures . the assignment of a state to a bond ( or collection of bonds ) is a kind of representation : such that if then in simplified notation ( meaning a family or subset of objects ) . let us illustrate this by an example . this is basically a version of example d ) in section [ sec : hyper ] and studied in in connection with genomic structure . is given by sets meaning that there exist maps to each we assign a state space a manifold ( possibly ) . then the state hyperstructure reduces to the composition of ( smooth ) mappings : in order to influence global states from local actions it is a reasonable and general procedure to put a hyperstructure on the systems with a multilevel state structure given by another hyperstructure with level relations as described : ( diagram : the chain of state structures connected by level relations ) the idea is then to act on by introducing a suitable dynamics and let the actions propagate through the hyperstructure to the global level . this is similar to social systems and may be called the `` democratic method of action '' . it is especially useful if the level relations are functional assignments : for example if the s are categories of some kind , the arrows would represent functors and if the s are spaces , the arrows represent mappings . let us specify the mappings by where just means a family or subsets of objects , and requiring assignments ( mappings ) : ( diagram : level connecting mappings sending products of states at one level to states at the next level ) the s are then state level connectors . if the s have a tensor product we will often require ( diagram : compatibility of the level connectors with the tensor product ) more schematically this may be described as : for bonds , and for states : this is an extension of the framework for extended topological quantum field theories ( see ) where one considers manifolds with general boundaries and corners bound by generalized cobordisms . at the state level one assigns to these manifolds higher order algebraic structures such as a version -vector spaces ( ) or -algebras as follows : where levels of states are connected via geometrically induced pairings : for all . being the scalars in the case of complex vector spaces . in this sense we can control and regulate the global state from the lowest level which is clearly a desirable thing in systems of all kinds . this is useful in the following situation . let be a desired action or task which may be `` un - managable '' in a given system or context . furthermore , let be a collection of `` managable '' actions in the system.
put and design a hyperstructure ( like a society or factory ) where then appears as the top level bond of the hyperstructure .hence will act as a propagator from \{managable actions } to \{desired actions } by dynamically regulating the states of as described .this procedure applies to general systems and in the sense of one may say that hyperstructures are `` tools for thought '' and creation of novelty .in general systems one may often want to change the global state in a desired way and it may be difficult since it would require large resources ( `` high energy '' ) .but via a hyperstructure it may be possible introducing managable local actions and change of states which may require small resources ( `` low energy '' ) , in other words small actions are being organized into large actions via .this is similar to changing the action of a society or organization by influencing individuals .if one wants to join two opposing societies ( or nations ) into one , it may take less resources to act on individuals to obtain the global effect .the photosynthesis works along the same lines collecting , organizing and amplifying energy .it seems like an interesting idea to suggest the use of hyperstructures in order to facilitate fusion of various types of systems , for example `` particles '' in biology , chemistry and physics .even nuclear fusion may profit from this perspective .the hyperstructure in question may be introduced on the system itself or surrounding space and forces .binding structures and hyperstructures as we have described them are basically organizing principles of collections of objects .they apply to all kinds of collections and general systems , and as organizing principles they may be particularly interesting in physical matter ( condensed matter ) of atoms , moleclues , etc .r. laughlin has advocated the importance of organizing principles in condensed matter physics in understanding for example superconductivity , the quantum hall effect , phonons , topological insulators etc . see .he suggests that the very precise measurements made of important physical constants in these situations come from underlying organizing principles of matter .our binding- and hyperstructures are organizing principles that when introduced to physical matter should lead to new emergent properties .it is like in our example [ ex : links ] .if we are given a random tangle of links , a new and non - trivial geometric order emerges when we put a higher order brunnian structure on the collection of links .+ in one way it is analogous to logical systems , where organized statements are more likely to be decidable than random ones .similarly in biology : language , memory , spatial recognition , etc.are related to similar organizing principles .entangled states are studied in quantum mechanics and higher order versions are suggested .zeilinger ( ghz ) states are analogous to borromean and brunnian rings , see . from our previous discussionwe are naturally led to suggest higher order entangled states organized by a hyperstructure using the transfer principle .could such a process lead to collections of particles forming global / macroscopic quantum states ?could also an -structure act as a kind of ( geometric ) protectorate of a desired quantum state from thermodynamic disturbances ( in high temperature superconducitivity for example ) ?* putting a binding- or hyperstructure on a collection of objects. 
then collections of bound structures will appear , and they may have interesting emergent properties .for example with respect to precise measurements of involved constants of nature .+ * putting a binding- or hyperstructure on the ambient space ( space - time ) of a collection for example using various fields etc .this will introduce a structure on the collection and may result in bindings and fusion of particles and objects , or splitting ( fission ) , stabilizing them into new patterns with new emergent properties .+ in other words space , fields and reactors may all be organized by binding principles .we suggest that putting a binding- or hyperstructure on a collection , situation or system is a very fundamental and useful organizing principle .99 aravind , p.k .( 1997 ) , ` borromean entanglement of the ghz state , ' in _ potensiality , entanglement and passion at a distance _ , ed .cohen et al . ,kluwer academic publishers , 5359 .baas , n.a .( 1973 ) , ` on bordism theory of manifolds with singularities , ' _ math ._ , 33 , 279302 .baas , n.a .( 1994a ) , ` emergence , hierarchies , and hyperstructures , ' in _ artificial life iii _, santa fe institute studies in the sciences of complexity , ed .langton , addison - wesley , 515537 .baas , n.a .( 1994b ) , ` hyperstructures as tools in nanotechnology , ' _ nanobiology _ , 3(1 ) , 4960 .baas , n.a .( 1996 ) , ` a framework for higher order cognition and consciousness , ' in _ toward a science of consciousness _ , ed .s. hameroff , a. kaszniak and a. scott , mit - press , 633648 .baas , n.a .( 2006 ) , ` hyperstructures as abstract matter , ' _ adv .complex syst ._ , 9(3 ) , 157182 .baas , n.a .( 2009 ) , ` new structures in complex systems , ' _ eur .j. special topics _ , 178 , 2544 .baas , n.a .( 2010a ) , ` new states of matter suggested by new topological structures , ' arxiv:1012.2698v2 [ cond-mat.quant-gas ] ( to appear in _ int .j. of gen .syst . _ ) .baas , n.a .( 2010b ) , ` on many - body system interactions , ' arxiv:1012.2705 [ cond - mat .quant - gas ] dec 2010 , also addendum to .baas , n.a . ,cohen , r.l . , and ramrez , a. ( 2006 ) , ` the topology of the category of open and closed strings , _ contemporary mathematics _ , 407 , 1126 .baas , n.a . ,ehresmann , a.c . , and vanbremeersch j .-( 2004 ) , ` hyperstructures and memory evolutive systems , ' _ int .j. of gen ._ , 33(5 ) , 553568 .baas , n.a . ,fedorov , d.v . ,jensen , a.s ., riisager , k. , volosniev , a.g . , and zinner , n.t .( 2012 ) , ` higher - order brunnian structures and possible physical realizations , ' arxiv:1205.0746 [ quant - ph ] may 2012 .baas , n.a . , and helvik , t. ( 2005 ) , ` higher order cellular automata , ' _ advances in complex systems _ , 8(2 & 3 ) , 169192 .baas , n.a . , and seeman , n.c .( 2012 ) , ` on the chemical synthesis of new topological structures , ' _ j. of mathematical chemistry _ ,50 , 220232 .ehresemann , a.c . , and vanbremeersch , j .- p .( 1996 ) , ` multiplicity principle and emergence in memory evolutive systems , ' _ sams _ , 26 , 81117 .laughlin , r.b . , and pines , d. ( 2000 ) , ` the theory of everything , ' _ pnas _ , 97(1 ) , 2831 .lawrence , r. ( 1996 ) , ` an introduction to topological field theory , ' in _ proceedings of symposia in applied mathematics _, 51 , ams , 89128 .nelson , s. ( 2011 ) , ` the combinatorial revolution in knot theory , ' _ notices of ams _ , dec .2011 , 15531561 .waddington , c.h .( 1977 ) , tools for thought , ' paladin .zeilinger , a. , horne , m.a . 
, and greenberger , d.m . ( 1992 ) , ` higher - order quantum entanglement , ' in _ workshop on squeezed states and uncertainty relations _ , ed . d. han , y.s . kim and w.w . zachary , nasa conference publication 3135 .
we discuss the nature of structure and organization , and the process of making new things . hyperstructures are introduced as binding and organizing principles , and we show how they can transfer from one situation to another . a guiding example is the hyperstructure of higher order brunnian rings and similarly structured many - body systems . + * keywords : * hyperstructure ; organization ; binding structure ; brunnian structure ; many - body systems .
black holes are one of the most intriguing , fascinating and yet unsettling consequences of classical general relativity .even when putting aside the acceptance and understanding of the physical singularities hidden at their centers , the mere existence of an event horizon leads to a number of unsolved problems and long - standing debates . yet , black holes are some of the most cherished objects in modern astronomy and evidence of their existence at different scales appears as common as it is convincing .proof of the existence of an event horizon would not be disputable if it appeared in terms of gravitational radiation , for instance in the form of a quasinormal mode ringdown when a new black hole is formed .however , it would become surely difficult , if possible at all , when using the electromagnetic emission coming from material accreting onto it . at the same time , our increasing ability to perform astronomical observations that probe regions on scales that are comparable or even smaller than the size of the event horizon , will soon put us in the position of posing precise questions on the physical properties of those astronomical objects that appear to have all the properties of black holes in general relativity . a good example in this respect is offered by astronomical observations of the radio compact source sgr a , which resides at the center of our galaxy and is commonly assumed to be a supermassive black hole .recent radio observations of sgr a have been made on scales comparable to what would be the size of the event horizon if it indeed were a black hole .furthermore , in the near future , very long baseline interferometric radio observations are expected to image the so - called black - hole `` shadow '' , namely the photon ring marking the surface where photons on circular orbits will have their smallest stable orbit .these observations , besides providing the long - sought evidence for the existence of black holes , will also provide the possibility of testing the no - hair theorem in general relativity .if sufficiently accurate , the planned astronomical observations will not only provide convincing evidence for the existence of an event horizon , but they will also indicate if deviations exist from the predictions of general relativity .however , given the already large number of alternative theories of gravity , and considering that this is only expected to grow in the near future , a case - by - case validation of a given theory using the observational data does not seem as viable an option .it is instead much more reasonable to develop a model - independent framework that parametrizes the most generic black - hole geometry through a finite number of adjustable quantities .these quantities must be chosen in such a way that they can be used to measure deviations from general relativity ( or a black - hole geometry ) and , at the same time , can be estimated robustly from the observational data .this approach is not particularly new and actually similar in spirit to the parametrized post - newtonian approach ( ppn ) developed in the 1970s to describe the dynamics of binary systems of compact stars .a first step in this direction was done by johannsen and psaltis , who have proposed a general expression for the metric of a spinning non - kerr black hole in which the deviations from general relativity are expressed in terms of a taylor expansion in powers of , where and are the mass of the black hole and a generic radial coordinate .while some of the first coefficients 
of the expansion can be easily constrained in terms of ppn - like parameters , an infinite number remains to be determined from observations near the event horizon .this approach was recently generalized by relaxing the area - mass relation for non - kerr black holes and introducing two independent modifications of the metric functions and .unfortunately , as discussed in , this approach can face some difficulties : 1 .the proposed metric is described by an infinite number of parameters , which are roughly equally important in the strong - field regime , making it difficult to isolate the dominant terms .2 . the parametrization can be specialized to reproduce a spherically symmetric black hole metric in alternative theories only in the case in which the deviation from the general relativity is small .this was checked for the black holes in dilatonic einstein - gauss - bonnet gravity , for which the corresponding parameters were calculated only in the regime of small coupling .3 . at first order in the spin ,the parametrization can not reproduce deviations from the kerr metric arising in alternative theories of gravity . as an example, it can not reproduce the modifications arising for a slowly rotating black hole in chern - simons modified gravity . in this paperwe propose a solution to these issues and take another step in the direction of deriving a general parametrization for objects in metric theories of gravity .more precisely , we propose a parametrization for spherically symmetric and slowly rotating black hole geometries which can mimic black holes with a high accuracy and with a small number of free coefficients .this is achieved by expressing the deviations from general relativity in terms of a continued - fraction expansion via a compactified radial coordinate defined between the event horizon and spatial infinity .the superior convergence properties of this expansion effectively reduces to a few the number of coefficients necessary to approximate such spherically symmetric metric to the precision that can be in principle probed with near - future observations .while phenomenologically effective , the approach we suggest has also an obvious drawback . because the metric expression we propose is not the consistent result of any alternative theory of gravity , it does not have any guarantee of being physically relevant or nothing more than a mathematical exercise .the paper is organized as follows . in sec .[ sec : param ] we describe the proposed parametrization method . sec .[ sec : jp ] is devoted to the relation between the proposed parameters and the parameters of the johannsen - psaltis spherically symmetric black hole . in sec .[ sec : dilaton ] we obtain values of the parameters that approximate a dilaton black hole , while in sec .[ sec : observe ] we compare the photon circular orbit , the innermost stable circular orbit , and the quasinormal ringing predicted within our approximation with the corresponding quantities obtained for the exact solution of a dilaton black hole . in sec .[ sec : rotation ] we apply our approach to slowly rotating black holes and , in the conclusions , we discuss applications for our framework and its possible generalization for the axisymmetric case . finally , appendix [ appendix_a ] is dedicated to a comparison of our parametrization framework with the alternative parametrization of a spherically symmetric black hole proposed in ref . 
.the line element of any spherically symmetric stationary configuration in a spherical polar coordinate system can be written as where , and and are functions of the radial coordinate only . for any metric theory of gravity whose line element can be expressed as , we will next require that it could contain a spherically symmetric black hole [ cf ., eq . ] . ] .by this we mean that the spacetime could contain a surface where the expansion of radially outgoing photons is zero , and define this surface as the event horizon .we mark its radial position as and this definition implies that furthermore , we will neglect any cosmological effect , so that the asymptotic properties of the line element will be those of an asymptotically flat spacetime . differently from previous approaches ,we find it convenient to compactify the radial coordinate and introduce the dimensionless variable so that corresponds to the location of the event horizon , while corresponds to spatial infinity .in addition , we rewrite the metric function as where we further express the functions and after introducing three additional terms , , , and , so that where the functions and are introduced to describe the metric near the horizon ( i.e. , for ) and are finite there , as well as at spatial infinity ( i.e. , for ) .since we are not considering any specific theory of gravity , we do not have precise constraints to impose on the metric functions and . at the same time, we can exploit the information deduced from the ppn expansion to constrain their asymptotic expression , i.e. , their behaviour for . more specifically, we can include the ppn asymptotic behaviour by expressing and as here is the arnowitt - deser - misner ( adm ) mass of the spacetime , while and are the ppn parameters , which are observationally constrained to be note that we have expanded the metric function to , but to .the reason for this difference is that the highest - order ppn constraint on , i.e. , the parameter , is at first order in .conversely , the parameters and set constraints on at second order in . by comparing the two asymptotic expansions and ( [ expansion_1])([expansion_2 ] ) , and collecting terms at the same order , we find that hence , the introduced dimensionless constant is completely fixed by the horizon radius and the adm mass as and thus measures the deviations of from . on the other hand , the coefficients and can be seen as combinations of the ppn parameters as or , alternatively , as }{(1+\epsilon)^{2}}\,,\\ & \gamma = 1 + \frac{2b_0}{1+\epsilon}\,.\end{aligned}\ ] ] using now the observations constraints ( [ ppn ] ) on the ppn parameters , we conclude that and are both small and , in particular , .as mentioned above , the functions and have the delicate task of describing the black hole metric near its horizon and should therefore have superior convergence properties than those offered , for instance , by a simple taylor expansion .we chose therefore to express them in terms of rational functions ( see also ref . ) . since the asymptotic behavior of the metricis fixed by the conditions ( [ asympfix_1])([asympfix_2 ] ) , it is convenient to parametrize and by pad approximants in the form of continued fractions , i.e. 
, as [ contfrac ] where and are dimensionless constants to be constrained , for instance , from observations of phenomena near the event horizon . a few properties of the expansions ( [ contfrac ] ) are worth remarking . first , it should be noted that _ at the horizon _ only the first two terms of the expansions survive , i.e. , which in turn implies that near the horizon only the lowest - order terms in the expansions are important . conversely , _ at spatial infinity _ finally , while the expansions ( [ contfrac ] ) effectively contain an infinite number of undetermined coefficients , we will necessarily consider only the first terms . in this case , we simply need to set to zero the -th terms , since if , then all terms of order are not defined . in practice , and as we will show in the rest of the paper , the superior convergence properties of the continued fractions ( [ contfrac ] ) are such that the approximate metric they yield can reproduce all known ( to us ) spherically symmetric metrics to arbitrary accuracy and with a smaller set of coefficients . the inclusion of higher - order terms obviously improves the accuracy of the approximation but in general expansions truncated at are more than sufficient to yield the accuracy that can be in principle probed by present and near - future astronomical observations . to test the effectiveness of our approach in reproducing other known spherically symmetric metric theories of gravity , we obviously start from the johannsen - psaltis ( jp ) metric in the absence of rotation . in this case , the black - hole line element is spherically symmetric and is given by the following expression ds^2 = -\left[1+h(r)\right]\left(1-\frac{2\tilde{m}}{r}\right)dt^2 + \left[1+h(r)\right]\left(1-\frac{2\tilde{m}}{r}\right)^{-1} dr^2 + r^2 d\omega^2\,, where the function is a simple polynomial expansion in terms of the expansion parameter , i.e. , by construction , therefore , in the johannsen - psaltis metric the horizon is located at while the relation between the adm mass and the horizon mass is simply given by we can now match the asymptotic expansions for the metrics ( [ ssbh ] ) and ( [ jpbh ] ) . more specifically , we can compare eqs . and , to find that at the following relations apply between our coefficients and those in the jp metric [ jprel ] similarly , comparing eqs . and , we find that at it follows that if we set , as done originally in ref . , then , thus implying that the ppn terms are taken to be . we will not make this assumption hereafter . we can also match the expansions for the metrics ( [ ssbh ] ) and ( [ jpbh ] ) near the horizon . more specifically , we can obtain algebraic relations between our coefficients and the coefficients of the jp metric by matching the and metric functions and their derivatives for or . a bit of tedious but straightforward algebra then leads to the following expressions [ as_bs_jp ] where we have indicated with a prime the radial derivative . clearly , expressions can be easily extended to higher orders if necessary . a few remarks are worth making . first , because of cancellations , the terms do not depend on and ; similarly , the terms do not depend on , but they do depend on . second , in the simplest case and the one considered in ref . , i.e.
, when only , the coefficient vanishes and our approximant for the function reproduces it exactly . finally , and more importantly , expressions clearly show the rapid - convergence properties of the expansions . it is in fact remarkable that a few coefficients only are sufficient to capture the infinite series of coefficients needed instead in the jp approach [ cf ., for instance , expressions for the coefficients and ] . as another test of the convergence properties of our metric parametrization we next consider a dilaton - axion black hole . when both the axion field and the spin vanish , such a black hole is described by a spherically symmetric metric with line element the radial coordinate and the adm mass are expressed , respectively , as where is the dilaton parameter . by comparing now the expansions of ( [ ssbh ] ) and ( [ dbh ] ) at spatial infinity , we find that [ dilaton0 ] similarly , by comparing the near - horizon expansions we find the other coefficients , which also depend on only and are given by [ dilaton1 ] it is clear that and vanish if , in which case we reproduce the line element of the schwarzschild black hole exactly . if , on the other hand , we could in principle calculate as many coefficients of the continued fractions as needed ; in practice already the very first ones suffice . for example , for and setting , the maximum relative difference between the exact and the expanded expression for the metric function is . this relative difference becomes if the order is increased by one , i.e. , if ( see also the discussion below on fig . [ fig : freqdiff ] ) . a high precision in the mapping of the metric functions does not necessarily translate into an equivalently accurate measure of near - horizon phenomena . hence , to further test the reliability of our continued - fraction expansions ( [ contfrac ] ) , we next compare a number of potentially observable quantities for a spherically symmetric dilaton black hole and for a black hole in einstein - aether theory , respectively . more specifically , we calculate : the impact parameter for the photon circular orbit , the orbital frequency for the innermost stable circular orbit , and the quasinormal ringing of a massless scalar field . for all of these quantities , the metric is either expressed analytically [ i.e. , eq . for a dilaton black hole ] or numerically [ i.e. , for a black hole in einstein - aether theory ] , or in its parametrized form [ i.e. , via the coefficients for a dilaton black hole ] . in a spherically symmetric spacetime , a photon circular orbit is defined as the null geodesic at radial position for which the following equations are satisfied where we have implicitly assumed because of the absence of a preferred direction . from these equations we find that the equation for the radius is given by and that the corresponding orbital frequency is note that expression depends only on the coefficients and , but not on the coefficients [ cf ., eq . ] . we then define the impact parameter of the photon circular orbit ( not to be confused with the dilaton parameter ) as whose analytic expression in the case of a dilaton black hole is which reduces to in the case of a schwarzschild black hole . in the left panel of fig . [ fig : freqdiff ] we show the difference between the exact values of computed via eq . and the ones obtained after solving numerically eq . ( [ photon ] ) and making use of the continued - fraction expansions ( [ contfrac ] ) with coefficients .
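as an illustration of the procedure just described , the following is a minimal numerical sketch ( ours , not part of the paper ) that locates the photon circular orbit and the corresponding impact parameter for a generic static , spherically symmetric metric of the form ds^2 = -a(r)dt^2 + b(r)dr^2 + c(r)d\omega^2 . the schwarzschild functions are used only as a stand - in check , for which the impact parameter should come out close to 3\sqrt{3}\,m ; the parametrized or dilaton metric functions discussed above could be substituted for them .

```python
import numpy as np
from scipy.optimize import brentq

# generic static metric  ds^2 = -A(r) dt^2 + B(r) dr^2 + C(r) dOmega^2 .
# for null geodesics the circular photon orbit extremizes C(r)/A(r),
# and the impact parameter there is b_ph = sqrt(C/A).

M = 1.0                               # mass scale (assumption: G = c = 1)
A = lambda r: 1.0 - 2.0 * M / r       # Schwarzschild stand-in; replace with the
C = lambda r: r**2                    # parametrized or dilaton functions if desired

def d_ratio(r, eps=1e-6):
    """numerical derivative of C(r)/A(r)."""
    f = lambda rr: C(rr) / A(rr)
    return (f(r + eps) - f(r - eps)) / (2.0 * eps)

# photon-sphere radius: root of d/dr [C/A] = 0, bracketed outside the horizon
r_ph = brentq(d_ratio, 2.1 * M, 20.0 * M)
b_ph = np.sqrt(C(r_ph) / A(r_ph))

print(f"photon orbit radius  r_ph = {r_ph:.6f}   (Schwarzschild: 3M)")
print(f"impact parameter     b_ph = {b_ph:.6f}   (Schwarzschild: {3*np.sqrt(3):.6f} M)")
```

the bracketing interval is an assumption that the photon sphere lies between roughly 2.1 m and 20 m , which holds for the stand - in functions ; for a strongly deformed metric it would have to be adjusted .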
the differences are shown as a function of the dimensionless dilaton parameter and different lines refer to different levels of approximation , i.e. , when setting ( blue line ) , ( red line ) , and ( magenta line ) .the figure shows rather clearly that already when setting , that is , when retaining only the coefficients and , the differences in the impact parameter are of the order of for .these differences become larger with larger dilaton parameter , but when they can nevertheless be reduced to be even for . in a similar way, we can calculate the innermost stable circular orbit ( isco ) exploiting the fact that the geodesic motion of a massive particle in the equatorial plane can be reduced to the one - dimensional motion within an effective potential where and are the constants of motion , i.e. , energy and angular momentum , respectively .a circular orbit then is the one satisfying the following conditions while the isco is defined as the radial position at which substituting ( [ potential ] ) into ( [ circular ] ) and ( [ isco ] ) , we obtain the following algebraic equation for the isco radius , which we can solve numerically to calculate the corresponding orbital frequency as here too , expression depends only on the coefficients and , but not on the coefficients .the isco frequency can of course be compared with the exact expression in the case of a dilaton black hole , which is given by where and expression reduces to the well - known result of in the case of a schwarzschild spacetime .a comparison between the values of the isco frequency estimated from expressions and is shown in the right panel of fig .[ fig : freqdiff ] , which reports the differences in units of and as a function of the dimensionless dilaton parameter . as for the left panel ,different curves refer to different levels of approximation , i.e. , ( blue line ) , ( red line ) , and ( magenta line ) .also in this case , the differences in the isco frequency are of the order of for and can reduced to be even for by including higher - order coefficients .finally , we have compared the values for the impact parameter and the isco frequency also for another spherically symmetric black hole , namely , the one appearing in the alternative einstein - aether theory of gravity . in this case , the metric is not known analytically , but we have used the numerical data for the metric functions as discussed in ref . . more specifically , for a large number of pairs of the aether parameters and , we have obtained a numerical approximation of the metric function in terms of the coefficients , , , and of our continued - fraction expansions . using these coefficients ,we have then calculated numerically the values of and as discussed above and compared with the corresponding values in general relativity .the results of this comparison are reported in fig .[ fig : ae ] , whose left panel refers to the impact parameter for a circular photon orbit , while the right panel to the isco frequency .the two panels are meant to reproduce figs . 2 and 4 of ref . and they do so with an accuracy of fractions of a percent . note that the differences in and with respect to general relativity can be quite large for certain regions of the space of parameters ( e.g. , ) .these regions , however , are de - facto excluded by the observational constraints set by binary pulsars ( see the discussion in ref . 
) .another way to probe whether the metric parametrization and the continued fraction expansions represent an effective way to reproduce strong - field observables near a black hole is to compare the response to perturbations .we recall , in fact , that if perturbed , a black hole will start oscillating .such oscillations , commonly referred to as `` quasinormal modes '' , represent exponentially damped oscillations that , at least at linear order , do not depend on the details of the source of perturbations , but only on the black hole parameters ( see for a review ) .the relevance of these oscillations is that they probe regions of the spacetime that are close to the light ring , but are global and hence do not depend on a single radial position . at the same time , the gravitational - wave signal from a perturbed black hole can be separated from a broad class of the environmental effects , allowing us to expect a good accuracy of the quasinormal modes measurement .furthermore , both of the continued - fraction expansions are involved and hence also some of the coefficients will be nonzero . for simplicity , we have considered the evolution of a massless scalar field as governed by the klein - gordon equation where is the dalambertian operator . substituting in the ansatz where are laplace s spherical harmonics, we obtain for each multipole number the following wave - like equation , where we have introduced the ( tortoise - like ) radial coordinate and the effective potential is given by it was shown in ref . that the rational approximation for and in some region near the black hole horizon in reduced einstein - aether theory allows one to calculate accurately at least the quasinormal modes with the longest damping time . in order to test our approximation in the case of dilaton black hole ,we have compared the black hole response in the time domain , found using either the exact representation of the metric or the parametrized one via the coefficients . the numerical solution of the evolution equation was made using a characteristic integration method that involves the light - cone variables and , with initial data specified on the two null surfaces and . the results of these calculations are shown in fig . [fig : timedomain ] , whose left panel reports the solution of the scalar field at as function of time both in the case of an exact dilaton black hole ( blue line ) and of the corresponding parametrized expansion ( red line ) .the relative differences are clearly very small already with , as shown in the right panel of fig .[ fig : timedomain ] , and amounting at most to fractions of a percent .we are in position now to make the first step towards the parametrization of black holes that are not spherically symmetric .we believe that the natural way to choose the parameters is taking into account the asymptotical behaviour of the corresponding metric , which is defined by multipole moments , as well as its near - horizon behaviour .unfortunately , only a very limited number of such metrics is known in alternative theories of gravity that can be used for comparison .indeed , to the best of our knowledge , a black hole with independent multipole moments was studied only in general relativity and discussed in refs . . at the same time , even the parametrization of an axisymmetric stationary black hole is far from being a trivial question , since the corresponding metric is defined by four functions of two variables . 
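returning to the time - domain calculation described above for fig . [ fig : timedomain ] , the following is a schematic sketch ( an assumption - laden illustration , not the authors' code ) of a standard characteristic integration on the light - cone variables u = t - r_* and v = t + r_* . the effective potential is passed in as a callable of the tortoise coordinate , which in practice has to be obtained by numerically inverting r_*(r) for the exact or parametrized metric ; the grid size , initial data and observer location are illustrative choices .

```python
import numpy as np

def evolve_field(V_of_rstar, rstar_obs=10.0, h=0.1, n=600, v0=10.0, sigma=1.0):
    """characteristic (light-cone) integration of  psi_{,uv} = -(1/4) V(r_*) psi .

    V_of_rstar : callable, effective potential as a function of the tortoise
                 coordinate r_* (must be defined on the whole grid; it decays
                 to zero at both ends for black-hole potentials).
    returns (t, psi) sampled along the line r_* = rstar_obs.
    """
    psi = np.zeros((n, n))
    u = h * np.arange(n)
    v = h * np.arange(n)
    # gaussian initial data on the v-axis (u = 0), constant on the u-axis
    psi[0, :] = np.exp(-(v - v0) ** 2 / (2.0 * sigma ** 2))
    psi[:, 0] = psi[0, 0]

    for i in range(1, n):          # u index
        for j in range(1, n):      # v index
            rstar = 0.5 * (v[j] - u[i])      # r_* at the cell centre
            Vc = V_of_rstar(rstar)
            psi[i, j] = (psi[i, j - 1] + psi[i - 1, j] - psi[i - 1, j - 1]
                         - 0.125 * h * h * Vc * (psi[i, j - 1] + psi[i - 1, j]))

    # extract the signal at fixed r_* = rstar_obs, i.e. along v - u = 2 rstar_obs
    shift = int(round(2.0 * rstar_obs / h))
    ts, signal = [], []
    for i in range(n):
        j = i + shift
        if 0 <= j < n:
            ts.append(0.5 * (u[i] + v[j]))
            signal.append(psi[i, j])
    return np.array(ts), np.array(signal)
```

for a quick schwarzschild check one can take , for instance , v(r) = (1 - 2m/r)[ l(l+1)/r^2 + 2m/r^3 ] for the massless scalar field , with r obtained by numerically inverting r_* = r + 2m ln( r/2m - 1 ) ; the late - time ringdown extracted from the returned signal can then be compared with tabulated quasinormal frequencies .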
as a warm - up exercise , in this section we will consider spacetime metrics having only a small deviation from the spherical symmetry and hence extend the general expression by introducing a new function in the metric function and by retaining it only at the first order , i.e. , and with the condition that has a falloff with radius that is faster than , i.e. , that implying that the event horizon remains at will also involve the square of the metric function , which we take to be zero in the slow - rotation approximation . because we are not considering any consistent ( alternative ) theory of gravity , but we are simply prescribing an ad - hoc expression for the metric , we can not impose additional constraints on the function . however , if we assume that the function depends on the radial coordinate only , then we obtain a metric which can be associated with a slowly rotating black hole in horava - lifshitz theory , in einstein - aether gravity , in chern - simons modified gravity , or with dilatonic einstein - gauss - bonnet and dilaton - axion black holes . in this case , the asymptotic behavior is given where is the spin of the black hole and we take it to be . the parametrization of the function can then be made in analogy with what was done for the nonrotating case and again we use a padé approximation in terms of continued fractions in the form since , the first coefficient is simply given by , while the higher - order ones , , are fixed by comparing series expansion of near the event horizon . as an example , we consider the first - order correction to the dilaton black hole ( [ dbh ] ) due to rotation given by the following line element , with an off - diagonal \sin^2\theta\, dt\,d\phi term and angular part ( \rho^2 + 2b\rho)\, d\omega^2 , up to { \cal o}(a^2) . by comparing the asymptotical and near - horizon expansions we find that [ o0o1 ] since we consider , the coefficients ( [ o0o1 ] ) imply that and , thus satisfying the constraint ( [ reqaxion ] ) . the other coefficients are not small and depend on the dilaton parameter only . of course , it is possible to find as many coefficients in ( [ dabhpar ] ) as needed for an accurate approximation for the function . in order to test the convergence properties of ( [ dabhpar ] ) we again study the isco frequency for the equatorial orbits ( i.e. , ) of a massless particle in the background of a slowly rotating dilaton black - hole metric ( [ srbh ] ) . in this case , the effective potential reads we assume now that the energy and the angular momentum are positive , thus implying that for the co - rotating and for the counter - rotating particles , respectively . we then solve numerically the set of equations finding at the radial coordinate of the isco the corresponding frequency of course , these frequencies can be computed also for the parametrized metric ( [ o0o1 ] ) at different levels of approximation . a comparison between the two calculations is summarized in fig . [ fig : isco - axion - dilaton ] , which shows the absolute value of the difference between the exact value of and the approximate one as a function of the normalized spin parameter . as in the previous figures , here too different curves ( all computed for ) refer to different degrees of approximation : ( blue line ) , ( red line ) , and ( magenta line ) . also in this case it is apparent that the use of a larger number of coefficients in continued fraction expansions ( [ contfrac_1 ] ) and ( [ dabhpar ] ) , leads to a monotonic increase of the accuracy of the isco frequency .
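the same kind of root finding can be scripted directly . the sketch below is again ours , with the schwarzschild functions as a placeholder check ( for which the isco radius should come out at 6 m ) ; it finds the isco of a generic static metric ds^2 = -a(r)dt^2 + b(r)dr^2 + c(r)d\omega^2 from the conditions on the effective potential quoted above , while the slowly rotating case adds the dt\,d\phi term and the co - rotating / counter - rotating distinction , which are omitted here .

```python
import numpy as np
from scipy.optimize import minimize_scalar

M = 1.0
A = lambda r: 1.0 - 2.0 * M / r      # placeholder metric functions; substitute the
C = lambda r: r**2                   # parametrized or dilaton A(r), C(r) from the text

def deriv(f, r, eps=1e-6):
    return (f(r + eps) - f(r - eps)) / (2.0 * eps)

def L2_circular(r):
    """squared angular momentum of the circular orbit at radius r."""
    Ap, Cp = deriv(A, r), deriv(C, r)
    return Ap * C(r)**2 / (A(r) * Cp - Ap * C(r))

# the isco is the circular orbit with minimal angular momentum
res = minimize_scalar(L2_circular, bounds=(4.0 * M, 20.0 * M), method="bounded")
r_isco = res.x
L = np.sqrt(L2_circular(r_isco))
E = np.sqrt(A(r_isco) * (1.0 + L**2 / C(r_isco)))
Omega = A(r_isco) * L / (C(r_isco) * E)

print(f"isco radius    = {r_isco:.5f}   (Schwarzschild: 6M)")
print(f"isco frequency = {Omega:.5f}   (Schwarzschild: {1/(6*np.sqrt(6)):.5f} / M)")
```

the search window ( 4 m , 20 m ) is an assumption appropriate for the placeholder functions and would need to be widened for strongly deformed metrics .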
as a concluding remark we note that a possible and rather popular approach to extend the parametrization to rotating black holes would be the application of the newman - janis algorithm to the metric ( [ ssbh ] ) after having fixed the parameters . although there is no proof that such a rotating configuration corresponds to a black hole solution in the same theory , the method works for some theories , e.g. , the application of the newman - janis method to the metric ( [ dbh ] ) allows one to obtain the metric for the axion - dilaton black hole . therefore , it would be interesting to compare the coefficients ( [ o0o1 ] ) with those obtained at first order after the application of the newman - janis algorithm . yet , it is clear that this approach would not provide us with the most generic form for an axisymmetric black hole , simply because , in general , the geometry can not be parametrized by one rotation parameter only . what is needed , instead , is a general framework which naturally comprises a set of parameters that account for the multipole moments of the spacetime and are not necessarily restricted to follow the relations expressed in terms of mass and angular momentum that apply for a kerr black hole . investigating this approach is beyond the scope of this initial paper , but will be the focus of our future work . we have proposed a new parametric framework to describe the spacetime of spherically symmetric and slowly rotating black holes in generic metric theories of gravity . the new framework therefore provides a link between astronomical observations of near - horizon physics and the properties of black holes in alternative theories of gravity that would predict deviations from general relativity . unlike similar previous attempts in this direction , our approach is based on two novel choices . first , we use a continued - fraction expansion rather than the traditional taylor expansion in powers of , where and are respectively the mass of the black hole and a generic radial coordinate . second , the expansion is made in terms of a compactified radial coordinate with values between zero and one between the horizon and spatial infinity . these choices lead to superior convergence properties and allow us to approximate a number of known metric theories with a much smaller set of coefficients . these parameters can be calculated very accurately for any chosen spherically symmetric metric and , at the same time , they can be used via astronomical observations to measure near - horizon phenomena , such as photon orbits or the position of the isco . as a result , the new parametrization provides us with a powerful tool to efficiently constrain the parameters of alternative theories using future astronomical observations . another important advantage of our approach is that we can use not only the asymptotic parameters from the ppn expansion , but also the near - horizon parameters , which are well - captured already by the first lowest - order coefficients .
more specifically , the most important parameters for the near - horizon geometry are expressed simply in terms of the coefficients ( which relates the adm mass and the event horizon ) , , , and . the use of other higher - order parameters increases the accuracy of the approximation , but does not significantly change the observable quantities . the latter , in fact , are captured to the precision of typical near - future astronomical observations already at the lowest order . the rapid convergence of our expansion is also useful for the analysis of black - hole spacetimes in alternative theories where the metric is known only numerically . using as a practical example the alternative einstein - aether theory of gravity , we have shown that it is possible to reproduce to arbitrary accuracy the numerical results by using a small set of coefficients in the continued - fraction expansion . in turn , adopting such coefficients it is also possible to obtain an analytical representation of the metric functions , which can then be used to study the stability of such black holes , the motion of particles and fields in their vicinity , or to construct viable approximations for metrics with incorrect asymptotic behaviour , e.g. , due to the presence of magnetic fields . as a concluding remark we note that our approach has so far investigated spherically symmetric spacetimes and hence black holes that are either nonrotating or slowly rotating . it would be interesting to find a generalization of our framework for the parametrization of axisymmetric black holes , for instance , via the application of the newman - janis algorithm . however , while this is technically possible , it is unclear whether such an approach will turn out to be sufficiently robust . we believe , in fact , that a parametrization of axisymmetric black holes must combine , together with a rapidly converging expansion , also information on the parametrized post - newtonian parameters , on the multipole moments , on the horizon shape , as well as parameters that define the near - horizon geometry . this task , which is further complicated by the lack of known axisymmetric black - hole solutions in alternative theories of gravity ( cf . , ) , will be the focus of our future work . in order to explore the convergence properties of our parametrization framework , we consider in this appendix the alternative parametrization of a spherically symmetric black hole in generic metric theories of gravity which has been recently proposed in ref . , ds^2 = -\left[1+h^t(r)\right]\left(1-\frac{2\tilde{m}}{r}\right)dt^2 + \left[1+h^r(r)\right]\left(1-\frac{2\tilde{m}}{r}\right)^{-1} dr^2 + r^2 d\omega^2\,,\label{jp} where , instead of the function in ( [ jpbh ] ) , two different functions are introduced : in particular , we will determine the numerical values of the coefficients to produce an approximation of the metric of a dilaton black hole ( [ dbh ] ) . although this can be done in different ways , the coefficients must obey the constraints that the theory naturally imposes on them . as a result , the large - distance properties of the functions and provide a series of constraints that fix the coefficients once an asymptotic expansion for the metric functions is made . in this way , we find that [ cf . , eqs .
and ]figure [ fig : axion - dilaton - jpn ] shows the relative difference for the impact parameter of the photon circular relative to a dilaton black hole as computed using the parametrization , and shown as function of the dimensionless strength of the dilaton parameter .different lines refer to different levels of approximation , i.e. , ( blue line ) , ( red line ) , and ( magenta line ) , and so on. the dashed lines of the same color correspond to our continued - fraction approximation having the same number of parameters .more specifically , the first three of these lines should be compared with the corresponding ones in fig .[ fig : freqdiff ] , in the following sense : considering , for instance , that the red line in fig .[ fig : freqdiff ] amounts to specifying four coefficients ( i.e. , , , and ) , which is the same number that is involved when considering the red line in fig .[ fig : axion - dilaton - jpn ] ( i.e. , , , , ) .clearly , the errors in the novel parametrization are overall smaller for the same number of fixed coefficients in the expansion .a qualitatively similar figure can be produced also for the measurement of the isco but we do not report it here for compactness . we should note that the errors in the parametrization can be made smaller if we fix the first two coefficients , i.e. , and , from the asymptotic expansion at large distance , but we compute the remaining coefficients from the near - horizon behavior . this is simply because the impact parameter is a strong - field quantity and hence its approximation necessarily improves if the coefficients are constrained near the horizon . on the other hand ,the real problematic feature of the parametrization is that the coefficients are roughly equally important near the horizon . as a result ,if one fixes the coefficients by matching the near - horizon behavior of the metric functions , the expression for the same coefficient will be different for different orders of approximations , making the approach not useful for constraining the parameters of the theory . finally ,as mentioned in the introduction , a particularly serious difficulty of the parametrization is that it does not reproduce the correct rotating metric even in the regime of slow rotation .this can be shown rather simply for the slowly rotating dilaton black hole , for which both the dilaton and the rotating johannsen - psaltis black hole can be obtained with the help of the newman - janis algorithm .more specifically , the generalized johannsen - psaltis black hole in the regime of slow rotation reads \left(1-\frac{2\tilde{m}}{r}\right)dt^2 \nonumber\\ & & + \left[{1+h^r(r)}\right]\left(1-\frac{2\tilde{m}}{r}\right)^{-1 } dr^2 + r^2 d\omega^2\,\label{jpslow}\\\nonumber & & -2a\sin^2\theta\left(\sqrt{(1+h^t(r))(1+h^r(r))}\right.\\\nonumber & & \left.-\left(1-\frac{2\tilde{m}}{r}\right)(1+h^t(r))\right)dt\,d\phi+{\cal o}(a^2).\end{aligned}\ ] ] by comparing the diagonal elements of ( [ jpslow ] ) and ( [ dabh ] ) we conclude that , while and must approximately be such that as a result , an inconsistency emerges for the off - diagonal element of the metric ( [ jpslow ] ) , for which unless .thus , for any approximation of the functions and , the slowly rotating regime of the dilaton black hole is not reproduced by the metric ( [ jpslow ] ) .similar arguments were used to show that the newman - janis algorithm is not able to generate rotating black - hole solutions in modified gravity theories .it is a pleasure to thank e. 
barausse for stimulating discussions and for providing us with the numerical data from .we also thank e. barausse , v. cardoso , t. johannsen , and d. psaltis for useful comments and suggestions .a. z. was supported by the alexander von humboldt foundation , germany , and coordenao de aperfeioamento de pessoal de nvel superior ( capes ) , brazil .partial support comes from the dfg grant no .sfb / transregio 7 and by `` newcompstar '' , cost action mp1304 .t. johannsen , d. psaltis , s. gillessen , d. p. marrone , f. ozel , s. s. doeleman and v. l. fish , astrophys . j. * 758 * , 30 ( 2012 ) [ arxiv:1201.0758 [ astro-ph.ga ] ] . c. bambi and k. freese , phys .d * 79 * , 043002 ( 2009 ) [ arxiv:0812.1328 [ astro - ph ] ] .t. johannsen and d. psaltis , astrophys .j. * 716 * , 187 ( 2010 ) [ arxiv:1003.3415 [ astro-ph.he ] ] ; astrophys .j. * 718 * , 446 ( 2010 ) [ arxiv:1005.1931 [ astro-ph.he ] ] .a. e. broderick , t. johannsen , a. loeb and d. psaltis , arxiv:1311.5564 [ astro-ph.he ]. s. vigeland , n. yunes and l. stein , phys .d * 83 * , 104027 ( 2011 ) [ arxiv:1102.3706 [ gr - qc ] ] . c. m. will , living rev .* 9 * , 3 ( 2006 ) [ gr - qc/0510072 ] .t. johannsen and d. psaltis , phys .d * 83 * , 124015 ( 2011 ) [ arxiv:1105.3191 [ gr - qc ] ] .v. cardoso , p. pani and j. rico , phys .d * 89 * , 064007 ( 2014 ) [ arxiv:1401.0528 [ gr - qc ] ] .p. kanti , n. e. mavromatos , j. rizos , k. tamvakis and e. winstanley , phys .d * 54 * , 5049 ( 1996 ) [ hep - th/9511071 ] .p. kanti , n. e. mavromatos , j. rizos , k. tamvakis and e. winstanley , phys .d * 57 * , 6255 ( 1998 ) [ hep - th/9703192 ] .r. a. konoplya and a. zhidenko , phys .b * 644 * , 186 ( 2007 ) [ gr - qc/0605082 ] ; phys .b * 648 * , 236 ( 2007 ) [ hep - th/0611226 ] .a. garcia , d. galtsov and o. kechkin , phys .lett . *74 * , 1276 ( 1995 ) .s. -w .wei and y. -x .liu , jcap * 1311 * , 063 ( 2013 ) [ arxiv:1311.4251 [ gr - qc ] ] . t. jacobson and d. mattingly , phys .d * 64 * , 024028 ( 2001 ) [ gr - qc/0007031 ] ; t. jacobson , pos qg * -ph * , 020 ( 2007 ) [ arxiv:0801.1547 [ gr - qc ] ] .e. barausse , t. jacobson and t. p. sotiriou , phys .d * 83 * , 124043 ( 2011 ) [ arxiv:1104.2889 [ gr - qc ] ] .k. yagi , d. blas , n. yunes and e. barausse , phys .lett . * 112 * , 161101 ( 2014 ) [ arxiv:1307.6219 [ gr - qc ] ] ; phys .d * 89 * , 084067 ( 2014 ) [ arxiv:1311.7144 [ gr - qc ] ] .r. a. konoplya and a. zhidenko , rev .* 83 * , 793 ( 2011 ) [ arxiv:1102.4014 [ gr - qc ] ] .e. barausse , v. cardoso and p. pani , phys .d * 89 * , 104059 ( 2014 ) [ arxiv:1404.7149 [ gr - qc ] ] . c. gundlach , r. h. price , and j. pullin , phys .d * 49 * , 883 ( 1994 ) [ arxiv : gr - qc/9307009 ] .f. d. ryan , phys .d * 52 * , 5707 ( 1995 ) ; phys .d * 55 * , 6081 ( 1997 ) ; phys .d * 56 * , 7732 ( 1997 ) .n. a. collins and s. a. hughes , phys .d * 69 * , 124022 ( 2004 ) [ gr - qc/0402063 ] .k. glampedakis and s. babak , class .grav . * 23 * , 4167 ( 2006 ) [ gr - qc/0510057 ] .e. barausse and t. p. sotiriou , phys .d * 87 * , 087504 ( 2013 ) [ arxiv:1212.1334 ] .a. wang , phys .lett . *110 * , 091101 ( 2013 ) [ arxiv:1212.1876 ] .e. barausse and t. p. sotiriou , class .grav . * 30 * ( 2013 ) 244010 [ arxiv:1307.3359 [ gr - qc ] ] . k. konno , t. matsuyama and s. tanda , prog . theor. phys . * 122 * , 561 ( 2009 ) [ arxiv:0902.4767 [ gr - qc ] ] .n. yunes and f. pretorius , phys . rev .d * 79 * , 084043 ( 2009 ) [ arxiv:0902.4669 [ gr - qc ] ] ; k. yagi , n. yunes and t. 
tanaka , phys .d * 86 * , 044037 ( 2012 ) [ arxiv:1206.6130 [ gr - qc ] ] .p. pani and v. cardoso , phys .d * 79 * , 084031 ( 2009 ) [ arxiv:0902.1569 [ gr - qc ] ] .p. pani , c. f. b. macedo , l. c. b. crispino and v. cardoso , phys .d * 84 * , 087501 ( 2011 ) [ arxiv:1109.3996 [ gr - qc ] ] .d. ayzenberg and n. yunes , phys .d * 90 * , 044066 ( 2014 ) [ arxiv:1405.2133 [ gr - qc ] ] .s. yazadjiev , gen .grav . * 32 * , 2345 ( 2000 ) [ gr - qc/9907092 ] .a. zhidenko , arxiv:0705.2254 [ gr - qc ] .r. a. konoplya and r. d. b. fontana , phys .b * 659 * , 375 ( 2008 ) [ arxiv:0707.1156 [ hep - th ] ] .k. d. kokkotas , r. a. konoplya and a. zhidenko , phys .d * 83 * , 024031 ( 2011 ) [ arxiv:1011.1843 [ gr - qc ] ] .t. johannsen , phys .d * 87 * , no .12 , 124017 ( 2013 ) [ arxiv:1304.7786 [ gr - qc ] ] .d. hansen and n. yunes , phys .d * 88 * , no .10 , 104020 ( 2013 ) [ arxiv:1308.6631 [ gr - qc ] ] .
we propose a new parametric framework to describe the spacetime of spherically symmetric and slowly rotating black holes in generic metric theories of gravity . in contrast to similar approaches proposed so far , we do not use a taylor expansion in powers of , where and are the mass of the black hole and a generic radial coordinate , respectively . rather , we use a continued - fraction expansion in terms of a compactified radial coordinate . this choice leads to superior convergence properties and allows us to approximate a number of known metric theories with a much smaller set of coefficients . the measurement of these coefficients via observations of near - horizon processes can be used to effectively constrain and compare arbitrary metric theories of gravity . although our attention is here focussed on spherically symmetric black holes , we also discuss how our approach could be extended to rotating black holes .
the business of quantum state tomography is converting multiple copies of an unknown quantum state into an estimate of that state by performing measurements on the copies .the nave approach to the problem involves measuring different observables ( represented by hermitian operators ) on each copy of the state and constructing the estimate as a function of the measurement outcomes ( corresponding to different eigenvalues of the observables ) .though tomography can be performed in such a way , there are more general ways of interrogating the ensemble ; indeed , generalizations such as ancilla - coupled and joint measurements lead one to evaluate the problem of tomography from the perspective of _ generalized measurements _ , an approach which has yielded many optimal tomographic strategies .an interesting subclass of generalized measurements is the class of _ weak measurements _figure [ fig : weak - measurement ] gives a quantum - circuit description of a weak measurement .weak measurements are often the only means by which an experimentalist can probe her system , thus making them of practical interest .sequential weak measurements are also useful for describing continuous measurements .weak measurements are also central in the more contentious formalism of _ weak values _ .in particular , the technique of _ weak - value amplification _ has generated much debate over its metrological utility .the two proposals we review in this paper assert that it is useful to approach the problem of tomography with weak measurements holding a prominent place in one s thinking .some care needs to be taken in identifying whether a particular emphasis has the potential to be useful when thinking about tomography , given the large body of work already devoted to the subject . sinceweak measurements are included in the framework of generalized measurements , none of the known results for optimal measurements in particular scenarios are going to be affected by shifting our focus to weak measurements . in sec .[ sec : eval - princ ] we outline criteria for evaluating this shift of focus .is a two - system hermitian operator , is the state of the system being measured , is the initial state of the meter , is a real number parameterizing the strength of the measurement , and is a standard observable with outcomes . if the measurement is weak , , and very little is learned or disturbed about the system by measuring the meter . ]we apply these criteria to two specific tomographic schemes that advocate the use of weak measurements . _direct state tomography _[ sec : dst ] ) utilizes a procedure of weak measurement and postselection , motivated by weak - value protocols , in an attempt to give an operational interpretation to wavefunction amplitudes . _ weak - measurement tomography _[ sec : weak - tomo ] ) seeks to outperform so - called `` standard '' tomography by exploiting the small system disturbance caused by weak measurements to recycle the system for further measurement .here we present our criteria for evaluating claims about the importance of weak measurements for quantum state tomography .the primary tool we utilize is generalized measurement theory , specifically , describing a measurement by a positive - operator - valued measure ( povm ) .a povm assigns a positive operator to every measurable subset of the set of measurement outcomes . 
for countable sets of outcomes ,this means the measurement is described by the countable set of positive operators , the positive operators are then given by the sums for continuous sets of outcomes the positive operator associated with a particular measurable subset is given by the integral these positive operators capture all the statistical properties of a given measurement , in that the probability of obtaining a measurement result within a measurable subset for a particular state is given by the formula that each measurement yields some result is equivalent to the completeness condition , povms are ideal representations of tomographic measurements because they contain all the information relevant for tomography , i.e. , measurement statistics , while removing many irrelevant implementation details .if two wildly different measurement protocols reduce to the same povm , their tomographic performances are .the authors of both schemes we evaluate make claims about the novelty of their approach .these claims seem difficult to substantiate , since no tomographic protocol within the framework of quantum theory falls outside the well - studied set of tomographic protocols employing generalized measurements . to avoid trivially dismissing claims in this way , however , we define a relatively conservative subset of measurements that might be considered `` basic '' and ask if the proposed schemes fall outside of this category .the subset of measurements we choose is composed of randomly chosen one - dimensional orthogonal projective measurements [ hereafter referred to as _ random odops _ ; see fig .[ fig : random - odop - limits](a ) ] .these are the measurements that can be performed using only traditional von neumann measurements , given that the experimenter is allowed to choose randomly the observable he wants to measure .this is quite a restriction on the full set of measurements allowed by quantum mechanics .many interesting measurements , such as symmetric informationally complete povms , like the tetrahedron measurement shown in fig .[ fig : random - odop - limits](b ) , can not be realized in such a way . with odops assumed as basic , however ,if the povm generated by a particular weak - measurement scheme is a random odop , we conclude that weak measurements should not be thought of as an essential ingredient for the scheme .identifying other subsets of povms as `` basic '' might yield other interesting lines of inquiry .for example , when doing tomography on ensembles of atoms , weak collective measurements might be compared with nonadaptive separable projective measurements .users of tomographic schemes are arguably less interested in the novelty of a particular approach than they are in its performance .there is a variety of performance metrics available for state estimates , some of which have operational interpretations relevant for particular applications .given that we have no particular application in mind , we adopt a reasonable figure of merit , haar - invariant average fidelity , which fortuitously is the figure of merit already used to analyze the scheme we consider in sec .[ sec : weak - tomo ] .this is the fidelity , , of the estimated state with the true state , averaged over possible measurement records and further averaged over the unitarily invariant ( maximally uninformed ) prior distribution over pure true states . 
for the case of discrete measurement outcomes, this quantity is written as an obvious problem with this figure of merit is its dependence on the estimator .we want to compare measurement schemes directly , not measurement estimator pairs . to remove this dependencewe should calculate the average fidelity with the optimal estimator for each measurement , expressed as to avoid straw - man arguments , it is also important to compare the performance of a particular tomographic protocol to the optimal protocol , or at least the best known protocol . both proposals we review in this paper are nonadaptive measurements on each copy of the system individually .since there are practical reasons for restricting to this class of measurements , we compare to the optimal protocol subject to this constraint .this brings up an interesting point that can be made before looking at any of the details of the weak - measurement proposals . for our chosen figure of merit ,the optimal individual nonadaptive measurement is a random odop ( specifically the haar - invariant measurement , which samples a measurement basis from a uniform distribution of bases according to the haar measure ) .therefore , weak - measurement schemes can not hope to do better than random odops , and even if they are able to attain optimal performance , weak measurements are clearly not an essential ingredient for attaining that performance .many proposals for weak - measurement tomography are motivated not by efficacy , but rather by a desire to address some foundational aspect of quantum mechanics .this desire offers an explanation for the attention these proposals receive in spite of the disappointing performance we find when they are compared to random odops .there are two prominent claims of foundational significance .the first is that a measurement provides an operational interpretation of wavefunction amplitudes more satisfying than traditional interpretations .this is the motivation behind the direct state tomography of sec .[ sec : dst ] , where the measurement allegedly yields expectation values directly proportional to wavefunction amplitudes rather than their squares .the second claim is that weak measurement finds a clever way to get around the uncertainty disturbance relations in quantum mechanics .the intuition behind using weak measurements in this pursuit is that , since weak measurements minimally disturb the system being probed , they might leave the system available for further use ; the information obtained from a subsequent measurement , together with the information acquired from the preceding weak measurements , might be more information in total than can be obtained with traditional approaches . of course , generalized measurement theory sets limits on the amount of information that can be extracted from a system , suggesting that such a foundational claim is unfounded .we more fully evaluate this claim in sec .[ sec : weak - tomo ] .in and lundeen _ et al . _ propose a measurement technique designed to provide an operational interpretation of wavefunction amplitudes .they make various claims about the measurement , including its ability to make `` the real and imaginary components of the wavefunction appear directly '' on their measurement device , the absence of a requirement for global reconstruction since `` states can be determined locally , point by point , '' and the potential to `` characterize quantum states _ in situ _ without disturbing the process in which they feature . 
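before turning to the specific proposals , a small numerical illustration of the povm bookkeeping used throughout this section may be useful . the python sketch below builds the qubit tetrahedron ( sic ) povm mentioned above , checks completeness , and evaluates the born - rule outcome probabilities for a haar - random pure state ; it is a generic illustration , not code taken from either of the proposals under review .

import numpy as np

# tetrahedron (sic) povm for a qubit: E_k = (1/4)(I + n_k . sigma),
# with the four unit vectors n_k pointing at tetrahedron vertices.
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [0.25 * (I2 + sum(n[i] * sigma[i] for i in range(3))) for n in verts]

# completeness: the povm elements must sum to the identity.
assert np.allclose(sum(povm), I2)

# born-rule probabilities p_k = <psi| E_k |psi> for a haar-random pure state.
rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = z / np.linalg.norm(z)
probs = np.array([np.real(np.conj(psi) @ E @ psi) for E in povm])
print("probabilities:", probs, "sum:", probs.sum())

# any implementation, weak or strong, that realizes these same povm elements
# produces exactly these statistics and hence the same tomographic
# performance for any figure of merit.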
''the protocol is thus often characterized as _ direct state tomography _ ( dst ) .to evaluate these claims , we apply the principles discussed in sec . [ sec : eval - princ ] .lundeen _ et al ._ have outlined procedures for both pure and mixed states .we focus on the pure - state problem for simplicity , although much of what we identify is directly applicable to mixed - state . to construct the povm , we need to describe the measurement in detail .the original proposal for dst of lundeen _ et al . _calls for a continuous meter for performing the weak measurements .as shown by maccone and rusconi , the continuous meter can be replaced by a qubit meter prepared in the positive eigenstate , a replacement we adopt to simplify the analysis . since wavefunction amplitudes are basis - dependent quantities , it is necessary to specify the basis in which we want to reconstruct the wavefunction .we call this the _ reconstruction basis _ and denote it by , where is the dimension of the system we are reconstructing .the meter is coupled to the system via one of a collection of interaction unitaries , where the strength of the interaction is parametrized by . a weak interaction , i.e. , one for which , followed by measuring either or on the meter , effects a weak measurement of the system .in addition , after the interaction , there is a strong ( projective ) measurement directly on the system in the _ conjugate basis _ , which is defined by the protocol for dst of lundeen _ et al . _ ,motivated by thinking in terms of weak values , discards all the data except for the case when the outcome of the projective measurement is .this protocol is depicted as a quantum circuit in fig .[ fig : dst - protocol ] . , each of which corresponds to a reconstruction - basis element . the meter is then measured in either the or basis to obtain information about either the real or imaginary part of the wavefunction amplitude of the selected reconstruction - basis element .this procedure is postselected on obtaining the outcome from the measurement of the system in the conjugate basis .while the postselection is often described as producing an effect on the meter , the circuit makes clear that the measurements can be performed in either order , so it is equally valid to say the measurement of the meter produces an effect on the system . ] for each , the expectation values of and , conditioned on obtaining the outcome from the projective measurement , are given by where . the probability for obtaining outcome is \big)\ , , \end{split}\end{aligned}\ ] ] and we can always choose the unobservable global phase of to make real and positive . with this choice , which we adhere to going forward , provides information about the real part of , and provides information about the imaginary part of .specializing these results to weak measurements gives this is a remarkably simple formula for estimating the state ! there is , however , an important detail that should temper our enthusiasm .contrary to the claim in , this formula does not allow one to reconstruct the wavefunction point - by - point ( amplitude - by - amplitude in this case of a finite - dimensional system ) , because one has no idea of the value of the `` normalization constant '' until _ all _ the wavefunction amplitudes have been measured .this means that while ratios of wavefunction amplitudes can be reconstructed point - by - point , reconstructing the amplitudes themselves requires a global reconstruction . 
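the role of the normalization constant can be seen in a short numerical sketch . below , the conditional meter averages are replaced by idealized quantities proportional to the real and imaginary parts of the amplitudes , as the weak - limit formulas above indicate , with a single unknown proportionality constant ; ratios of amplitudes are then available point by point , but the amplitudes themselves appear only after a global renormalization over all components . the state and the constant used here are arbitrary placeholders .

import numpy as np

rng = np.random.default_rng(1)
d = 4

# a "true" pure state in the reconstruction basis (placeholder example),
# with the global phase fixed so that the sum of amplitudes is real and positive.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
psi *= np.exp(-1j * np.angle(psi.sum()))

# idealized weak-limit data: meter averages proportional to the real and
# imaginary parts of each amplitude, with one unknown overall constant nu.
nu = 0.37
data = nu * psi.real + 1j * nu * psi.imag

# ratios of amplitudes are available "point by point" (nu cancels)...
print("psi_1 / psi_0 from data:", data[1] / data[0], " true:", psi[1] / psi[0])

# ...but recovering the amplitudes themselves requires the global step of
# renormalizing once all components have been measured.
estimate = data / np.linalg.norm(data)
print("max amplitude error after global renormalization:",
      np.max(np.abs(estimate - psi)))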
admittedly , this reconstruction is simpler than commonly used linear - inversion techniques , but it comes at the price of an inherent bias in the estimator , arising from the weak - measurement approximation , as was discussed in .the scheme as it currently stands relies heavily on postselection , a procedure that often discards relevant data . to determine what information is being discarded and whether it is useful, we consider the measurement statistics of and conditioned on an arbitrary outcome of the strong measurement . to do that , we first introduce a unitary operator , diagonal in the reconstruction basis , which cyclically permutes conjugate - basis elements and puts phases on reconstruction - basis elements : as is illustrated in fig .[ fig : postselect ] , postselecting on outcome with input state is equivalent to postselecting on with input state .armed with this realization , we can write reconstruction formulae for all postselection outcomes , this makes it obvious that all the measurement outcomes in the conjugate basis give `` direct '' readings of the wavefunction in the weak - measurement limit .postselection in this case is not only harmful to the performance of the estimator , it is not even necessary for the interpretational claims of .henceforth , we drop the postselection and include all the data produced by the strong measurement .the uselessness of postselection is not a byproduct of the use of a qubit meter . in the continuous - meter case ,the conditional expectation values in the weak - measurement limit are given as weak values weak - value - motivated dst postselects on meter outcome to hold the amplitude constant and thus make the expectation value proportional to the wavefunction .since is only a phase , however , it is again obvious that postselecting on any value of gives a `` direct '' reconstruction of a rephased wavefunction .et al . _ have developed a variation on lundeen s protocol that requires measuring weak values of only one meter observable .this is made possible by keeping data that is discarded in the original postselection process .we now consider whether the weak measurements in dst contribute anything new to tomography .it is already clear from eqs .( [ eqn : sigmay ] ) and ( [ eqn : sigmaz ] ) that for this protocol to provide data that is proportional to amplitudes in the reconstruction basis , the weakness of the interaction is only important for the measurement of . we are after something deeper than this , however , and to get at it , we change perspective on the protocol of fig .[ fig : dst - protocol ] , asking not how postselection on the result of the strong measurement affects the measurement of or , but rather how those measurements change the description of the strong measurement . as is discussed in fig .[ fig : dst - protocol ] , this puts the protocol on a footing that resembles that of the random odops in fig .[ fig : random - odop - limits](a ) .the measurement of , which provides the imaginary - part information , is trivial to analyze , because the analysis can be reduced to drawing more circuits . in fig .[ fig : imaginary - measurement](a ) , the interaction unitary is written in terms of system unitaries that are controlled in the -basis of the qubit . 
the measurement on the meter commutes with the interaction unitary , so using the principle of deferred measurement , we can move this measurement through the controls , which become classical controls that use the results of the measurement .the resulting circuit , depicted in fig .[ fig : imaginary - measurement](b ) , shows that the imaginary part of each wavefunction amplitude can be measured by adding a phase to that amplitude , with the sign of the phase shift determined by a coin flip .this is a particular example of the random odop described by fig .[ fig : random - odop - limits](a ) .we conclude that weak measurements are not an essential ingredient for determining the imaginary parts of the wavefunction amplitudes . measuring the real partsis more interesting , since the measurement does not commute with the interaction unitary .we proceed by finding the kraus operators that describe the post - measurement state of the system .the strong , projective measurement in the conjugate basis has kraus operators , whereas the unitary interaction , followed by the measurement of with outcome , has ( hermitian ) kraus operator where the eigenstates of are .the composite kraus operators , , yield povm elements . for each , these povm elements make up a rank - one povm with outcomes .the povm elements can productively be written as the povm for each does not fit into our framework of random odops , but can be thought of as within a wider framework of random povms .indeed , the neumark extension teaches us how to turn any rank - one povm into an odop in a higher - dimensional hilbert space , where the dimension matches the number of outcomes of the rank - one povm .vallone and dequal have proposed an augmentation of the original dst to obtain a `` direct '' wavefunction measurement without the need for the weak - measurement approximation .the essence of their protocol is to perform an additional measurement on the meter .the statistics of this measurement allow the second - order term in to be eliminated from the real - part calculation , giving a reconstruction formula that is exact for all values of .of course , the claim that the original dst protocol `` directly '' measures the wavefunction is misleading , and directness claims for vallone and dequal s modifications are necessarily more misleading. even ratios of real parts of wavefunction amplitudes no longer can be obtained by ratios of simple expectation values , since these calculations rely on both _ and _ expectation values for different measurement settings .we analyze this additional meter measurement in the same way we analyzed the measurement .the kraus operators corresponding to the meter measurements are the composite kraus operators , , yield povm elements that can be written as it is useful to ponder the form of the povm elements for the and measurements of the dst protocols . for the original dst protocol of fig .[ fig : dst - protocol ] , without postselection , the only equatorial measurement on the meter is of ; the corresponding povm elements , given by eq .( [ eqn : dst - povm ] ) , are nearly measurements in the conjugate basis , except that the -component of the conjugate basis vector is changed in magnitude by an amount that depends on the result of the measurement . for the augmented dst protocol of vallone and dequal , the additional povm elements ( [ eqn : dst - x - povm ] ) , which come from the measurement of on the meter , are quite different depending on the result of the measurement . 
for the result ,the povm element is similar to the povm elements for the measurement of , but with a different modification of the -component of the conjugate vector . for the result , the povm element is simply a measurement in the reconstruction basis ; as we see below , the addition of the measurement in the reconstruction basis has a profound effect on the performance of the augmented dst protocol outside the region of weak measurements , an effect unanticipated by the weak - value motivation .although claims regarding the efficacy of dst are rather nebulous , we consider the negative impact of the weak - measurement limit on tomographic performance . indoing so , we assume for simplicity that the system is a qubit , in some unknown pure state that is specified by polar and azimuthal bloch - sphere angles , and . in this casewe assume that the reconstruction basis is the eigenbasis of ; the conjugate basis is the eigenbasis of .the method we use to evaluate the effect of variations in is taken from the work of de burgh _ et al . _ , which uses the _cramr rao bound _ ( crb ) to establish an asymptotic ( in number of copies ) form of the average fidelity . of eq .( [ eqn : crb ] ) for original dst ( solid black ) and augmented dst ( dotted green for probability for and measurements and probability for a measurement ; dashed red for equal probabilities for all three measurements ) . as the plot makes clear ,the optimal values for are far from the weak - measurement limit .values of for which give exceptionally large crbs , confirming the intuition that weak measurements learn about the true state very slowly .the crb for original dst also grows without bound as approaches , since that measurement strength leads to degenerate kraus operators and a povm that , not informationally complete , consists only of projectors onto and eigenstates of the system .the crb remains finite when the meter measurements are augmented with , since the resultant povm at then includes projectors on the system [ i.e. , projectors in the reconstructions basis ; see eq .( [ eqn : bxnminusm ] ) ] , giving an informationally complete overall . ] in analyzing the two dst protocols , original and augmented , we assume that the two values of are chosen randomly with probability . for the original protocol , we choose the and measurements with probability . for the augmented protocol , we make one of two choices : equal probabilities for the , , and measurements or probabilities of for the measurement and for the and measurements .formally , these assumed probabilities scale the povm elements when all of them are combined into a single overall .the asymptotic form involves the fisher informations , and , for the two bloch - sphere state parameters , calculated from the statistics of whatever measurement we are making on the qubit .the crb already assumes the use of an optimal estimator .when the number of copies , , is large , the average fidelity takes the simple form though we have derived analytic expressions for the fisher informations , it is more illuminating to plot the crb , obtained by numerical integration ( see fig . [fig : fisher - term ] ) . for original dst ,the optimal value of is just beyond , invalidating all qualities of `` directness '' that come from assuming . for the augmented dst of vallone and dequal , the optimal value of moves toward , even further outside the region of weak measurements . 
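the fisher informations entering the bound can be sketched numerically for any qubit povm . the snippet below computes the 2 x 2 fisher matrix for the bloch - angle parameters by finite differences of the outcome probabilities ; the povm used here is a bare projective measurement , standing in for whichever overall povm is being analyzed , and the code illustrates the definition rather than reproducing the calculations behind the figure .

import numpy as np

def bloch_state(theta, phi):
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

# placeholder povm: a projective measurement along z.  replace these
# elements with those of the overall povm under study.
povm = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

def probs(theta, phi):
    psi = bloch_state(theta, phi)
    return np.array([np.real(np.conj(psi) @ E @ psi) for E in povm])

def fisher(theta, phi, h=1e-5):
    # F_ij = sum_m (d_i p_m)(d_j p_m) / p_m, derivatives by finite differences
    p = probs(theta, phi)
    dth = (probs(theta + h, phi) - probs(theta - h, phi)) / (2 * h)
    dph = (probs(theta, phi + h) - probs(theta, phi - h)) / (2 * h)
    grads = np.vstack([dth, dph])
    mask = p > 1e-12            # ignore outcomes with vanishing probability
    return (grads[:, mask] / p[mask]) @ grads[:, mask].T

print(fisher(theta=1.0, phi=0.3))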
in both cases, blows up at ; for weak measurements , is so large that the information gain is glacial .we visualize this asymptotic behavior by estimating the average fidelity over pure states as a function of using the sequential monte carlo technique , for various protocols and values of .figure [ fig : optimal - dst - sim ] plots these results and shows how the average fidelity , for the optimal value of , approaches the asymptotic form ( [ eqn : asympt - avg - fid ] ) as increases .we note that the estimator used in these simulations is the estimator optimized for average fidelity discussed in . if we were to use the reconstruction formula proposed by lundeen __ , the performance would be worse . as a function of the number of system copies for three measurements .the dashed red curve is for augmented dst with equal probabilities for the three meter measurements ; the value is close to the optimal value from fig .[ fig : fisher - term ] .the other two curves are for original dst : the solid black curve is for , which is close to optimal ( this curve nearly coincides with the dashed red curve for augmented dst ) ; the dashed - dotted purple curve is for a small value , where the weak - measurement approximation is reasonable .the three dotted curves give the corresponding asymptotic behavior .the two weak - measurement curves illustrate the glacial information acquisition when weak measurements are used ; the dashed - dotted curve hasnt begun to approach the dotted asymptotic form for . ]our conclusions are the following .first , postselection contributes nothing to .its use comes from attention to weak values , but postselection is actually a negative for tomography because it discards data that are just as cogent as the data that are retained in the weak - value scenario .second , weak measurements in this context add very little to a tomographic framework based on random odops .finally , the `` direct '' in dst is a misnomer because the protocol does not provide point - by - point reconstruction of the wavefunction .the inability to provide point - by - point reconstruction is a symptom of a general difficulty .any procedure , classical or quantum , for detecting a complex amplitude when only absolute squares of amplitudes are measurable involves interference between two amplitudes , say , and , so that some observed quantity involves a product of two amplitudes , say , . if one regards as `` known '' and chooses it to be real , then can be said to be observed directly .this is the way amplitudes and phases of classical fields are determined using interferometry and square - law detectors . of course, quantum amplitudes are not classical fields .one loses the ability to say that one amplitude is known and objective , with the other to be determined relative to the known amplitude . indeed , if one starts from the tomographic premise that nothing is known and everything is to be estimated from measurement statistics , then can not be regarded as `` known . 
''dst fits into this description , with the sum of amplitudes , of eq .( [ eqn : upsilon ] ) , made real by convention , playing the role of .the achievement of dst is that this single quantity is the only `` known '' quantity needed to construct all the amplitudes from measurement statistics .single quantity or not , however , must be determined from the entire tomographic procedure before any of the amplitudes can be estimated .the second scheme we consider is a proposal for qubit tomography by das and arvind .this protocol was advertised as opening up `` new ways of extracting information from quantum ensembles '' and outperforming , in terms of fidelity , tomography performed using projective measurements on the system .the optimality claim can not be true , of course , since a random odop based on the haar invariant measure for selecting the odop basis is optimal when average fidelity is the figure of merit , but the novelty of the information extraction remains to be evaluated . and measurements .the circuit makes clear that there is nothing important in the order the measurements are performed after the interactions have taken place , so we consider the protocol as a single ancilla - coupled measurement . ]the weak measurements in this proposal are measurements of pauli components of the qubit .these measurements are performed by coupling the qubit system via an interaction unitary , to a continuous meter , which has position and momentum and is prepared in the gaussian state {\frac{\epsilon}{2\pi}}\int_{-\infty}^{\infty}dq\,e^{-\epsilon q^2/4}{\left\vert{q}\right\rangle}\ , .\label{eqn : weak - meas - gaussian}\end{aligned}\ ] ] the position of the meter is measured to complete the weak measurement .the weakness of the measurement is parametrized by . the das - arvind protocol involves weakly measuring the and pauli components and then performing a projective measurement of .we depict this protocol as a circuit in fig .[ fig : weak - protocol ] .das and arvind view this protocol as providing more information than is available from the projective measurement because the weak measurements extract a little extra information about the and pauli components without appreciably disturbing the state of the system before it is slammed by the projective measurement .again , we turn the tables on this point of view , with its notion of a little information flowing out to the two meters , to a perspective akin to that of the random odop of fig . [fig : random - odop - limits](a ) .we ask how the weak measurements modify the description of the final projective measurement . for this purpose, we again need kraus operators to calculate the povm of the overall .the kraus operators for the projective measurement are , and the ( hermitian ) kraus operator for a weak measurement with outcome on the meter is {\frac{\epsilon}{2\pi}}\exp\!\left(-\frac{\epsilon(q^2 + 1)}{4}\right)\\ & \quad\times\big({\mathds{1}}\cosh(\epsilon q/2)+\sigma_j\sinh(\epsilonq/2)\big ) \sqrt{dq}\ , .\end{split } \label{eqn : weak - kraus - ops}\end{aligned}\ ] ] the kraus operators for the whole measurement procedure are .from these come the infinitesimal povm elements for outcomes , , and : these povm elements are clearly rank - one . 
using the pauli algebra , we can bring the povm elements into the explicit form , where we have introduced a probability density and unit vectors , we note that and . this means that the overall povm is made up of a convex combination of equally weighted pairs of orthogonal projectors and is therefore a random odop . from this perspective , the weak measurements are a mechanism for generating a particular distribution from which different projective measurements are sampled ; i.e. , they are a particular way of generating a distribution in fig . [ fig : random - odop - limits ] . several of these distributions are plotted in fig . [ fig : bloch - plots ] . as a function of the number of system copies for three measurements : das and arvind's measurement protocol ( dotted - dashed blue ) with ; mub consisting of pauli , , and measurements ( solid black ) ; random odop consisting of projective measurements sampled from the haar - uniform distribution ( dashed red ) . the dotted lines , and , are the crbs defined by eq . ( [ eqn : crb ] ) for the optimal generalized tomographic protocol and mub measurements , respectively . ] it is interesting to note that the value of that das and arvind identified as optimal ( about ) produces a distribution that is nearly uniform over the bloch sphere . this matches our intuition when thinking of the measurement as a random odop , since the optimal random odop samples from the uniform distribution . to visualize the performance of this protocol , we again use sequential monte carlo simulations of the average fidelity . das and arvind compare their protocol to a measurement of , , and , whose eigenstates are _ mutually unbiased bases _ ( mub ) . in fig . [ fig : weak - mub - haar - sim ] , we compare das and arvind's protocol for to a mub measurement and to the optimal projective - measurement - based tomography scheme , i.e. , the haar - uniform random odop . we do not discuss the process of binning the position - measurement results that das and arvind engage in , since such a process produces a rank-2 povm that is equivalent to sampling from a discrete distribution over projective measurements and then adding noise , a practice that necessarily degrades tomographic performance . we conclude that the protocol does not offer anything beyond that offered by random odops and that its claim of extracting information about the system without disturbance is not supported by our analysis .
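the reduction just described can be checked numerically from the kraus operators quoted above . the sketch below discretizes the meter outcome q on a grid , builds operators proportional to cosh(\epsilon q/2) 1 + sinh(\epsilon q/2) \sigma_j with the gaussian prefactor , composes the two weak measurements with the final projective measurement , and verifies that each weak measurement resolves the identity and that the composite povm elements are rank one ; the ordering of the two weak measurements and the grid parameters are assumptions made only for this illustration .

import numpy as np

eps = 0.5                                   # measurement-strength parameter
qs, dq = np.linspace(-12.0, 12.0, 2001, retstep=True)

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
proj_z = [np.diag([1.0, 0.0]).astype(complex),   # final projective measurement
          np.diag([0.0, 1.0]).astype(complex)]

def kraus(q, sig):
    # weak-measurement kraus operator density for meter outcome q
    pref = (eps / (2.0 * np.pi)) ** 0.25 * np.exp(-eps * (q**2 + 1.0) / 4.0)
    return pref * (np.cosh(eps * q / 2.0) * I2 + np.sinh(eps * q / 2.0) * sig)

# completeness of each weak measurement: integral of K^dag K dq equals identity.
for sig in (sx, sy):
    acc = sum(kraus(q, sig).conj().T @ kraus(q, sig) for q in qs) * dq
    print("deviation from identity:", np.max(np.abs(acc - I2)))

# composite povm elements K1^dag K2^dag P K2 K1 are rank one (proportional to
# projectors), so the overall measurement acts as a randomly sampled odop.
max_det = 0.0
for q1 in qs[::40]:
    for q2 in qs[::40]:
        K = kraus(q2, sy) @ kraus(q1, sx)    # assumed order: sigma_x weakly measured first
        for P in proj_z:
            E = K.conj().T @ P @ K
            max_det = max(max_det, abs(np.linalg.det(E)))
print("largest |det E| over the grid (rank one => 0):", max_det)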
in particular , when operated optimally , it is essentially the same as the strong projective measurements of a haar - uniform random .it is true that the presence of the and measurements provides more information than a projective measurement by itself , but this is not because the and measurements extract information without disturbing the system .our analysis of weak - measurement tomographic schemes gives us guidance for future forays into tomography .povms contain the necessary and sufficient information for comparing the performance of tomographic techniques .specific realizations of a povm might provide pleasing narratives , but these narratives are irrelevant for calculating figures of merit .optimal povms for many figures of merit and technical limitations are known .a new tomographic proposal should identify restrictions on the set of available povms that come about from practical considerations and compare itself to the best known povm in that set .the question of the optimality of das and arvind s tomographic scheme is easily answered by identifying what povms arise from `` projective measurement - based tomography '' and realizing these povms are optimal even in the generalized nonadaptive , individual - measurement scenario .claims about novel properties of a state - reconstruction technique should be evaluated as a comparison with a motivated restriction on the set of povms . the false dichotomy between `` tomographic methods '' and whatever new method is being proposed obfuscates that all new methods implement a povm and that reconstructing a state from povm statistics is nothing but tomography .our analysis shows that even the relatively bland and conceptually simple set of random odops captures most of the behavior exhibited by more exotic protocols .it is appropriate to move beyond the minimal , platform - independent povm description when considering ease of implementation or when trying to provide a helpful conceptual framework .nonetheless , a pleasing conceptual framework should not be confused with an optimal experimental arrangement .if the experimental setup described by lundeen _ et al . _ happens to be the easiest to implement in one s lab , the state should still be reconstructed using techniques developed in the general povm setting rather than the perturbative reconstruction formula presented in work on dst , regardless of how attractive one finds the wavefunction - amplitude analogy .we thank josh combes for helpful discussions .this work was support by national science foundation grant no .cf was also supported by the canadian government through the nserc pdf program , the iarpa mqco program , the arc via equs project no .ce11001013 , and by the u.s .army research office grant nos .w911nf-14 - 1 - 0098 and w911nf-14 - 1 - 0103 .e. bagan , m. a. ballester , r. d. gill , a. monras , and r. munoz - tapia , _ optimal full estimation of qubit mixed states _ , http://dx.doi.org/10.1103/physreva.73.032301[physical review a * 73 * , 032301 ( 2006 ) ] .d. gross , y .- k .liu , s. t. flammia , s. becker , and j. eisert , _ quantum state tomography via compressed sensing _ , http://dx.doi.org/10.1103/physrevlett.105.150401[physical review letters * 105 * , 150401 ( 2010 ) ] .a. barchielli , l. lanz , and g. m. prosperi , _ a model for the macroscopic description and continual observations in quantum mechanics _ , http://dx.doi.org/10.1007/bf02894935[il nuovo cimento b * 72 * , 79 ( 1982 ) ] .i. l. chuang , n. gershenfeld , m. g. kubinec , and d. w. 
leung , _ bulk quantum computation with nuclear magnetic resonance : theory and experiment _, link:[proceedings of the royal society a : mathematical , physical and engineering sciences * 454 * , 447 ( 1998 ) ] .g. a. smith , a. silberfarb , i. h. deutsch , and p. s. jessen , _ efficient quantum - state estimation by continuous weak measurement and dynamical control _ ,http://dx.doi.org/10.1103/physrevlett.97.180403[physical review letters * 97 * , 180403 ( 2006 ) ] .g. g. gillett , r. b. dalton , b. p. lanyon , m. p. almeida , m. barbieri , g. j. pryde , j. l. obrien , k. j. resch , s. d. bartlett , and a. g. white , _ experimental feedback control of quantum systems using weak measurements _ ,http://dx.doi.org/10.1103/physrevlett.104.080503[physical review letters * 104 * , 080503 ( 2010 ) ] . c. sayrin , i. dotsenko , x. zhou , b. peaudecerf , t. rybarczyk , s. gleyzes , p. rouchon , m. mirrahimi , h. amini , m. brune , j .-raimond , and s. haroche , _ real - time quantum feedback prepares and stabilizes photon number states _ , http://dx.doi.org/10.1038/nature10376[nature * 477 * , 73 ( 2011 ) ] .r. vijay , c. macklin , d. h. slichter , s. j. weber , k. w. murch , r. naik , a. n. korotkov , and i. siddiqi , _ stabilizing rabi oscillations in a superconducting qubit using quantum feedback _ , http://dx.doi.org/10.1038/nature11505[nature * 490 * , 77 ( 2012 ) ] .campagne - ibarcq , e. flurin , n. roch , d. darson , p. morfin , m. mirrahimi , m. h. devoret , f. mallet , and b. huard , _ persistent control of a superconducting qubit by stroboscopic measurement feedback _ , http://dx.doi.org/10.1103/physrevx.3.021008[physical review x * 3 * , 021008 ( 2013 ) ] . r. l. cook , c. a. riofrio , and i. h. deutsch , _ single - shot quantum state estimation via a continuous measurement in the strong backaction regime _ , http://dx.doi.org/10.1103/physreva.90.032113[physical review a * 90 * , 032113 ( 2014 ) ] .y. aharonov , d. z. albert , and l. vaidman , _ how the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100 _ , http://dx.doi.org/10.1103/physrevlett.60.1351[physical review letters * 60 * , 1351 ( 1988 ) ] .g. c. knee , g. a. d. briggs , s. c. benjamin , and e. m. gauger , _ quantum sensors based on weak - value amplification can not overcome decoherence _ , http://dx.doi.org/10.1103/physreva.87.012115[phys .a * 87 * , 012115 ( 2013 ) ] .c. ferrie and j. combes , _ weak value amplification is suboptimal for estimation and detection _ , http://dx.doi.org/10.1103/physrevlett.112.040406[phys .rev . lett . * 112 * , 040406 ( 2014 ) ] ; in particular , see the supplementary material .j. dressel , m. malik , f. m. miatto , a. n. jordan , and r. w. boyd , _ colloquium : understanding quantum weak values : basics and applications _ , http://dx.doi.org/10.1103/revmodphys.86.307[reviews of modern physics * 86 * , 307 ( 2014 ) ]. a. n. jordan , j. martinez - rincon , and j. c. howell , _ technical advantages for weak - value amplification : when less is more _ , http://dx.doi.org/10.1103/physrevx.4.011031[physical review x * 4 * , 011031 ( 2014 ) ] .j. lee and i. tsutsui , _ merit of amplification by weak measurement in view of measurement uncertainty _ ,http://dx.doi.org/10.1007/s40509-014-0002-x[quantum studies : mathematics and foundations , * 1 * , 65 ( 2014 ) ] . j. s. lundeen and c. 
bamber , _procedure for direct measurement of general quantum states using weak measurement _, http://dx.doi.org/10.1103/physrevlett.108.070402[physical review letters * 108 * , 070402 ( 2012 ) ] . z. shi , m. mirhosseini , j. margiewicz , m. malik , f. rivera , z. zhu , and r. w. boyd , _ scan - free direct measurement of an extremely high - dimensional photonic state _ , http://dx.doi.org/10.1364/optica.2.000388[optica * 2 * , 388392 ( 2015 ) ] in the long tradition of institutions abandoning a name in favor of initials , to divorce from some original product or purpose , we recommend the use of dst in the hope that the `` direct '' can be forgotten .
the use of weak measurements for performing quantum tomography is enjoying increased attention due to several recent proposals . the advertised merits of using weak measurements in this context are varied , but are generally represented by novelty , increased efficacy , and foundational significance . we critically evaluate two proposals that make such claims and find that weak measurements are not an essential ingredient for most of their advertised features .
one of the remote probing techniques for the ionosphere is the method of incoherent scatter of radio waves . the method is based on the scattering of radio waves from ionospheric plasma dielectric permittivity irregularities [ _ evans _ , 1969 ] . furthermore , two different experimental configurations are involved : monostatic ( where the receive and transmit antennas are combined ) and bistatic ( where these antennas are spaced ) . in actual practice , it is customary to use the monostatic configuration . ionospheric plasma parameters ( ion composition , drift velocity , electron and ion temperatures , and electron density ) in this case are determined from the scattered signal received after completion of the radiated pulse . the spectral power of the received signal , averaged over sounding runs ( realizations ) , is related ( assuming that such an averaging is equivalent to statistical averaging ) to the mean spectral density of dielectric permittivity irregularities by the radar equation [ _ tatarsky _ , 1969 ] . the connection of the dielectric permittivity irregularities spectral density with the mean statistical parameters of the medium is usually determined in terms of kinetic theory [ _ clemow and dougherty _ , 1969 ; _ sheffield _ , 1975 ; _ kofman _ , 1997 ] . the location and size of the ionospheric region that makes a contribution to the scattered signal ( the sounding volume ) are determined by the antenna beam shape , the sounding radio pulse , and the time window of spectral processing [ _ suni et al . _ , 1989 ] . the shape of the sounding volume also determines the method's spectral resolution , i.e. , the accuracy to which the mean spectral density of dielectric permittivity is determined ( which , in turn , affects the determination accuracy of the macroscopic ionospheric parameters : electron and ion temperatures , the drift velocity , and electron density ) . the number of realizations over which the received signal spectral power is averaged determines the method's time resolution , i.e. , its ability to keep track ( based on measurements ) of fast changes of the macroscopic parameters of ionospheric plasma . currently most incoherent scatter radars have accumulated extensive sets of individual realizations of the scattered signal ( private communications of p. erickson ( millstone hill ) , v. lysenko ( kharkov is radar ) , and g. wannberg ( eiscat ) ) . therefore , attempts are made to analyze the realizations with methods that differ from the standard averaging over sounding runs . basically , these methods imply looking for small scatterers making the main contribution to the scattered signal . this approach is well suited for analyzing signals scattered from meteors and their traces [ _ pellinen - wannberg _ , 1998 ] ; however , it is insufficiently substantiated for describing the scattering in the ionosphere .
in this work we use experimental data obtained with the irkutsk incoherent scatter radar . the radar is located at , has a sounding frequency of 152 - 160 mhz , and a peak power of 3 mw . the high signal - to - noise ratio during the experiments under investigation ( s / n > 10 ) allows us to neglect noise effects when analyzing the received signal . the technique of incoherent scatter signal processing at the irkutsk is radar is as follows . for each single realization of the received signal we calculate the spectrum in a time window whose width is equal to the sounding signal duration and whose delay corresponds to the radar range of the sounding volume under investigation . the sounding signal used in this experiment is a radio pulse with a duration of 800 μs . the pulse repetition frequency is approximately 25 hz . averaging over the 1000 realizations corresponds to a dispersion of the averaged spectrum relative to its mathematical expectation . the reason for using such a simple pulse is to investigate the fine structure of the single ( unaveraged ) spectra in this simplest case . figure [ figone ] exemplifies the mean spectral power of the scattered signal and its separate realizations , based on the data from the irkutsk incoherent scatter radar . it is evident from the figure that the spectral power of the scattered signal in an individual realization ( figure [ figone](b - d ) ) differs drastically from that averaged over realizations ( figure [ figone](a ) ) ; therefore , existing models of the incoherently scattered signal , based on averaging over sounding runs , are inapplicable for its interpretation . for that reason , the development of new models of the scattered signal for analyzing its separate realizations without averaging is important from both the theoretical and practical standpoints . sometimes it is useful to assume that the incoherent scatter signal is a random gaussian one [ for example , _ farley _ , 1969 ; _ zhou _ , 1999 ] . however , it is well known that the received signal is a deterministic function of the ionospheric dielectric permittivity and , to first approximation , is fully determined by the born formula ( in one of its forms [ _ ishimaru _ , 1978 ; _ berngardt and potekhin _ , 2000 ] ) ; this relation could be called a radar equation for signals [ _ berngardt and potekhin _ , 2000 ] . the dielectric permittivity irregularities could also be treated as random functions , but they are deterministic functionals of some other functions ( for a collisionless unmagnetized plasma with a single ion species those functions are the phase densities of the ions and electrons as functions of velocity , location and time , the ion composition , and the temperatures of the ions and electrons ; this functional dependence is determined by the landau solution [ _ landau _ , 1946 ] ) . if one could determine all these unknown functions , the received signal shape in a single realization would be fully determined and could be analyzed without using any statistical methods . such an approach , for example , is used in the radioacoustic technique of atmosphere sounding , where the dielectric permittivity irregularities ( by which the radio signal is scattered ) are generated by an acoustic wave [ _ kalistratova and kon _ , 1985 ] . the statistical properties of the single realizations are shown in figure [ figstat ] .
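the processing chain just described can be summarized in a short sketch : for each sounding run the complex samples are windowed at the delay of the chosen range gate , a spectrum is computed for that single realization , and the average spectral power is accumulated over runs . the sampling rate , window delay , and the synthetic input used below are placeholders ; the sketch only illustrates the single - realization versus averaged processing , not the actual radar software .

import numpy as np

fs = 250e3                     # complex sampling rate, hz (placeholder)
pulse_len = 800e-6             # sounding pulse duration quoted in the text
n_win = int(pulse_len * fs)    # spectral-processing window length, samples
delay = 3000                   # window start = delay of the range gate (placeholder)
n_runs = 1000                  # number of realizations averaged in the text

rng = np.random.default_rng(0)

def one_realization():
    """synthetic stand-in for one received sounding run (complex voltage)."""
    return rng.normal(size=8192) + 1j * rng.normal(size=8192)

freqs = np.fft.fftshift(np.fft.fftfreq(n_win, d=1.0 / fs))
avg_power = np.zeros(n_win)

for _ in range(n_runs):
    x = one_realization()[delay:delay + n_win]   # time window at the range gate
    spec = np.fft.fftshift(np.fft.fft(x))        # single-realization spectrum
    single_power = np.abs(spec) ** 2             # the fine structure lives here
    avg_power += single_power / n_runs           # smooth averaged spectrum

# avg_power plays the role of the mean spectral power (fig. [figone](a)),
# while single_power from any one run shows the peaked, unaveraged structure.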
from this figure it becomes clear that the unaveraged spectrum has a fine structure - it consists of a number of peaks approximately 1.5 khz wide (this width depends only slightly on frequency), which can be characterized by the peak amplitude (the amplitude at the maximum of the peak) and the peak appearance (the number of realizations in which a peak maximum occurs at the given frequency); the distributions of these properties are not gaussian but have a double-peaked structure and are located in the same band as the incoherent scatter average spectral power. this fact allows us to suppose that not only the average spectral power of the received signal depends on ionospheric parameters, but the fine structure of the non-averaged spectra does too. at first, it is necessary to understand qualitatively what information one can obtain from a single realization of the is signal. it is well known that after any statistical processing of a function, part of the information is lost irreversibly (for example, when one calculates the first n statistical moments, all the remaining moments, starting with the (n+1)-th, are still unknown). that is why, if the statistical characteristics of the realizations (mean spectral power or correlation function) depend on the ion and electron temperatures and the ion composition, then a single realization must depend on all those parameters and on some new additional parameters. it is clear that determining the temperatures and ion composition from averaged signal parameters is much easier than from a single realization (because the latter involves additional parameters), so we can use the values obtained from the mean spectral power, with the necessary spatial and spectral resolution, using different techniques, for example alternating codes [_lehtinen_, 1986]. the new additional parameters, however, can be determined from single realizations only. the aim of this paper is to find the functional dependence of the single-realization spectrum on all the parameters, including the well-known ones (temperatures and ion composition) and the new ones that describe the properties of single-realization spectra.
for this purpose we will use for analysis only signals with a high signal-to-noise ratio (more than 10), because in this case noise effects can be neglected and the received signal can be treated as the is signal alone, without any noise present. to analyze the individual realizations of the scattered signal, it is necessary to have a convenient expression relating the spectrum of the scattered signal to the space-time spectrum of dielectric permittivity irregularities without averaging over realizations. such an expression for a monostatic experimental configuration was obtained and analyzed in [_berngardt and potekhin_, 2000]. it holds true in the far zone of the receive-transmit antenna and, within constant factors (unimportant for the subsequent discussion), is here - is the space-time spectrum of dielectric permittivity irregularities; - is the narrow-band weight function; - are, respectively, the sounder signal envelope and the time window of spectral processing; - is the antenna factor, which is the product of the antenna patterns on reception and transmission; - is a unit vector in a given direction; - is the wave number of the sounding wave; - is the velocity of light. suppose that the sounding signal and the receiving window of spectral processing are located in time near the moments and respectively, and that their carriers do not intersect (this is one of the conditions under which the radar equation ([eq:rlu]) is obtained [_berngardt and potekhin_, 2000]). in this case the carrier of the weight function is located near the . by going over in equation ([eq:rlu]) to spectra calculated relative to the center of the weight volume (to remove the oscillating multipliers under the integral), we obtain (neglecting an unessential multiplier): \[\begin{array}{l} \int h_{1}(\omega -\nu , k-2k_{0}-\frac{\nu }{2c}) \frac{g(-\widehat{\textrm{k}})}{k} \\ \widetilde{\epsilon }(\nu , \overrightarrow{k};t_{1}-t_{0}/2,-\widehat{k}t_{0}c/2)\,d\nu \,d\overrightarrow{k} \end{array}.\] where - is the low-oscillating part of , corresponding to its calculation relative to the center of the weight volume; and - is the space-time spectrum of dielectric permittivity irregularities calculated relative to the point .
in accordance with [ _ sheffield_,1975 ; _ clemmow and dougherty_,1969 ; and _ akhieszer et al._,1974 ] , assume that the spectrum of small - scale dielectric permittivity irregularities is determined by the landau solution [ _ landau_,1949 ] .then the low - frequency ( ion - acoustic ) part of the irregularities spectrum in a statistically homogeneous , unmagnetized , collisionless ionospheric plasma with one sort of ions is determined by plasma macroscopic parameters ( electron and ion temperatures , ion composition , and drift velocity ) , and by unknown conditions in the moment related to which this spectrum is calculated - the ions number phase density in a six - dimensional phase space of velocities and positions of particles .it is known that the dielectric permittivity irregularities spectrum at large wave numbers of the sounding wave ( where is plasma frequency ) , is proportional to the electron density irregularities spectrum [ _ landau and lifshitz_,1982 , par.78 ] : which is given by the expression ( for example , [ _ sheffield _ , 1975 , sect.6 ] ) : where - is longitudinal dielectric permittivity ; wave number should be small enought to wave length be smaller than debye length ( solpiter approximation ) .most part of is radars have the sounding frequencies(50 - 1000 mhz ) within these limitations . - are equilibrium distribution functions of the electrons and ions velocity and their densities ; - are the mass and charges of electrons and ions , respectively ; - the ions number phase density in a six - dimensional phase space of velocities and positions of particles ( the ions number phase density , inpd , at ) , with the summation made over all ions . generally equilibrium distribution functions are taken to be maxwellian , with the temperatures and for electrons and ions , respectively , and in the absence of a drift they are where stands for the thermal velocities of electrons and ions , respectively .then the functions have the well - known analytical expression , for example [ _ sheffield_,1975 ] : where the physical meaning of the expression ( [ eq : irregularities ] ) is as follows : the position and velocity of each ion at the moment are determined by the inpd , and the dielectric permittivity irregularities are determined by ion - acoustic oscillations of plasma under the action of such initial conditions .traditionally , the incoherent scattered signal is processed in the following way .a set of the scattered signal spectra ( [ eq : rlu ] ) is used to obtain its spectral power averaged over realizations . by assuming that an averaging over the realizations is equivalent to a statistical averaging , and also by assuming a maxwellian distribution of the inpd , one can obtain the following expression for the mean spectral power of the scattered signal [ _ suni et al._,1989 ] : where is the smearing function determined by the spectrum of the sounder signal and the spectral processing time window ; and is averaging over realizations .the frequency dependence of the scattered signal mean spectral power ( [ eq : average_rlu ] ) under usual ionospheric conditions has a typical two - hump form ( figure [ figone](a ) ) [ _ evans_,1969 ] . from the scattered signalmean spectral power ( [ eq : average_rlu ] ) it is possible to determine the electron and ion temperatures and the drift velocity involved in a familiar way in the functions and [ _ sheffield_,1975 ] . 
in single realizations, however, the scattered signal spectral power differs essentially from the mean spectral power. figure [figone] presents the scattered signal spectral power in three consecutive realizations (figure [figone](a-c)), the spectral power averaged over 1000 realizations (figure [figone](d)), and the spectral power of the sounder signal envelope (figure [figone](e)). from figure [figone] it is evident that the non-averaged spectral power of the incoherent scatter signal (figure [figone](a-c)) has a typical peaked form: the width of the peaks is larger than that of the sounder signal spectrum, and the peaks themselves are concentrated in the band of the mean signal spectral power. in the case of averaging over realizations, such a peaked structure transforms into the typical smooth two-hump structure (figure [figone](d)). to obtain a model of the incoherent scatter signal we substitute the landau expression for the dielectric permittivity irregularities ([eq:irregularities]) into the expression for the scattered signal spectrum ([eq:rlu_mod]). using in ([eq:irregularities]) the spatial spectrum of the ions number phase density, and upon interchanging the order of integration, we obtain the one-realization model for the scattered signal spectrum: here is the unknown function that we want to determine from experiment; it has a form similar to the radon transform of the function . the kinetic function (shown in figure [figtwo]) is determined by the macroscopic parameters of the ionospheric plasma and ; these parameters can be determined, for example, from measurements of the mean spectral power of the received signal ([eq:average_rlu]). thus the kernel is completely determined by the sounder signal, the receiving window, and the macroscopic characteristics of the ionospheric plasma. the expression ([eq:signal_model]) clearly shows the meaning of the kernel: it determines the selective properties of the model, i.e. the possibility of determining the unknown function from the measured . hence it can be termed the weight volume in the space , or the ambiguity function. since the function is a narrow-band one, with its carrier concentrated near zero, the ambiguity function also has a limited carrier near . the possibility of determining the dependence of the unknown function on the wave vector directions is determined by the product of the kinetic function and the antenna beam. according to the resulting model ([eq:signal_model]), the form of a scattered signal single spectrum is determined both by a deterministic component (the weight volume) and by a random (i.e. dependent on time in an unknown way) component. the random component is the function , determined by the packet of spatial harmonics of the inpd with wave numbers concentrated near and calculated relative to the moment . the moment is determined by the moments of the sounding signal transmission and of the spectral processing receiving window, and corresponds to the moment midway between them. the weight volume determines the parameters of this wave packet - the region of wave vectors and velocities that makes the main contribution to the scattered signal at a particular frequency.
in this experiment the sounder signal and the receiving window are selected so that their spectra are sufficiently narrow-band (1 khz) in comparison with the functions and , in order to improve the accuracy of their determination from the experimental results. therefore, the weight function can also be considered narrow-band in both arguments as compared with the kinetic function in the corresponding arguments. in this case we can approximate the weight volume as: the function is concentrated near [_berngardt and potekhin_, 2000]. the width of this function in the arguments is , , where , are the bandwidths of the sounder signal spectrum and of the spectral processing window, respectively. in this experiment on ionospheric sounding by the incoherent scatter method, they are of the order . the function at a fixed for a typical ionospheric plasma with ions and is presented in figure [figtwo]. the figure shows that the function varies smoothly on the characteristic size of the weight function carrier in , which in this case is of the order of (corresponding to sounding by a pulsed radio signal with a duration of 1 millisecond). assuming that the envelope of the sounder pulse and the receiving window have an identical gaussian-like spectrum: , we obtain the function in the form upon substituting ([eq:model1]) into ([eq:kern_model_modified]) and performing rather unwieldy calculations, similar to [_landau_, 1946], we obtain the following expression for the scattered signal spectrum ([eq:signal_model]): where the selective properties of in the longitudinal component of the velocity are determined by the first factor in ([eq:modified_kernel]). a maximum in at a fixed is determined by the condition which, as a consequence of the properties of the exponential and the functions , corresponds to the doppler frequency shift condition in the case of scattering from a single particle: the width of the maximum in (which determines the region of velocities making the main contribution to the scattered signal at fixed and ) can be estimated from the condition to be the selective properties of in wave numbers are determined by the second factor in ([eq:modified_kernel]). a maximum in , at a fixed , is determined by the condition which corresponds to the condition (analogous to the wulff-bragg condition for scattering from a nonstationary spatial harmonic): the width of the maximum in (which determines the region of wave numbers making the main contribution to the scattered signal at a fixed ) can be estimated from the condition to be the function determines the selective properties of in the direction of the wave vectors and the maximum possible width of the received signal spectrum, determined by the kinetic function and the antenna factor. from ([eq:simpl_model]) it follows that the fine structure of the scattered signal spectrum is determined only by the properties of the function ([eq:f_integral]) and can be related to the inpd only within the framework of additional assumptions about the structure of . formally, determining the function requires measuring the scattered signal for different wave numbers of the sounder signal simultaneously. to carry out a qualitative comparison with experimental spectra of incoherently scattered signals we use a spectral processing window that repeats the sounder signal shape. their spectra are approximated by gaussian-like spectra with a width equal to their actual spectral width.
according to ([eq:simpl_model]), the spectrum of the received signal is defined by the unknown function ([eq:f_integral]) (convolved with a kernel ). for comparison with experimental data, we give the following simple model of the function . * simple model * assume that involves only one peak in the range of longitudinal wave vectors and velocities of interest: this model corresponds to a medium containing an isolated spatial harmonic at the wave vector and with longitudinal velocity , whose amplitude is significantly larger than the amplitudes of the spatial harmonics close to it. the spectrum of the received signal will then also involve only one peak: and the form and width of this peak will be defined by the product . from the position of the peak in the spectrum, one can determine the unknown wave number of the spatial harmonic that makes the main contribution to the scattered signal ([eq:k_w]) and the longitudinal velocity ([eq:drift_w]) corresponding to the moment . this model gives a signal spectrum with only one peak. according to the model, the observed presence of several peaks in the real spectrum corresponds to the existence of not just one feature of the kind described by the model but a number of them. let us show that after averaging this model gives the well-known expression for the is average spectral power. following traditional approaches, let us suppose that . summing over the different realizations produces the function: \[\begin{array}{l} \int \left| v_{3}(\omega , \widehat{k}) \right| ^{2} d\widehat{k} \sum_{j=1}^{n} f_{i0}(v_{||,j})\\ \left| v_{1}(\omega -2k_{0}v_{||,j})\right| ^{2} \int \left| v_{2}(\omega -(k-2k_{0})c)\right| ^{2} dk\\ = const \left| \int v_{3}(\omega , \widehat{k})d\widehat{k}\right| ^{2}\int f_{i0}(v)\left| v_{1}(\omega -2k_{0}v)\right| ^{2}dv\\ \sim \left| \frac{g_{e}(\omega , 2k_{0})g(-\widehat{k_{0}})}{8k_{0}^{3}\epsilon _{||}(\omega , 2k_{0})}\right| ^{2}\int f_{i0}(v)\left| v_{1}(\omega -2k_{0}v)\right| ^{2}dv \end{array}\] from ([eq:mod1_rezult]) it becomes clear that the average received signal spectral power is determined as the product of the kinetic function and the maxwellian ion distribution convolved with a narrow-band function (which is defined by the spectra of the sounding signal and of the spectral processing receiving window). qualitatively this solution ([eq:mod1_rezult]) is close to the average spectral power obtained in the traditional way ([eq:average_rlu]). a numerical simulation has been made for a model containing a number of peaks: where - is some constant; ; - is a function with a uniform distribution of values (we have used a random one, having a fixed value for fixed ). this expression for has a normal distribution over , a uniform distribution over , and statistical independence of the values for different (or when the delay between them exceeds the interval over which changes slowly). the single spectra and their statistical properties obtained by substituting this simple model ([eq:comp_model]) into equation ([eq:signal_model]) are shown in figure [fig4]. as one can see, single realizations of the spectral power (e-g) have the same structure as the experimental ones (figure [figone]), a similar peak width (b) (figure [figstat],b), and the same relation between the mean peak appearance (c) and the average spectral power (d) (figure [figstat],c,d). this allows us to suppose that the model of the scattered signal
single spectrum ([eq:signal_model]) can be used to describe the signal properties, and the simplified model ([eq:comp_model]) can qualitatively describe the behavior of the ion number phase density. in this paper we have suggested an interpretation of separate realizations of incoherently scattered signal spectra. it is based on the radar equation [_berngardt and potekhin_, 2000] and the kinetic theory of ion-acoustic oscillations of a statistically homogeneous, unmagnetized, collisionless ionospheric plasma with one sort of ions [_landau_, 1946]. in accordance with the proposed model ([eq:signal_model]), the main contribution to the scattering is made by plasma waves caused by spatial harmonics of the ions number phase density with wave numbers of the order of twice the wave number of the sounder signal, for . it has been shown that the form of the received signal spectrum is related to the inpd . at each frequency the value of is determined by a radon-like integral over between the limits of the velocity components across the wave vector ([eq:f_integral]). the region of wave vectors and longitudinal velocities making a contribution to the received signal is determined by the weight volume ([eq:kern_model]). so, by changing the start moment of the transmitted pulse and the moment of its reception, one can measure the value of as a function of time . this allows inpd diagnostics for different moments, including delays smaller than its lifetime, and without statistical averaging of the received signal. actually, in the case of an irregularities lifetime much longer than the sounding pulse repetition interval , one can measure the behavior as a function of time . based on the proposed model ([eq:signal_model]) and a gaussian approximation of the spectra of the sounder signal envelope and the receiving window, a qualitative comparison of the model with experimental data from the irkutsk incoherent scatter radar was carried out. the comparison showed qualitative agreement for the simplified model ([eq:comp_model]), which is based on additional assumptions about the properties of the function . i am grateful to b.g. shpynev for making the data from the irkutsk is radar available and to a.p. potekhin for fruitful discussions. the work was done under partial support of rfbr grants # 00-05-72026 and # 00-15-98509. kofman, w., plasma instabilities and their observations with the incoherent scatter technique, in _incoherent scatter: theory, practice and science_, edited by d. alcayde, pp. 3365, technical report 97/53, eiscat scientific association, 1997. pellinen-wannberg, a., a. westman, g. wannberg, and k. kaila, meteor fluxes and visual magnitudes from eiscat radar event rates: a comparison with cross-section based magnitude estimates and optical data, _ann. geophysicae_, _16_, 14751485, 1998. tatarsky, v. i., _wave propagation in a turbulent atmosphere_, moscow, nauka, 1967 (in russian). farley, d. t., incoherent scatter power measurements: a comparison of various techniques, _radio science_, _4_(2), 1969. zhou, q. h., q. n. zhou, and j. d. mathews, arithmetic average, geometric average and ranking: application to incoherent scatter radar data processing, _radio science_, _34_(5), 1999. ishimaru, a., _wave propagation and scattering in random media_, academic press, v. 1, 1978. kalistratova, m. a., and a. i. kon, _radioacoustical sounding of the atmosphere_, moscow, nauka, 1985 (in russian).
this paper offers a model for incoherent scatter signal spectra without averaging the received signal over sounding runs ( realizations ) . the model is based on the existent theory of radio waves single scattering from the medium dielectric permittivity irregularities , and on the existing kinetic theory of the plasma thermal irregularities . the proposed model is obtained for the case of monostatic sounding . the model shows that the main contribution to the received signal is made by ion - acoustic waves caused by certain spatial harmonics of the ions number phase density . the model explains the width and form of the signal spectrum by macroscopic characteristics of the medium , and its fine peaked structure by characteristics of the ions number phase density . the notion of the weight volume is introduced to define the domain of wave vectors and velocities in the spatial spectrum of the ions number phase density which makes the main contribution to the formation of the scattered signal . this weight volume depends on the antenna pattern , the form of the sounding signal , and on the time window of spectral processing , as well as on ionospheric plasma macroscopic parameters : electron and ion temperatures , ion composition , and the drift velocity . within the context of additional assumption about the ions number phase density , the proposed model was tested by the data from the irkutsk incoherent scatter radar . the test showed a good fit of the model to experiment .
telescope bibliographies are important tools to measure scientific output .typically , they contain all ( or all refereed ) papers using observational data from specific facilities that were published in the scholarly literature .telescope bibliography databases are therefore the ideal source to derive various kinds of reports and statistics . for instance , management and governing bodies of observatoriesmay be interested in publication and citation statistics to evaluate the performance of observatories and telescopes .instrument scientists often need reports regarding the scientific impact of specific instruments and research programs .such reports can also provide guidelines for future telescopes and instruments .for the astronomy community at large , it is important that telescope bibliographies interconnect resources : publications are linked to the observing programs that generated the data and to the actual data in the archive , and in turn scientists will be able to go from the archival data directly to all publications that use these data . in this context , the eso library has developed and maintains two tools : ( 1 ) fuse is a full - text search tool that semi - automatically scans defined sets of journal articles for organizational keyword sets , while providing highlighted results in context ; ( 2 ) the telescope bibliography ( telbib ) is used to classify eso - related papers , store additional metadata , and generate statistics and reports .both tools rely heavily on the nasa ads abstract service for bibliographic metadata . in this paper, we describe how fuse and telbib link publications and observations and explain the main features of the new public telbib interface .fuse and telbib form part of a workflow that links published literature with data located in the eso archive .the result is an information system that answers predominantly two questions : * which eso facilities generated the data used in the scientific literature ? * which publications used data provided by specific eso facilities ? in the following ,essential components of fuse and telbib are explained ( see fig .1 ) . 
in order for fuse to work ,it is necessary to have access to the electronic versions of all scientific journals that shall be monitored .fuse provides search methods for these journals that allow to retrieve the full - texts of articles ( typically in pdf format ) from the publishers websites .fuse is a php / mysql tool created by the eso library .it converts pdfs into text files and scans them for user - defined keyword sets .if any keywords are detected in the text files , they are highlighted and shown in context on the results page .after fuse identifies possible candidates for the eso telescope bibliography , these papers are inspected visually in detail .records that shall be added to telbib are imported into the database through the librarians interface ( telbib back - end , implemented with php / sybase ) .bibliographic information ( authors , title , publication details ) along with further information like current number of citations , author - assigned keywords , and author affiliations are imported from the ads abstract service .the eso librarians extensively tag and annotate each telbib record .such tags include standardized descriptions of telescopes , instruments , surveys , and other information .most importantly , all eso program ids that provided data are assigned to the telbib record of the paper .the program ids assigned to telbib records provide links to the corresponding data in the eso archive . in this way, readers of scientific papers who are interested in the data used in the publication can easily find the observing programs that were used in the research .the data can be requested after the usual one year proprietary period .program ids in the eso archive provide links back to telbib , listing all scientific papers that use specific observing programs . of course telbib can also be queried directly through the public user interface .more information on the telbib front - end is given in the following section .for many years , the public interface of the eso telescope bibliography had not changed its look and feel .now , telbib s front - end has undergone a complete makeover .it has been created and will be further developed by the eso librarians . a state - of - the - art interface has been implemented .using apache solr together with php , telbib now provides new features and sophisticated search functionalities .the new system will be rolled out in the final quarter of 2011 .it will be accessible at www.eso.org/libraries/telbib.html .the telbib front - end interface provides a variety of options to query the database .these include searches by bibliographic information ( authors , title words , author - assigned keywords , publication year , etc . ) as well as by observing facilities ( instruments , telescopes ) and program ids .the main search screen shows a list of the top 5 journals and instruments , indicating the number of records in telbib for each journal and using data from these instruments , respectively .in addition , the most recent five years are displayed along with the number of papers per year .a spellchecker is available for certain search fields ( for instance author names , title words ) . in case search terms entered in these fieldsdo not lead to any hit , the system will provide hints towards search terms that will turn up results ( `` did you mean ... ? 
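as an illustration only - fuse itself is a php / mysql tool, and the keyword set and function names below are invented for this sketch - the core scanning step amounts to searching the extracted full text of a paper for organizational keyword sets and keeping a snippet of surrounding context for the visual inspection stage:

```python
import re

# hypothetical keyword set; the real organizational sets are maintained inside FUSE
ESO_KEYWORDS = ["very large telescope", "la silla", "eso programme"]

def scan_fulltext(text, keywords=ESO_KEYWORDS, context=60):
    """Return (keyword, snippet) pairs for every keyword hit in a paper's extracted text."""
    hits = []
    lowered = text.lower()
    for kw in keywords:
        for m in re.finditer(re.escape(kw.lower()), lowered):
            start, end = max(0, m.start() - context), min(len(text), m.end() + context)
            hits.append((kw, text[start:end]))   # snippet shown to the librarian in context
    return hits
```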
'' ) .in addition , queries for authors , bibcode , and program ids are supported by an autosuggest feature that offers search terms which exist in the index after at least two characters have been entered .the results page lists papers that fulfill the query parameters in six columns , showing the publication year , first author , instruments , program ids , and the bibcode of each paper .titles are linked to the detailed view of records , program ids lead to the eso observing schedule and ultimately to the archive from where the data can be requested , and bibcodes are connected with the full - texts at ads . in order to limit search results ,faceted filtering is available in the `` refine search '' area on the left - hand side .facets exist for publication years , journals , and instruments .for the latter two filters , the top 5 among these results are shown , together with the five most recent years .the lists can be expanded by clicking on the `` more ... '' button . the detailed record view ( fig .2 ) shows all eso observing facilities that were used in the paper as well as additional tags like survey names .search terms are highlighted for easy identification .instruments , telescopes , and observing sites are hyperlinked and will retrieve other papers that use the same facilities . for program ids , two links are offered .clicking on the program i d itself will evoke a new search for all papers that use the given program . selecting `` access to data '' takes users to the observing schedule and from there to the archive where the respective data can be requested if the proprietary period has ended . at the bottom of the page , users will find recommendations for other papers that may be of interest .this section is entitled `` also of interest ? '' and offers access to other papers with similar content than the one currently displayed .we are very grateful to chris erdmann , now at the harvard - smithsonian cfa , who created the first versions of fuse and telbib .both programs make extensive use of nasa s astrophysics data system .many thanks to all of them .
bibliometric studies have become increasingly important in evaluating individual scientists , specific facilities , and entire observatories . in this context , the eso library has developed and maintains two tools : fuse , a full - text search tool , and the telescope bibliography ( telbib ) , a content management system that is used to classify and annotate eso - related scientific papers . the new public telbib interface provides faceted searches and filtering , autosuggest support for author , bibcode and program i d searches , hit highlighting as well as recommendations for other papers of possible interest . it is available at + http://telbib.eso.org .
in recent years , the theory of optimal transport has been actively studied . in particular ,properties of the optimal transport map on riemannian manifolds are well established .the existence and uniqueness theorem for the optimal transport map on riemannian manifolds was proved by ; this result extended the pioneering work of for the euclidean case .he showed that optimal transport is given by the gradient map of a so - called cost - convex function . on the other hand , for statistical data analysis on euclidean space ,it is useful to consider convex combinations of convex functions in order to construct various probability density functions ( , ) . in this paper, we show that when the underlying space is the sphere , the convex combination of cost - convex functions is actually cost - convex ( lemma [ lem : convex ] ) and the jacobian determinant of the resultant gradient map is log - concave with respect to the convex combination ( theorem [ thm : jacobian ] ) .this result is an extension of the jacobian interpolation inequality shown by .we refer to our jacobian inequality as _ the jacobian inequality _ throughout this paper , for simplicity .our result is related to the regularity theory of optimal transport maps .here we consider some recent studies in this field . showed that regularity of the transport map for general cost functions on euclidean space is assured if a geometrical quantity called the _ cost - sectional curvature _ is positive .conversely , showed that non - negativity of the cost - sectional curvature is necessary for regularity .he also showed that non - negativity of the cost - sectional curvature implies non - negativity of the usual sectional curvature if the cost function is the squared distance on a riemannian manifold .however , the converse does not hold ( ) .comprehensive assessment on the theory of optimal transport has been published ( ) .a relevant concept is _ the cross - curvature _ ( ) . and independently showed that the sphere has almost positive cross - curvature . in general , the cost - sectional curvature is non - negative if the cross - curvature is non - negative . in the present paper ,we use the non - negative cross - curvature property of the sphere to prove our main results .we show that our jacobian inequality opens several doors for applications to directional statistics . 
in this field, a family of probability densities is used to analyze given directional data , such as locations on the earth .for example , a test on the directional character of given data is constructed via families of probability density functions on the sphere .directional statistics has a long history since and a comprehensive text on this subject has been published ( ) .we define a probability density function on the sphere by the gradient maps of cost - convex functions .although , in the context of optimal transport , one usually considers push - forward of probability densities , we construct a family of densities by means of _ pull - back _ of probability densities .this follows from the fact that a pull - back density has an explicit expression for the likelihood function needed for statistical analysis .the density function does not need any special functions such as the modified bessel function , which usually appear in directional statistics .furthermore , the jacobian inequality implies that the likelihood function is log - concave with respect to the statistical parameters .this property is reasonable for computation of the maximum likelihood estimator .we propose more specific models and show graphical images of each probability density . in terms of analysis of real data , we present the result of density estimation for some astronomical data .this paper is organized as follows . in section [ sec : main ] , we present basic notation and state our main theorem . in section[ sec : appl ] , we construct a family of probability density functions on the sphere and apply them to directional statistics .all mathematical proofs of the main theorem and lemmas are given in section [ sec : proof ] .finally we present a discussion in section [ sec : discussion ] .let be the -dimensional unit sphere . the tangent space at is denoted by .the geodesic distance ( arc length ) between and in is denoted by .the cost function is .if one uses euclidean coordinates in to express , then , where the range of is ] the function of is also -convex . showed lemma [ lem : convex ] simultaneously and independently from us .indeed , they showed more general result , in that the convexity of the space of -convex functions is necessary and sufficient condition for the non - negative cross - curvature property ( see theorem 3.2 in ) .we define as long as is differentiable at , where is the gradient operator . following , we call the _ gradient map _ associated with the _ potential function _ . the map is differentiable at if and has its hessian at .it is known that any -convex on any compact riemannian manifold is lipschitz and therefore differentiable almost everywhere .furthermore , has a hessian almost everywhere in the alexandrov sense , and therefore is differentiable almost everywhere ( see and ) .these technical facts on differentiability are important for the theory of optimal transport .however , we will not need them because , for statistical applications , we can assume from the beginning that is differentiable except at a finite set of points ( see section [ sec : appl ] ) . for any -convex functions and , by lemma [ lem : convex ] , the convex combination is -convex .we define an interpolation of gradient maps by .\ ] ] assume that for each , and has its hessian at .then it is easy to see that , for any ] ) is also in .we construct a probability density function for each .let be a random variable on distributed uniformly . 
then , since is bijective , we can define a random variable on by .the probability density function of with respect to the uniform measure is , where the symbol jac refers to the jacobian determinant . in other words, we define by the _ pull - back _ measure of the uniform measure pulled by the gradient map . at this point, we describe the exact sampling method of the probability density function .a sampling procedure is important if one needs to calculate expectations by the monte carlo method . from the definition ,it is clear that the random variable with a uniformly random variable on has density .hence if we can generate and solve the equation effectively , we obtain a random sample .indeed , is quite easily generated , for example , by normalization of a standard gaussian sample in .to solve , it is sufficient to find the unique minimizer of the function with respect to since the following lemma holds .[ lem : exact][lemma 7 of ] if is -convex and is defined at , then the unique minimizer of with respect to is .thus our task is to solve the ( deterministic ) minimization problem .although the minimization problem of is not convex in the usual sense , the objective function has no local minimum , by -convexity .hence the problem is efficiently solved by generic optimization packages .an example of sampling is illustrated in figure [ fig : sample ] .we consider a finite - dimensional set of probability densities on the sphere . in statistics , a finite - dimensional set of probability densities is called a _statistical model_. an unknown parameter that parameterizes the density functions is estimated from observed data points .one of the most important estimators is the maximum likelihood estimator that maximizes the likelihood function with respect to .we construct a new statistical model using -convex functions .recall that the set of wrapping potential functions is a convex space ( lemma [ lem : convex - gamma ] ) .we can consider a finite - dimensional subspace as follows .let for .define where ranges over a convex subset of such that for any . by lemma [ lem : convex - gamma ] and the elementary fact that , we can use the simplex as .let be the probability density function induced by , that is , we call the family ( [ eqn : gradient - model ] ) _ the spherical gradient model_. the maximum likelihood estimator for the spherical gradient model ( [ eqn : gradient - model ] ) is reasonably computed by the following corollary of theorem [ thm : jacobian ] .[ cor : statistics ] define by ( [ eqn : gradient - model ] ) .then , for any data points , the likelihood function is log - concave with respect to . as an anonymous referee pointed out , in euclidean space ,there are results on _ convexity along generalized geodesics _ ( chapter 9 of ; see also ) . herea generalized geodesic is defined by the set of measures _ pushed forward _ by the gradient maps } ] .the -th derivative of is denoted by .the following lemma is fundamental .[ lem : rotationally - symmetric ] assume that and for almost all ] that satisfy the assumption in lemma [ lem : rotationally - symmetric ] .choose pairs from .then we can define the spherical gradient model ( [ eqn : gradient - model ] ) with where is a convex subset of such that for all .[ rem : rotationally ] if , the resultant density is a function of for some . 
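a minimal sketch of the exact sampling procedure described above, assuming the pull-back construction on the unit sphere s^2: the uniform variable is generated by normalising a gaussian vector, and the sample is recovered by numerically minimising a user-supplied c-convex objective (its exact form, which the text leaves in symbols, is passed in as a function rather than fixed here). numpy and scipy are assumed.

```python
import numpy as np
from scipy.optimize import minimize

def uniform_on_s2(rng):
    """Uniform point on the unit sphere: normalise a standard gaussian vector."""
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

def angles_to_point(a):
    """Map spherical angles (theta, phi) to a point on S^2."""
    t, p = a
    return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])

def sample_pullback(objective, rng, restarts=4):
    """One draw from the pull-back of the uniform measure under the gradient map.

    objective(x, z) is assumed to be the c-convex objective whose unique
    minimiser in x is the preimage of z; its exact form is not fixed here.
    """
    z = uniform_on_s2(rng)
    best = None
    for _ in range(restarts):   # restarts only guard against the chart's poles,
                                # since c-convexity rules out spurious local minima
        a0 = np.array([np.arccos(rng.uniform(-1.0, 1.0)), rng.uniform(0.0, 2.0 * np.pi)])
        res = minimize(lambda a: objective(angles_to_point(a), z), a0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return angles_to_point(best.x)

# usage: rng = np.random.default_rng(0); x = sample_pullback(my_objective, rng)
```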
in directional statistics , such a probability density functionis called rotationally symmetric .we briefly touch on known distributions on the sphere in statistics .a very well - known distribution on the sphere is the _von mises - fisher distribution _defined by in euclidean coordinates of , where and denotes the modified bessel function of the first kind and order .a more general distribution is the _ fisher - bingham distribution _ defined by where is a normalizing factor to ensure that .see for details .we return to our spherical gradient model ( [ eqn : gradient - model ] ) with ( [ eqn : rot - sym - potential ] ) .the following explicit formula due to a general expression ( [ eqn : jacobian - determinant ] ) is useful for practical implementation : where euclidean coordinates in are used .we remark that the above formula needs no special function , unlike the von mises - fisher distribution ( [ eqn : vonmises - fisher ] ) or the fisher - bingham distribution ( [ eqn : fisher - bingham ] ) .we give examples of pairs .recall that is the set of all wrapping potential functions .let for all .we use euclidean coordinates in to express .then is in as long as .we deduce that a potential function is in if . the parameter determines the direction and magnitude of concentration .that is , the resultant density function takes larger values at when is closer to and is larger , where the negative sign of is needed because our model is defined by the pull - back measure .we call the linear potential and the resultant statistical model _ the linear - potential model_. this model is rotationally - symmetric ( see remark [ rem : rotationally ] ) . an example is given in figure [ fig : spherical-1 ] ( a ) .consider for and for .then the potential can be written as let and .let denote the trace norm of defined by the sum of absolute eigenvalues of .this is actually a norm because ] in this paper . for given with , let and be smooth curves such that and .we assume that either (y) ] .note that only one of the two curves is assumed to be a -segment .then _ the cross - curvature _ is well defined by where and . for a given quadruplet , _ the sliding mountain _is defined by a function (z))-c(x,[y_0,y_1]_t(z ) ) .\label{eqn : sliding - mountain}\end{aligned}\ ] ] we use the following fact proved by and . [ lem : cross - curvature ] for the sphere , the cross - curvature is non - negative for any with .although the following lemma is essentially due to , we derive it from lemma [ lem : cross - curvature ] for completeness .[ lem : sliding - mountain ] let be a point in and let and be two points in different from the antipodal point of . then for any the sliding - mountain ( [ eqn : sliding - mountain ] ) is convex with respect to ] for simplicity .we first assume is not the antipodal point of and prove .let (y) ] .note that is a -segment with respect to . then from lemma [ lem : cross - curvature ], we have for each ] , we have .next we assume that is the antipodal point of . by assumption , is not the antipodal point of . by direct calculation ,we have (z)}{dt}\right| \ \geq\ 0.\ ] ] therefore is convex over ] and denote it by for simplicity . from lemma [ lem : sliding - mountain ] , we have where the supremum of the right hand side is attained at .hence where is defined by an infimum convolution since is written in the form of a -transform , it is -convex .this proves lemma [ lem : convex ] . for each -convex function ,let be the set of points such that and has its hessian defined at . 
if is a wrapping potential function ( definition [ defn : wrapping ] ) , then consists only of a finite set of points .the following lemma is essentially proved in .[ lem : less - than - pi ] if is -convex , then except for at most one .furthermore , if for some , then for any .let be -convex .assume that there exists such that .in general , any -convex function on a compact riemannian manifold is lipschitz continuous with lipschitz constant less than or equal to the diameter of the manifold ( lemma 2 of ) .since the diameter of the sphere is , we have .hence is the antipodal point of .we now prove that for all .we use 2-monotonicity of the gradient map : where is the distance between and .the above inequality follows from lemma [ lem : exact ] .let , and .then we have . by the triangle inequality with respect to the triangle , we have . therefore this implies or ; equivalently , or .hence we have for any .then for any from the definition of .we proceed to the proof of theorem [ thm : jacobian ] .fix two -convex functions and and let for ] . recall that the gradient map of is denoted by .note that is a -segment (x) ] , lemma [ lem : sliding - mountain ] implies that for all . by taking the hessian with respect to at , we obtain this means . [lem : geometric - arithmetic ] let .then the following inequality holds : by the formula ( [ eqn : jacobian - determinant ] ) , it is sufficient to prove that is concave with respect to .indeed , by lemma [ lem : hessian - concave ] and the geometric - arithmetic inequality on , we obtain hence is concave .[ rem : cordero ] if , the inequality ( [ eqn : geometric - arithmetic ] ) is similar to the jacobian inequality , due to cordero - erausquin et al .they showed that if , ^{1/n}j_1(x)^{1/n } , \label{eqn : riemannian - ineq } \end{aligned}\ ] ] where denotes the volume distortion coefficient ( see cordero - erausquin et al .2001 for details ) .the inequality ( [ eqn : riemannian - ineq ] ) is crucial to prove a brunn - minkowskii - type inequality on manifolds .however , since the inequality ( [ eqn : riemannian - ineq ] ) is only established for the special case , it is not sufficient for our statistical application . unfortunately , ( [ eqn : riemannian - ineq ] ) is not implied from ( [ eqn : geometric - arithmetic ] ) .in fact , if , then and , and the inequality ( [ eqn : geometric - arithmetic ] ) reduces to this inequality is weaker than ( [ eqn : riemannian - ineq ] ) because and .[ lem : log - sigma ] for any , is concave with respect to . for the unit sphere , the jacobian determinant of the exponential mapis given by .therefore .since the function \ni\rho\mapsto\log(\sin\rho/\rho) ] , the gradient map is injective .put . by lemma [ lem : injective ] , it is sufficient to show that for any ] .by lemma [ lem : inverse ] , we know that and .hence the gradient vector vanishes only if lies on a great circle that passes through and .since the exceptional point is also included in , we deduce that the point minimizing must belong to .we fix a circular coordinate ] without loss of generality .then the function can be written as by the assumption for , one can easily check that the second derivative of is ( a.e . 
)as long as .furthermore , we obtain .thus is not a point minimizing .furthermore , the point minimizing is unique because is strictly convex over .we denote the minimizer by ] and with .then the gradient map is explicitly given by .if moves from to , then moves from to monotonically because for almost all $ ] .hence is an isomorphism .lastly , is clearly twice differentiable whenever and .this completes the proof .we briefly discuss the jacobian inequality for general manifolds . in the proof of theorem [ thm : jacobian ], we have used the closed property of cost - convex functions ( lemma [ lem : convex ] ) , the jacobian - ratio inequality ( lemma [ lem : geometric - arithmetic ] ) and log - concavity of the jacobian of the exponential map ( lemma [ lem : log - sigma ] ) . for any non - negatively cross - curved ( or time - convex - sliding - mountain ) manifold defined in ,the former two lemmas are obtained in the same manner .however , lemma [ lem : log - sigma ] does not automatically follow from the non - negative cross - curvature condition .the author does not know if any riemannian manifold with non - negative cross - curvature satisfies the jacobian inequality .at least , any product space of and satisfies the jacobian inequality because the non - negative cross - curvature condition is preserved for products of manifolds ( ) and the jacobian determinant of the exponential map is also factorized into the jacobian determinant on each space .this fact may enable us to describe dependency structures of multivariate directional data in statistics .we leave such an extension for future research .the author is grateful to alessio figalli and robert j. mccann for their helpful comments on the first version of the paper .this study was partially supported by the global center of excellence `` the research and training center for new development in mathematics '' and by the ministry of education , science , sports and culture , grant - in - aid for young scientists ( b ) , no .
in the field of optimal transport theory , an optimal map is known to be a gradient map of a potential function satisfying cost - convexity . in this paper , the jacobian determinant of a gradient map is shown to be log - concave with respect to a convex combination of the potential functions when the underlying manifold is the sphere and the cost function is the distance squared . the proof uses the non - negative cross - curvature property of the sphere recently established by kim and mccann , and figalli and rifford . as an application to statistics , a new family of probability densities on the sphere is defined in terms of cost - convex functions . the log - concave property of the likelihood function follows from the inequality .
the current global network of interferometric detectors (geo600, ligo, tama300 and virgo) has been scanning the sky for gravitational wave signals with unprecedented sensitivities. the prospect of detection is very real and, once a detection is made, we must ask what can be inferred about the source from the detected gravitational wave signal. while this issue must be addressed for all signal types, we choose to focus on core-collapse supernovae here. numerical relativity simulations have predicted several sets of gravitational wave signals or _waveform catalogues_ due to rotating core-collapse supernovae (see and references therein). the features of each waveform produced by the simulations vary depending on the physics employed and the chosen initial parameters. the predicted waveforms can be used as inputs into parameter estimation algorithms which can, for example, calculate the likelihood that the detected signal corresponds to one of the predicted waveforms. however, from a cursory inspection of the waveform catalogues, one can see that there are many common features in the predicted waveforms, especially for waveforms in the same catalogue. by decomposing the waveforms into a set of orthonormal basis vectors, we can greatly reduce the computational cost of the parameter estimation stage by concentrating on a subset of basis vectors that encompass the main features of the chosen waveforms. we propose using principal component analysis (pca) to create an orthonormal set of basis vectors. broadly speaking, pca transforms a correlated, multi-dimensional data set into a set of orthogonal components. this is achieved by determining the eigenvectors and eigenvalues of the covariance matrix of the data set. the first principal component is the eigenvector with the largest corresponding eigenvalue. it is the linear combination of the original variables which accounts for as much of the variability in the data as possible. similarly, the second principal component is the linear combination which accounts for as much of the remaining variability as possible, subject to the constraint that it is orthogonal to the first principal component, and so on. in recent years, pca has been applied to a number of astrophysical problems (see for a recent example), such as spectral classification, photometric redshift determination and morphological analysis of galaxy redshift surveys, as well as a wider class of image processing and pattern recognition problems across a range of scientific applications. for a detailed account of the statistical basis of pca the reader is referred to, for example, morrison (1967) or mardia _et al._ (1979). it must be noted that this is not the first approach proposed to decompose a waveform catalogue. brady and ray-majumder have previously applied gram-schmidt decomposition to the zwerger-mueller and ott _et al._ waveform catalogues. additionally, summerscales _et al._ developed a maximum entropy based method to identify the presence of a gravitational wave signal and demonstrated the method's ability to extract the correct amplitude and phase of waveforms from the catalogue by ott _et al._ (2004). in this article, we choose the waveform catalogue generated by simulations performed by dimmelmeier _et al._ to demonstrate the use of pca. these simulations focus on rotating stellar core-collapse and the subsequent bounce.
while this phase of a supernova has been the focus of numerous simulations over the years, recent simulations by burrows _et al._ have produced large post-bounce gravitational wave signals due to acoustic shock mechanisms. there is still much discussion about this mechanism, so we choose to concentrate only on the gravitational wave signal produced by the core-collapse phase. moreover, we would like to stress that the choice of catalogue here is, to a large extent, arbitrary and the methods discussed in this paper can easily be applied to the other catalogues or any combination thereof. additionally, we compare the basis vectors obtained by pca and gram-schmidt decomposition by applying them to the same set of waveforms. we then describe our analysis of the waveform catalogue before discussing our observations. gram-schmidt (gs) decomposition is a recursive method for decomposing a set of waveforms to create a set of orthonormal basis vectors. it was first applied to a supernova catalogue by brady and ray-majumder. for completeness, the main points of this method are reviewed below. in gs decomposition, one begins by selecting a waveform from the data set as the first basis vector. to create a second basis vector, the first basis vector is projected onto the next waveform to be included in the set of basis vectors. the projected component is then subtracted from the second waveform and the resulting vector is orthogonal to the first basis vector. one continues this process by subtracting the sum of the projections of all existing basis vectors onto the desired waveform. this is done recursively until the desired number of waveforms has been included in the set of basis vectors. more explicitly, for a set of waveforms , , the orthonormal basis vectors , , are with and where is the total number of waveforms. here, the brackets denote an inner product. explicitly, the inner product for two vectors and , each of length , is given by where denotes the element of the vector. note that the second term in equation [subproj] is the sum of the projections onto all previously formed basis vectors. therefore, is the residual waveform not described by the previously generated basis vectors. brady and ray-majumder point out that the first waveform is chosen arbitrarily and may not produce a basis vector set that spans the waveforms most efficiently. therefore, the basis set is constructed repeatedly, with a different initial waveform chosen each time, until the basis set that spans the waveform parameter space with the fewest number of basis vectors is obtained. in principal component analysis (pca), a basis set is formed by determining the eigenvectors of the covariance matrix of the desired data set. in the context of this article, let us arrange the waveforms from the catalogue into a matrix such that each column corresponds to one of the waveforms , . for waveforms, each of length , the matrix has dimensions of and the covariance matrix for is calculated by where is the covariance matrix with dimensions for waveforms of length . the normalised eigenvectors of form a set of basis vectors , , that span the parameter space defined by the waveforms in . note that in pca, the eigenvalues of the covariance matrix tell us how well each eigenvector spans the parameter space of the waveform catalogue. the eigenvectors are, therefore, ranked by their corresponding eigenvalues, with the first principal component having the largest eigenvalue.
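a minimal sketch of the gram-schmidt construction described above (numpy assumed; waveforms are taken to be pre-normalised rows of an array, and the repeated re-runs over different initial waveforms mentioned above are left out):

```python
import numpy as np

def gram_schmidt_basis(waveforms, tol=1e-12):
    """Orthonormal basis from catalogue waveforms (one waveform per row).

    Uses the modified Gram-Schmidt update (projecting the running residual)
    for numerical stability; waveforms already spanned by the basis are skipped.
    """
    basis = []
    for h in waveforms:
        residual = np.array(h, dtype=float)
        for e in basis:
            residual -= np.dot(e, residual) * e   # remove the component along e
        norm = np.linalg.norm(residual)
        if norm > tol:
            basis.append(residual / norm)
    return np.array(basis)
```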
note that in pca , the eigenvalues of the covariance matrix tell us how well each eigenvector spans the parameter space of the waveform catalogue . the eigenvectors are , therefore , ranked by their corresponding eigenvalues , with the first principal component having the largest eigenvalue . supernova waveforms have significant energies at high frequencies ( ) , so can be about 1000 data samples at the ligo ( 16384 hz ) sampling rate , or more at the virgo ( 20 khz ) sampling rate . determining the eigenvectors of a matrix of such dimensions is computationally expensive . a common method of avoiding this computationally intensive operation ( see for example ) is to first calculate the eigenvectors , , of such that where is the corresponding eigenvalue for each eigenvector . then , by pre-multiplying both sides by , we have . if we rewrite equation [ cov_mat ] so that the covariance matrix takes the form , then are the eigenvectors of the covariance matrix . ( in the language of the singular value decomposition , equations [ eig_v_1 ] and [ eig_v_2 ] are the equivalent of using the right-singular vectors , which are the eigenvectors of , to determine the left-singular vectors , which are the eigenvectors of . ) so , for , we can determine the eigenvectors of the covariance matrix by first calculating the eigenvectors of the smaller , which is an matrix , thereby significantly reducing computation costs .

the waveform catalogue used here to demonstrate the use of pca was produced by dimmelmeier et al. these waveforms are generated from axisymmetric general relativistic hydrodynamic simulations of stellar core-collapse to a proto-neutron star . they use the microphysical equation of state from shen et al. with a 20 progenitor model from woosley et al. there are a total of 54 waveforms in this catalogue , generated by models which are parameterised by the initial differential rotation , , and the ratio of the rotational kinetic to gravitational energies , . the values of are increased from to in 18 steps , while three values of initial differential rotation were used . the three values , labelled , and , correspond to differential rotation length scales of 50000 km ( almost uniform rotation ) , 1000 km and 500 km respectively . according to the rotation law , the angular velocity has dropped to 1/2 of its central value at a distance a from the rotation axis . hence , smaller values of a correspond to more differential rotation .

we introduce the _ match _ parameter , , to quantify how well a set of basis vectors reconstructs a specified waveform . for a waveform , , it is calculated by summing the projections of the desired number of basis vectors , , onto the waveform , such that where are the orthonormal basis vectors determined by the methods described in the previous section . as with equations [ gs1 ] and [ subproj ] , the brackets denote an inner product .
if we normalise the set of waveforms , then will be equal to 1 if the sum of the projections of the basis vectors matches a particular waveform , , exactly . it is clear that will be equal to 1 for all waveforms in the catalogue if we use all basis vectors decomposed from the catalogue ( ) . however , it is interesting to calculate the smallest match obtained for any waveform in the catalogue ( commonly referred to as the _ minimal match _ ) if we use a subset of basis vectors . the minimal match , , is often used in templated matched filter searches for signals with well-modelled waveforms . for such searches , the basis vectors form a bank of templates , and the minimal match is used to characterise how well the desired parameter space is covered by the template bank . to maximise the detection probability , one would maximise the minimal match with the smallest number of templates so as to minimise computational time . computing time , however , is not a serious issue for these waveforms because they are short and relatively few compared to , for example , the number of templates used in a search for gravitational waves from binary neutron stars ( see for a recent example ) . instead , we examine the minimal match here to study the parameter space covered by the waveform catalogue . if the parameter space of the waveform catalogue is degenerate , then one would expect the minimal match to rapidly approach 1 for .

figure [ numvecs ] shows the number of gs and pca basis vectors required as a function of minimal match . similar numbers of gs and pca basis vectors are required for minimal match requirements up to about 0.9 . the number of basis vectors required rises rapidly as the minimal match criterion approaches 1 , since smaller features , unique to a small subset of waveforms , require a large number of basis vectors to reconstruct . it is interesting to note that for minimal match requirements greater than 0.95 , more gs basis vectors are needed . nonetheless , the parameter space spanned by the waveform catalogue is well spanned by less than half the total number of basis vectors for each method . this implies that all waveforms in the catalogue are dominated by a few unique features , and this allows the minimal match to reach 0.75 with just 7 basis vectors . in fact , dimmelmeier et al. noted that these waveforms can be divided into three broad categories : waveforms due to a pressure-dominated bounce with convective overturn , waveforms due to a pressure-dominated bounce only , and waveforms with a single centrifugal bounce . in figure [ match_example ] , we plot an example waveform reconstructed by the gs and pca basis vectors with a match of 0.9 . the reconstructed waveforms obtained with the two sets of basis vectors , though not identical , are very similar . the difference between the two waveforms is only about in amplitude .

in the previous subsection , we noted that the parameter space of the dimmelmeier et al. waveform catalogue used in our studies can be spanned by a small number of basis vectors . this implies that there are many common features in the waveforms from this catalogue .
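the match and minimal-match calculations behind figure [ numvecs ] can be sketched as follows ; the function and variable names ( matches , n_vectors_needed , waveforms , basis ) are hypothetical , and unit-norm waveforms and an orthonormal basis ( e.g. from the sketch above ) are assumed .

```python
import numpy as np

def matches(waveforms, basis, n):
    """match of every (unit-norm) waveform when only the first n basis vectors are kept."""
    coeffs = basis[:, :n].T @ waveforms          # projection coefficients
    recon = basis[:, :n] @ coeffs                # reconstructed waveforms
    return np.einsum("ij,ij->j", recon, waveforms)   # <reconstruction, waveform>

def n_vectors_needed(waveforms, basis, minimal_match=0.9):
    """smallest number of basis vectors whose worst-case (minimal) match meets the requirement."""
    for n in range(1, basis.shape[1] + 1):
        if matches(waveforms, basis, n).min() >= minimal_match:
            return n
    return basis.shape[1]

# e.g. compare n_vectors_needed(A, E_pca, 0.9) with n_vectors_needed(A, E_gs, 0.9)
```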
here , we chose to make basis vectors using only the 18 waveforms with moderate differential rotation at 1000 km from the centre ( ) . we make this choice to test the hypothesis that the waveforms from precollapse stellar cores with moderate differential rotation contain features present in waveforms from weakly and highly differentially rotating stellar bodies . figure [ subset ] plots the number of the remaining waveforms observed to have a match greater than 0.7 and 0.9 . with only 3 basis vectors , about 30 of the 36 waveforms already have a match greater than 0.7 . with 16 pca or gs basis vectors , of the remaining 36 waveforms have a match greater than 0.9 and have a match greater than 0.7 . therefore , a large fraction of the parameter space covered by the catalogue is covered by waveforms from simulations with . this is consistent with the observations of dimmelmeier et al. , who noted that the degree of differential rotation does not qualitatively alter the waveforms .

we have introduced pca as a method of decomposing a set of waveforms into a set of basis vectors . a nice feature of pca decomposition is that it allows one to quantitatively identify the main features in a desired set of waveforms , since each basis vector is ranked by the value of its corresponding eigenvalue . one can interpret the basis vector with the largest corresponding eigenvalue ( the first principal component ) as containing the most significant features in the waveform catalogue . we compared the pca method introduced here to the gs decomposition method introduced by brady and ray-majumder . the efficiency of the pca basis vectors at spanning the parameter space defined by the waveform catalogue is comparable to that of gs decomposition , with about 15 basis vectors required for a minimal match of 0.9 . for a minimal match of 0.95 , 17 pca basis vectors were required , while 22 gs basis vectors were needed . this shows that there are many common features in the waveforms from the chosen catalogue . we also generated a set of basis vectors using only 18 of the 54 waveforms ( with only ) using both methods and observed that 34 of the 36 waveforms not included in the construction of the basis set have a match of at least 0.7 . this implies that the features of all waveforms are well described by models with .

the basis vectors produced here can easily be used by parameter estimation techniques . for example , markov chain monte carlo ( mcmc ) methods can be applied to a detected gravitational wave signal with each basis vector as a degree of freedom to search across . the output of the mcmc analysis would be a set of coefficients that can be used to reconstruct the signal waveform from a linear combination of basis vectors . alternatively , we can project the waveforms onto the basis vectors to determine a set of coefficients with which we can reconstruct each waveform from the basis vectors . each waveform can then be parameterised by these coefficients , or _ weights _ , and they can be used to form a classification scheme similar to that laid out by turk and pentland . pca , as well as gs decomposition , can also be used to decompose waveforms generated by simulations from different groups , using different core-collapse models . this application ( also proposed by brady and ray-majumder ) will combine the parameter space covered by all waveforms in an efficient manner .
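the alternative just mentioned , parameterising each waveform by its projection coefficients , can be sketched as below ; the helper name project is hypothetical and an orthonormal basis is assumed .

```python
import numpy as np

def project(waveform, basis, n):
    """first n projection coefficients ("weights") and the corresponding reconstruction."""
    coeffs = basis[:, :n].T @ waveform
    return coeffs, basis[:, :n] @ coeffs

# parameterise every catalogue waveform by, say, its first 7 weights:
# weights = np.array([project(h, basis, 7)[0] for h in waveforms.T])
```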
in the case of pca, common features will be decomposed into the main components with large eigenvalues and , for parameter estimation , will reconstruct the main features of most waveforms .on the other hand , smaller features belonging to a small subset of waveforms will have much smaller eigenvalues and may be ignored by the analysis to reduce computation costs .the author would like to thank patrick brady , nelson christensen , harald dimmelmeier , martin hendry , christian ott , renate meyer and graham woan for enlightening discussions .this work is supported by the science and technology facilities council and the scottish universities physics alliance .00 h. lck __ , _ class .quantum grav . _* 23 * ( 2006 ) s71s78 .d. sigg ( for the ligo scientific collaboration ) , _ class .quantum grav . _ * 23 * ( 2006 ) s516. m. ando and the tama collaboration , _ class . quantum grav ._ * 22 * ( 2005 ) s8819 .f. acernese _ et al _ , _ class .quantum grav ._ * 23 * ( 2006 ) s639 .http://arxiv.org/abs/0809.0695 m.j ._ , _ nature _ * 455 * ( 2008 ) 1082 .morrison 1967 , _ multivariate statistical methods _, mcgraw - hill .mardia , j.t .kent and j.m .bibby 1980 , _ multivariate analysis ( probability and mathematical statistics ) _ , academic press .p.r . brady and s. ray - majumder , _class.quantum grav ._ * 21 * ( 2004 ) s183947 .t. zwerger and e. mller , _ astron .astrophys . _* 320 * ( 1997 ) 209227 .summerscales , a. burrows , l.s . finn and c.d .ott , _ ap . j. _ * 678 * ( 2008 ) 114257 .ott , a. burrows , e. livne , and r. walder , _ ap .* 600 * ( 2004 ) 834 .h. dimmelmeier , c.d .janka , a. marek and e. mller _ phys .* 98 * ( 2007 ) 251101 .ott , a. burrows and luc dessart , _ phys .* 96 * ( 2006 ) 201102 .g. golub and c. van loan 1996 , _ matrix computations _ , the john hopkins university press .r.a . horn and c.r .johnson 1990 , _ matrix analysis _ , cambridge university press .g. strang 1993 , _ introduction to linear algebra _, wellesley - cambridge press .h. shen , h. toki , k. oyamatsu and k. sumiyoshi , _ prog .* 100 * ( 1999 ) 1013 .s. e. woosley , a. heger and t. a. weaver , _ rev .phys . _ * 74 * ( 2002 ) 1015 .h. komatsu , y. eriguchi and i. hachisu , _ mon . not .* 237 * ( 1989 ) 355379 .b. abbott __ , _ phys .* 77 * ( 2008 ) 062002 .h. dimmelmeier , c.d .ott , a. marek and hjanka , _ phys .* 78 * ( 2008 ) 064056 .m. turk and a. pentland , _ journal of cognitive neuroscience _ * 3 * ( 1991 ) 7186 w.r .gilks , s. richardson and d.j .spiegelhalter 1996 , _ markov chain monte carlo in practice _ , chapman & hall / crc
this paper introduces the use of principal component analysis as a method to decompose catalogues of gravitational waveforms into a set of orthonormal basis vectors . we apply this method to a set of gravitational waveforms produced by rotating stellar core-collapse simulations and compare the basis vectors obtained with those obtained through gram-schmidt decomposition . we observe that , for the chosen set of waveforms , the performance of the two methods is comparable for minimal match requirements up to 0.9 , with 14 gram-schmidt basis vectors and 12 principal components required for a minimal match of 0.9 . this implies that there are many common features in the chosen waveforms . additionally , we observe that the chosen waveforms have very similar features and that a minimal match of 0.7 can be obtained by decomposing only the waveforms generated from simulations with a=2 . we discuss the implications of this observation and the advantages of eigen-decomposing waveform catalogues with principal component analysis .
we have performed a computational study of the kuramoto model running on top of the hc network ( details are given in the methods section ) .our results reveal the existence of an intermediate regime placed between the coherent and the incoherent phase ( see fig.1 ) .this is characterized by broad quasi - periodic temporal oscillations of which wildly depend upon the realization of intrinsic frequencies .anomalously large sampling times would be required to extract good statistics for the actual mean values and variances .collective oscillations of are a straightforward manifestation of partial synchronization and they are robust against changes in the frequency distribution ( e.g. gaussian , lorentzian , uniform , etc . ) whereas the location and width of the intermediate phase depend upon details .as this phenomenology is reminiscent of griffiths phases posed in between order and disorder and stemmig from the existence of semi - isolated regions it is natural to investigate how the hc hierarchical modular structure affects synchronization dynamics .any network with perfectly isolated and independently synchronized moduli trivially exhibits oscillations of , with amplitude peaking at times when maximal mutual synchronization happens to be incidentally achieved .such oscillations can become chaotic if a finite and relatively small number of different coherent moduli are coupled together .thus , in a connected network without delays or other additional ingredients , oscillations in the global coherence are the trademark of strong modular structure with weakly interconnected moduli .strong modular organization into distinct hierarchical levels is indeed present in the hc as reveled by standard community detection algorithms and as already discussed in the literature ( see e.g. and references therein ) .for instance , we have found that the optimal partition into disjoint communities i.e .the partition maximizing the modularity parameter corresponds to a division in communities ( see fig.1d ) while , at a higher hierarchical level , a separation into just moduli the cerebral hemispheres is obtained ( fig.1d ) .obviously these coarser moduli include the above as sub - moduli .although more levels of hierarchical partitioning could be inferred ( see e.g. and refs . therein ) , for the sake of simplicity we focus on these two levels , and with and moduli , respectively . and two interfacial nodes . results of the numerical integration of the kuramoto equations ( blue points ) are in strikingly good agreement with the integration of eqs.([oa ] ) ( solid blue line )local block - wise order parameters are shown for comparison ( small symbols ; dashed lines are guides to the eye ) .a first transition , where local order emerges , occurs at , while global coherence is reached at . 
in the intermediate region, oscillates ( inset ) , revealing the lack of global coherence .despite the simplicity of this toy model , these results constitute the essential building - block upon which further levels of complexity rely ( see main text).,width=453 ] to shed further light on the properties of synchronization on the hc , we consider a very simple network model allowing for analytical understanding which will constitute the elementary `` building - block '' for subsequent more complex analyses .this consists of a few blocks with very large internal connectivity and very sparse inter - connectivity .each block is composed by a bulk of nodes that share no connection with the outside and a relatively small `` interfacial '' set that connects with nodes in other blocks . for instance , in the simplest realization , consisting of just two blocks connected by a single pair of nodes ( fig.2 ) , each block is endowed with local coherence , average phase , and average characteristic frequency , while 1-node interfaces have perfect coherence , phase , and characteristic frequency . in this case , , and the oa ansatz can be safely applied to each block ( large ) but not to single - node interfaces . in the particular case( convenient for analytical treatment ) in which are zero - mean lorentz distributions with spreads , the resulting set of oa equations can be easily shown to be : \nonumber \\ \dot{\varphi}_\mathrm{a } & = & \nu_\mathrm{a}+ k \left [ mr_\mathrm{a}\sin(\psi_\mathrm{a}-\varphi_\mathrm{a})+\sin(\varphi_\mathrm{b } -\varphi_\mathrm{a})\right ] \label{oa}\end{aligned}\ ] ] ( together with for each 1-node interface ) , and a symmetric set ( ) for block .the solution of eq.(2 ) displayed in fig.2 reveals a transition to local coherence within each block at a certain threshold value of .as soon as local order is attained , and , from eq.(2 ) the mutual synchronization process obeys and a symmetrical equation for . for small ,the right - hand side is dominated by : whereas the average value becomes arbitrarily small within blocks ( assuming that is large ) , the frequency does not .consequently , synchronization between the two blocks through the interfacial link is frustrated : each block remains internally synchronized but is unable to achieve coherence with the other over a broad interval of coupling strengths .this interval is delimited above by a second transition at , where is large enough as to overcome frustration and generate global coherence .this picture is confirmed by numerical integration of the full system of coupled kuramoto equations as well as by its oa approximation ( eq.(2 ) ) , both in remarkably good agreement .therefore , local and global coherences have their onsets at two well - separated transition points and similarly to the much more complex hc case oscillates in the intermediate regime ( fig.2 ) .similar results hold for versions of the model with more than two moduli ( e.g. ; see below ) .the existence of two distinct ( local and global ) transitions had already been reported in a recent study of many blocks with much stronger inter - moduli connections than here ( even if , owing to this difference , no sign of an intermediate oscillatory phase was reported ) .in particular , the value of two - block models has already been explored in the past , for systems of identical oscillators with non - zero phase lags , in which each node is coupled equally to all the others in its community , and less strongly to those in the other . 
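a numerical experiment along the lines of the two-block toy model described above can be set up as follows ; the block size , frequency spreads , coupling values and the 1/m normalisation are invented for illustration and are not the values used in the paper .

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
m = 100                                  # bulk oscillators per block
N = 2 * (m + 1)                          # plus one interfacial node per block

# all-to-all coupling inside each block, a single link between the two interfaces
A = np.zeros((N, N))
A[: m + 1, : m + 1] = 1.0
A[m + 1 :, m + 1 :] = 1.0
np.fill_diagonal(A, 0.0)
A[m, m + 1] = A[m + 1, m] = 1.0

# lorentzian (cauchy) intrinsic frequencies with a different median in each block
omega = np.concatenate([0.5 + 0.05 * rng.standard_cauchy(m + 1),
                        -0.5 + 0.05 * rng.standard_cauchy(m + 1)])

def kuramoto(t, theta, K):
    diff = theta[None, :] - theta[:, None]            # theta_j - theta_i
    return omega + (K / m) * np.sum(A * np.sin(diff), axis=1)

def global_r(theta):
    return np.abs(np.exp(1j * theta).mean(axis=0))

theta0 = rng.uniform(0, 2 * np.pi, N)
for K in (0.05, 0.5, 5.0):               # three illustrative coupling strengths
    sol = solve_ivp(kuramoto, (0, 200), theta0, args=(K,), max_step=0.05)
    r = global_r(sol.y)
    # in the intermediate regime r(t) oscillates: large temporal std, moderate mean
    print(f"K={K}: mean r = {r[-2000:].mean():.2f}, std r = {r[-2000:].std():.2f}")
```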
in such systems ,local coherence emerged for large enough values of the phase lag .our two - block model shows that the presence of `` structural bottlenecks '' between moduli combined with heterogeneous frequencies at their contact nodes ( interfaces ) are essential ingredients to generate a broad region of global oscillations in , even in the absence of phase lag .still , it is obviously a too - simplistic model to account for all the rich phenomenology emerging on the hc , as we show now .( green , , and ) and ( magenta , , and ) moduli , respectively . the characteristic frequency of these oscillations is typically between and ( a range which coincides with slow modes detected in brain activity ; see e.g. ) .b ) average of the local order parameter over all moduli and c ) chimera index for moduli at levels as in a ) , as a function of .global order ( thin black line in b ) ) emerges only after local order is attained at lower levels .d ) average decay of activity for identical frequencies in the hc network and comparison with a single - level modular network ( made up of similar random moduli at a single hierarchical level ) of the same size and average connectivity as the hc network .symbols stand for different values of .e ) characteristic decay times corresponding to the inverse of the first non - trivial eigenvalues of the laplacian matrix ( x axis ) as a function of their respective ordered indices ( y axis ) , for networks as in d ) . the stretched exponential behavior in d ) is the result of the convolution of slow time scales associated with small eigenvalues in e).,width=453 ] fig.3 shows numerical results for the local order parameter for some of the moduli at the hierarchical levels , and in the hc network .it reveals that ( fig.3a ) local coherences exhibit oscillatory patterns in time ( with characteristic frequencies typically between and ) and that ( fig.3b ) the transition to local coherence at progressively higher hierarchical level occurs at progressively larger values of ; i.e. coherence emerges out of a hierarchical bottom - up process as illustrated above for the for the two - block model ( see ) .observe , however , that local oscillations were not present in the two - block model .this suggests that the moduli in the hc are on their turn composed of finer sub - moduli and that structural frustration , as introduced above , affects all hierarchical levels .the average variance of local coherences ( called chimera index , ) exhibits a marked peak reflecting maximal configurational variability at the transition point for the corresponding level ( fig .3b - c and methods section ) .similar intra - modular oscillatory patterns dubbed _ chimera states_ have been recently found in kuramoto models in which explicit phase lags induce a different kind of frustration , hindering global synchronization . strictly speaking ,chimeras are defined in systems of identical oscillators .in such a case , a non - zero phase lag term is essential for partial synchronization to occur .realistic models of the brain , however , require oscillators to be heterogenous . 
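the block-wise ( local ) order parameters discussed here can be extracted from a phase trajectory as sketched below ; theta and communities are hypothetical names for the ( n , t )-shaped trajectory and for the module partition at the chosen hierarchical level , which is assumed to be supplied by a community detection algorithm .

```python
import numpy as np

def local_order_parameters(theta, communities):
    """array of shape (n_modules, T): coherence of each module at every time step."""
    return np.array([np.abs(np.exp(1j * theta[idx]).mean(axis=0))
                     for idx in communities])

# oscillating rows of this array (rather than of the global order parameter alone)
# are the signature of the chimera-like states described in the text
```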
states of partial synchronization in empirical brain networks with frequency heterogeneity have been found for kuramoto models with explicit time delays .in contrast , the chimera - like states put forward here have a purely structural origin , as they arise from the network topology .it was noted in the past that synchronization in a synthetic network with hubs could be limited to those hubs by tuning clustering properties , and global order could be attained in a monotonous step - like fashion upon increasing .fig.3b instead reveals that the ordering process in the hierarchical modular hc may be non - monotonous : coherence does not systematically grow with .indeed , the emergence of local order in some community may hinder or reduce coherence in others , inducing local `` desynchronization '' and reflecting the metastable nature of the explored states .fixing all intrinsic frequencies to be identical allows us to focus specifically on structural effects .thus , we consider , without loss of generality , the simple case , and define the `` activity '' . in this case, perfect asymptotic coherence should emerge for all values of but , as illustrated in fig .3d , the convergence towards turns out to be extremely slow ( much slower than exponential ) .this effect can be analytically investigated assuming that , for large enough times , all phase differences are relatively small .then , up to first order , where are the elements of the laplacian matrix . solving the linear problem , , where denotes the -th laplacian eigenvalue ( ) and the -th component of the corresponding eigenvector . given that the averaged order parameter can be written as , averaging over initial conditions , and considering that ( as the laplacian has zero row - sums ) , we obtain where is the standard deviation of the initial phases .this expression holds for any connected network . as usual , the larger the spectral gap , the more `` entangled '' the network and thus the more difficult to divide it into well separated moduli ( only for disconnected networks ) . for large spectral gaps all timescales are fast , and the last expression can be approximated by its leading contribution , ensuing exponential relaxation to , as in fact observed in well - connected network architectures ( erds - rnyi , scale free , etc .this is not the case for the hc matrix , for which a tail of small non - degenerate eigenvalues is encountered ( see fig.3e and ) .each eigenvalue in the tail corresponds to a natural division of moduli into sub - moduli , and the broad tail reflects the heterogeneity in the resulting modular sizes . as a consequence , each of these eigenvalues with its associated large timescale, contributes to the sum above , giving rise to a convolution of relaxation processes , entailing anomalously - slow dynamics , which could not be explained by a single - level modular network ( see fig.3d - e ) : slow dynamics necessarily stems from the existence of a hierarchy of moduli and structural bottlenecks .as explained in methods , in the case of the hc the convolution of different times scales gives rise to stretched - exponential decay , which perfectly fits with numerical results in fig.3 .it was noted in the past that strongly modular networks exhibit isolated eigenvalues in the lower edge of the laplacian spectrum .synchronization would develop in a step - wise process in time , where each transient would be given by each isolated eigenvalue . 
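the role of the laplacian spectrum can be illustrated schematically as follows ; prefactors and the coupling constant are absorbed into the time units , so the sketch only reproduces the qualitative point that the relaxation is a convolution of one exponential decay per non-zero eigenvalue .

```python
import numpy as np
import networkx as nx

def laplacian_relaxation(G, times):
    """schematic activity decay: average of exp(-lambda_k * t) over non-zero eigenvalues."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(L))[1:]     # drop the trivial zero mode
    return np.array([np.exp(-lam * t).mean() for t in times])

times = np.logspace(-2, 3, 50)
rho_er = laplacian_relaxation(nx.erdos_renyi_graph(256, 0.05, seed=2), times)
# a well-connected graph has a large spectral gap and decays almost exponentially;
# a hierarchical modular network (see the construction sketched below) has a tail
# of small eigenvalues and the same quantity decays far more slowly.
```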
in our case , the depth of the hierarchical organization and the strength of topological disorder produce instead a quasi - continuous tail of eigenvalues , and the step - wise process is replaced by an anomalous stretched - exponential behavior . to shed additional light on the previous findings for the hc i.e .the emergence of chimera - like states and anomalously slow dynamics we suggest to go beyond the single - level modular network model and study hierarchical modular networks ( hmn ) in which moduli exists within moduli in a nested way at various scales .hmn are assembled in a bottom - up fashion : local fully - connected moduli ( e.g. of nodes ) are used as building blocks .they are recursively grouped by establishing additional inter - moduli links in a level - dependent way as sketched in fig.4(top ) . , basal fully connected blocks of size are linked pairwise into super - blocks by establishing a fixed number of random unweighted links between the elements of each ( in the fig . ) .newly formed blocks are then linked iteratively with the same up to level , until the network becomes connected .a ) , b ) , c ) as in fig.3 , but for a hmn with , , and .hierarchical levels are in black , blue , green , magenta and red respectively ( not all shown in a ) for clarity ) .d ) time relaxation of activity for homogeneous characteristic frequencies , for logarithmically equally spaced values of .averages over realizations of hmns with and .inset : as in the main plot d ) , but representing as a function of and confirming the predicted stretched exponential behavior .e ) inverse tail - eigenvalues ( as in fig.3 ) for a hmn as in e ) ., width=453 ] our computational analyses of the kuramoto dynamics on hmn substrates ( see fig.4 ) reveal : ( i ) a sequence of synchronization transitions for progressively higher hierarchical levels at increasing values of , ( ii ) chimera - like states at every hierarchical level , resulting in a hierarchy of metastable states with maximal variability at the corresponding transition points , ( iii ) extremely slow relaxation toward the coherent state when all internal frequencies are identical .furthermore , anomalies in the laplacian spectrum analogous to those of the hc network are observed for hmn matrices ; in particular , the lower edge of the hmn laplacian spectrum has been recently shown to exhibit a continuous exponential lifshitz tail for , with .taking the continuum limit of eq.([eq : discrete_lambda ] ) , we find , which can be evaluated with the saddle - point method ( see methods ) , leading to i.e. anomalous stretched - exponential asymptotic behavior , in excellent agreement with computational results ( see fig.4d ) . therefore , hierarchical modular networks constitute a parsimonious and adequate model for reproducing all the complex synchronization phenomenology of the hc . a crucial role in the emergence of such behavioris played by disorder .one would be tempted to believe that all networks characterized by a finite spectral dimension could potentially give rise to this phenomenology .this is obviously not the case for a regular lattice , where the spectral gap is always well defined . a fractal lattice or an ordered tree ,on the other hand , could exhibit a hierarchy of discrete low eigenvalues , whose multiplicities reflect system symmetries . 
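one possible implementation of the hierarchical modular networks described above is sketched below ; it assumes 2**levels basal blocks and a fixed number alpha of random unweighted inter-block links at every level , and all parameter values are illustrative .

```python
import itertools
import numpy as np
import networkx as nx

def hierarchical_modular_network(m0=4, levels=6, alpha=2, seed=3):
    rng = np.random.default_rng(seed)
    n_blocks = 2 ** levels
    G = nx.Graph()
    # basal level: fully connected moduli of m0 nodes each
    for b in range(n_blocks):
        G.add_edges_from(itertools.combinations(range(b * m0, (b + 1) * m0), 2))
    # recursively join blocks pairwise, adding alpha random links per new level
    size = m0
    while n_blocks > 1:
        for b in range(0, n_blocks, 2):
            left = np.arange(b * size, (b + 1) * size)
            right = np.arange((b + 1) * size, (b + 2) * size)
            for _ in range(alpha):
                G.add_edge(int(rng.choice(left)), int(rng.choice(right)))
        n_blocks //= 2
        size *= 2
    return G

G = hierarchical_modular_network()
print(G.number_of_nodes(), nx.is_connected(G))   # 256 True
```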
the introduction of disorder , as in hmns , is then necessary in order to transform such hierarchy of discrete levels into a continuous lifshitz tail , leading eventually to the behavior predicted by eq .( [ eq : stretched ] ) .simple models of synchronization dynamics exhibit an unexpectedly rich phenomenology when operating on top of empirical human brain networks .this complexity includes oscillatory behavior of the order parameter suggesting the existence of relatively isolated structural communities or moduli , that as a matter of fact can be identified by using standard community detection algorithms .even more remarkably , oscillations in the level of internal coherence are also present within these moduli , suggesting the existence of a whole hierarchy of nested levels of organization , as also found in the recent literature relying on a variety of approaches .aimed at unveiling this complex behavior we have introduced a family of hierarchical modular networks and studied them in order to assess what structural properties are required in order to reproduce the complex synchronization patterns observed in brain networks . in the absence of frequency dispersion , perfect coherence is achieved in synthetic hierarchical networks by following a bottom - up ordering dynamics in which progressively larger communities with inherently different timescales become coherent ( see ) .however , this hierarchically nested synchronization process is constrained and altered by structural bottlenecks as carefully described here for the simpler two - block toy model at all hierarchical levels .this structural complexity brings about anomalously - slow dynamics at very large timescales .observe that the hc , in spite of being a coarse - grained mapping of a brain network , already shows strong signals of this ideal hierarchical architecture as reflected in its anomalously slow synchronization dynamics as well as in the presence of non - degenerate eigenvalues in the lower edge of its laplacian spectrum , acting as a fingerprint of structural heterogeneity and complexity .we stress that such a complex phenomenology would be impossible to obtain in networks with stronger connectivity patterns ( e.g. with the small world property ) such as scale free - networks or high - degree random graphs .even the generic presence of simple communities may not be sufficient to grant the emergence of frustration : the uniqueness of the human connectome , and of hierarchical modular networks in general , resides in the strong separation into distinct levels , which the synchronization dynamics is able to resolve only at well - separated values of the coupling .on the other hand , in the presence of intrinsic frequency heterogeneity , the described slow ordering process is further frustrated .actually , for small values of the coupling constant the system remains trapped into metastable and chimera - like states with traits of local coherence at different hierarchical levels . in this case , inter - moduli frequency barriers need to be overcome before weakly connected moduli achieve mutual coherence .this is clearly exemplified by the separation between distinct peaks in the chimera index in figs .3 - 4 , each one signaling the onset of an independent synchronization process at a given level ( see methods ) .the result is a complex synchronization landscape , which is especially rich and diverse in the intermediate regime put forward here . 
including other realistic ingredients such as explicit phase frustration or time delays to our simplistic approach should only add complexity to the structural frustration effect reported here .it is also expected that more refined models including neuro - realistic ingredients leading to collective oscillations would generate similar results , but this remains to be explored in future works .addition of noise to the kuramoto dynamics would allow the system to escape from metastable states .stochasticity can overcome the `` potential barriers '' between mutually incoherent moduli as well as re - introduce de - synchronization effects .these combined effects can make the system able to explore the nested hierarchy of attractors , allowing one to shed some light into the complex synchronization patterns in real brain networks .actually , spontaneous dynamical fluctuations have been measured in the resting state of human brains ; these are correlated across diverse segregated moduli and characterized by very slow fluctuations , of typical frequency , in close agreement with those found here ( fig.3 ) .accordingly , it has been suggested that the brain is routinely exploring different states or attractors and that in order to enhance spontaneous switching between attractors brain networks should operate close to a critical point , allowing for large intrinsic fluctuations which on their turn entail attractor `` surfing '' and give access to highly varied functional configurations and , in particular , to maximal variability of phase synchrony .the existence of multiple attractors and noise - induced surfing is largely facilitated in the broad intermediate regime first elucidated here , implying that a precise fine tuning to a critical point might not be required to guarantee functional avantages usually associated with criticality : the role usually played by a critical point is assumed by a broad intermediate region in hierarchically architectured complex systems .finally , let us remark that our results might also be of relevance for other hierarchically organized systems such as gene regulatory networks for which coherent activations play a pivotal role .the kuramoto model is simulated by numerically integrating eq .( [ eq : kuramoto ] ) .computations are carried out using both a 4th order runge - kutta method of fixed step size and an 8th order dormand - prince method with adaptive step size .both methods lead to compatible results within precision limits .the robustness of the observation of a novel intermediate phase between incoherent and coherent ones is assessed by choosing different functional forms for the frequency distribution ( lorentzian , gaussian , uniform ) , and by implementing variations of eq .[ eq : kuramoto ] in which the matrix is weight - normalized for simulations of the hc and degree - normalized for simulations in hmns .no qualitative change in the phenomenology is observed . in the main text ,the chimera index is introduced as a measure of partial synchronization at the community level . at any hierarchical level , a hierarchical network can be divided into a set of communities .following , is defined as follows : ( i ) in the steady ( oscillatory ) state , and for each time , local order parameters for each community are calculated and their variance across communities is stored ; ( ii ) the chimera index is computed as the time average . 
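a direct transcription of this recipe is sketched below ; theta and communities are hypothetical names for a steady-state phase trajectory of shape ( n , t ) and for the module partition at the chosen hierarchical level .

```python
import numpy as np

def chimera_index(theta, communities):
    # local order parameter of each community at each time step ...
    r_local = np.array([np.abs(np.exp(1j * theta[idx]).mean(axis=0))
                        for idx in communities])
    # ... variance across communities at fixed time, averaged over time
    return r_local.var(axis=0).mean()
```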
having at a given hierarchical level implies that local order is only partial as fluctuates , giving rise to a chimera - like state . on the other hand , means that each local order parameter at that level is , and local order has been attained3b - c and 4b - c show that at each ( each color ) a peak in the corresponding marks the onset of the local synchronization processes : as soon as the peak vanishes upon increasing , local order at that level is attained .the sequence of separated peaks in for increasing values of is the direct evidence of a hierarchical synchronization process . in sparse hmns , the lower end of the laplacian spectrumis characterized by an exponential tail in the density of states , known as lifshitz tail . in graphs , lifshitz tails signal the existence of non - trivial heterogeneous localized states governing the asymptotic synchronization dynamics at very large times . in the main textwe have shown that in the absence of frequency heterogeneity , the behavior of the activity is given by .this expression can be evaluated by applying the saddle point method , yielding .\ ] ] substituting , as empirically found in hmns , leads to eq .( [ eq : stretched ] ) , whose square root behavior is confirmed by simulations in fig . 4d. 10 hagmann , p. _ et al ._ mapping the structural core of human cerebral cortex ._ plos biol . _* 6 * , e159 ( 2008 ) .honey , c. j. _ et al ._ predicting human resting - state functional connectivity from structural connectivity .usa _ * 106 * , 20352040 ( 2009 ) .bullmore , e. & sporns , o. complex brain networks : graph theoretical analysis of structural and functional systems ._ * 10 * , 186198 ( 2009 ) . sporns , o. _ networks of the brain_. ( mit press , cambridge , 2010 ) .kaiser , m. a tutorial in connectome analysis : topological and spatial features of brain networks _ neuroimage _ * 57 * , 892907 ( 2011 ) . meunier , d. , lambiotte , r. & and bullmore , e. modular and hierarchically modular organization of brain networks .neurosci . _* 4 * , 200 ( 2010 ) .buzski , g. _ rhythms of the brain_. ( oxford university press , new york , 2006 ) .zhou , c. , zemanov , l. , zamora , g. , hilgetag , c. c. & kurths , j. hierarchical organization unveiled by functional connectivity in complex brain networks .lett . _ * 97 * , 238103 ( 2006 ) .ivkovi , m. , amy , k. & ashish , r. statistics of weighted brain networks reveal hierarchical organization and gaussian degree distribution ._ plos one _ * 7 * , e35029 ( 2012 ) .betzel , r. f. _ et al ._ multi - scale community organization of the human structural connectome and its relationship with resting - state functional connectivity ._ network science _ * 1 * , 353373 ( 2013 ) .zhou , c , zemanov l , zamora - lpez , g. , hilgetag , c. c. & kurths , j. structure function relationship in complex brain networks expressed by hierarchical synchronization ._ new j. phys . _ * 9 * , 178 ( 2007 ) .kaiser , m. , grner , m. & hilgetag , c. c. criticality of spreading dynamics in hierarchical cluster networks without inhibition ._ new j. phys . _* 9 * , 110 ( 2007 ) .kaiser , m. & hilgetag , c c. optimal hierarchical modular topologies for producing limited sustained activation of neural networks .neuroinform . _ * 4 * , 8 ( 2010 ) .rubinov , m. , sporns , o. , thivierge , j. p. & breakspear , m. neurobiologically realistic determinants of self - organized criticality in networks of spiking neurons ._ plos comput .biol . _ * 7 * , e1002038 ( 2011 ). moretti , p. & muoz , m. a. 
griffiths phases and the stretching of criticality in brain networks .commun . _ * 4 * , 2521 ( 2013 ) .vojta , t. rare region effects at classical , quantum and nonequilibrium phase transitions ._ j. phys . a _ * 39 * , r143r205 ( 2006 ) .muoz , m. a. , juhsz , r. , castellano , c. & dor , g. griffiths phases on complex networks .lett . _ * 105 * , 128701 ( 2010 ) .juhsz , r. , dor , g. , castellano , c. , & muoz , m. a. rare - region effects in the contact process on networks .e _ * 85 * , 066125 , ( 2012 ) .bennett , m. v. & zukin , r. electrical coupling and neuronal synchronization in the mammalian brain . _ neuron _ * 41 * , 495511 ( 2004 ) . breakspear , m. & stam , c. j. dynamics of a neural system with a multiscale architecture .b _ * 360 * , 10511074 ( 2005 ) .sompolinsky , h. , crisanti , a. & and sommers , h. j. chaos in random neural networks .* 61 * , 259262 ( 1988 ) .klimesch , w. memory processes , brain oscillations and eeg synchronization .j. psychophysiol ._ * 24 * , 61100 ( 1996 ) .buehlmann , a. & deco , g. optimal information transfer in the cortex through synchronization ._ plos comput .biol . _ * 6 * , e1000934 ( 2010 ). steinmetz , p. n. _ et al ._ attention modulates synchronized neuronal firing in primate somatosensory cortex ._ nature _ * 404 * , 187190 ( 2000 ) .kandel , e. r. , schwartz , j. h. & jessell , t. m. _ principles of neural science_. ( mcgraw - hill , new york , 2000 ) rosenblum , m. g. , pikovsky , a. & kurths , j. _ synchronization a universal concept in nonlinear sciences_. ( cambridge university press , cambridge , 2001 ) .kuramoto , y. self - entrainment of a population of coupled nonlinear oscillators .notes phys . _* 39 * , 420422 ( 1975 ) .strogatz , s. h. from kuramoto to crawford : exploring the onset of synchronization in populations of coupled oscillators ._ physica d _ * 143 * , 120 ( 2000 ) .acebrn , j. a. , bonilla , l. l. , prez vicente , c. j. , ritort , f. , & spigler , r. the kuramoto model : a simple paradigm for synchronization phenomena .phys . _ * 77 * , 137185 ( 2005 ) .arenas , a. , daz - guilera , a. , kurths , j. , moreno , y. & zhou , c. synchronization in complex networks . _rep . _ * 469 * , 93153 ( 2008 ) .cabral , j. , hugues , e. , sporns , o. , & deco , g. role of local network oscillations in resting - state functional connectivity ._ neuroimage _ * 57 * , 130139 ( 2011 ) .breakspear , m. , heitmann , s. & daffertshofer , a. generative models of cortical oscillations : neurobiological implications of the kuramoto model .neurosci . _ * 4 * , 190 ( 2010 ) .gmez - gardees , j. , zamora - lpez , g. , moreno , y. & arenas , a. from modular to centralized organization of synchronization in functional areas of the cat cerebral cortex , _ plos one _ * 5 * , e12313 ( 2010 ) .ott , e. & antonsen , t. m. low dimensional behavior of large systems of globally coupled oscillators ._ chaos _ * 18 * , 037113 ( 2008 ) .skardal , p. s. & restrepo , j. g. hierarchical synchrony of phase oscillators in modular networks .e _ * 85 * , 016208 ( 2012 ) .arenas , a. & prez - vicente , c. j. exact long - time behavior of a network of phase oscillators under random fields .e _ * 50 * , 949956 ( 1994 ) .acebrn , j. a. & bonilla , l. l. asymptotic description of transients and synchronized states of globally coupled oscillators ._ physica d _ * 114 * , 296314 ( 1998 ) .popovych , o. v. , maistrenko , y. l. & tass , p. a. phase chaos in coupled oscillators .e _ * 71 * , 065201 ( 2005 ) .duch , j. & arenas , a. 
community detection in complex networks using extremal optimization .e _ * 72 * , 027104 ( 2005 ) .newman , m. the structure and function of complex networks ._ siam rev . _ * 45 * , 167256 ( 2003 ) .abrams , d , m. & strogatz , s. h. chimera states for coupled oscillators .* 93 * , 174102 ( 2004 ) .arenas , a. , daz - guilera , a. & prez - vicente , c. j. synchronization reveals topological scales in complex networks . _lett . _ * 96 * , 114102 ( 2006 ) .abrams , d , m. , mirollo , r. , strogatz , s. h. & wiley , d. a. solvable model for chimera states of coupled oscillators .lett . _ * 101 * , 084103 ( 2008 ). shanahan , m. metastable chimera states in community - structured oscillator networks ._ chaos _ * 20 * , 013108 ( 2010 ) . wildie , m. & shanahan , m. metastability and chimera states in modular delay and pulse - coupled oscillator networks ._ chaos _ * 22 * , 043131 ( 2012 ) .mcgraw , p. n. & menzinger , m. clustering and the synchronization of oscillator networks .e _ * 72 * 015101(r ) ( 2005 ) .chung , f. r. k. _ spectral graph theory_. ( reg .conf . series . in maths ,ams , providence , 1997 ) .donetti , l. , neri , r. & muoz , m. a. optimal network topologies : expanders , cages , ramanujan graphs , entangled networks and all that ._ j. stat ._ , p08007 ( 2006 ) .wang , s .- j ., hilgetag , c. c. & zhou , c. sustained activity in hierarchical modular neural networks : soc and oscillations .neurosci . _* 5 * , 30 ( 2011 ) .biswal , b. , zerrin yetkin , f. , haughton , v. & hyde , j. functional connectivity in the motor cortex of resting human brain using echo - planar mri ._ * 34 * , 537541 ( 1995 ). deco , g. & and jirsa , v. k. ongoing cortical activity at rest : criticality , multistability , and ghost attractors ._ j. neurosci ._ * 32 * , 33663375 ( 2012 ) .chialvo , d. r. emergent complex neural dynamics .phys . _ * 6 * 744750 ( 2010 ) .shew , w. l. , yang , h. , petermann , t. , roy , r. & plenz , d. neuronal avalanches imply maximum dynamic range in cortical networks at criticality . _j. neurosci ._ * 29 * , 1559515600 ( 2009 ) .haimovici , a. , tagliazucchi , e. , balenzuela , p. & chialvo , d. r. brain organization into resting state networks emerges at criticality on a model of the human connectome ._ * 110 * , 178101 ( 2013 ) .shriki , o. _ et al . _ neuronal avalanches in the resting meg of the human brain ._ j. neurosci . _ * 33 * , 70797090 ( 2013 ) .yang , h. , shew , w. l. , roy , r. & plenz , d. maximal variability of phase synchrony in cortical networks with neuronal avalanches _ j. neurosci . _* 32 * , 10611072 ( 2012 ) .beggs , j. m. the criticality hypothesis : how local cortical networks might optimize information processing .r. soc . a _ * 366 * , 329343 ( 2008 ) .shew , w. l. & plenz , d. the functional benefits of criticality in the cortex ._ neuroscientist _ * 19 * , 88100 ( 2013 ) . , s. , sun , y. , cooper , t. f. & bassler , k. robust detection of hierarchical communities from escherichia coli gene expression data ._ plos comput .* 8 * , e1002391 ( 2012 ) .nykter , m. _ et al ._ gene expression dynamics in the macrophage exhibit criticality .usa _ * 105 * , 18971900 ( 2008 ) .we acknowledge financial support from j. de andaluca , grant p09-fqm-4682 and we thank o. sporns for providing us access to the human connectome data .p.m. and m.a.m . conceived the project , p.v . and p.m. performed the numerical simulations , carried out the analytical calculations and prepared the figures .p.m. and m.a.m . 
wrote the main manuscript text .all authors reviewed the manuscript .the authors declare no competing financial interests .
the spontaneous emergence of coherent behavior through synchronization plays a key role in neural function , and its anomalies often lie at the basis of pathologies . here we employ a parsimonious ( mesoscopic ) approach to study analytically and computationally the synchronization ( kuramoto ) dynamics on the actual human - brain connectome network . we elucidate the existence of a so - far - uncovered intermediate phase , placed between the standard synchronous and asynchronous phases , i.e. between order and disorder . this novel phase stems from the hierarchical modular organization of the connectome . where one would expect a hierarchical synchronization process , we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales , gives rise to frustrated synchronization , metastability , and chimera - like states , resulting in a very rich and complex phenomenology . we uncover the origin of the dynamic freezing behind these features by using spectral graph theory and discuss how the emerging complex synchronization patterns relate to the need for the brain to access in a robust though flexible way a large variety of functional attractors and dynamical repertoires without _ ad hoc _ fine - tuning to a critical point . neuro - imaging techniques have allowed the reconstruction of structural human brain networks , composed of hundreds of neural regions and thousands of white - matter fiber interconnections . the resulting `` human connectome '' ( hc ) turns out to be organized in moduli characterized by a much larger intra than inter connectivity structured in a hierarchical nested fashion across many scales . on the other hand , `` functional '' connections between nodes in these networks have been empirically inferred from correlations in neural activity as detected in electroencephalogram and functional magnetic resonance time series . unveiling how structural and functional networks influence and constrain each other is a task of outmost importance . a few pioneering works found that the hierarchical - modular organization of structural brain networks has profound implications for neural dynamics . for example , neural activity propagates in hierarchical networks in a rather distinctive way , not observed on simpler networks ; beside the usual two phases percolating and non - percolating commonly encountered in models of activity propagation , an intermediate `` griffiths phase '' emerges on the hierarchical hc network . such a griffiths phase stems from the existence of highly diverse relatively - isolated moduli or `` rare regions '' where neural activity remains mostly localized generating slow dynamics and very large responses to perturbations . brain function requires coordinated or coherent neural activity at a wide range of scales , thus , neural synchronization is a major theme in neuroscience . synchronization plays a key role in vision , memory , neural communication , and other cognitive functions . an excess of synchrony results in pathologies such as epilepsy or parkinsonian disease , while neurological deficit of synchronization has been related to autism and schizophrenia . our aim here is to scrutinize the special features of synchronization dynamics as exemplified by the canonical kuramoto model running on top of the best available human connectome mapping . 
this consists of a network of nodes , each of them representing a mesoscopic population of neurons able to produce self - sustained oscillations whose mutual connections are encoded by a symmetric weighted connectivity matrix . the validity of this admittedly simplistic kuramoto model as a convenient tool to explore the generic features of complex brain dynamics at a large scales has been recently emphasized in the literature . here , we uncover the existence of a novel intermediate phase for synchronization dynamics similar in spirit to the griffiths phases discussed above which stems from the hierarchical modular organization of the hc and which gives rise to very complex and rich synchronization dynamical patterns . we identify this novel phase as the optimal regime for the brain to harbor complex behavior , large dynamical repertoires , and optimal trade - offs between local segregation and global integration . the kuramoto dynamics on a generic network ( see for a nice and comprehensive review ) is defined by : , \label{eq : kuramoto}\ ] ] where is the phase at node at time . the intrinsic frequencies accounting for node heterogeneity are extracted from some arbitrary distribution function , are the elements of the connectivity matrix , and is the coupling strength . time delays , noise , and phase frustration could also be straightforwardly implemented . the kuramoto order parameter is defined as , where gauges the overall coherence and is the average global phase . in large populations of well - connected oscillators without frequency dispersion , perfect coherence ( ) emerges for any coupling strength ; on the other hand , frequency heterogeneity leads to a phase transition at some critical value of , separating a coherent steady state from an incoherent one . analytical insight onto this phase transition can be obtained using the celebrated ott - antonsen ( oa ) ansatz , allowing for a projection of the high - dimensional dynamics into an evolution equation for with remarkable accuracy in the large- limit . , for kuramoto dynamics on the hc network for a specific and fixed set of frequencies extracted from a gaussian distribution . a broad intermediate regime separates the incoherent phase ( low ) from the synchronous one ( high ) . in this regime , coherence increases with in an intermittent fashion , and with strong dependence on the frequency realization . b ) raster plot of individual phases ( vertical axis ) showing local rather than global synchrony and illustrating the coexistence of coherent and incoherent nodes ( ) as time runs . c ) for values of ( arrows in the main plot ) . d ) adjacency matrix of the hc network with nodes ordered to emphasize its modular structure as highlighted by a community detection algorithm ( main text ) , keeping the partition into the hemispheres ( dashed lines ) . intra - modular connections ( shown in color ) are dense while inter - modular ones ( grey ) are limited to tiny subsets , acting as interfaces between moduli . integration between hemispheres is mostly carried out by the central moduli . this plot visually illustrates the hierarchical modular organization of the human connectome network.,width=453 ]
portfolio optimization problems with an objective to exceed a given benchmark arise very commonly in portfolio management among both institutional and individual investors . for many hedge funds , mutual funds and other investment portfolios ,their performance is evaluated relative to the market indices , e.g. the s&p 500 index , and russell 1000 index . in this paper , we consider the problem of maximizing the outperformance probability over a random benchmark through a dynamic trading with a fixed initial capital .specifically , given an initial capital and a random benchmark , how can one construct a dynamic trading strategy in order to maximize the probability of the success event " where the terminal trading wealth exceeds , i.e. ? in the existing literature ,outperformance portfolio optimization has been studied by among others .it has also been studied in the context of quantile hedging by fllmer and leukert . in particular , fllmer and leukertshow that the quantile hedging problem can be formulated as a _ pure _ hypothesis testing problem . in statistical terminology, this approach seeks to determine a _test _ , taking values 0 or 1 , that minimizes the probability of type - ii - error , while limiting the probability of type - i - error by a pre - specified acceptable significance level .the maximal success probability can be interpreted as the _ power _ of the test .the fllmer - leukert approach permits the use of an important result from statistics , namely , the neyman - pearson lemma ( see , for example , ) , to characterize the optimal success event and determine its probability .on the other hand , the outperformance portfolio optimization can also be viewed as a special case of shortfall risk minimization , that is , to minimize the quantity for some specific risk measure . as is well known ( see ), the shortfall risk minimization with a convex risk measure can be solved via its equivalent _ randomized _ hypothesis testing problem .in fact , the problem to maximize the success probability is equivalent to minimizing the shortfall risk with respect to the risk measure defined by for any random variable .however , this risk measure does not satisfy either convexity or continuity .hence , a natural question is : 1 .is the outperformance optimization problem equivalent to the randomized hypothesis testing ? in section [ sec:31 ], we show that the outperformance portfolio optimization in a general incomplete market is equivalent to a pure hypothesis testing .moreover , we illustrate that the outperformance probability , or equivalently , the associated pure hypothesis testing value , can be strictly smaller than the corresponding randomized hypothesis testing ( see examples [ exm : pr ] and [ exm : bin ] ) .therefore , the answer to ( q ) is negative in general .this also motivates us to analyze the sufficient conditions for the equivalence of pure and randomized hypothesis testing problems ( see theorem [ thm : np ] ) . 
in turn , our result is applied to give the sufficient conditions for the equivalence of outperformance portfolio optimization and the corresponding randomized hypothesis testing problem ( see theorem [ thm : qhic ] ) .the main benefit of such an equivalence is that it allows us to utilize the representation of the randomized testing value to compute the optimal outperformance probability .moreover , the sufficient conditions established herein are amenable for the verification and are applicable to many typical finance markets .we provide detailed illustrative examples in section [ sec : cm ] for a complete market and section [ sect : stochvol ] for a stochastic volatility model . among other results ,we provide an explicit solution to the problem of outperforming a leveraged fund in a complete market . in a stochastic volatility market , we show that , for a constant or stock benchmark , the investor may optimally assign a zero volatility risk premium , which corresponds to the minimal martingale measure ( mmm ) .this in turn allows for explicit solution for the success probability in a range of cases in this incomplete market . with the general form of benchmark, the value function can be characterized by hjb equation in the framework of stochastic control theory .the paper is structured as follows . in section [ sect - hypo ] ,we analyze the generalized composite pure and randomized hypothesis testing problems , and study their equivalence .then , we apply the results to solve the related outperformance portfolio optimization in section [ sect - finance ] , with examples in both complete and incomplete diffusion markets .section [ sec : conclusions ] concludes the paper and discusses a number of extensions .finally , we include a number of examples and proofs in the appendix .in the background , we fix a complete probability space . denote by ] and respectively , and are denoted by /\mathcal{b}([0,1])\ } \quad \text{and } \quad \mathcal{i } = \{x : \omega/\mathcal{f } \mapsto \{0,1\}/2^{\{0,1\}}\}.\ ] ] in addition , and are two given collections of non - negative -measurable random variables .first , we consider a randomized composite hypothesis testing problem . for , define \label{hypt_1}\\ \text { subject to } & \sup_{h\in \mathcal{h } } \mathbb{e } [ hx]\le x.\label{hypt_2}\end{aligned}\ ] ] from the statistical viewpoint , and correspond to the collections of alternative hypotheses and null hypotheses , respectively . the solution can be viewed as the most powerful test , and is the power of , where is the significance level or the size of the test .for any set of random variables , we define a collection of randomized tests by \le x , \\forall h \in \mathcal{\tilde h}\}.\ ] ] then , the problem in ( [ hypt_1])-([hypt_2 ] ) can be equivalently expressed as .\ ] ] when no ambiguity arises , we will denote for simplicity .for the upcoming results , we denote the convex hull of by , and the closure ( with respect to the topology endowed by the convergence in probability ) of by . also , we define the set \le x , \ \forall x \in \mathcal{x}_x^\mathcal{h}\}.\ ] ] from the definitions together with fatou s lemma , it is straightforward to check that is convex and closed , containing . furthermore , we observe that for an arbitrary satisfying . 
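a toy numerical illustration of the possible gap between the randomized and the pure testing values , in the spirit of examples [ exm : pr ] and [ exm : bin ] but with invented numbers , a finite sample space and a single pair of weights g and h , is the following ; with one budget constraint the randomized optimum reduces to a fractional knapsack , while the pure optimum is restricted to indicator sets .

```python
from itertools import combinations

g = [0.5, 0.3, 0.2]      # "alternative" weights: E[gX] is maximised
h = [0.6, 0.5, 0.4]      # "null" weights: the constraint is E[hX] <= budget
budget = 0.5

# randomized test: greedy fractional knapsack, ordered by the likelihood ratio g/h
order = sorted(range(len(g)), key=lambda i: g[i] / h[i], reverse=True)
remaining, v_rand = budget, 0.0
for i in order:
    take = min(1.0, remaining / h[i])
    v_rand += take * g[i]
    remaining -= take * h[i]
    if remaining <= 0:
        break

# pure test: brute force over indicator (0/1) selections
v_pure = max(sum(g[i] for i in S)
             for k in range(len(g) + 1)
             for S in combinations(range(len(g)), k)
             if sum(h[i] for i in S) <= budget)

print(v_rand, v_pure)    # ~0.417 versus 0.3: the pure value is strictly smaller
```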
hence , the randomized testing problem in - , and therefore , in will stay invariant if is replaced by as such .more precisely , we have [ prop : h ] let be an arbitrary set satisfying .then , in is equivalent to .\end{aligned}\ ] ] in particular , one can take or .this randomized hypothesis testing problem is similar to that studied by cvitani and karatzas , except that and in - and are not necessarily the radon - nikodym derivatives for probability measures . in this slight generalization , ] for each , resulting in a confidence level of ] , and is convex and closed .the following theorem gives the characterization of the solution for .[ thm : rnp ] under assumption [ a:1 ] , there exists satisfying /\mathcal{b}([0,1]),\ ] ] \le \mathbb{e}[\hat h \hat x ] = x , \quad \forall h \in \mathcal{h},\ ] ] and \le \mathbb{e } [ g \hat x ] , \quad \forall g \in \mathcal{g}.\ ] ] in particular , and satisfying - can be chosen to be measurable with respect to , the smallest -algebra generated by the random variables in .moreover , of is given by = \inf_{a \ge 0 } \big\{xa + \inf_{\mathcal{g } \times co(\mathcal{h } ) } \mathbb{e } [ ( g- a h)^+]\big\},\]]which is continuous , concave , and non - decreasing in .furthermore , and respectively attain the infimum of ,\quad \hbox{and}\quad xa + \mathbb{e } [ ( g- a h)^+].\ ] ] first , we apply the equivalence between and from lemma [ prop : h ] , and the fact that . also , is convex and closed . if there is such that almost surely in ,then in probability and .therefore , we apply the procedures in ( * ? ? ?* proposition 3.2 , theorem 4.1 ) to obtain the existence of satisfying - , the optimality of , and the representation = \inf_{a \ge 0 } \{xa + \inf_{\mathcal{g } \times \overline{co(\mathcal{h } ) } } \mathbb{e } [ ( g- a h)^+]\}.\]]specifically , we replace the two probability density sets in by the -bounded sets and for our problem , and their by . at the infimum , in becomes ( see ( * ? ? ?* proposition 3.2(i ) ) ) \}.\ ] ] note that belongs to but not necessarily to .nevertheless , there exists a sequence satisfying in probability . by the fact that any subsequence contains almost surely convergent subsequence , and together with the dominated convergence theorem , it follows that \to \mathbb{e}[(\hat g - \hat a \hat h)^+] ] , and , then precisely , the last inclusion above is due to the bipolar theorem .moreover , may be not solid , and strictly smaller than the bipolar , see example [ exm : pr ] . according to theorem [ thm : rnp ] ,if the random variable in can be assigned as an indicator function satisfying - , then the associated solver of will also be an indicator , and therefore , a _ pure test _ !this leads to an interesting question : when does a pure test solve the randomized composite hypothesis testing problem ? motivated by this , we define the pure composite hypothesis testing problem : \\ \text { subject to}\quad & \sup_{h\in \mathcal{h } } \mathbb{e } [ hx]\le x , \quad x>0.\end{aligned}\ ] ] this is equivalent to solving ,\]]where \le x , \\forall h \in \mathcal{h}\} ] .in fact , is not concave and continuous , while is . [ remark : concavemajor]in example [ exm : pr ] , turns out to be the smallest concave majorant of . 
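the dual representation above is easy to probe numerically. the sketch below reuses the toy null/alternative pair from the earlier snippet (so the two hypothesis classes are singletons and finite sums stand in for the expectations; all numbers are illustrative) and evaluates V(x) = inf_{a>=0} { x a + E[(g - a h)^+] } on a grid of a. the printed values are non-decreasing and concave in x, and at x = 0.4 the dual value reproduces the randomized power 0.8 computed before.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])   # same toy masses as above (assumed values)
g = np.array([0.1, 0.3, 0.6])
a_grid = np.linspace(0.0, 20.0, 200001)
penalty = np.maximum(g - np.outer(a_grid, h), 0.0).sum(axis=1)   # E[(g - a h)^+]

def V(x):
    return float(np.min(x * a_grid + penalty))

for x in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"x = {x:.1f}   V(x) = {V(x):.3f}")
# -> 0.000, 0.600, 0.800, 0.920, 0.960, 1.000 : non-decreasing and concave in x
```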
however , this is not always true .we provide a counter - example in appendix [ sect - counter - major ] .if there is a pure test that solves both the pure and randomized composite hypothesis testing problems , then the equality must follow .an important question is : when does this phenomenon of equivalence occur ?[ lem : b1 ] let be given by theorem [ thm : rnp ] .then , in must satisfy a. if = x ] , then . in view of the existence of in theorem [ thm : rnp ] and its form in , specified in each case above is the unique choice that satisfies = x ], is a random variable taking value in ] , the choice of yields a non - pure test ( see ) .nevertheless , our next lemma shows that , under an additional condition , one can alternatively choose an indicator in place of and obtain a pure test .[ lem : npg ] assume that and are singletons , and there exists an -measurable random variable , such that the function , \quad \forall y\in \mathbb{r},\ ] ] is continuous .then there exists a pure test that solves both problems and .if satisfies either ( i ) or ( ii ) of corollary [ lem : b1 ] , then corollary [ lem : b1 ] implies that must be an indicator .next , we discuss the other case : when satisfies > x > \mathbb{e}[\hat h i_{\{\hat g > \hat a \hat h\}}] ] for ] almost surely in . hence , we have = \mathbb{e } [ h \bar x] ] for all , and this implies satisfies both and . as a consequence , the indicator indeed solves both pure and randomized test by the definition .the fact that an independent random variable appears in the equivalence between pure and randomized testing problems is quite natural . indeed , in hypothesis testing, statisticians may interpret the randomized test by a pure test combined with an independent random variable drawn from a uniform distribution . in lemma[ lem : npg2 ] , we have introduced the uniform random variable to the same effect .next , we summarize a number of sufficient conditions that are amenable for verification .[ thm : np ] suppose that one of the following conditions is satisfied : 1 . and are singletons , and there exists an -measurable random variable with a continuous c.d.f . with respect to , 2 .there exists a continuous -measurable random variable independent of and , 3 . for all ] see definition 8.1.1 of .we denote the set of all admissible strategies by .the benchmark is modeled by a non - negative random terminal variable . the smallest super - hedging price( see e.g. ) is defined as ,\ ] ] which is assumed to be finite . in other words, is the smallest capital needed for for some strategy .note that with less initial capital the success probability for all .our objective is to maximize over all admissible trading strategies the success probability with .specifically , we solve the optimization problem : the second equality follows from the monotonicity of the mapping .clearly , is increasing in .moreover , if -a.s . , then due to the non - negative wealth constraint .* scaling property . *if the benchmark is scaled by a factor , then what is its effect to the success probability , given any fixed initial capital ? to address this , we first define [ prop - scale1 ] for any fixed , the success probability has the following properties : {1.00,1.00,1.00}{ ......... 
}\\ & ( iii)~\lim_{\beta \to \infty } \widetilde{v}(x;\beta)= \p\{f=0\},\label{limitsbeta}\\ & ( iv)\,~\label{success1}\widetilde{v}(x;\beta)=1,~ \text { for } \quad 0\le \beta \le \frac{x}{f_0}.\end{aligned}\ ] ] first , we observe that .therefore , increasing means reducing the initial capital for beating the same benchmark , so ( i ) holds . substituting with , we obtain ( ii ) . to show ( iii ) , we write focusing on the second term of , it suffices to consider an arbitrary strictly positive benchmark .we deduce from ( i ) and that this together with implies the limit .lastly , when the initial capital exceeds the super - hedging price of units of , i.e. , the success probability and hence ( iv ) holds . in other words , for any initial capital , the success probability stays constant whenever the initial capital and benchmark are simultaneously scaled by . to see this ,suppose the optimal strategy for beat one unit of the benchmark is .if the investor wants to outperform the benchmark , then he can trade using the same strategy in separate accounts and will achieve the same level of success probability as in the single benchmark case .proposition [ prop - scale1 ] points out that this strategy is optimal for any , and hence , there is no _ economy of scale_. for any fixed , the success probability is not convex or concave in .this can be easily inferred from the properties of shown in proposition [ prop - scale1 ] , and is illustrated in figure [ fig1 ] below .next , we show that the portfolio optimization problem admits a dual representation as a pure hypothesis testing problem .such a connection was first pointed out by fllmer and leukert in the context of quantile hedging .[ lem : npi ] the value function of is equal to the solution of a pure hypothesis testing problem , that is , where \le x. \end{aligned}\ ] ] furthermore , if there exists that solves , then , and the associated optimal strategy is a super - hedging strategy with -a.s .first , if we set and , then the right - hand side of resembles the pure hypothesis testing problem in . 1 .first , we prove that . for an arbitrary ,define the success event .then , ] is satisfied .consequently , for any , we have . since by , we concludenow , we show the reverse inequality .let be an arbitrary set satisfying the constraint implies a super - replication by some such that . in turn , this yields .therefore , by .thanks to the arbitrariness of , holds . in conclusion , . moreover , if a set satisfies that , then the corresponding strategy that super - hedges is the solution of . applying our analysis in section [ sect - equivhypo ] , we seek to connect the outperformance portfolio optimization problem , via its pure hypothesis testing representation , to a randomized hypothesis testing problem .we first state an explicit example ( see ) where the outperformance portfolio optimization is equivalent to the pure hypothesis testing by proposition [ lem : npi ] , but not to the randomized counterpart .[ exm : bin ] consider , , and the real probability given by .suppose stock price follows one - period binomial tree : the benchmark at .we will determine by direct computation the maximum success probability given initial capital . to this end, we notice that the possible strategy with initial capital is shares of stock plus dollars of cash at .then , the terminal wealth is due to the non - negative wealth constraint a.s . , we require that . now ,we can write as as a result , for different values of initial capital we have : 1 . 
if , then and which implies both indicators are zero , i.e. .if , then we can take , which leads to , i.e. . on the other hand , .from this and , we conclude that .if , then we can take , and .with reference to the value functions ( randomized hypothesis testing ) and ( pure hypothesis testing ) from example [ exm : pr ] , we conclude that . as in theorem[ thm : np ] , we now provide the sufficient conditions for the equivalence between the outperformance portfolio optimization and the randomized hypothesis testing .[ thm : qhic ] suppose that one of two conditions below is satisfied : 1 . is a singleton , and there exists a -measurable random variable with continuous cumulative distribution function under ; 2 . for all , the minimizer ] , which is a special case of . given an initial capital , the investor faces the optimization problem : [ prop : gbm ] is a continuous , non - decreasing , and concave function in .it admits the dual representation : \}.\end{aligned}\ ] ] first , proposition [ lem : npi ] implies ( the pure hypothesis testing ) .also , since is a singleton , and has continuous c.d.f .with respect to , the first condition of theorem [ thm : qhic ] yields the equivalence of pure and randomized hypothesis testings , i.e. . for computing the value of in this complete market model , proposition [ prop : gbm ] turns the original stochastic control problem into a static optimization ( over ) in . in the dual representation, the expectation can be interpreted as pricing a claim under measure , namely , .\ ] ] hence , is the legendre transform of the price function evaluated at . in this section, we assume that and are constant , so is a geometric brownian motion ( gbm ) .we consider a class of benchmarks of the form , for .this includes the constant benchmark ( ) , as well as those based on multiples of the traded asset ( ) and its power .one interpretation of the power - type benchmarks is in terms of leveraged exchange traded funds ( etfs ) .etfs are investment funds liquidly traded on stock exchanges .they provide leverage , access , and liquidity to investors for various asset classes , and typically involve strategies with a constant leverage ( e.g. double - long / short ) .they also serve as the benchmarks for fund managers . since its introduction in the mid 1990 s ,the etf market has grown to over 1000 funds with aggregate value exceeding ] .then , we apply formula to obtain the success probability for different values of capital and leverage . as shown , for every fixed , moving the leverage further away from zero increases the success probability . in other words , for any fixed success probability , highly ( long / short ) leveraged etfs require lower initial capital for the outperformance portfolio .the comparison between long and short etfs with the same magnitude of leverage depends on the sign of .in particular , we observe from and that when the success probability is the same for , and the surface is symmetric around . in a related study , fllmer and leukert ( * ? ? ? * sect .3 ) considered quantile hedging a call option in the black - scholes market .their solution method involves first conjecturing the form of the success events under two scenarios .alternatively , one can also study the quantile hedging problem via randomized hypothesis testing . 
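as a numerical companion to the one-period binomial example above (whose exact parameter values are not legible in this copy, so the numbers below are assumptions: s0 = 1, up/down factors 2 and 1/2, p(up) = 0.6, zero interest rate and a constant benchmark f = 1), the brute-force search below maximizes the success probability over buy-and-hold strategies with non-negative terminal wealth and recovers the three regimes of the example: probability 0 for small capital, p(up) for intermediate capital, and 1 once the super-hedging price is reached.

```python
import numpy as np

S0, u, d, p, F, r = 1.0, 2.0, 0.5, 0.6, 1.0, 0.0   # assumed one-period binomial data

def max_success_prob(x, grid=np.linspace(-10.0, 10.0, 40001)):
    """max_b P(X_1 >= F) over strategies holding b shares and (x - b*S0) in cash,
    subject to non-negative terminal wealth in both states."""
    X_up = grid * u * S0 + (x - grid * S0) * (1 + r)
    X_dn = grid * d * S0 + (x - grid * S0) * (1 + r)
    feasible = (X_up >= 0) & (X_dn >= 0)
    prob = p * (X_up >= F) + (1 - p) * (X_dn >= F)
    return float(np.max(np.where(feasible, prob, -np.inf)))

for x in (0.2, 0.5, 1.1):
    print(f"x = {x:.2f}  ->  max success probability = {max_success_prob(x):.2f}")
# -> 0.00, 0.60 and 1.00 : the three regimes discussed in the example
```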
from wecan compute the maximal success probability from \} ] next , we proceed to prove proposition [ prop - stochvol ] .applying theorem [ thm : qhic ] , the associated randomized hypothesis testing is given by \},\ ] ] where according to and , note that for , can be rewritten as where and is a standard brownian motion defined by hence is in fact a -martingale for . in view of lemma [prop : compz ] , for any fixed , it is optimal to take .since is bounded positive process away from zero , applying proposition [ prop : tc ] and girsanov theorem , we have , and hence holds for any constant and . to this end, we verified the second condition of theorem [ thm : qhic ] , and conclude together with proposition [ lem : npi ] .proposition [ prop - stochvol ] shows that among all candidate emms the mmm is optimal for .in other words , when the benchmark is a constant or the stock , the objective to maximize the outperformance probability induces the investor to assign a zero risk premium ( ) for the second brownian motion under the stochastic factor model - .interestingly , this is true for all choices of , , , and for .furthermore , if is constant , then the expectation in and hence the success probability can be computed explicitly .suppose for some constant .then , is given by where and are given in and in .more generally , let us consider a stochastic benchmark in the form for some measurable function .the outperformance portfolio optimization is given by where the notation .we define : ,\ ] ] where = \mathbb{e}[\ \cdot\ | s_t = s , y_t = y] ] is always non - negative , and so we have .\ ] ] so , we conclude by arbitrariness of . on the other hand ,if we take of in the above , then it yields equality , instead of inequality .\ ] ] by definition , we have right - hand side is always greater than or equal to , and this implies .applying - , the optimizer for is derived from of theorem [ thm : rnp ] with and . in turn , this yields and via . under quite general conditions, one can show that of is the unique solution of hjb equation in the viscosity sense .[ a:2 ] , , , , and are all lipschitz continuous .[ prop : vis ] under assumption [ a:2 ] , the dual function in is the unique bounded continuous viscosity solution of with datum for all .first , it can be shown that is the viscosity sub - solution ( resp .supersolution ) using the feynman - kac formula on its super ( resp .sub ) test functions . for details, we refer to the similar proof in ( * ? ? ?* appendix ) . for uniqueness, we transform the domain from to , by taking and defining . then , is equivalent to where now put in the standard form , the uniqueness of solution , and thus , follows from the comparison result in ( * ? ? ?* theorem 4.1 ) .we have studied the outperformance portfolio optimization problem in complete and incomplete markets .the mathematical model is related to the generalized composite pure and randomized hypothesis testing problems .we established the connection between these two testing problems and then used it to address our portfolio optimization problem .the maximal success probability exhibits special properties with respect to benchmark scaling , while the outperformance portfolio optimization does not enjoy economy of scale . in various cases, we obtained explicit solutions to the outperformance portfolio optimization problem . in the stochastic volatility model, we showed the special role played by the minimal martingale measure . 
with the general benchmark, hjb characterization is available for the outperformance probability .an alternative approach is the characterization via bsde solution for its dual representation ( see ) .there are a number of avenues for future research .most naturally , one can consider quantile hedging under other incomplete markets , with specific market frictions and trading constraints .another extension involves claims with cash flows over different ( random ) times , rather than a payoff at a fixed terminal time , such as american options and insurance products .on the other hand , the composite nature of the hypothesis testing problems lends itself to model uncertainty . to illustrate this point ,let s consider a trader who receives from selling a contingent claim with terminal random payoff ] .in fact , we can convert this problem into a randomized composite hypothesis testing problem as in . to this end , we define and then write , where solves \notag\\ \text { subject to } & \sup_{z\in \mathcal{z } } \mathbb{e}[zx ] \le \frac{k - x}{k}.\notag\end{aligned}\ ] ] following the analysis in this paper , one can obtain the properties of the value function as well as the structure of the optimal solution .finally , the outperformance portfolio optimization problem in section [ sect - finance ] is formulated with respect to a fixed reference measure .this corresponds to applying the theoretical results of section 2 with the set ; cf .the proofs of proposition [ lem : npi ] and theorem [ thm : qhic ] .it is also possible to incorporate model uncertainty by replacing the reference measure by a class of probability measures . in this setup ,the portfolio optimization problem becomes is a special case of the hypothesis testing problems discussed in section 2 , where the original set can be interpreted as the set containing the radon - nikodym densities with .for related studies on the robust quantile hedging problem , we refer to .the authors would like to thank two anonymous referees for their insightful remarks , as well as jun sekine , birgit rudloff and james martin for their helpful discussions .tim leung s work is partially supported by nsf grant dms-0908295 .qingshuo song s work is partially supported by srg grant 7002818 and grf grant cityu 103310 of hong kong .[ exm : coh ] let ] for all .thus is the unique minimizer of \} ], then there also exists a counter - example such that minimizes \} ] , is the unique minimizer of . define the function ] and is finite , and , there exists a finite that minimizes . now , suppose is a minimizer of .then , it follows that , , which leads to - \inf_{\mathcal{g } \times \overline{co(\mathcal{h } ) } } \mathbb{e } [ ( g- ah)^+]\\ \\ \displaystyle & \ge \mathbb{e}[\tilde g ] - \displaystyle \inf_{\overline{co(\mathcal{h } ) } } \mathbb{e } [ ( \tilde g- ah)^+ ] \\ \\ & \ge \displaystyle a \sup_{\overline{co(\mathcal{h } ) } } \mathbb{e } [ h i_{\{\tilde g \ge ah\ } } ] \ge \displaystyle a \sup_{\mathcal{h } } \mathbb{e } [ h i_{\{\tilde g \ge ah\ } } ] .\end{array } \end{aligned}\ ] ] in , minimizes ] implies that it is a half - plane bounded by , which passes since .hence , we have where . 
in summary ,the values are [ ex : dig ] let ] .define by ).\ ] ] let be of and , and \to \mathbb{r} ] .the pure hypothesis testing problem is \ ] ] subject to \le 1/2.\ ] ] direct computation gives the success set and the value of pure hypothesis test .on the other hand , the randomized hypothesis testing problem \ ] ] subject to \le 1/2.\ ] ] we find that and solve this randomized hypothesis test with the optimal value .this shows that the values of pure and randomized hypothesis tests are different . if one were to construct an indicator version of the randomized test as in , namely , although this test still satisfies = 1/2 ] . on the probability space with filtration , we denote by to be a standard brownian motion .let be a -martingale defined by .\ ] ] where is bounded -adapted process . since is a continuous process , by levy s zero one law , we have therefore, it is enough to show that there exists such that note that , the martingale has the same distribution as a time - changed brownian motion starting from state . together with , we have for some standard brownian motion that since is independent of , and strictly less than , we can simply take .h. fllmer and m. schweizer : hedging of contingent claims under incomplete information , applied stochastic analysis , stochastics monographs ( m.h.a .davis and r.j .elliot , eds . ) , vol . 5 , gordon and breach , london / new york , 1990 , pp.389 414 y. giga , s. goto , h. ishii , and m .- h .sato : comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains , indiana univ .j. * 40 * , 443470 ( 1991 ) t. leung , q. song , and j. yang : generalized hypothesis testing and maximizing the success probability in financial markets , proceedings of the international conference on business intelligence and financial engineering ( icbife ) , 2011 .
we study the portfolio problem of maximizing the outperformance probability over a random benchmark through dynamic trading with a fixed initial capital . under a general incomplete market framework , this stochastic control problem can be formulated as a composite pure hypothesis testing problem . we analyze the connection between this _ pure _ testing problem and its _ randomized _ counterpart , and from the latter we derive a dual representation for the maximal outperformance probability . moreover , in a complete market setting , we provide a closed - form solution to the problem of beating a leveraged exchange traded fund . for a general benchmark under an incomplete stochastic factor model , we provide the hamilton - jacobi - bellman pde characterization for the maximal outperformance probability . * keywords : * portfolio optimization , quantile hedging , stochastic benchmark , hypothesis testing , neyman - pearson lemma * jel classification : * g10 , g12 , g13 , d81 * mathematics subject classification : * 60h30 , 62f03 , 62p05 , 90a09
geophysical flows are dangerous natural hazards occurring mostly in mountainous terrain . the most apparent phenomena of this category are debris flows , which originate when heavy rainfall mobilizes a large amount of debris .the resulting mixture comprises water , cohesive sediments , organic matter , silt , sand and in many cases also stones of different sizes the resulting rheological behavior is known to have a wide variability , which makes numerical studies an essential tool to support experimental investigations .full - scale simulations of geophysical flows are very scarce , since they require a framework that efficiently manages complicated boundary conditions , as well as a powerful and flexible fluid solver .moreover , the non - newtonian nature of the material , and in some cases its multiple phases , pose more challenges .traditional solvers are known to have troubles in tackling this problem , and nowadays alternative solution are sought by the community . the lattice boltzmann method ( lbm ) is becoming increasingly popular and is today considered a valid alternative for categories of flows where traditional solvers exhibit disadvantages , like multiphase fluids , flows through porous media , irregular geometries , and free - surface realizations .after reviewing the most commonly used rheological model for flowing geomaterials in sec .[ rheology ] , we offer an essential overview of the method in sec .[ lbm ] , together with a simple but effective formulation for the simulation of geophysical flows . in sec .[ simulation ] and [ obstacle ] examples are given .the rheology of geophysical flow materials is a debated issue in the field , due to the extreme variability in natural material parameters and the presence of multiple phases , complicating the classification .most models therefore adopt simplified solutions based on single - phase descriptions .this can either be a frictional material or a viscoplastic fluid .the former is used for rock and snow avalanches , while the latter is preferred for mudflows and viscous debris flows . for certain categories of geophysical flows , however , a single - phase approach is insufficient to capture the physics of the phenomena .debris flows are a typical example of this , because granular and viscous behavior interact , giving rise to unexpected structures and a localization of rheological properties .a continuum - continuum coupling for granular and fluid phase is possible , but is incapable of capturing the localization of flow properties , which is widely recognized to be a key feature of debris flows .a discrete - continuum approach would of course be able to provide a detailed description , but development of this sort of coupling has been slowed by its demanding computational cost .this is currently challenged , however , by the maturity reached by alternative solvers like smoothed particle hydrodynamics , the material point method , or lbm , which are more flexible in managing complex boundary conditions than traditional tools like finite differences and finite volumes . in such methods ,the granular phase is treated by a separate solver and , for this reason , the fluid model can focus on the nature of the material .this is the reason behind our choice to adopt lbm with a purely viscoplastic rheological law , an approach that can offer : * an efficient framework for the simulation of geophysical flows of plastic nature , where the complexity of the boundary does not influence the performance . 
* a convenient environment for the coupling with a discrete method .this option opens future chances for full realizations of multiphase flows .regarding the specific rheological law , we adopt the bingham model , which is widely used to describe plastic fluids due to its conceptual simplicity .it reads : where and denote yield stress and plastic viscosity .an analogous way to write the law is through an analogy with newtonian flow .one defines a parameter , the apparent viscosity , which proportionally relates stress and rate of strain and is treated as a variable . in the case of a bingham fluid , takes the form where the apparent viscosity ( from now on , for simplicity , called viscosity ) , diverges when , which will require special care in the solver .we are now ready to introduce lbm in the next section , and to incorporate this constitutive law in sec .lbm has lately emerged as an attractive alternative to traditional fluid solvers , mainly due to its high - level performance and the predisposition to parallelization .lbm is also suitable to the solution of problems involving complex boundary conditions .it is beyond the scope of this chapter to give a complete description of the method .the reader can refer to refs . for a comprehensive review .we will focus on the aspects of the formulation that need to be modified in order to successfully reproduce debris flows . in lbm , the fluid is described using a distribution function and a set of discrete velocities .density and velocity of the fluid are computed as the first two moments of the distribution function the evolution of is governed by the lattice - boltzmann equation where is the operator that represents the effects of inter - particle collisions in the fluid .a common way to approximate the otherwise complex expression of is the bhatnagar - gross - krook operator , which relaxes the distribution function to a thermodynamic equilibrium .it can be written as and features a constant , the relaxation time , which is related to the kinematic viscosity of the fluid as with this formulation , and with the setting of a coherent lattice , lbm can produce realizations of fluid dynamics in analogy to the navier - stokes equations .the method is accurate in the limit of small mach number , practically with denoting the lattice speed of sound .we will now describe two additions to the model necessary for the simulation of geophysical flows : a non - newtonian rheology and a free - surface treatment. imposed by the method. therefore , also the viscosity is limited.,scaledwidth=60.0% ] the lbm described in the previous section yields , after the chapman - enskog expansion , the navier - stokes equation for newtonian fluids .a simple way to upgrade the method to more general formulations is offered by a local treatment of the relaxation time .any rheological law that can be approximated as with , is suitable for this approach .the relaxation time can in fact be directly related to the viscosity through equation [ mu ] , obtaining ad hoc formulations for different rheological laws .the bingham fluid , for example , can be written as this type of formulation requires the computation of the shear rate tensor , which can be done easily in lbm directly form the distribution functions and the magnitude can be extracted as the limitation of this approach lies in the range of values given to the relaxation time by equation [ bingham ] .accuracy in lbm is guaranteed as long as .reasonable values for these limits are and . 
therefore , also the viscosity , which is linearly linked to the relaxation time , is subjected to the same restrictions : .the following considerations are thus necessary : * the fluid that reaches the maximum allowed value of is considered to be in a plastic state .however , with the proposed scheme , the fluid never stops its motion , but rather flows at a much slower rate .the ratio between and determines the effectiveness of this approach . with the proposed limit values for , . * the best approximation of a bingham fluid is obtained when , because the lower limitation on has no effect .however , an eventual transition to turbulent regime can happen when simulating diluted flows , and therefore the value of must be raised to avoid instabilities . in case , the approximation of the bingham constitutive model becomes less accurate . in order to simulate geophysical flows on realistic geometries , we need to include the boundary conditions given by the channel bed and the interface of the flow with air . while the former can be implemented as a standard no - slip boundary condition , as in ref . , the latter is a less common practice in lbm .the free - surface is represented through a classification of the lattice nodes in three categories : liquid , interface and gas nodes .the governing parameter is the liquid fraction : the liquid fraction of a node evolves according to the streaming of the distribution function given by equation [ newfunctions ] as where and represent the distribution function streaming respectively in and out of the node , and is a parameter that depends on whether the distributions are exchanged with a fluid node or another interface node .this method conserves mass exactly , and ensures a smooth evolution of the surface .further details are found in ref .the full - scale simulation of a plastic geophysical flow is shown in this section . mimicking the real geometry of a small valley ,the simulation features a cylindrical channel inclined at with respect to the horizontal , and a flat deposition area at its bottom ( fig .[ geometry ] ) .the total volume of the flowing material is and is fixed , i.e. neither entrainment nor deposition are modeled . while very big events can be of the order of , the size of the most frequent type of geophysical flows lies in the range of , which is big enough to endanger humans and infrastructures .therefore our simulation proposes a realistic scenario , even though not a particularly dangerous one .the fluid has density and follows a bingham - like rheological law , like the one proposed in sec .yield stress and kinematic plastic viscosity are respectively and , relating the simulated system to a very dense mudflow or to a debris flow whose granular phase has been homogenized into the fluid , therefore increasing the bulk viscosity . fig .[ flowvisualization ] shows how the fluid free surface evolves in time .the fluid is quickly sheared by the effect of gravity and moves until an equilibrium is reached in the deposition area , where the viscosity increases .this technique can be used to estimate the deposition area of the material after an event , and to support the design of hazard maps on real terrain .[ characteristics ] shows the evolution of the maximum velocity in the fluid and of the plasticization level of the material , which are the useful parameters to determine the status of the flow .the color contour shows the velocity at the free surface ( a ) and in the longitudinal section ( b ) . 
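for concreteness, the local relaxation-time update at the heart of the scheme described above can be sketched in a few lines. the snippet below is a schematic rendering, not the authors' implementation: lattice units (dx = dt = 1, c_s^2 = 1/3), the clipping bounds, and the lagged use of the previous relaxation time in the strain-rate computation are all assumptions, and conventions for recovering the strain rate from the non-equilibrium populations vary between lbm formulations.

```python
import numpy as np

CS2 = 1.0 / 3.0                     # lattice speed of sound squared (lattice units)
TAU_MIN, TAU_MAX = 0.51, 2.5        # assumed admissible range for the relaxation time

def apparent_viscosity(gamma_dot, nu_p, tau_y, rho, eps=1e-12):
    """bingham apparent kinematic viscosity nu = nu_p + tau_y / (rho * |gamma_dot|)."""
    return nu_p + tau_y / (rho * max(gamma_dot, eps))

def local_relaxation_time(gamma_dot, nu_p, tau_y, rho):
    """map the apparent viscosity to a bgk relaxation time, tau = nu / cs^2 + 1/2, and
    clip it; nodes stuck at TAU_MAX behave as (quasi-)plastic, slowly creeping fluid."""
    tau = apparent_viscosity(gamma_dot, nu_p, tau_y, rho) / CS2 + 0.5
    return min(max(tau, TAU_MIN), TAU_MAX)

def shear_rate_magnitude(f_neq, c, rho, tau_prev):
    """|gamma_dot| = sqrt(2 S:S), with the strain rate S recovered from the
    non-equilibrium populations (d2q9: f_neq has shape (9,), c has shape (9, 2)),
    using the relaxation time of the previous step as a lagged approximation."""
    S = sum(fi * np.outer(ci, ci) for fi, ci in zip(f_neq, c))
    S *= -1.0 / (2.0 * rho * CS2 * tau_prev)
    return float(np.sqrt(2.0 * np.sum(S * S)))

# a slowly sheared node saturates at TAU_MAX (plastic); a strongly sheared one does not
print(local_relaxation_time(1e-6, nu_p=0.05, tau_y=1e-3, rho=1.0))   # 2.5  (clipped)
print(local_relaxation_time(1e-2, nu_p=0.05, tau_y=1e-3, rho=1.0))   # 0.95
```

the ratio between the viscosities implied by TAU_MAX and TAU_MIN then controls how strongly the unyielded regions are slowed down relative to the flowing ones, which is the trade-off discussed earlier in this section.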
] ] .[ forceimage ] to show the possibilities to use lbm to design protection structures , we repeat the simulation of the previous section , this time featuring an obstacle .lbm can in fact be used to calculate the hydrodynamic interactions on solid objects , computing all momentum transfers between the distribution function and the solid boundaries .the procedure , which is found in ref . , does not change significantly the overall efficiency of the method.we add a retaining wall , fixed at the bottom of the channel and of size , as in fig .[ geometry ] .the shape of the free surface after the impact is shown in fig .[ obstacleimage ] , with insight into the longitudinal cross section of the flow .the force on the wall can be estimated with a hydrodynamic formula as where is the area of the obstacle impacted by the flow .the value of the coefficient is given by the comparison with experiments and varies , according to different authors , from to .in the simulation , when the flow hits the wall , the depth is and the front speed is , which leads to an estimated force of .[ forceimage ] shows the hydrodynamic force as calculated by the solver , highlighting the importance of the dynamic load due to the initial impact .the maximum values match the prediction of the hydrodynamic formula .in this chapter we showed how a model based on lbm can be used to simulate geophysical flows and provide a new tool for the rational design of mitigation and protection structures .the model inherits the advantages of the local solution mechanism of lbm , and extends the standard solver with the addition of a bingham fluid formulation and of the free - surface technique .the resulting framework can be used to simulate homogeneous plastic flows , and provides an optimal environment for the coupling with discrete method , thus opening future chances for the full simulation of multiphase geophysical flows .the research leading to these results has received funding from the european union ( fp7/2007 - 2013 ) under grant agreement n. 289911 .we acknowledge financial support from the european research council ( erc ) advanced grant 319968-flowccs .the authors are grateful for the support of the european research network mumolade ( multiscale modelling of landslides and debris flows ) .vec , o. , skoek , j. , stang , h. , geiker , m.r . ,roussel , n. : free surface flow of a suspension of rigid particles in a non - newtonian fluid : a lattice boltzmann approach .j. nonnewton .fluid mech . * 179 - 180 * , 3242 ( 2012 ) .
we explore possible applications of the lattice - boltzmann method for the simulation of geophysical flows . this fluid solver , while successful in other fields , is still rarely used for geotechnical applications . we show how the standard method can be modified to represent free - surface realization of mudflows , debris flows , and in general any plastic flow , through the implementation of a bingham constitutive model . the chapter is completed by an example of a full - scale simulation of a plastic fluid flowing down an inclined channel and depositing on a flat surface . an application is given , where the fluid interacts with a vertical obstacle in the channel . mudflow , debris flow , non - newtonian , bingham , lattice - boltzmann
knowledge of steady - state quantities related to an ergodic stochastic chemical system can provide useful insights into its properties .moreover , steady - state values of cost functions are often easier to estimate with good accuracy compared to stationary distributions .when the system propensities are affine in the state , mean values of polynomial functions of the system state can be computed analytically , as the system of moments is closed . however ,when non - polynomial functions are considered , or the system propensities are not affine , analytic calculations are no longer possible and the only solution left is simulation . while moment closure methods can be used to provide good approximations to system moments over a finite time interval , they commonly tend to diverge from the true solution over time , thus resulting in biased steady - state values .the solution presented in ref . works only when polynomial functions of the state are considered , and generalization to arbitrary functions is still very difficult .the finite state projection algorithm can be alternatively employed to provide moment estimates with guaranteed accuracy bounds , however the number of states required to attain a certain accuracy makes the method applicable to small problems . on the other hand , stochastic simulation can always provide estimates for the stationary mean of any function of the state , however these estimates are inevitably noisy .brute - force noise reduction can only be achieved at an increased computational cost , either by simulating longer trajectories or by running many trajectories in parallel .another possibility for reducing the noise in the estimated quantities is the application of a variance reduction technique , provided the added computational cost of the reduced variance estimator is significantly smaller than the gain in computer time . in this work we present the application of such a variance reduction technique to systems of stochastic chemical kineticsthe idea is based on so - called _ shadow functions _ and originated in the queueing systems simulation literature , a field where the range of analytically tractable systems in that field is much larger .we demonstrate how the same idea can be applied to steady - state simulation of stochastic chemical systems .we further test the capabilities of the reduced - variance estimators by performing parametric sensitivity calculations for two systems governed by nonlinear propensity functions .the paper is organized as follows : in sections ii and iii we define the steady - state estimation problem and define the nave and shadow function estimators . in sectionsiv and v we present one possible implementation of the variance reduction technique to stochastic chemical kinetics and its applicability to steady - state sensitivity analysis . the numerical examples in sectionvi serve to demonstrate the effectiveness of the shadow function method in practice and assess its computational cost in comparison to nave estimation .the conclusions of our study and some future research directions are finally summarized in section vii .assume an irreducible positive recurrent markov chain on a countable space .in the case of stochastic chemical kinetics , , where is the number of chemical species in the system . 
the chain moves according to a finite set of available transitions , with a corresponding set of propensity functions .the infinitesimal generator of is the operator satisfying for all such that .the discreteness of allows us to enumerate its elements and think of as an infinite matrix .similarly , any function on can be thought of as an infinite column vector , and distributions on can be defined as infinite row vectors .let be a -integrable cost function associated with .the ergodic theorem for markov chains ascertains that for any initial condition where is the unique invariant distribution of the system and the steady - state mean value of .since the analytic calculation of is possible only in very special cases , its estimation from simulation is usually the only possibility .the most straightforward estimator of is which is also strongly consistent . under some further general conditions on , and , we also know that as , where denotes weak convergence , is the standard normal distribution and is called the _ time average variance constant ( tavc ) for _ . the tavc can be expressed in terms of the integrated autocovariance function of the process , where , according to the formula : \,ds.\ ] ] an alternative expression for can be derived from the functional central limit theorem for continuous - time markov chains : where is a solution to the so - called _ _ poisson s equation__ ( note that solutions to the poisson equation are unique up to an additive constant , i.e. if is a solution , then is also a solution ) : a more general class of estimators for has the form where is chosen such that almost surely for all .the function offers an extra degree of freedom in the design of the estimator , which can be exploited to achieve variance reduction .in other words , can be chosen such that the tavc of , denoted by , is smaller than .the obvious choice is of course intractable , however it suggests that a function with a zero steady - state mean that is approximately equal to could also achieve variance reduction .such functions would result in a process that behaves almost antithetically from , thus making the variance of smaller than that of alone . in the steady - state simulation literature , a function that satisfies called a _ shadow function _ .the problem then becomes the selection of an appropriate shadow function , so that , with . fromwe see that a reduction of variance by a factor implies that the variance of is equal to the variance of . assuming that the computational cost of both estimators is dominated by the cost of simulating the process , can be used as an indicator of the efficiency of relative to . the basic idea of the shadow function method of ref . , outlined in the next section , is to obtain such an by using analytical information from a second markov chain that approximates the original one and is mathematically tractable .a second alternative solution of more general applicability will be described after presenting the method in more detail .the basic idea to the shadow function method is to consider candidate functions of the form where is the generator matrix of the markov chain and is any -integrable function ( so that the ergodic theorem holds for it as well ) . in this case , and under the assumption that ( that holds under some not - too - stringent conditions on ) , becomes a shadow function . 
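the defining property, namely that candidate functions of the form above have zero stationary mean, is easy to check numerically on a small chain. the snippet below builds the generator of a truncated immigration-death chain (an illustrative network, not one taken from the paper), solves for its stationary distribution and verifies that the stationary mean of Qg vanishes for a few arbitrary g; on the truncated chain this holds exactly, whereas on the full countable space the integrability conditions mentioned above are needed.

```python
import numpy as np

N, k, gamma = 60, 10.0, 1.0                 # truncation level and rates (assumed)
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = k                     # production 0 -> X
    if n > 0:
        Q[n, n - 1] = gamma * n             # degradation X -> 0
    Q[n, n] = -Q[n].sum()

# stationary distribution: solve pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

for g in (np.arange(N + 1.0),               # g(n) = n
          np.arange(N + 1.0) ** 2,          # g(n) = n^2
          np.exp(-0.1 * np.arange(N + 1))): # an arbitrary bounded g
    print("E_pi[Qg] =", float(pi @ (Q @ g)))    # all ~ 0 up to round-off
```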
we are then naturally led to consider the solution of the poisson equation , which could provide us with the appropriate function .solving is of course not possible , since the state space is countable and is unknown .however , we can look for so - called _ surrogate functions _ that approximate this solution to build a better estimator .following the analysis from , we consider another markov chain evolving on a countable space , with stationary distribution and generator .we also assume a map ( not necessarily one - to - one ) and a function that is somehow closely related to the original cost function . if is -integrable , we further assume that we can compute the solution to the poisson equation through which we arrive at a surrogate function summing up , the approach outlined above is based on the fact that if is : 1 ) a relatively good approximation of and 2 ) tractable analytically , then we can derive a surrogate function and an estimator which is better than the original estimator in terms of tavc ( assuming that the extra calculation time needed for is not significant ) .the shadow function method was originally developed for steady - state simulation of queueing systems , for which a wide range of known and tractable approximations exists .the solution of the approximating poisson equation can thus be calculated explicitly in many cases , and the application of the method is straightforward .this is not the case for stochastic chemical kinetic systems , where explicit solutions are very hard or impossible to calculate .one thus has to resort to different types of approximation schemes , outlined below .the markov chains we are interested in satisfy the following properties : 1 .they have a finite number of bounded increments over each finite time interval 2 .each state leads to a finite number of states ( i.e. for every , for finitely many s ) for such chains , an obvious idea for obtaining an approximating process is to consider a chain evolving on a finite truncation of ( i.e. consider to be a finite subset of ) .actually , under quite weak assumptions and careful definition of , one can show that the invariant distribution of on approaches that of as the truncation size grows .this of course implies that also approaches . in this case, the function between the two state spaces can be intuitively defined to map every to itself , and every to some ( which may vary with ) . in this way , . in order to arrive at a good approximation with this approach ,one first has to study a few simulations of , to determine a finite set that contains a good amount of its invariant mass and then perform the necessary calculation of the solution to the poisson equation on .the size of this set is determined in practice as a trade - off between tractability and approximation accuracy . however , the applicability of this approach is in general very limited due to the fact that the required truncations grow exponentially with the system dimension .another problem is that the approximation of ( the solution to the original intractable poisson equation ) will be very poor for states , because of the form of , which projects are states outside back into the set .this implies that significant variance reduction will be hard to achieve ( and in some cases variance may even increase ) , if the chain sample paths exit too frequently during simulation . 
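a minimal rendering of the truncation idea just described: on the finite set one can solve the poisson equation exactly by linear algebra and use the result as a surrogate. the chain, the cost function and the sign convention Q g = pi(f) - f are assumptions made for illustration. inside the truncation the corrected cost f + Qg is then exactly the constant pi(f), while on the original chain the correction degrades at states outside the truncation, which is the weakness noted above.

```python
import numpy as np

N, k, gamma = 60, 10.0, 1.0
states = np.arange(N + 1)
Q = np.zeros((N + 1, N + 1))
for n in states:
    if n < N:
        Q[n, n + 1] = k
    if n > 0:
        Q[n, n - 1] = gamma * n
    Q[n, n] = -Q[n].sum()

A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

f = states.astype(float) ** 2                        # example cost function
rhs = pi @ f - f                                     # poisson equation: Q g = pi(f) - f
# pin g(0) = 0, since solutions are unique only up to an additive constant
g = np.linalg.lstsq(np.vstack([Q, np.eye(N + 1)[0]]),
                    np.concatenate([rhs, [0.0]]), rcond=None)[0]

print(np.allclose(f + Q @ g, pi @ f))                # True: constant inside the truncation
```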
instead of searching for an approximating markov process, one may try to approximate the solution of directly , to arrive at a suitable shadow function .this approach is also followed in ref . , where the discrete - time steady - state simulation problem is considered . given a set of functions ,must satisfy a boundedness condition derived from a foster - lyapunov inequality . for more details , see ref . or ch.8 of ref . . in the sequelwe will assume that all the functions considered satisfy this property .] one can define where and is a vector of weights . in principleone could then try to calculate the value of that minimizes the tavc of .using and , this variance constant turns out to be ( see appendix [ app_a ] ) ,\ ] ] where solves .thus , minimizing the tavc of requires knowledge of , which is unavailable .we thus have to resort to heuristic methods for obtaining a suboptimal estimate of , for example by determining the value of that minimizes for some suitable measure .this is a linear least squares regression problem , which can be solved approximately by generating a set of training data , , with weights .if we define the finite - sample version of by and similarly set we can then calculate the matrix corresponding to by using the explicitly known form of the markov chain generator and finally obtain where , as the ( weighted ) least squares minimizer of .putting together all the elements presented above , we summarize below the basic steps of the variance reduction algorithm implemented in this work : * simulate a long path of the process using any preferred version of the stochastic simulation algorithm . *obtain a rough estimate of from the simulated trajectory using .* pick a set of functions and approximate the solution to the poisson equation by using the approach outlined above . *evaluate along the simulated sample path .* refine the estimate of using . *verify that variance reduction has been achieved .the last step is necessary to ensure that the variance has not actually increased due to the use of a suboptimal weight vector , and it can be carried out quite straightforwardly using the method of batch means and the simulated trajectory from step 1 . in all caseswe have tested , steps 2 - 6 do not contribute more than a few seconds to the computational cost of this algorithm , which implies that the main computational bottleneck still lies at step 1 . the estimate of obtained by weighted least squares is clearly suboptimal , however it may still yield a reduced - variance estimator .the choice of the weighting measure in the optimization problem above is completely free , and one could in principle try to optimize over both and for a given problem . in practicehowever , such an approach would increase computational cost of the reduced - variance estimator and possibly eliminate the benefit of variance reduction . 
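as a self-contained toy rendering of steps 1-6 above, and of the weighted least-squares fit just described, the sketch below runs a plain gillespie simulation of an illustrative immigration-death network, fits the weight vector on the visited states using occupation-time weights, and compares the batch-means spread of the naive and corrected estimators. the network, cost function, basis, burn-in, batch count and the exact form of the regression are all assumptions; this is one plausible reading of the procedure, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
k, gamma = 10.0, 1.0                          # illustrative network: 0 -> X, X -> 0
jumps = np.array([+1, -1])

def propensities(n):
    return np.array([k, gamma * n])           # birth, death

def ssa(n0, T):
    """plain gillespie simulation; returns the visited states and holding times."""
    t, n, states, dts = 0.0, n0, [], []
    while t < T:
        a = propensities(n); a0 = a.sum()
        dt = rng.exponential(1.0 / a0)
        states.append(n); dts.append(min(dt, T - t))
        n += jumps[rng.choice(2, p=a / a0)]
        t += dt
    return np.array(states), np.array(dts)

f = lambda n: np.asarray(n, dtype=float) ** 2          # cost whose stationary mean is wanted

# steps 1-2: one long trajectory and a rough estimate of pi(f)
states, dts = ssa(n0=0, T=1000.0)
keep = dts.cumsum() > 50.0                             # discard a short burn-in period
states, dts = states[keep], dts[keep]
pi_f = np.average(f(states), weights=dts)

# step 3: basis functions phi_i(n) = n^i and their images under the generator,
#         (Q phi)(n) = k*(phi(n+1) - phi(n)) + gamma*n*(phi(n-1) - phi(n))
def Qphi(phi, n):
    n = np.asarray(n, dtype=float)
    return k * (phi(n + 1) - phi(n)) + gamma * n * (phi(n - 1) - phi(n))

basis = [lambda n, i=i: np.asarray(n, dtype=float) ** i for i in (1, 2, 3)]

# step 4: weighted least squares on the visited states (weights = occupation times)
x = np.unique(states)
w = np.array([dts[states == s].sum() for s in x])
A = np.column_stack([Qphi(phi, x) for phi in basis])
target = pi_f - f(x)
theta, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], target * np.sqrt(w), rcond=None)

# steps 5-6: compare the batch-means spread of the naive and shadowed estimators
U = lambda n: np.column_stack([Qphi(phi, n) for phi in basis]) @ theta
def batch_spread(values, n_batches=20):
    batches = np.array_split(np.arange(len(values)), n_batches)
    return np.std([np.average(values[b], weights=dts[b]) for b in batches])

print("naive estimate  :", pi_f)
print("naive spread    :", batch_spread(f(states)))
print("shadowed spread :", batch_spread(f(states) + U(states)))
# the shadowed spread should be markedly smaller than the naive one
```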
to maintain estimator efficiency, one should thus consider a single ( or a few ) `` generic '' choices for , and preferably re - use the points generated at step 1 .a reasonable choice of weighting measure would be itself .the training set for regression would then consist of all distinct points visited by the process over the course of simulation in step 1 ( possibly after discarding the burn - in period ) , weighted according to the empirical distribution of the process .a more coarse approximation of would be to use the same sample with all weights being equal .yet another possibility consists of sampling from a uniform grid that is centered on the area containing the bulk of the invariant mass of the chain .this area can also be crudely determined from the sample of step 1 .all these approaches can achieve variance reduction , however the optimal choice remains problem - dependent .given that the calculation of least squares estimates can be carried out very efficiently using linear algebraic techniques , it is highly advisable to test several alternatives for the problem at hand . in ch.11 of ref . , the problem of selecting an optimal is overcome by introducing a least - squares temporal difference learning ( lstd ) algorithm for the approximation of the value of that minimizes the variance of in the context of discrete - time chains . the same algorithm could in principle be applied to continuous - time chains using the embedded discrete - time markov chain and carrying out the necessary modifications to the original algorithm , based on the results of ref .while this solution is theoretically justified , it requires setting up and running an lstd estimator in parallel with the simulated chain that will asymptotically converge to the optimal value of . depending on the convergence properties of this estimator , the overall efficiency of the variance reduction scheme may be smaller than the efficiency achieved by using a sub - optimal value for , especially when several approximating functions are considered .another degree of freedom in the design of shadow function estimators is the choice of the approximating set . here, the probabilistic interpretation of poisson s equation may assist the selection of approximating functions by providing some useful intuition : assuming is -integrable and ergodic , it holds that ,\ ] ] where is the hitting time of some state ( changing simply shifts by a constant ) and denotes expectation given . from this equation onemay infer some general properties of ( e.g. monotonicity , oscillatory behavior etc . 
)based on the form of the propensity functions .the same formula can be used to provide some crude simulation - based estimates of , which can be also helpful for the selection of .finally , a lyapunov - type analysis can be employed to infer the asymptotic behavior of .chemical reaction systems typically depend on several kinetic parameters , and the calculation of the output sensitivity with respect to these parameters is an essential step in the analysis of a given model .while there are several powerful parameter sensitivity methods available today , they are mostly appropriate for transient sensitivity analysis , as the variance of their estimates tends to grow with the simulation length .indeed , it can be shown that the variance of sensitivity methods based on the so - called likelihood ratio or the girsanov transformation grows linearly with time .on the other hand , the variance of estimators based on finite parametric perturbations can be shown to remain bounded under mild conditions on the propensity functions , provided the underlying process is ergodic. however , the stationary variance can be still quite large , which makes necessary the use of a variance reduction method , such as the one presented here . besides providing reduced - variance estimates of various steady - state functions of the chain ,the shadow function estimator can be also employed for sensitivity analysis using a finite difference scheme and the common random numbers ( crn ) estimator .more analytically , assuming that the propensity functions of are of the form , where is a parameter of interest , the finite difference method aims to characterize the sensitivity of the steady - state value of a given function to a small finite perturbation of of around a nominal value .if is small enough , we expect that will be approximately equal to .finite difference - based sensitivity analysis using shadow functions can be simply carried out by generating process trajectories for the nominal and perturbed parameter values , and estimating by . as shown in ref . , use of the same random number stream for the generation of both the nominal and perturbed trajectories can result in great variance decrease compared to using independent streams .to demonstrate the efficiency of shadow function estimators , we next present two applications of the method to steady - state sensitivity estimation .we compare our finite difference scheme that uses common random numbers and the shadow function estimator to the method of coupled finite differences ( cfd ) , which frequently outperforms finite - difference estimators based on common random numbers and the random time change representation .all numerical examples were generated using custom - written matlab scripts running on a 3.4 ghz quad - core pc with 8 gb of ram .as a first example , we consider the stochastic focusing model of , where an input signaling molecule inhibits the production of another molecule .stochastic focusing arises due to the presence of stochastic fluctuations in , that make the mean value of more sensitive to changes than predicted by the deterministic model of the system .the same system is treated in ref . using a more sophisticated method based on trajectory reweighting .the system reactions are given below : where .the parameters used are , and , while is varied between 200 and 900 to study the effect of varying ] . more specifically ( and similarly to ref . 
) , we want to calculate the gain to this end we estimate using finite differences with at several points between and . figure [ sf_gain ] shows the calculated confidence intervals for obtained by the common random number ( crn ) estimator , the crn estimator in conjunction with a shadow function and the cfd method . for each value of , a simulated sample path of length time units ( t.u . )was used to generate 19 batches of length 400 t.u .each , while the first 400 t.u . were discarded as burn - in .shadow functions consisted of linear combinations of all monomials in two variables up to order three ( that is , , with ) , together with can provide useful intuition for the selection of approximating functions . in the case at hand , , so ( the solution to the poisson equation ) is expected to grow only very slowly with , as the production rate of tends to zero as . ]this set of s was selected manually and is definitely not the `` optimal '' choice .the training set used for regression consisted of all unique points visited by the process sample paths after a burn - in period .two alternative weighting schemes were tested for each value of : according to the first , all points were assigned equal weight ( ) , while in the second one the points were weighted according to the empirical distribution of the process , calculated using the simulated sample paths ( . both schemes lead to variance reduction , and calculation of in each case can be performed very fast ( sec ) , given the small number of training points ( ) .post - processing of the trajectories for the evaluation of the shadow function over the different batches takes another 5 sec of cpu time . on the other hand ,ssa simulation takes on average 40 sec , which demonstrates that the overhead associated with the shadow function usage is relatively small , while the computational savings in the estimation of are significant , as table [ vr_ar ] demonstrates .finally , a cfd simulation of the same length requires 220 sec of cpu time on average , while achieving a smaller magnitude of variance reduction . to ,estimated with the finite difference method . shownare 95% confidence intervals obtained with the method of batch means .green : crn estimator .blue : cfd estimator .red : crn combined with shadow functions . ].variance reduction in the estimation of [ cols="^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] the variance reduction method remains quite efficient computationally in this case as well : ssa simulation of a t.u .trajectory takes about 17 sec of cpu time , while calculation of requires 1 sec and post - processing of the sample path another 3 sec . 
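to give a reproducible flavour of the common-random-numbers construction used for these sensitivity estimates, the stripped-down experiment below applies it to a much simpler immigration-death chain, not the stochastic focusing model above; all rates, horizons, replicate counts and the perturbation size are assumptions. sharing the random stream between the nominal and perturbed runs typically tightens the spread of the finite-difference estimate relative to using independent streams.

```python
import numpy as np

gamma, T, burn, dk = 1.0, 200.0, 20.0, 1.0     # assumed rates, horizon and perturbation

def steady_mean(k, seed):
    """time-averaged copy number of an immigration-death chain, plain gillespie."""
    rng = np.random.default_rng(seed)
    t, n, acc, tot = 0.0, 0, 0.0, 0.0
    while t < T:
        a0 = k + gamma * n
        dt = rng.exponential(1.0 / a0)
        if t > burn:
            w = min(dt, T - t)
            acc += n * w
            tot += w
        n += 1 if rng.random() < k / a0 else -1
        t += dt
    return acc / tot

def fd(k, seed_nominal, seed_perturbed):
    return (steady_mean(k + dk, seed_perturbed) - steady_mean(k, seed_nominal)) / dk

crn = [fd(10.0, s, s) for s in range(20)]                # common random numbers
ind = [fd(10.0, 2 * s, 2 * s + 1) for s in range(20)]    # independent streams
print("CRN        : %.3f +/- %.3f" % (np.mean(crn), np.std(crn)))
print("independent: %.3f +/- %.3f" % (np.mean(ind), np.std(ind)))
# the exact stationary sensitivity here is d E[n]/dk = 1/gamma = 1
```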
we demonstrated the applicability of the powerful shadow function method to the problem of steady-state simulation of stochastic chemical kinetics . our results suggest that a significant increase in the efficiency of a steady-state estimator is possible with only a small increase in its computational cost . the method can be applied to the steady-state estimation of practically any function of the process , and can thus provide improved estimates of high-order ( cross- ) moments , as well as estimates of stationary probabilities for subsets of the process state space , by using set indicators as cost functions . the magnitude of variance reduction achieved by the shadow function method also allows the efficient and precise computation of steady-state parameter sensitivities using the finite difference method . the comparison of the efficiency of this approach for providing steady-state sensitivity estimates with the one presented in is the topic of our ongoing work . it would also be instructive to assess the relative strengths and weaknesses of the lstd approximation algorithm for optimizing the shadow function and to test its scalability with system size and number of approximating functions ( note that only one-dimensional examples are treated in ) . the proposed workflow for arriving at a useful shadow function can be improved at several points , by drawing from the large literature on function approximation techniques , in order to enlarge its range of applicability and its accuracy . however , even a crude approach such as the one presented above seems to be sufficient for systems of practical interest .
from and , , where solves the poisson equation . this implies that , where is the solution of the poisson equation . the variance of the shadow function estimator thus becomes
\[
\sigma_1^2 - 2\left[\langle f_c , \psi\theta\rangle - \langle q(\psi\theta) , g_1\rangle + \langle q(\psi\theta) , \psi\theta\rangle\right] .
\]
we address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics . in most cases of interest these systems are analytically intractable , and one has to resort to computational methods to estimate stationary values of cost functions . in this work we consider a previously introduced variance reduction method and present an algorithm for its application in the context of stochastic chemical kinetics . using two numerical examples , we test the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods .
let be a set , and a symmetric function that satisfies , for any and any and : such a function is called a _ positive definite kernel _ on . a famous result by states the equivalence between the definition of a positive kernel and the embedding of in a hilbert space , in the sense that is a positive definite kernel on if and only if there exists a hilbert space with inner product and a mapping such that , for any , it holds that : the construction of positive definite kernels on various sets has recently received a lot of attention in statistics and machine learning , because they allow the use of a variety of algorithms for pattern recognition , regression or outlier detection for sets of points in . these algorithms , collectively referred to as _ kernel methods _ , can be thought of as multivariate linear methods that can be performed on the hilbert space implicitly defined by any positive definite kernel through ( [ eq : inp ] ) , because they only access data through inner products , hence through the kernel . this `` kernel trick '' allows one , for example , to perform supervised classification or regression on strings or graphs with state-of-the-art statistical methods as soon as a positive definite kernel for strings or graphs is defined . unsurprisingly , this has triggered a lot of activity focused on the design of specific positive definite kernels for specific data , such as strings and graphs for applications in bioinformatics and natural language processing . motivated by applications in computational chemistry , a kernel for labeled graphs , and more generally for structured data that can be decomposed into subparts , was recently proposed in . the kernel , called _ optimal assignment kernel _ , measures the similarity between two data points by performing an optimal matching between the subparts of both points . it translates a natural notion of similarity between graphs , and can be efficiently computed with the hungarian algorithm . however , we show below that it is in general not positive definite , which suggests that special care may be needed before using it with kernel methods . it should be pointed out that not being positive definite is not necessarily a big issue for the use of this kernel in practice . first , it may in fact be positive definite when restricted to a particular set of data used in a practical experiment . second , other non-positive definite kernels , such as the sigmoid kernel , have been shown to be very useful and efficient in combination with kernel methods . third , practitioners of kernel methods have developed a variety of strategies to limit the possible dysfunction of kernel methods when non-positive definite kernels are used , such as projecting the gram matrix of pairwise kernel values onto the set of positive semidefinite matrices before processing it . the good results reported on several chemoinformatics benchmarks in indeed confirm the usefulness of the method . hence our message in this note is certainly not to criticize the use of the optimal assignment kernel in the context of kernel methods . instead we wish to warn that in some cases , negative eigenvalues may appear in the gram matrix and specific care may be needed , and simultaneously to contribute to the limitation of error propagation in the scientific literature . let us first define formally the optimal assignment kernel of . we assume given a set , endowed with a positive definite kernel that takes only nonnegative values . the objects we consider are tuples of elements in , i.e.
, an object decomposes as , where is the length of the tuple , denoted , and . we denote the set of all tuples of elements in . let be the symmetric group , i.e. , the set of permutations of elements . we now recall the kernel on proposed in : the meaning of the statement `` not always '' in theorem [ thm1 ] is that there exist choices of and such that the optimal assignment kernel is positive definite , while there also exist choices for which it is not positive definite . theorem [ thm1 ] contradicts theorem 2.3 in , which claims that the optimal assignment kernel is always positive definite . the proof of theorem 2.3 in , however , contains the following error . using the notations of , the authors define in the course of their proof the values and . they show that , on the one hand , and that , on the other hand . from this they conclude that , which is obviously not a valid logical conclusion . when , the tuples are simply repeats of the unique element , hence each element is uniquely defined by its length . the optimal assignment kernel is then given by : the function is known to be positive definite on , therefore is a valid kernel on . the function defined in lemma [ lem2 ] is the well-known gaussian radial basis function kernel , which is known to be positive definite and only takes nonnegative values , hence it satisfies all hypotheses needed in the definition of the optimal assignment kernel . in order to show that the latter is not positive definite , we exhibit a set of points in that cannot be embedded in a hilbert space through ( [ eq : inp ] ) . for this , let us start with four points that form a square in , e.g. , and ( figure [ fig : square ] ) . in the space of tuples , let us now consider the six -tuples obtained by taking all pairs of distinct points : . using the definition of the optimal assignment kernel for , we easily obtain : if was positive definite , then these six -tuples could be embedded into a hilbert space by a mapping satisfying ( [ eq : inp ] ) . let us show that this is impossible . let be the hilbert distance between two points after their embedding in . it can be computed from the kernel values by the classical equality : we first observe that , and . therefore , from which we conclude that form a half-square , with hypotenuse . a similar computation shows that is also a half-square with hypotenuse . moreover , which shows that the four points are in fact coplanar and form a square . the same computation when and are respectively replaced by and shows that the four points are also coplanar and also form a square . hence all six points can be embedded in dimensions , and the points are themselves coplanar and must form a rectangle on the plane equidistant from and ( figure [ fig2 ] ) . the edges of this rectangle all have the same length , so it is in fact a square , whose hypotenuse should have length . however , a direct computation gives , which provides a contradiction since . hence the six points cannot be embedded into a hilbert space with as inner product , which shows that is not positive definite on .
h. fröhlich , j. k. wegner , f. sieker , and a. zell . optimal assignment kernels for attributed molecular graphs . in _ proceedings of the 22nd international conference on machine learning _ , pages 225-232 , new york , ny , usa , 2005 . acm press .
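to make the counterexample above easy to check numerically , here is a short python sketch of the optimal assignment kernel ( the matching is computed with the hungarian algorithm via scipy ) applied to the six 2-tuples built from the corners of a square , with a gaussian rbf base kernel . the unit-square coordinates and the bandwidth are illustrative assumptions rather than the exact values used in the text ; the hilbert-space embedding argument rests on the classical identity d(x,y)^2 = k(x,x) + k(y,y) - 2 k(x,y) , and a negative eigenvalue of the 6 x 6 gram matrix below is a direct certificate that no such embedding exists .

import numpy as np
from itertools import combinations
from scipy.optimize import linear_sum_assignment

def rbf(u, v, gamma=1.0):
    # gaussian rbf base kernel: positive definite and nonnegative-valued
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.exp(-gamma * np.sum((u - v) ** 2)))

def optimal_assignment_kernel(x, y, base=rbf):
    # best matching of the shorter tuple into the longer one, scored by the base
    # kernel; hungarian algorithm on negated scores performs the maximization
    if len(x) > len(y):
        x, y = y, x
    scores = np.array([[base(xi, yj) for yj in y] for xi in x])
    rows, cols = linear_sum_assignment(-scores)
    return float(scores[rows, cols].sum())

corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # assumed unit square
pairs = list(combinations(corners, 2))                        # the six 2-tuples

gram = np.array([[optimal_assignment_kernel(s, t) for t in pairs] for s in pairs])
print("smallest eigenvalue of the gram matrix:", np.linalg.eigvalsh(gram).min())

for this configuration the smallest eigenvalue comes out negative , consistent with the geometric argument above .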
we prove that the optimal assignment kernel , proposed recently as an attempt to embed labeled graphs and more generally tuples of basic data into a hilbert space , is in fact not always positive definite .
keywords : chemical communication , pheromones , reproductive division of labor , social insects , natural selection , evolutionary dynamics
many species of ants , bees , and wasps form highly complex eusocial societies characterized by dominance hierarchies and reproductive division of labor . in most cases , both the queen and the workers are capable of laying male eggs parthenogenetically , but the workers often forego their own reproduction , allowing the queen to produce the majority of drones . there are several ways in which this behavior could arise . one possibility is that a ` policing ' mutation acts in a worker , causing that worker to destroy male eggs produced by other workers . alternatively , a ` non-reproduction ' mutation could act in a worker , causing that worker to forego its own reproduction . such mutations can spread and eventually fix in the population if the resulting gains in colony reproductive efficiency are sufficiently large . finally , and this is the case we consider in this paper , a mutation could act in a queen , causing her to suppress her workers' reproduction .
there are several mechanisms by which a queen can manipulate her workers' reproductive output . in small colonies , the queen or dominant individual can directly control worker reproduction by eating worker-laid eggs or by acting aggressively toward workers who attempt to lay eggs . indirect chemical suppression of worker reproduction is also possible through queen pheromones , which are especially important in species with large colonies , where direct queen policing is infeasible . pheromonal suppression by queens or dominant individuals has long been recognized in the eusocial hymenoptera . for example , queen tergal gland secretions and queen mandibular pheromone have both been shown to limit ovarian development in honey bee workers ( genus _ apis _ ) , while in the carpenter ant _ camponotus floridanus _ , worker-laid eggs experimentally marked with queen-derived surface hydrocarbons were significantly less likely to be destroyed by other workers . pheromonal suppression of worker reproduction has also been documented in primitively eusocial species , including the polistine wasps _ polistes dominula _ and _ ropalidia marginata _ , the euglossine bee _ euglossa melanotricha _ , and several species in _ bombus _ .
despite the ubiquity of the phenomenon , a rigorous theoretical understanding of the evolution of queen suppression of worker reproduction is lacking . what are the precise conditions under which queen control evolves ? what demographic and ecological characteristics of insect populations result in the evolutionary emergence of queen control ? to address these questions , we formulate a model of population dynamics that is based on haplodiploid genetics .
in this model , we study the population genetics of alleles , dominant or recessive , that act in queens to reduce worker reproduction .we derive exact conditions for invasion and stability of these alleles , for any number of matings of the queen , and interpret these conditions in terms of the colony efficiency effects of suppressing worker reproduction .a related , longstanding debate in the literature concerns the nature of queen chemical suppression of worker reproduction in terms of workers ` evolutionary interests ' .should queen chemical suppression be interpreted as coercive control of workers ( against their evolutionary interests ) , or are these chemicals best thought of as honest signals of queen presence or fertility ( so that their induction of non - reproduction in workers can in fact be in those workers evolutionary interests ) ?empirical studies provide support for both interpretations .our setup , based on population genetics , offers a simple and attractive framework for classifying queen suppressor chemicals as either coercive or honest signals .suppose a queen suppressor mutation has fixed , so that all queens produce chemicals that suppress workers reproduction .now suppose that a ` resistance ' mutation arises that renders workers in whom it is expressed immune to queen suppressor chemicals , so that these workers again lay male eggs .if this ` resistance ' mutation invades , then resistance is seen to be in the workers evolutionary interests , and the initial queen suppression should be interpreted as coercive . if not , then we interpret the queen suppressor chemical to be an honest signal .invadability of the population by this rare ` resistance ' allele is equivalent to evolutionary instability of a non - reproduction allele acting in workers , the formal population genetical conditions for which are given in .we use these conditions to distinguish the demographic and ecological parameter regimes in which queen suppression should be thought of as coercion or as honest signalling .we also explore the similarly relevant possibility of partial queen control inducing complete worker sterility .we study queen control of workers in the context of haplodiploid sex determination , as found in ants , bees , and wasps .fertilized eggs ( diploid ) become females , and unfertilized eggs ( haploid ) become males . a single gyne mates with distinct , randomly - chosen drones .she then founds a colony and becomes its queen ( figure [ fig : colonies](a ) ) .she fertilizes haploid eggs with the sperm from each of the males that she mated with to produce diploid female eggs .when these female eggs hatch , the resulting individuals become workers in the colony . 
in addition , the queen produces unfertilized haploid male eggs .workers can also produce haploid male eggs , leading to reproductive conflict over male production within a colony ( figure [ fig : colonies](b ) ) .we consider the evolutionary dynamics of two alleles a wild - type allele , , and a mutant allele , .we use the following notation for individuals of various genotypes .there are two types of drones : and .there are three types of gynes : , , and .a queen s type ( or , equivalently , that of a colony , since each colony is headed by a single queen ) is denoted by ; ; or , depending on whether the queen s own genotype is , , or , respectively , and the number , , of mutant ( type ) drones she mated with , requiring .we use the notation , , and to denote the frequencies of the colony types in the population , and we require that at all times .the mutant allele , , acts in a queen to alter her phenotype . if the mutant allele , , is dominant , then type queens are wild - type , while type and type queens have the mutant phenotype .if the mutant allele , , is recessive , then type and type queens are wild - type , while type queens have the mutant phenotype . in colonies headed by wild - type queens ,a fraction of males are produced by the queen , and new gynes and drones are produced at rate . in colonies headed by queens with the mutant phenotype ,a fraction of males are produced by the queen , and new gynes and drones are produced at rate .thus , colonies headed by queens with the mutant phenotype have different values of the fraction of queen - produced males and colony efficiency and , respectively compared with colonies headed by wild - type queens .importantly , our mathematical analysis is robust .it does not make restrictive or nuanced assumptions about the underlying factors that influence the values of , , , and in any particular case of interest .the values of the parameters , , , and indeed result from interplay between many demographic and ecological factors .it is instructive to consider the relative values of these parameters in the context of a queen that influences her workers reproduction .we expect that ; i.e. , the effect of the queen s manipulation is to increase the fraction of male eggs that come from her . may be greater than or less than . if , then the queen s manipulation effects an increase in colony efficiency , while if , then the queen s manipulation effects a decrease in colony efficiency .the key question is : what values of the parameters , , , and support the evolution of queen suppression of workers reproduction ? we derive the following main results .the allele , which causes the queen to suppress her workers reproduction , invades a population of non - controlling queens if the following condition holds : condition applies regardless of whether the queen - control allele , , is dominant or recessive .the evolutionary dynamics demonstrating condition for single mating and for a dominant queen - control allele are shown in figure [ fig : n1domsim](a ) .furthermore , the queen - control allele , , when fixed in the population , is stable against invasion by the non - controlling allele if the following condition holds : condition also applies regardless of whether the queen - control allele , , is dominant or recessive .the evolutionary dynamics demonstrating condition for single mating and for a dominant queen - control allele are shown in figure [ fig : n1domsim](b ) . 
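to make the life cycle just described concrete , the following python sketch gives an individual-based rendition of it at a single locus : a colony is a queen genotype together with the alleles of her stored sperm , colonies contribute gynes and drones in proportion to their efficiency , a fraction of a colony's drones comes from the queen and the rest from its workers , and new gynes mate with randomly chosen drones . this is only an illustration under simplifying assumptions ( the analysis in the text is exact rather than simulation-based ) , and all numeric parameter values below are placeholders rather than values taken from the text .

import random

N_COLONIES = 2000       # colonies per generation
N_MATES = 1             # matings per queen
P_WT, R_WT = 0.2, 1.0   # queen-derived male fraction and efficiency, wild-type queens
P_MUT, R_MUT = 0.9, 1.1 # the same quantities for queens with the mutant phenotype
DOMINANT = True         # is the queen-control allele "a" dominant?

def queen_is_mutant(queen):
    n_mut = queen.count("a")
    return n_mut >= 1 if DOMINANT else n_mut == 2

def colony_params(queen):
    return (P_MUT, R_MUT) if queen_is_mutant(queen) else (P_WT, R_WT)

def make_gyne(queen, mates):
    # diploid daughter: one maternal allele plus the allele of one stored sperm
    return (random.choice(queen), random.choice(mates))

def make_drone(queen, mates, p_queen):
    if random.random() < p_queen:          # unfertilized egg laid by the queen
        return random.choice(queen)
    worker = make_gyne(queen, mates)       # workers are daughters of the queen
    return random.choice(worker)           # unfertilized egg laid by a worker

def next_generation(colonies):
    weights = [colony_params(q)[1] for q, _ in colonies]   # efficiency as fitness
    new = []
    for _ in range(N_COLONIES):
        q, m = random.choices(colonies, weights)[0]        # colony producing the gyne
        gyne = make_gyne(q, m)
        mates = []
        for _ in range(N_MATES):
            dq, dm = random.choices(colonies, weights)[0]  # colony producing a drone
            mates.append(make_drone(dq, dm, colony_params(dq)[0]))
        new.append((gyne, tuple(mates)))
    return new

def mutant_allele_frequency(colonies):
    alleles = [allele for queen, _ in colonies for allele in queen]
    return alleles.count("a") / len(alleles)

# start from a wild-type population with a handful of heterozygous mutant queens
colonies = [((("a", "A") if i < 20 else ("A", "A")), ("A",) * N_MATES)
            for i in range(N_COLONIES)]
for generation in range(200):
    colonies = next_generation(colonies)
print("mutant allele frequency among queens:", mutant_allele_frequency(colonies))

the printed frequency gives a quick check of whether a control allele spreads under a given parameter combination ; whether it does depends on the control and efficiency parameters , as the conditions above make precise .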
( figure [ fig : n1domsim ] caption : we set , , and ; the initial conditions are ( a ) and for each of the four curves , and ( b ) and for each of the four curves . )
note that condition is always easier to satisfy than condition . therefore , three scenarios regarding the two pure equilibria are possible : the first possibility is that queen control is unable to invade a wild-type population and is unstable , when fixed , against invasion by non-control . the second possibility is that queen control is able to invade a wild-type population but is unstable , when fixed , against invasion by non-control . the third possibility is that queen control is able to invade a wild-type population and is stable , when fixed , against invasion by non-control . in the case where queen control can invade a wild-type population but is unstable when fixed , brouwer's fixed-point theorem guarantees the existence of at least one mixed equilibrium at which controlling and non-controlling queens coexist . regions of the parameter space are shown in figure [ fig : regions ] , and evolutionary dynamics illustrating the three scenarios are shown in figure [ fig : regions_sim ] .
( figure [ fig : regions ] caption : versus , showing the three possibilities for the dynamical behavior of the queen-control allele around the two pure equilibria ; for this plot , we set and . )
( figure [ fig : regions_sim ] caption : ( b ) corresponds to the middle , green region in figure [ fig : regions ] ; ( c ) corresponds to the upper , blue region in figure [ fig : regions ] ; the initial conditions are ( a ) and , ( b , lower curve ) and , ( b , upper curve ) and , and ( c ) and . )
two salient points regarding the dynamics of the queen-control allele deserve emphasis . first , the conditions for evolutionary invasion and stability of queen control do not depend on the queen's mating number .
to develop intuition , consider the introduction of an initially rare dominant allele for queen control . when the allele is rare , for matings , colonies are more abundant than colonies by a factor of . a fraction of offspring of colonies arise from selecting sperm from wild-type males and are 100% wild-type , as though they had originated from colonies . however , the remaining fraction of offspring of colonies are produced in the same relative mutant / wild-type proportions as if they had originated from colonies . notice that the factor of from the matings cancels with the probability of of selecting sperm from the mutant male . therefore , we have a simple interpretation : for considering invasion of queen control , and at the leading-order frequency of the mutant allele , the system effectively consists of colonies and colonies at relative amounts that do not depend on . but colonies produce mutant and wild-type offspring in relative proportions that do not depend on , and colonies produce mutant and wild-type offspring in relative proportions that do not depend on . thus , does not enter into condition .
second , queen control can evolve even if it results in efficiency losses . to see why , notice that , if the queen-control allele is dominant , then type colonies have the mutant phenotype , and if the queen-control allele is recessive , then type colonies have the mutant phenotype . in the dominant case , workers in type colonies produce type males for every type male , but the queen produces type and type males in equal proportion . in the recessive case , workers in type colonies produce type and type males in equal proportion , but the queen produces only type males . in both cases , if the queen takes over production of males ( i.e. , if ) , then the frequency of the mutant allele in the next generation increases . thus , the allele for queen control can act as a selfish genetic element , enabling queen-induced worker sterility to develop in a population even if it diminishes colony reproductive efficiency .
queens are easily selected to increase their production of male offspring and suppress workers' production of male offspring . in this case , workers might also be selected to evade manipulation by queens , setting up an evolutionary arms race . when does queen control evolve and persist in the population ? consider the following scenario . initially , there is a homogeneous population of colonies . all queens are homozygous for allele at locus , and all workers are homozygous for allele at locus . in each colony , the fraction of queen-derived males within the colony is , and the overall reproductive efficiency of the colony is . suppose that a mutation , , acts in a queen at locus _ a _ , causing her to completely suppress her workers' production of drones . in colonies headed by controlling queens , all males originate from the controlling queen ( ) , and the overall reproductive efficiency of the colony is . according to equations and , if is sufficiently large , then the controlling queens will increase in frequency and fix in the population . once the queen-control allele has fixed , each colony's male eggs originate only from the queen ( ) , and each colony has overall reproductive efficiency .
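as a quick worked check of the bookkeeping behind the selfish-element argument above ( an illustration assuming a dominant mutant allele a , a heterozygous queen carrying one mutant and one wild-type allele , and a single wild-type mate ; the specific fractions quoted in the text are not reproduced here ) , the mendelian proportions are :

\[
\text{workers : } \tfrac{1}{2}\,Aa + \tfrac{1}{2}\,AA
\quad\Longrightarrow\quad
\Pr[\text{worker-laid male carries } a] = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4} ,
\qquad
\Pr[\text{queen-laid male carries } a] = \tfrac{1}{2} .
\]

shifting male production from the workers to the queen therefore raises the mutant-allele share among a colony's males from 1/4 to 1/2 , which is why such an allele can spread even when it carries an efficiency cost .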
next , consider a subsequent mutation , , that acts in workers at locus . the allele changes a worker's phenotype , causing the mutant worker to become reproductive again . the allele for worker reproduction can be either dominant , so that type and type workers are reproductive , or recessive , so that only type workers are reproductive . if a colony contains only workers with the reproductive phenotype , then the fraction of queen-derived males within the colony is , and the overall reproductive efficiency of the colony is . thus , the allele for worker reproduction essentially undoes the effects of the allele for queen control . what are the requirements for queen control to be evolutionarily stable against a mutation in workers that restores their reproduction ?
to answer this question for a dominant allele , we turn to equation ( 53 ) in , which is the condition , for any number of matings , , for stability of a recessive mutation in workers that results in worker sterility : setting in equation ( 53 ) in , this condition becomes
\[
\left[ 2\left(\frac{r'}{r_{1/2}}\right) - 1 \right] > 1 .
\]
in condition , is the colony reproductive efficiency when a fraction of workers are reproductive , is the colony reproductive efficiency when a fraction of workers are reproductive , and is the fraction of queen-derived males when a fraction of workers are reproductive . if condition is satisfied , then a subsequent dominant mutation , , that acts in workers to restore their reproduction _ cannot _ invade a queen-controlled population . to determine further whether the dominant allele cannot fix , we must also consider the equation directly after equation ( 34 ) in , which is the condition , for any number of matings , , for invasion of a recessive mutation in workers that results in worker sterility . setting and in the equation directly after equation ( 34 ) in , we obtain in condition , is the colony reproductive efficiency when a fraction of workers are reproductive , and is the fraction of queen-derived males when a fraction of workers are reproductive . if condition is satisfied , then a subsequent dominant mutation , , that acts in workers to restore their reproduction _ cannot _ fix in the population . notice that condition depends on the parameters , , and , which are related to the effects of the allele for worker reproduction . also , notice that condition depends on the parameters and , which are related to the effects of the allele for worker reproduction . the properties of the particular dominant allele for worker reproduction that is under consideration are therefore essential for determining if the effects of the allele for queen control can be undone by worker resistance .
to gain insight regarding the parameters , , , , and in conditions and , we can consider the following simple case : for the parameter choices given by equations , condition becomes . also , for the parameter choices given by equations , condition becomes .
to determine if queen control is evolutionarily stable against a recessive mutation in workers that restores their reproduction , we turn to the equation directly after equation ( 49 ) in , which is the condition , for any number of matings , , for stability of a dominant mutation in workers that results in worker sterility : setting in the equation directly after equation ( 49 ) in , this condition becomes in condition , is the colony reproductive efficiency when a fraction of workers are reproductive , and is the fraction of queen-derived males when a fraction of workers are reproductive . if condition is satisfied , then a subsequent recessive mutation , , that acts in workers to restore their reproduction _ cannot _ invade a queen-controlled population . to determine further whether the recessive allele cannot fix , we must also consider equation ( 20 ) in , which is the condition , for any number of matings , , for invasion of a dominant mutation in workers that results in worker sterility . setting in equation ( 20 ) in , we obtain
\[
\cdots > 2 .
\]
in condition , is the colony reproductive efficiency when a fraction of workers are reproductive , is the colony reproductive efficiency when a fraction of workers are reproductive , and is the fraction of queen-derived males when a fraction of workers are reproductive . if condition is satisfied , then a subsequent recessive mutation , , that acts in workers to restore their reproduction _ cannot _ fix in the population . notice that condition depends on the parameters and , which are related to the effects of the allele for worker reproduction . also , notice that condition depends on the parameters , , and , which are related to the effects of the allele for worker reproduction . the properties of the particular recessive allele for worker reproduction that is under consideration are therefore essential for determining if the effects of the allele for queen control can be undone by worker resistance .
to gain insight regarding the parameters , , , , and in conditions and , we can again consider the simple case given by equations . for the parameter choices given by equations , condition becomes . also , for the parameter choices given by equations , condition becomes .
figure [ fig : rep ] shows the evolutionary outcome of queen control for parameters and . we set without loss of generality . in each panel , the boundary between the lower , red region and the middle , green region is given by condition . the boundary between the middle , green region and the upper , blue region is given by condition for ( figure [ fig : rep](a ) ) , condition for ( figure [ fig : rep](b ) ) , condition for ( figure [ fig : rep](c ) ) , and condition for ( figure [ fig : rep](d ) ) . for values in the lower , red region , the mutation for queen control is unable to spread to fixation . for values in the middle , green region , the mutation for queen control invades and is evolutionarily stable against non-control , but the subsequent mutation for worker reproduction also invades and is evolutionarily stable , undoing the effects of queen control .
for values in the upper , blue region , the mutation for queen control invades and is evolutionarily stable against non-control , and the subsequent mutation for worker reproduction is unable to invade , rendering queen control evolutionarily stable against counteraction by workers . corresponding simulations of the evolutionary dynamics are shown in figure [ fig : rep_sim ] . in figure [ fig : rep_sim ] , the quantity plotted on the vertical axis is the average fraction of queen-derived males in the population . since figure [ fig : rep_sim ] is for single mating ( ) and a dominant queen-control allele , we have , where , , , , , and are the frequencies of the six types of colonies in the population .
( figure [ fig : rep ] caption : we set without loss of generality , and we assume that the queen-control allele eliminates workers' reproduction . if the efficiency loss from queen control is too severe ( values of in the red region ) , then queen control does not evolve ( or it invades without fixing , and a subsequent mutation acting in workers causes them to become fully reproductive again ) . if the efficiency loss or gain from queen control is moderate ( values of in the green region ) , then queen control evolves , but a subsequent mutation acting in workers causes them to become fully reproductive again . if the efficiency gain from queen control is sufficiently large ( values of in the blue region ) , then queen control evolves , and workers subsequently acquiesce by remaining non-reproductive . the lower boundary is given by equation , and the upper boundary is given by ( a ) equation for , ( b ) equation for , ( c ) equation for , and ( d ) equation for . for this plot , we use equations , and we set and . )
( figure [ fig : rep_sim ] caption : it is possible that a mutation causing queen control evolves ( a ) , and worker reproduction is subsequently restored ( b ) . but if the efficiency gain due to queen control is large enough , then queen control evolves ( c ) , and workers are unable to regain reproductive ability ( d ) . in ( b ) and ( d ) , denotes the colony efficiency when of workers in the colony have the phenotype for worker reproduction ; we follow the assumption for in equations for determining the values of for these simulations . the initial conditions are ( a ) and , ( b ) and , ( c ) and , and ( d ) and . for ( b ) and ( d ) , we introduce the allele for worker reproduction at time . )
there is a subtlety , however . figure [ fig : rep ] assumes that queen control can be easily undone by a single mutation in workers . this assumption is not necessarily true . a single mutation in a worker may not be sufficient to reverse the primer or releaser effects of a queen's complex pheromonal bouquet . the queen or dominant individual can also perform oophagy of worker-laid eggs or physical aggression , and it is unclear if a single mutation in a worker can enable her to overcome such behavioral dominance activities . thus , there is another important aspect to the question of evolutionary stability of queen control . if there is a high genetic barrier against workers' resistance to partial queen control , then can _ partial _ queen control incentivize workers to become completely sterile ?
consider , again , that there is initially a homogeneous population of colonies . all queens are homozygous for allele at locus , and all workers are homozygous for allele at locus . each colony's fraction of queen-derived males is , and each colony's overall reproductive efficiency is . suppose that a mutation , , acts in a queen at locus _ a _ , causing her to partially suppress her workers' production of drones . in colonies headed by partially controlling queens , a fraction of males originate from the partially controlling queen , with , and the overall reproductive efficiency of the colony is . according to equations and , if is sufficiently large , then the partially controlling queens will increase in frequency and fix in the population . once the allele for partial queen control has fixed , a fraction of each colony's male eggs originate from the queen , and each colony has overall reproductive efficiency . next , consider a subsequent mutation , , that acts in workers at locus . the allele changes a worker's phenotype , causing the mutant worker to become completely sterile . the allele for worker sterility can be either recessive , so that only type workers are sterile , or dominant , so that type and type workers are sterile . if a colony contains only workers with the phenotype for sterility , then the fraction of queen-derived males within the colony is , and the overall reproductive efficiency of the colony is . what are the requirements for partial queen control to enable the evolutionary success of a mutation in workers that renders them sterile ?
to answer this question for a recessive allele , we turn to the equation directly after equation ( 34 ) in , which is the condition , for any number of matings , , for invasion of a recessive mutation in workers that causes worker sterility : setting and in the equation directly after equation ( 34 ) in , this condition becomes in condition , is the colony reproductive efficiency when a fraction of workers are sterile , and is the fraction of queen-derived males when a fraction of workers are sterile . if condition is satisfied , then a subsequent recessive mutation , , that acts in workers to render them sterile invades a partially queen-controlled population . to determine further whether the recessive allele can fix , we must also consider equation ( 53 ) in , which is the condition , for any number of matings , , for stability of a recessive mutation in workers that causes worker sterility . setting in equation ( 53 ) in , we obtain
\[
\left[ 2\left(\frac{r^*}{r_{1/2}}\right) - 1 \right] > 1 .
\]
in condition , is the colony reproductive efficiency when a fraction of workers are sterile , is the colony reproductive efficiency when a fraction of workers are sterile , and is the fraction of queen-derived males when a fraction of workers are sterile . if condition is satisfied , then a subsequent recessive mutation , , that acts in workers to render them sterile is evolutionarily stable . notice that condition depends on the parameters and , which are related to the effects of the allele for worker sterility . also , notice that condition depends on the parameters , , and , which are related to the effects of the allele for worker sterility . the properties of the particular recessive allele for worker sterility that is under consideration are therefore essential for determining if the allele for partial queen control can facilitate the evolution of complete worker sterility . to gain insight regarding the parameters , , , , and in conditions and , we can consider the following simple case : for the parameter choices given by equations , condition becomes . also , for the parameter choices given by equations , condition becomes .
to determine if partial queen control can enable the evolutionary success of a dominant mutation in workers that renders them sterile , we turn to equation ( 20 ) in , which is the condition , for any number of matings , , for invasion of a dominant mutation in workers that results in worker sterility : setting in equation ( 20 ) in , this condition becomes
\[
\cdots > 2 .
\]
in condition , is the colony reproductive efficiency when a fraction of workers are sterile , is the colony reproductive efficiency when a fraction of workers are sterile , and is the fraction of queen-derived males when a fraction of workers are sterile . if condition is satisfied , then a subsequent dominant mutation , , that acts in workers to render them sterile invades a partially queen-controlled population . to determine further whether the dominant allele can fix , we must also consider the equation directly after equation ( 49 ) in , which is the condition , for any number of matings , , for stability of a dominant mutation in workers that causes worker sterility . setting in the equation directly after equation ( 49 ) in , we obtain in condition , is the colony reproductive efficiency when a fraction of workers are sterile , and is the fraction of queen-derived males when a fraction of workers are sterile .
if condition is satisfied , then a subsequent dominant mutation , , that acts in workers to render them sterile is evolutionarily stable . notice that condition depends on the parameters , , and , which are related to the effects of the allele for worker sterility . also , notice that condition depends on the parameters and , which are related to the effects of the allele for worker sterility . the properties of the particular dominant allele for worker sterility that is under consideration are therefore essential for determining if the allele for partial queen control can facilitate the evolution of complete worker sterility . to gain insight regarding the parameters , , , , and in conditions and , we can again consider the simple case given by equations . for the parameter choices given by equations , condition becomes . also , for the parameter choices given by equations , condition becomes .
figure [ fig : sterility ] shows how partial queen control can facilitate complete worker sterility . in each panel , the boundary between the lower , red region and the middle , green region is given by condition . for values in the lower , red region , the queen does not seize partial control . for values in the middle , green region , the queen seizes partial control , and the workers may or may not become sterile . the boundary between the middle , green region and the upper , blue region is given by condition for ( figure [ fig : sterility](a ) ) , condition for ( figure [ fig : sterility](b ) ) , condition for ( figure [ fig : sterility](c ) ) , and condition for ( figure [ fig : sterility](d ) ) . this boundary determines if workers become sterile after the queen has seized partial control of male production . suppose that the queen seizes partial control of male production . for values in the middle , green region , the mutation for worker sterility does not invade . for values in the upper , blue region , the mutation for worker sterility invades and is evolutionarily stable , rendering workers totally non-reproductive . corresponding simulations of the evolutionary dynamics are shown in figure [ fig : sterility_sim ] . the average fraction of queen-derived males in the population , , is calculated in the same way as for figure [ fig : rep_sim ] .
( figure [ fig : sterility ] caption : a mutation in queens then causes them to seize partial control of male production ( ) . more powerful queen control ( i.e. , mutations causing larger values of ) can evolve more easily , since the critical value of decreases with . but more powerful queen control also lowers the critical value of for a subsequent mutation , acting in workers , to render them sterile . the lower boundary is given by equation , and the upper boundary is given by ( a ) equation for , ( b ) equation for , ( c ) equation for , and ( d ) equation for . for this plot , we use equations , and we set . if we considered instead , then , when plotted between on the horizontal axis , this figure would look qualitatively the same , except that the middle , green region would be smaller . )
( figure [ fig : sterility_sim ] caption : if queens seize a small amount of control over male production ( a ) , then a subsequent mutation , acting in workers , does not cause them to become sterile ( b ) . if queens seize a large amount of control over male production ( c ) , then a subsequent mutation , acting in workers , causes them to become sterile ( d ) . thus , queen control can facilitate the formation of a sterile worker caste . in ( b ) and ( d ) , denotes the colony efficiency when of workers in the colony have the phenotype for worker sterility ; we follow the assumption for in equations for determining the value of for these simulations . the initial conditions are ( a ) and , ( b ) and , ( c ) and , and ( d ) and . for ( b ) and ( d ) , we introduce the allele for worker sterility at time . )
we have studied , in a haplodiploid population-genetic model of a social hymenopteran , the conditions for invasion and fixation of genes that act in queens to suppress worker reproduction . we have also studied the conditions under which selection subsequently favors genes that act in workers to resist queen control . the condition for evolutionary invasion and stability of queen control , condition , is always easier to satisfy than the conditions for subsequent worker acquiescence ; the former condition does not require colony efficiency gains from queen control , while the latter conditions do . therefore , there always exist regions of parameter space where queen control can invade and fix , but where worker suppression of queen control is subsequently selected for .
in these cases ,queen control can be thought of as coercive ( that is , against workers evolutionary interests ) .there also exist regions of parameter space where queen control invades and fixes , and where the conditions for worker acquiescence are satisfied where evolved queen control can be thought of as honest signalling ( that is , in workers evolutionary interests ) .we have thus shown that , within the same simple setup , both coercive control and control via honest signalling are possible .this theoretical result is interesting in light of the continuing empirical debate over whether queen control represents coercion or signalling .many recent works have expressed disfavor toward the coercion hypothesis , but our results demonstrate that coercive control could have evolved often in the social hymenoptera . the crucial consideration in our analysis is how the establishment of queen control changes the colony s overall reproductive efficiency .the efficiency increase , , needed for a queen - control allele to be stable to counteraction by workers , given by conditions or , increases with the strength of queen control ( i.e. , the amount by which exceeds ) .but the efficiency increase , , needed for a subsequent allele , acting in workers , to induce their sterility , given by conditions or , decreases with the strength of queen control ( i.e. , the magnitude of ) .thus , stronger queen control is more susceptible to worker resistance , but it also more easily selects for worker non - reproduction .an understanding of the long - term evolutionary consequences of queen control must consider the specific types of mutations that act in workers and incorporate both of these effects . in our analysis ,colony efficiencies with and without queen control were treated as static parameters . however , because queen control directly limits the workers contribution to the production of drones , it makes it beneficial for workers instead to invest their resources in colony maintenance tasks .therefore , colony efficiency could change if the evolution of queen - induced worker sterility is followed by the evolution of more efficient helping by workers . under this scenario ,it is possible that queen control establishes in a system where worker resistance is initially under positive selection conditions and do not hold but that subsequent efficiency gains by the now - sterile worker caste increase sufficiently that conditions and come to hold , so that worker resistance is no longer selected for . our results facilitate a crucial connection with ongoing experimental efforts in sociobiology .research is underway on the chemical characteristics of queen - emitted pheromones that induce specific primer or releaser effects on workers , and on the molecular mechanisms and gene networks behind reproductive regulation .such experimental programs , together with measurements of the effects of queen control on colony parameters and the mathematical conditions herein , could promote understanding of the precise evolutionary steps that have led to reproductive division of labor .this work was supported by the john templeton foundation and in part by a grant from b. wu and eric larson .xx andrade - silva , a. c. r. , nascimento , f. s. , 2015 .reproductive regulation in an orchid bee : social context , fertility and chemical signalling .106 , 4349 .ayasse , m. , jarau , s. , 2014 .chemical ecology of bumble bees .59 , 299319 .barron , a. b. , oldroyd , b. p. , ratnieks , f. l. w. 
, 2001 .worker reproduction in honey - bees ( _ apis _ ) and the anarchic syndrome : a review .50 , 199208 .bello , j. e. , mcelfresh , j. s. , millar , j. g. , 2015 . isolation and determination of absolute configurations of insect - produced methyl - branched hydrocarbons .usa 112(4 ) , 10771082 .bhadra , a. , mitra , a. , deshpande , s. a. , chandrasekhar , k. , naik , d. g. , hefetz , a. , gadagkar , r. , 2010 .regulation of reproduction in the primitively eusocial wasp _ropalidia marginata _ : on the trail of the queen pheromone .36 , 424431 .bourke , a. f. g. , 1988 .worker reproduction in the higher eusocial hymenoptera .63 , 291311 .bourke , a. f. g. , franks , n. r. , 1995 .social evolution in ants .princeton university press , princeton , nj .brunner , e. , kroiss , j. , trindl , a. , heinze , j. , 2011 .queen pheromones in _ temnothorax _ ants : control or honest signal ?chapuisat , m. , 2014 .smells like queen since the cretaceous .science 343 , 254255 .dapporto , l. , bruschini , c. , cervo , r. , petrocelli , i. , turillazzi , s. , 2010 .hydrocarbon rank signatures correlate with differential oophagy and dominance behaviour in _ polistes dominulus _ foundresses213 , 453458 .doebeli , m. , abouheif , e. , 2015 .modeling evolutionary transitions in social insects .elife 5 , e12721 .eliyahu , d. , ross , k. g. , haight , k. l. , keller , l. , liebig , j. , 2011 .venom alkaloid and cuticular hydrocarbon profiles are associated with social organization , queen fertility status , and queen genotype in the fire ant _solenopsis invicta_. j. chem . ecol .37 , 12421254 .endler , a. , liebig , j. , schmitt , t. , parker , j. e. , jones , g. r. , schreier , p. , holldobler , b. , 2004 .surface hydrocarbons of queen eggs regulate worker reproduction in a social insect .usa 101 , 2945 - 2950 .fischman , b. j. , woodard , s. h. , robinson , g. e. , 2011 .molecular evolutionary analyses of insect societies .usa 108(2 ) , 1084710854 .gadagkar , r. , 1997 . the evolution of communication and the communication of evolution : the case of the honey bee queen pheromone ; in orientation and communication in arthropods , volume 84 of the series exs , 375395 , edited by m. lehrer ; birkhauser , basel , switzerland .gadagkar , r. , 2001 . the social biology of ropalidia marginata : toward understanding the evolution of eusociality .harvard university press , cambridge , ma .gonzlez - forero , m. , gavrilets , s. , 2013 .evolution of manipulated behavior .american naturalist 182 , 439451 .gonzlez - forero , m. , 2014 .an evolutionary resolution of manipulation conflict .evolution 68 , 20382051 .gonzlez - forero , m. , 2015 .stable eusociality via maternal manipulation when resistance is costless .journal of evolutionary biology 28 , 22082223 .heinze , j. , holldobler , b. , peeters , c. , 1994 .conflict and cooperation in ant societies .naturwissenschaften 81 , 489497 .heinze , j. , 2004 .reproductive conflict in insect societies . adv .34 , 157 .heinze , j. , dettorre , p. , 2009 .honest and dishonest communication in social hymenoptera .212 , 17751779 .holldobler , b. , wilson , e. o. , 1990 .harvard university press , cambridge , ma .holman , l. , 2010 .queen pheromones : the chemical crown governing insect social life .3 , 558560 . holman , l. , lanfear , r. , dettorre , p. , 2013 .the evolution of queen pheromones in the ant genus _lasius_. j. evol .26 , 15491558 .holman , l. , 2014 .bumblebee size polymorphism and worker response to queen pheromone .peerj 2 , e604 .hoover , s. e. r. 
, keeling , c. i. , winston , m. l. , slessor , k. n. , 2003 .the effect of queen pheromones on worker honey bee ovary development .naturwissenschaften 90 , 477480 .hunt , j. h. , 2007 .the evolution of social wasps .oxford university press .katzav - gozansky , t. , 2006 .the evolution of honeybee multiple queen - pheromones a consequence of a queen - worker arms race ?j. morphol .23 , 287294 .keller , l. , nonacs , p. , 1993 .the role of queen pheromones in social insects : queen control or queen signal ?45 , 787794 .khila , a. , abouheif , e. , 2008 .reproductive constraint is a developmental mechanism that maintains social harmony in advanced ant societies .usa 105 , 1788417889 .khila , a. , abouheif , e. , 2010 . evaluating the role of reproductive constraints in ant social evolution .b 365 , 617630 .kocher , s. d. , richard , f .- j . ,tarpy , d. r. , grozinger , c. m. , 2009 .queen reproductive state modulates pheromone production and queen - worker interactions in honeybees .20(5 ) , 10071014 .kocher , s. d. , ayroles , j. f. , stone , e. a. , grozinger , c. m. , 2010 .individual variation in pheromone response correlates with reproductive traits and brain gene expression in worker honey bees .plos one 5(2 ) , e9116 .kocher , s. d. , grozinger , c. m. , 2011 .cooperation , conflict , and the evolution of queen pheromones .chem . ecol .37 , 12631275 .koedam , d. , brone , m. , van tienen , p. g. m. , 1997 .the regulation of worker - oviposition in the stingless bee _ trigona ( tetragonisca ) angustula _ illiger ( apidae , meliponinae ) .insectes soc .44 , 229244 .konrad , m. , pamminger , t. , foitzik , s. , 2012 .two pathways ensuring social harmony .naturwissenschaften 99 , 627636 .le conte , y. , hefetz , a. , 2008 .primer pheromones in social hymenoptera .53 , 523542 .leonhardt , s. d. , menzel , f. , nehring , v. , schmitt , t. , 2016 .ecology and evolution of communication in social insects .cell 164 , 12771287 .maisonnasse , a. , alaux , c. , beslay , d. , crauser , d. , gines , c. , plettner , e. , le conte , y. , 2010 . new insights into honey bee ( _ apis mellifera _ ) pheromone communication .is the queen mandibular pheromone alone in colony regulation ?zool . 7 , 18 .michener , c. d. , 1974 .the social behavior of bees : a comparative study .harvard university press , cambridge , ma .mitra , a. , 2014 .queen pheromone and monopoly of reproduction by the queen in the social wasp _ropalidia marginata_. proc .indian nat .80 , 10251044 .mullen , e. k. , daley , m. , backx , a. g. , thompson , g. j. , 2014 .gene co - citation networks associated with worker sterility in honey bees .nowak , m. a. , tarnita , c. e. , wilson , e. o. , 2010 .the evolution of eusociality .nature 466 , 1057 - 1062 .nunes , t. m. , mateus , s. , favaris , a. p. , amaral , m. f. z. j. , von zuben , l. g. , clososki , g. c. , bento , j. m. s. , oldroyd , b. p. , silva , r. , zucchi , r. , silva , d. b. , lopes , n. p. , 2014 .queen signals in a stingless bee : suppression of worker ovary activation and spatial distribution of active compounds .rep . 4 , 7449 .oi , c. a. , van oystaeyen , a. , oliveira , r. c. , millar , j. g. , verstrepen , k. j. , van zweden , j. s. , wenseleers , t. , 2015 .dual effect of wasp queen pheromone in regulating insect sociality .25 , 16381640 .oi , c. a. , van zweden , j. s. , oliveira , r. c. , van oystaeyen , a. , nascimento , f. s. , wenseleers , t. , 2015 . 
the origin and evolution of social insect queen pheromones : novel hypotheses and outstanding problems .bioessays 37 , 808821 .oldroyd , b. p. , halling , l. , rinderer , t. e. , 1999 .development and behaviour of anarchistic honeybees .b 266 , 18751878 .olejarz , j. w. , allen , b. , veller , c. , nowak , m. a. , 2015 . the evolution of non - reproductive workers in insect colonies with haplodiploid genetics .elife 4 , e08918 .olejarz , j. w. , allen , b. , veller , c. , gadagkar , r. , nowak , m. a. , 2016 .evolution of worker policing .( in press ) .oster , g. f. , wilson , e. o. , 1978 .caste and ecology in the social insects .princeton university press , princeton , nj .peso , m. , elgar , m. a. , barron , a. b. , 2015 .pheromonal control : reconciling physiological mechanism with signalling theory .90 , 542559 .ratnieks , f. l. w. , 1988 .reproductive harmony via mutual policing by workers in eusocial hymenoptera .132 , 217 - 236 .ratnieks , f. l. w. , foster , k. r. , wenseleers , t. , 2006 .conflict resolution in insect societies .51 , 581608 .rehan , s. m. , berens , a. j. , toth , a. l. , 2014 . at the brink of eusociality : transcriptomic correlates of worker behaviour in a small carpenter bee .rehan , s. m. , toth , a. l. , 2015 . climbing the social ladder : the molecular evolution of sociality .30 , 426433 .richard , f .- j. , hunt , j. h. , 2013 . intracolony chemical communication in social insects . insect . soc .60 , 275291 .saha , p. , balasubramaniam , k. n. , kalyani , j. n. , supriya , k. , padmanabhan , a. , gadagkar , r. , 2012 .clinging to royalty : _ ropalidia marginata _ queens can employ both pheromone and aggression .insectes soc .59 , 4144 .sharma , k. r. , enzmann , b. l. , schmidt , y. , moore , d. , jones , g. r. , parker , j. , berger , s. l. , reinberg , d. , zwiebel , l. j. , breit , b. , liebig , j. , ray , a. , 2015 .cuticular hydrocarbon pheromones for social behavior and their coding in the ant antenna . cell rep .12 , 12611271 .sledge , m. f. , boscaro , f. , turillazzi , s. , 2001 .cuticular hydrocarbons and reproductive status in the social wasp _polistes dominulus_. behav .49 , 401409 .smith , a. a. , holldobler , b. , liebig , j. , 2011 . reclaiming the crown : queen to worker conflict over reproduction in _aphaenogaster cockerelli_. naturwissenschaften 98 , 237240 .smith , a. a. , holldobler , b. , liebig , j. , 2012 .queen - specific signals and worker punishment in the ant _ aphaenogaster cockerelli _ : the role of the dufour s gland . anim .83 , 587593 .strauss , k. , scharpenberg , h. , crewe , r. m. , glahn , f. , foth , h. , moritz , r. f. a. , 2008 .the role of the queen mandibular gland pheromone in honeybees ( _ apis mellifera _ ) : honest signal or suppressive agent ?62 , 15231531 .thompson , g. j. , yockey , h. , lim , j. , oldroyd , b. p. , 2007 .experimental manipulation of ovary activation and gene expression in honey bee ( _ apis mellifera _ ) queens and workers : testing hypotheses of reproductive regulation .307a , 600610 .toth , a. l. , tooker , j. f. , radhakrishnan , s. , minard , r. , henshaw , m. t. , grozinger , c. m. , 2014 .shared genes related to aggression , rather than chemical communication , are associated with reproductive dominance in paper wasps ( _ polistes metricus _ ) .bmc genom .van oystaeyen , a. , oliveira , r. c. , holman , l. , van zweden , j. s. , romero , c. , oi , c. a. , dettorre , p. , khalesi , m. , billen , j. , wackers , f. , millar , j. g. , wenseleers , t. 
, 2014 .conserved class of queen pheromones stops social insect workers from reproducing .science 343 , 287290 .van zweden , j. s. , 2010 . the evolution of honest queen pheromones in insect societies .3 , 5052 . van zweden , j. s. , bonckaert , w. , wenseleers , t. , dettorre , p. , 2013 .queen signaling in social wasps .evolution 68 , 976986 .vienne , c. , errard , c. , lenoir , a. , 1998 .influence of the queen on worker behaviour and queen recognition behaviour in ants .ethology 104 , 431446 .wagner , d. , brown , m. j. f. , broun , p. , cuevas , w. , moses , l. e. , chao , d. l. , gordon , d. m. , 1998 .task - related differences in the cuticular hydrocarbon composition of harvester ants , _ pogonomyrmex barbatus_. j. chem .24(12 ) , 20212037 .wenseleers , t. , hart , a. g. , ratnieks , f. l. w. , 2004 . when resistance is useless : policing and the evolution of reproductive acquiescence in insect societies .164(6 ) , e154-e167 .wenseleers , t. , ratnieks , f. l. w. , 2006 . enforced altruism in insect societies .nature 444 , 50 .wilson , e. o. , 1971 .the insect societies .harvard university press , cambridge , ma .wossler , t. c. , crewe , r. m. , 1999 .honeybee queen tergal gland secretion affects ovarian development in caged workers .apidologie 30 , 311320 .yew , j. y. , chung , h. , 2015 .insect pheromones : an overview of function , form , and discovery .lipid res .59 , 88105 .zhou , x. , rokas , a. , berger , s. l. , liebig , j. , ray , a. , zwiebel , l. j. , 2015 .chemoreceptor evolution in hymenoptera and its implications for the evolution of eusociality .genome biol .7(8 ) , 24072416 .in this supporting information , we derive evolutionary invasion and stability conditions for queen control of worker reproduction .the mathematical structure of our model is identical to the model featured in .there are three types of females : , , and .there are two types of males : and .the three types of unfertilized females are denoted by , , and .the two types of males are denoted by and .each colony is headed by a single , reproductive female .the types of colonies are denoted by ; ; and , where is the number of the queen s matings that were with type males .we naturally have that .the mating events are shown in figure [ fig : colonies](a ) .the reproduction events are shown in figure [ fig : colonies](b ) .we focus on the evolution of the colony frequencies . using figure [ fig : colonies](b ) , we write each type of reproductive of a colony ( , , , , and ) as follows : \\ x_{aa } = & \sum_{m=0}^{n } \left [ \frac{m}{n}rx_{aa , m } + \frac{1}{2}r'x_{aa , m } + \frac{n - m}{n}r'x_{aa , m } \right ] \\ x_{aa } = & \sum_{m=0}^{n } \left [ \frac{m}{2n}r'x_{aa , m } + \frac{m}{n}r'x_{aa , m } \right ] \\ y_{a } = & \sum_{m=0}^{n } \left [ \frac{2n - m(1-p)}{2n}rx_{aa , m } + \frac{3n-2m+(2m - n)p'}{4n}r'x_{aa , m } \right . \\ & \left .+ \frac{(n - m)(1-p')}{2n}r'x_{aa , m } \right ] \\ y_{a } = & \sum_{m=0}^{n } \left [ \frac{m(1-p)}{2n}rx_{aa , m } + \frac{2m+n+(n-2m)p'}{4n}r'x_{aa , m } \right . 
\\ & \left .+ \frac{n+m+(n - m)p'}{2n}r'x_{aa , m } \right ] \\ \end{aligned } \label{eqn : si_dom_r_steady_state}\ ] ] among equations , the three equations that are relevant for considering invasion of a rare , dominant allele are a small amount of the allele is introduced into the population , then shortly after the perturbation , the colony frequencies have the following form ( with ) : x_{aa,1 } & = & + \epsilon\delta^{(1)}_{aa,1 } & + \mathcal{o}(\epsilon^2 ) \\[0.1 cm ] x_{aa,0 } & = & + \epsilon\delta^{(1)}_{aa,0 } & + \mathcal{o}(\epsilon^2 ) \end{aligned } \label{eqn : si_dom_epsilon}\ ] ] using equations , the density constraint , equation , takes the following form : : we substitute equations , , , and into equations .we find that the condition for evolution of queen control is that the dominant eigenvalue of the jacobian matrix in the following equation is greater than zero : the dominant allele for queen control of worker reproduction increases in frequency if in an alternative treatment , we can consider evolution in discrete time .consider a small amount of the mutant allele , , in the population . denotes the abundance of heterozygous mutant females in generation , and denotes the abundance of mutant males in generation . assuming that each new generation consists only of offspring from the previous generation , what are the abundances of and in the next generation , ?consider the following reproduction events . females mate with wild - type males at rate to make colonies , and colonies make new females at rate . males mate with wild - type females at rate to make colonies , and colonies make new females at rate . females mate with wild - type males at rate to make colonies , and colonies make new males at rate . males mate with wild - type females at rate to make colonies , and colonies make new males at rate . these reproduction events can be summarized as : we again focus on evolution of the colony frequencies . using figure [ fig : colonies](b ) , we write each type of reproductive of a colony ( , , , , and ) as follows : \\ x_{aa } = & \sum_{m=0}^{n } \left [ \frac{m}{n}rx_{aa , m } + \frac{1}{2}rx_{aa , m } + \frac{n - m}{n}r'x_{aa , m } \right ] \\ x_{aa } = & \sum_{m=0}^{n } \left [ \frac{m}{2n}rx_{aa , m } + \frac{m}{n}r'x_{aa , m } \right ] \\y_{a } = & \sum_{m=0}^{n } \left [ \frac{2n - m(1-p)}{2n}rx_{aa , m } + \frac{3n-2m+(2m - n)p}{4n}rx_{aa , m } \right .\\ & \left .+ \frac{(n - m)(1-p')}{2n}r'x_{aa , m } \right ] \\y_{a } = & \sum_{m=0}^{n } \left [ \frac{m(1-p)}{2n}rx_{aa , m } + \frac{2m+n+(n-2m)p}{4n}rx_{aa , m } \right . \\ & \left . 
+ \frac{n+m+(n - m)p'}{2n}r'x_{aa , m } \right ]\\ \end{aligned } \label{eqn : si_rec_r_steady_state}\ ] ] among equations , the six equations that are relevant for considering invasion of a rare , recessive allele are if a small amount of the allele is introduced into the population , then shortly after the perturbation , the colony frequencies have the following form ( with ) : x_{aa,1 } & = & + \epsilon\delta^{(1)}_{aa,1 } & + \epsilon^2\delta^{(2)}_{aa,1 } & + \mathcal{o}(\epsilon^3 ) \\[0.1 cm ] x_{aa,0 } & = & + \epsilon\delta^{(1)}_{aa,0 } & + \epsilon^2\delta^{(2)}_{aa,0 } & + \mathcal{o}(\epsilon^3 ) \\[0.1 cm ] x_{aa,2 } & = & & + \epsilon^2\delta^{(2)}_{aa,2 } & + \mathcal{o}(\epsilon^3 ) \\[0.1 cm ] x_{aa,1 } & = & & + \epsilon^2\delta^{(2)}_{aa,1 } & + \mathcal{o}(\epsilon^3 ) \\[0.1 cm ] x_{aa,0 } & = & & + \epsilon^2\delta^{(2)}_{aa,0 } & + \mathcal{o}(\epsilon^3 ) \end{aligned } \label{eqn : si_rec_epsilon}\ ] ] equations , together with the density constraint , equation , yield equation at . at , the density constraint , equation ,takes the following form : we substitute equations , , , and into equations . at , we have the dominant eigenvalue of the matrix in is zero , and the corresponding eigenvector is we therefore use in the following calculations . we then substitute equations , , , , and into equations . at , we have ^ 2 \label{eqn : si_rec_r_d_dt_aa0 } \end{aligned}\ ] ] we also have ^ 2 \\\dot{\delta}^{(2)}_{aa,0}r^{-(n+1 ) } = & \ ; \frac{-1}{2n}\left(-2\delta^{(2)}_{aa,1}+n\delta^{(2)}_{aa,0}\right ) \\ & + \frac{2}{n}\delta^{(2)}_{aa,2 } \\ & + \frac{1}{2}\delta^{(2)}_{aa,1 } \\& + r'r^{-1}\delta^{(2)}_{aa,0 } \\ & -\frac{2n}{(n+2)^2}\left[\delta^{(1)}_{aa,0}\right]^2 \\\dot{\delta}^{(2)}_{aa,2}r^{-(n+1 ) } = & \ ; -\delta^{(2)}_{aa,2}+\frac{n(n-1)}{2(n+2)^2}\left[\delta^{(1)}_{aa,0}\right]^2 \\\dot{\delta}^{(2)}_{aa,1}r^{-(n+1 ) } = & \ ; -\delta^{(2)}_{aa,1}+\frac{2n}{(n+2)^2}\left[\delta^{(1)}_{aa,0}\right]^2 \\\dot{\delta}^{(2)}_{aa,0}r^{-(n+1 ) } = & \ ; -\delta^{(2)}_{aa,0}+\frac{1}{2n}\delta^{(2)}_{aa,1 } \end{aligned } \label{eqn : si_rec_derivative_aaa_2_r}\ ] ] integrating the equation for , we get ^ 2[1-\exp(-r^{n+1}t ) ] \label{eqn : si_rec_r_aa2}\ ] ] integrating the equation for , we get ^ 2[1-\exp(-r^{n+1}t ) ] \label{eqn : si_rec_r_aa1}\ ] ] using the solution for to solve for , we get ^ 2[1-(1+r^{n+1}t)\exp(-r^{n+1}t ) ] \label{eqn : si_rec_r_aa0}\ ] ] the equations for and can be manipulated to yield ^ 2 \\ \end{aligned}\ ] ] integrating this equation to solve for the quantity , we obtain ^ 2 \\ & + \frac{2n((2-p)(p - p'r'r^{-1})+pp'r'r^nt)}{(n+2)^2p^2}\left[\delta^{(1)}_{aa,0}\right]^2\exp(-r^{n+1}t ) \\ & -\frac{8n(p - p'r'r^{-1})}{(n+2)^2p^2(2+p)}\left[\delta^{(1)}_{aa,0}\right]^2\exp\left(\frac{-(2+p)}{2}r^{n+1}t\right ) \end{aligned } \label{eqn : si_rec_r_aa1_aa0}\ ] ] the queen - control allele invades a resident wild - type population if substituting , , , , and into , we find that the recessive allele for queen control of worker reproduction increases in frequency if suppose that we initially have a population in which all queens suppress their workers reproduction . 
if we introduce a small amount of the allele for no queen control , , and if the queen - control allele , , is dominant , then is queen control evolutionarily stable to being undone by non - controlling queens ?based on what we already have , the evolutionary stability condition for a dominant queen - control allele is obtained readily using a simplified procedure .notice that the calculations of section [ sec : si_rec_invasion ] for invasion of a recessive queen - control allele describe the following scenario : we begin with a homogeneous population of colonies , where all individuals are homozygous for the allele .a fraction of males originate from the queen , and each colony s reproductive efficiency is .the mutant allele s only effects are to change the value of relative to and to alter the colony efficiency relative to . here , and are the fraction of queen - derived males and the colony efficiency , respectively , for colonies headed by type and type queens , while and are the corresponding parameters for colonies headed by type queens . then consider the evolutionary stability of a dominant allele for controlling queens .we again begin with a homogeneous population of colonies , but in this case , all individuals are homozygous for the allele . a fraction of males originate from the queen , and each colony s reproductive efficiency is .the mutant allele s only effects are to change the value of relative to and to alter the colony efficiency relative to .here , and are the fraction of queen - derived males and the colony efficiency , respectively , for colonies headed by type and type queens , while and are the corresponding parameters for colonies headed by type queens .therefore , if we take equation , swap and , swap and , and reverse the sign of the inequality , then we obtain the condition for evolutionary stability of a dominant queen - control allele : suppose that we initially have a population in which all queens suppress their workers reproduction .if we introduce a small amount of the allele for no queen control , , and if the queen - control allele , , is recessive , then is queen control evolutionarily stable to being undone by non - controlling queens ?based on what we already have , the evolutionary stability condition for a recessive queen - control allele is obtained readily using a simplified procedure .notice that the calculations of section [ sec : si_dom_invasion ] for invasion of a dominant queen - control allele describe the following scenario : we begin with a homogeneous population of colonies , where all individuals are homozygous for the allele .a fraction of males originate from the queen , and each colony s reproductive efficiency is .the mutant allele s only effects are to change the value of relative to and to alter the colony efficiency relative to . here , and are the fraction of queen - derived males and the colony efficiency , respectively , for colonies headed by type queens , while and are the corresponding parameters for colonies headed by type and type queens. then consider the evolutionary stability of a recessive allele for controlling queens .we again begin with a homogeneous population of colonies , but in this case , all individuals are homozygous for the allele .a fraction of males originate from the queen , and each colony s reproductive efficiency is .the mutant allele s only effects are to change the value of relative to and to alter the colony efficiency relative to . 
here , and are the fraction of queen - derived males and the colony efficiency , respectively , for colonies headed by type queens , while and are the corresponding parameters for colonies headed by type and type queens. therefore , if we take equation , swap and , swap and , and reverse the sign of the inequality , then we obtain the condition for evolutionary stability of a recessive queen - control allele :
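Conditions of this kind are obtained by linearizing the colony-frequency dynamics about the resident equilibrium and asking whether the dominant eigenvalue of the resulting Jacobian is positive (invasion) or negative (stability). As a minimal, generic sketch of that numerical check (using a placeholder right-hand side, not the model-specific equations derived above), one might write:

```python
import numpy as np

def jacobian(f, x_star, eps=1e-7):
    """Finite-difference Jacobian of the dynamics f at the equilibrium x_star."""
    n = len(x_star)
    f0 = f(x_star)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x_star + dx) - f0) / eps
    return J

def mutant_invades(f, x_star):
    """A rare mutant invades if the dominant eigenvalue of the linearized
    dynamics has positive real part; it fails to invade (the resident is
    stable against it) if all real parts are negative."""
    return np.linalg.eigvals(jacobian(f, x_star)).real.max() > 0

# placeholder dynamics; for the model above, f would be the colony-frequency
# equations restricted to the mutant-carrying classes, evaluated at the
# resident (all-wild-type or all-queen-control) equilibrium
example_f = lambda x: np.array([0.1 * x[0] - 1.0 * x[1],
                                0.05 * x[0] - 0.2 * x[1]])
print(mutant_invades(example_f, np.zeros(2)))   # False for this placeholder
```

For the actual model, replacing the placeholder right-hand side by the colony-frequency equations and the equilibrium by the resident state reproduces, through the sign of the dominant eigenvalue, the invasion and stability conditions stated in this appendix.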
a trademark of eusocial insect species is reproductive division of labor , in which workers forego their own reproduction while the queen produces almost all offspring . the presence of the queen is key for maintaining social harmony , but the specific role of the queen in the evolution of eusociality remains unclear . a long - discussed scenario is that a queen either behaviorally or chemically sterilizes her workers . however , the demographic and ecological conditions that enable such manipulation are unknown . accordingly , we propose a simple model of evolutionary dynamics that is based on haplodiploid genetics . we consider a mutation that acts in a queen , causing her to control the reproductive behavior of her workers . our mathematical analysis yields precise conditions for the evolutionary emergence and stability of queen - induced worker sterility . these conditions do not depend on the queen s mating frequency . moreover , we find that queen control is always established if it increases colony reproductive efficiency and can evolve even if it decreases colony efficiency . we further outline the conditions under which queen control is evolutionarily stable against invasion by mutant , reproductive workers .
cosmology is living a golden age thanks to the analysis of high quality data that are being collected during the last years by several experiments . among the observablesused to probe the nature of the universe , the cosmic microwave background ( cmb ) temperature and polarization fluctuations provide a unique tool that is helping to establish a well defined picture of the origin , evolution , and matter and energy content of the universe ( e.g. , * ? ? ?* ; * ? ? ?* see for a recent review ) .however , since the public release of the wmap 1st - year data in 2003 , and the subsequent data releases , several results have been reported that seem to challenge the statistically isotropic and gaussian nature of the cmb , predicted by the standard inflationary theory . among these anomalies , the exceptionally large and cold spot ( hereinafter the cold spot or cs ) that was identified in the southern hemisphere ( l = , b = ) through a wavelet analysis is one of the features that has attracted more attention from the scientific community .the cs has been widely confirmed by subsequent analyses ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , carried out by different groups and using different kinds of techniques .recently have claimed that the cold spot originally found by was , actually , an artifact caused by the particular choice of the spherical mexican hat wavelet ( smhw ) as the tool to analyse the data . to support this argument, the authors showed how the use of isotropic filters with variable width , like a top - hat or a gaussian function , failed to provide a deviation from gaussianity .we do not agree with the conclusions reached in that paper .the results obtained by the authors just indicate that not all filtering kernels are equally optimal to detect or amplify a particular signature. in particular , wavelets ( which are compensated filters ) are better suited for this purpose than other non - optimised kernels .it is well known that wavelets increase the signal - to - noise ratio of those features with a characteristic scale similar to the one of the wavelet .this amplification is obtained by filtering out the instrumental noise and the inflationary cmb fluctuations at smaller and larger scales .the arguments given by have been merely repeated by in a recent work .a number of possible explanations for the cs have been suggested in the literature , namely contamination from residual foregrounds ( e.g. , * ? ? ?* ; * ? ? ?* ) , particular brane - world models , the collision of cosmological bubbles ( e.g. * ? ? ?* ) the non - linear integrated sachs - wolfe effect produced by the large scale structure ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?? * ; * ? ? ?* ) , or inverse compton scattering via the sunyaev - zeldovich effect , supported by the presence of a large cluster of galaxies in the direction of the cs ( the eridanus super - group , * ? ? ?however , some works have shown that these explanations are very unlikely ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , since , depending on the case , they would require very special conditions to be able to explain the cs , such as a very particular mixing - up of the foreground emissions , an unfeasible electron gas distribution , a very peculiar situation of the milky way with respect to some hypothetical large voids , or the existence of huge voids much larger than the ones expected from the standard structure formation scenario . 
in particular , the latter would imply a much more extreme departure from gaussianity than the one that these models are trying to explain ! nevertheless , there is an alternative hypothesis that has not been ruled out yet , which is compatible with current observations . suggested that the cs could be produced by the non - linear evolution of the gravitational potential generated by a collapsing cosmic texture . in that work, a bayesian analysis showed that the texture hypothesis was preferred with respect to the pure standard gaussian scenario , and that the values describing the properties of the texture were compatible with current cosmological observations .in particular , the energy scale for the symmetry breaking that generates this particular type of topological defect ( gev ) , was in agreement with the upper limits established by means of the angular power spectrum ( e.g. , * ? ? ? * ; * ? ? ? * ) .of course , this result does not guarantee by itself the existence of cosmic textures , nor that the cs is caused by a collapsing texture .in fact , further tests are needed , and some of them were already indicated in .first , the texture model makes predictions about the expected number of cosmic textures with an angular scale equal or greater than .in particular , the presence of around 20 cosmic textures with is predicted . some works , like ,have already reported the existence of other anomalous spots , which could potentially be related to the presence of additional textures .second , the pattern of the cmb lensing signal induced by such a texture is known , and high resolution cmb experiments ( like the atacama cosmology telescope and the south pole telescope ) should be able to detect such a signal , if present .this issue has recently been addressed by .finally , the polarization of the cmb is an additional source of information that provides further insight on the texture hypothesis .a lack of polarization is expected for the texture hypothesis , as compared to the typical values associated to large fluctuations of a gaussian and isotropic random field .this is because the effect of a collapsing texture on the cmb photons is merely gravitational .this difference in the polarization is the topic of this work .nevertheless , it is worth recalling that a collapsing texture is not the only way of producing a local non - linear evolution of the gravitational potential , and , therefore , a relative lack of the local polarization signal .other physical processes could also generate such a secondary anisotropy on the cmb photons .in fact , some of these effects have also been proposed as possible explanations for the cs .for instance , as previously mentioned , a very large void ( e.g. , * ? ? ?* ) could produce the required non - linear evolution and , therefore , it would be affected by a relative lack of polarization . however , this explanation is discarded from both current large scale structure modelling ( e.g. , * ? ? ?* ; * ? ? ?* ) and dedicated observations .for this reason , in this paper we consider the non - linear integrated sachs - wolfe ( also called rees - sciama ) effect caused by a collapsing texture as the most plausible explanation . in any case , we remark that the results derived in this paper for the texture model can also be expected in the most general situation of any physical process producing cmb secondary anisotropies , in the form of large spots in temperature , via the non - linear sachs - wolfe effect .the paper is organised as follows . 
in section [ sec : chara ] we provide a characterisation of the radial profile ( both in temperature and polarization ) for gaussian spots as extreme as the cs . for comparison, we also investigate the case of random positions .a method , which exploits the correlation between the temperature and the polarization profiles , is proposed to discriminate between the gaussian ( null ) and the texture ( alternative ) hypotheses in section [ sec : method ] .the results are given in section [ sec : results ] , where the ability to discriminate between the two considered hypotheses is discussed for different instrumental sensitivities .finally , conclusions are presented in section [ sec : final ] .as already mentioned , the cross - correlation of the temperature ( ) and polarization ( ) signals around the position of the cs , could be an excellent discriminator between the null and alternative hypotheses .in other words , this quantity could indicate whether this feature is better described by a standard gaussian and isotropic field , or , conversely , by a non - standard cosmological model producing temperature spots which do not present a correlated polarization feature ( as the topological defects ) . in this latter case ,the cs is assumed to be caused by a secondary anisotropy of the cmb photons , altered by a non - linearly evolving gravitational potential produced , for instance , by a collapsing cosmic texture .hence , this _ alternative _ hypothesis ( ) would correspond to cmb fluctuations generated by the standard inflationary model , but with a non - negligible contribution from topological defects ( as it would be the case for the cs ) .conversely , the _ null _ hypothesis ( ) would be the case in which all the cmb fluctuations ( including the cs ) are due to a pure standard gaussian and isotropic field .it is interesting to point out that for the case of the alternative hypothesis , the e - mode signal is not expected to contain contributions from scalar perturbations but only from vector perturbations , which are around one order of magnitude smaller .therefore , for a cmb temperature feature as extreme as the cs , one would expect more polarization signal if such temperature fluctuation is caused by the standard inflationary model , than for the case in which , for instance , a collapsing cosmic texture is producing such a large spot .we aim to characterise the cmb temperature and ( e - mode ) polarization features through a radial profile .the reason to adopt this characterisation is simple : the shape of the cs is close to spherical , with a typical size of around 10 degrees . at this point , it is important to recall that the cs was first identified as an anomalous feature with an amplitude of times the dispersion of the spherical mexican hat wavelet ( smhw ) coefficients at a wavelet scale arcmin ( for details see * ? ? ? * ) .follow - up tests explored additional characteristics of the cs finding even lower p - values ( e.g. , * ? ? ? * ) , but for the sake of simplicity and robustness , we adopt the original detection as the statistical property that characterises the cs .let us define , for a given position , the radial profile in temperature and in polarization as : where and are the temperature and the e - mode polarization maps , respectively .the sums are extended over the positions which are at a distance from i.e ., such as : $ ] . 
is the width of the considered rings and represents the number of positions ( or pixels in a map at a given angular resolution ) satisfying the previous condition . in figure[ fig : profiles ] we plot the mean value and dispersion of the temperature and polarization radial profiles for two different cases .the first case , labelled as _ extrema _ , corresponds to the radial profiles and associated to positions where the cmb gaussian temperature field has a feature , at least , as extreme as the cs ( i.e. , having an amplitude above 4.57 times the dispersion of the wavelet coefficients at a wavelet scale arcmin , in absolute value ) . note that , although the cs is actually cold ( i.e , it is a minimum ) , in this work we will consider the more general case of having an extremum of the cmb field .we adopt this criterion since , for the case of cosmic textures , either hot or cold spots can be produced .the second case is labelled as _ random _ , and it corresponds to the radial profiles associated to random positions selected in the cmb gaussian temperature field .these mean radial profiles have been obtained after averaging over many simulations , carrying out the following procedure .first , a cmb gaussian simulation is generated ( containing t , q , and u maps ) at a resolution given by the healpix parameter .subsequently , the temperature component of the simulation is filtered with the smhw at a scale arcmin .a feature as extreme as the cs is then sought in the wavelet coefficient map .if this is not found , a new simulation is generated .conversely , if a cs - like feature is present in the temperature map , we compute the e map from the pseudo - scalars q and u , as well as the temperature and polarization profiles at the position where the extremum is located . in addition , a random position in the temperature map is selected and the and profiles are computed at this random position . 
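As a concrete sketch of how the ring-averaged profiles defined above can be computed on a pixelized sky, the following Python function averages a HEALPix map over rings of angular radius theta around a given direction; the resolution, ring width, and function names are illustrative and not taken from the authors' pipeline:

```python
import numpy as np
import healpy as hp

def radial_profile(sky_map, center_vec, thetas_deg, dtheta_deg=1.0):
    """Mean of sky_map over rings of width dtheta_deg centred on the
    angular distances thetas_deg from the unit vector center_vec."""
    nside = hp.get_nside(sky_map)
    pix_vecs = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))
    # angular distance (degrees) of every pixel from the chosen centre
    ang = np.degrees(np.arccos(np.clip(center_vec @ pix_vecs, -1.0, 1.0)))
    prof = np.empty(len(thetas_deg))
    for k, th in enumerate(thetas_deg):
        in_ring = (ang >= th - dtheta_deg / 2) & (ang < th + dtheta_deg / 2)
        prof[k] = sky_map[in_ring].mean()
    return prof

# usage sketch: profiles from 1 to 18 degrees around the spot position
# thetas = np.arange(1.0, 19.0)
# T_prof = radial_profile(T_map, spot_vec, thetas)
# E_prof = radial_profile(E_map, spot_vec, thetas)
```

Here the E map would be built beforehand from the simulated Q and U maps (for example via a polarized spherical-harmonic transform), as described in the simulation procedure above.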
the left panel in figure [ fig : profiles ] shows both cases ( _ extrema _ as the blue solid line , and _random _ as the green dot - dashed line ) for temperature , while the right panel corresponds to the ( e - mode ) polarization .let us remark that , for _ extrema _ that are cold spots , the absolute value of the profile has been considered .the curves show the profiles from 0.5 to 25 degrees , with a step of degrees .we also plot the 1- level ( dotted lines ) associated to the probability distribution of the profiles at a given distance , obtained from the 10000 simulations used to compute these estimates .note that , for the case of the polarization signal , the _ extrema _ and the _ random _ profiles overlap at the 1- level , which indicates that very little information can be obtained from the analysis of the polarization signal alone .this is a justification to consider the polarization information only via the cross - correlation with the temperature fluctuations .hence , the differences between these curves , expressed in terms of their mutual correlations , are the ingredients used to define a methodology to discriminate between the standard gaussian ( null ) and the non - standard _ cosmic texture _( alternative ) hypotheses .this is addressed in the next section .in this section we describe a methodology to distinguish between the competitive hypotheses already mentioned : the standard gaussian and isotropic inflationary model ( , null hypothesis ) , and a non - standard model that accounts , for instance , for cosmic textures , in addition to cmb fluctuations coming from the standard inflationary model ( , alternative hypothesis ) .the key point to discriminate between these two scenarios is to exploit the differences between the cross - correlation of the temperature and the polarization radial profiles described in the previous section .let us define , first , these two hypotheses ( sections [ subsec : signal_h0 ] and [ subsec : signal_h1 ] ) in terms of the temperature and polarization profiles . afterwards( section [ subsec : discriminator ] ) we will build the discriminator , based on the fisher discriminant .under the assumption of the null hypothesis ( i.e. , a cmb signal completely described in terms of the standard inflationary model ) , the cross - correlation of the temperature and polarization profiles at position ( i.e. , where the cmb temperature map presents a cs - like feature ) is given by : is a vector with components ( i.e. , ) . in our analysiswe have since we consider values of and from 1 to 18 degrees , with degree .we have tested that including smaller or larger scales does not increase significantly the discrimination ability of our estimator . after averaging over simulations ,we can obtain both , the mean value of this signal vector ( ) , and the covariance matrix among the different components of the vector ( , with dimension ) .we define the -component of the signal vector and the -element of the covariance matrix as : where is the total number of simulations used to compute these estimators . in our analysis , we consider = 10,000 . under the assumption of the alternative hypothesis ( e.g. 
, the case in whichthe cmb fluctuations are generated from the standard gaussian and isotropic field , plus a contribution of cosmic textures which is , indeed , responsible for the cs ) , the cross - correlation of the temperature and polarization profiles at position ( where the cmb temperature map has a feature as extreme as the cs ) is given by : where the first term at the right - hand side of the equation corresponds to the correlation between a radial profile in temperature for a cs - like feature , and a radial profile in polarization associated to a typical fluctuation generated by the gaussian and isotropic component .this term accounts for the fact that a cosmic texture would add an almost negligible polarization signal . as already mentioned , the reason is that textures do not produce e - mode scalar perturbations , but vector perturbations , which are one order of magnitude smaller than the former .in addition , the term is a small correction ( as compared to the previous term ) that can be seen as a bias accounting from the residual correlations between the temperature and polarization profiles in a random position of the cmb map , i.e. , , where : with note that the bias term is required to account for the typical correlations that exist between the temperature and the polarization field . in other words , it accounts for the te cross - correlations due to the underlying isotropic and gaussian fluctuations , where the cs ( caused by the cosmic texture ) is placed . as in the previous case, is a vector with components .its mean value ( ) and covariance matrix accounting for the correlations between the components ( ) are given by : as before , is the total number of simulations ( 10,000 ) used to compute these estimators .the signal vectors defining the and hypotheses ( and , respectively ) contain all the required information to distinguish between these two different scenarios. however , a practical way to add together all this information is required ( each vector has components ) .there are different possibilities , such as building a .however , we prefer to adopt a mechanism that provides an optimal way to combine this information in the sense of obtaining the largest separation between the two hypotheses : the fisher discriminant . the reader can find applications of the fisher discriminant related to cmb gaussianity studies in several works ( e.g. , * ? ? ?* ; * ? ? ?the fisher discriminant applied to signals corresponding to the hypothesis , provides a set of numbers ( ) where all the information available for the null hypothesis ( i.e. , , and ) has been optimally combined . to construct this combination ,the overall properties of the alternative hypothesis ( i.e. , and ) are also taken into account .analogously , the fisher discriminant applied to signals following the hypothesis provides a set of , that are built from the information related to the hypothesis ( i.e. , , and ) , and the overall information related to the null hypothesis ( i.e. , and ) .more specifically ( see , for instance , * ? ? 
?* ) , for a given simulation , the and quantities are given by : where denotes standard matrix transpose , and the matrix is obtained as .in this section we present the results of applying the previously described methodology to cmb simulations .we have performed simulations that are compatible with the cosmological model determined by the analysis of the wmap data .the determination of the radial profiles in the temperature and ( e - mode ) polarization maps is performed at , since only angular scales larger than 1 degree are considered .as mentioned in the previous section , 10,000 simulations have been used to estimate the mean value of the signal vectors ( and ) , that contain the cross - correlation between the profiles and , as well as the covariance matrices and defining the correlation between the components of these vectors .one thousand additional simulations have been used to calculate the distribution of the fisher discriminants and .we have studied the power of the proposed methodology to distinguish between the null ( ) and the alternative ( ) hypotheses , for different instrumental noise levels in the polarization maps ( ) . in particular , we have studied in detail three scenarios corresponding to an ideal instrument ( ) , to the quijote experiment ( per square degree , see * ? ? ?* ) , and to the esa planck satellite ( per square degree , see * ? ? ?* ) . in figure[ fig : fisher ] we plot the distribution of the fisher discriminants ( solid blue lines ) and ( dot - dashed red lines ) for these three cases : the ideal noise - free experiment is represented in the left panel , the output for the quijote experiment is provided in the middle panel , and , finally , the case for the planck satellite is shown in the right panel .the vertical lines indicate the mean value for each distribution . at a power of the test ,the significance levels are : 0.008 for the ideal experiment , 0.014 for the quijote experiment , and 0.069 for the planck satellite .a more complete picture of the significance level to discriminate between the and hypotheses is given in figure [ fig : s2n ] , where the significance level ( for a power of the test of 0.5 ) is shown as a function of the instrumental noise level .the vertical lines from left to right indicate the noise levels for quijote , planck and wmap5 .the previously estimated significance levels have been calculated for te correlations _ given _ that the temperature was anomalous .therefore we can denote them as .however , we can use both , te and t , in order to discriminate between the null and alternative hypotheses .hence we have .we will set since this is a very robust and conservative estimation for the p - value of the cs in temperature ( see * ? ? ? * for details ) . 
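To make the construction above concrete, the sketch below estimates the mean signal vectors and covariance matrices from batches of simulated T-E cross-correlation vectors and projects a new vector onto the Fisher direction; the common textbook choice W = S0 + S1 for the pooled within-class matrix is an assumption here, not necessarily the authors' exact definition:

```python
import numpy as np

def fit_hypothesis(xi_sims):
    """xi_sims: (n_sims, d) array, one T-E cross-correlation vector per
    simulation. Returns the sample mean vector and covariance matrix."""
    return xi_sims.mean(axis=0), np.cov(xi_sims, rowvar=False)

def fisher_statistic(x, mu0, S0, mu1, S1):
    """Fisher linear discriminant t(x) = (mu1 - mu0)^T W^{-1} x,
    with the assumed pooled matrix W = S0 + S1."""
    w = np.linalg.solve(S0 + S1, mu1 - mu0)
    return x @ w

# mu0, S0 = fit_hypothesis(xi_H0)   # null-hypothesis simulations
# mu1, S1 = fit_hypothesis(xi_H1)   # alternative-hypothesis simulations
# t0 = fisher_statistic(xi_H0_test, mu0, S0, mu1, S1)  # t distribution under H0
# t1 = fisher_statistic(xi_H1_test, mu0, S0, mu1, S1)  # t distribution under H1
```

With the power of the test fixed at 0.5, the significance levels quoted in the text correspond to the fraction of the t(H0) distribution lying beyond the median of the t(H1) distribution.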
in this way , the significance levels ( in percentage ) are found to be 0.014% for an ideal noise free experiment , 0.025% for the quijote telescope and 0.12% for planck .significance level to reject the hypothesis ( at a power of the test of 0.5 ) , as a function of the instrumental noise level in the polarization map ( given in per square degree ) .the vertical lines from left to right indicate the noise levels for quijote , planck and wmap5.,width=302 ] the cmb polarization signature of the cs is proposed to distinguish between the possibility that it is just a rare fluctuation from the gaussian inflationary scenario ( null hypothesis ) or that it is due to the gravitational effect produced by a non - standard cosmological model , as for example the cosmic texture model ( alternative hypothesis ) .obviously , cosmic textures are not the only physical process generating secondary anisotropies via the non - linear integrated sachs - wolfe effect .for instance , a very large void in the large scale structure , could generate |at least qualitatively speaking| a similar effect .however , as many works have already indicated ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , the void hypothesis is very unlikely . on the contrary ,cosmic textures have proven to be a plausible explanation . whereas polarization alone is not enough to discriminate between the two hypotheses ,the te cross - correlation provides a significant signal . in the casethat the null hypothesis is correct , one would expect a significant cross - correlation signal with an amplitude corresponding to that of the largest spot in the temperature sky . on the contrary , if the alternative hypothesis is true , no additional polarization ( and thus no cross - correlation ) signal would be expected from the gravitational effect of the texture collapse . in this latter case ,the only expected te signal would be the one corresponding to a random inflationary fluctuation .the test proposed in this paper makes use of the fisher discriminant constructed from all possible te cross - correlation combinations formed from the temperature and polarization profiles at the position of a cs - like feature . in the best case of an ideal noise - free polarization experiment the null hypothesis for the cs te signalcan be rejected at a significance level of 0.8% .for the case of quijote and planck this result becomes 1.4% and 6.9% , respectively .finally , we may wonder about the probability at which the null hypothesis can be rejected by taking into account both the temperature and polarization information of the cs . considering that in the inflationary scenario the probability of having a temperature as extreme as the one measured for the cs is 1.8% , then the combination of this probability with the one found in this work using polarization information , would provide a significance of 0.014% , 0.025% and 0.12% for the ideal , quijote and planck experiments , respectively .we acknowledge partial financial support from the spanish ministerio de ciencia e innovacin project aya2007 - 68058-c03 - 02 .pv also thanks financial support from the _ ramn y cajal _ programme .mt thanks the physikalisch - meteorologisches observatorium davos / world radiation center ( pmod / wrc ) for having provided him facilities to carry out the investigation . the healpix package was used throughout the data analysis .barreiro r.b . 
,2010 , in _ highlights of spanish astrophysics v _ , proceedings of the viii scientific meeting of the spanish astronomical society ( sea ) , eds .j. gorgas , l. j. goicoechea , j. i. gonzlez - serrano , j. m. diego rubio - martn j.a .et al . , 2010 , in _ highlights of spanish astrophysics v _ , proceedings of the viii scientific meeting of the spanish astronomical society ( sea ) , eds .j. gorgas , l. j. goicoechea , j. i. gonzlez - serrano , j. m. diego
one of the most interesting explanations for the non - gaussian cold spot detected in the wmap data by , is that it arises from the interaction of the cmb radiation with a cosmic texture . in this case , a lack of polarization is expected in the region of the spot , as compared to the typical values associated to large fluctuations of a gaussian and isotropic random field . in addition , other physical processes related to a non - linear evolution of the gravitational field could lead to a similar scenario . however , some of these alternative scenarios ( e.g. , a large void in the large scale structure ) have been shown to be very unlikely . in this work we characterise the polarization properties of the cold spot under both hypotheses : a large gaussian fluctuation and an anomalous feature generated , for instance , by a cosmic texture . we also propose a methodology to distinguish between them , and we discuss its discrimination power as a function of the instrumental noise level . in particular , we address the cases of current experiments , like wmap and planck , and others in development as quijote . we find that for an ideal experiment with a high polarization sensitivity , the gaussian hypothesis could be rejected at a significance level better than 0.8% . while wmap is far from providing useful information in this respect , we find that planck will be able to reach a significance of around 7% ; in addition , we show that the ground - based experiment quijote could provide a significance of around 1% , close to the ideal case . if these results are combined with the significance level found for the cold spot in temperature , the capability of quijote and planck to reject the alternative hypothesis becomes 0.025% and 0.124% , respectively . [ firstpage ] cosmology : cosmic microwave background methods : data analysis methods : statistical
the ligo project is part of the current initiative to detect gravitational radiation via its perturbing effects on resonant laser interferometers .this initiative involves several large collaborations around the world , including virgo , geo , tama , and aciga .a paramount issue in all such projects is the maximization of interferometer sensitivity to detect the extremely weak signals that are expected from even the most powerful astrophysical sources . to that end ,coupled - cavity systems with multiple resonant stages will be used to maximize the shot - noise - limited signal - to - noise ratio of the gravitational wave ( gw ) signal readout . for these complex interferometric detectors ,intensive modeling is necessary to estimate their performance in the presence of general optical imperfections .a hierarchy of approaches has been used to estimate detector performance , each method negotiating the trade - off between accuracy and computational complexity .analytical methods may suffice for the consideration of optical defects that can be treated as pure losses , or for some cases involving geometric or randomized mirror deformations .a matrix model that evaluates the coupling between the first few lowest - order tem laser modes has been shown to be useful for the study of mirror tilts and beam displacements ; and another matrix model using discrete hankel transforms exists for problems with axial symmetry .such models , which consider the exchange of power between a limited set of pre - specified modes , allow one to obtain fast results that can be used for predicting certain important interferometer behaviors ( e.g. , time - dependent detector responses ) , at the expense of some sophistication in modeling the detailed interferometer steady - state power buildup . for the consideration of highly general optical imperfections , however , the most comprehensive method is the complete modeling of the transverse structure of the laser field wavefronts ; this method simulates the electric fields on large grids , and uses ( fourier transform - based ) numerical computations for the propagations of these laser beams through long cavities .this technique is useful in the case of mirrors with complex deformations , and for mirrors with significant losses ( due to absorption , scattering , and/or diffractive loss from finite - sized apertures ) , that violate the assumption of unitarity for mirror operators in the matrix models , thus introducing complications into that approach .grid - based simulations of intra - cavity laser fields have a long history ( e.g. , ) , and have been applied previously ( e.g. , ) to the study of the interferometric gw detectors now being implemented .but simplifications are generally imposed , such as restricting the optical deformations that are studied ( e.g. , considering only geometric imperfections , like tilt and curvature mismatch ) , and/or by modeling a relatively simple cavity system . in this paper , we describe a simulation program ( originally based upon the work of vinet _ et al . 
_ ) that has been extended to efficiently model the complex fields that build up between realistically imperfect optical components , while resonating within complete , coupled - cavity interferometers like those used for the ligo detectors .versions of our program have been used for a variety of applications by the gravitational wave community , including numerous design and performance estimation tasks conducted by the ligo group itself ( see section [ s6 ] ) , as well as for collaborative investigations between ligo scientists and other groups such as aciga - ligo efforts to explore alternative interferometer length control schemes , and tama - ligo efforts to estimate the effects of mirror imperfections and thermal lensing upon the performances of their future , large - scale interferometers .a wide variety of interferometer imperfections can be modeled with our program , including mirror tilts and shifts , beam mismatch and/or misalignment , diffractive loss from finite mirror apertures , mirror surface figure and substrate inhomogeneity profiles in particular , using deformation phase maps that are adapted from measurements of real mirror surfaces and substrates as well as fluctuations of reflection and transmission intensity across the mirror profiles .in addition to the main ( carrier frequency ) laser field , auxiliary fields ( i.e. , radio frequency sidebands ) for ligo s heterodyne signal detection scheme are also modeled , allowing us to give absolute numbers for the shot - noise - limited gw - sensitivity of a single ligo interferometer .furthermore , a number of active optimization procedures are performed continuously during code execution , guaranteeing that several key parameters in the ligo configuration will be brought to their optimum values upon program completion ; the gw - sensitivity is thus maximized for each specific `` realistic '' interferometer , given the particular imperfections being simulated in that run . considering the time needed for program execution , we note that though our principal usage of the program has been on various supercomputing platforms , a full run of the simulation code ( with all options included ) can be performed in a non - prohibitive amount of time on a modest sun sparcstation , even for significantly ( and non - symmetrically ) deformed mirrors .this has been achieved through a combination of fast iteration techniques , efficient parameter optimization routines , and procedures designed to carefully choose the initial guesses for laser fields that are computed via relaxation .lastly , we note that versions of the code are available for both the first - generation ligo and advanced - ligo ( dual recycling ) configurations , though we will focus primarily upon the former in this paper .the discussion is organized as follows : in section [ s2 ] , we give a description of the physical system that is modeled here ( i.e. , a first - generation ligo interferometer ) , and the computed electric fields that are necessary for the calculation of its shot - noise - limited gw - sensitivity function . in section [ s3 ], we provide an overview of the technical details of the program s operations , including the modeling of the optics , the iterative method used for computing the interferometer s electric fields , and the various optimization procedures that are performed to maximize its sensitivity . 
in section [ s4 ] , we present an array of runs with representative sets of mirror substrate and surface deformation maps adapted from real mirror measurements, in order to demonstrate how realistically-deformed mirrors will reduce the circulating power stored in a ligo interferometer (as well as altering its modal structure), thus degrading the interferometer sensitivity and interfering with its control systems. in section [ s5 ], we discuss the impact of optical imperfections upon ligo science capabilities, by estimating how deformed-mirror effects will reduce ligo's ability to detect astrophysical sources of gravitational waves, such as non-axisymmetric pulsars and coalescing black hole binaries. in section [ s6 ], we conclude with a discussion of various ligo research and development initiatives to which this grid-based simulation work has contributed.

figure [ fg1 ] is a schematic diagram of the core optical configuration of a full-sized, first-generation ligo interferometer. a summary of typical interferometer (and computational) parameters for this configuration is presented in table [ tbl1 ]. these are the primary program input values that we will use for the sample runs to be presented in section [ s4 ]. not shown (or modeled by us) are the mode cleaning, frequency stabilizing, and matching optics which prepare the laser light for the interferometer; also not modeled are the pickoffs, phase modulators, or the full control system apparatus that will be used in a real interferometer to read out its complete operational state.

[ figure fg1 caption: schematic diagram of the core optical configuration of a first-generation ligo interferometer. (not drawn to scale.) ]

[ table tbl1 caption: typical parameter values for a first-generation ligo interferometer, including physical specifications and computational parameters required for the simulation program. the labeling of the optical elements is as depicted in fig. [ fg1 ]. some parameters are optimized during code execution, and are given here only as approximate ranges of values. ]

the system depicted in fig .
[ fg1 ] , essentially a michelson interferometer , converts the differential arm length changes caused by a gravitational wave ( gw ) into an oscillating output field amplitude at the exit port of the beamsplitter , where a carrier field dark - fringe would otherwise ( ideally ) have been maintained .the partially - transmitting input mirrors and highly - reflective end mirrors form fabry - perot ( fp ) arm cavities in the `` inline '' and `` offline '' arms , which amplify this effect ; and the power recycling mirror provides a broadband amplification for the stored energy in the interferometer that is available for signal detection .coupling occurs between the fields from each of the different cavities in this optical configuration , and is responsible for the complex response behaviors of the system .radio - frequency modulation sidebands ( `` rf - sidebands '' ) impressed upon the carrier - frequency input light serve as a `` local oscillator '' for a heterodyne detection scheme .substantial rf - sideband power is made to emerge from the beamsplitter exit / signal port , and it is added to the quantity of gw - induced carrier light that is escaping there , to ultimately produce an output signal linear in _ h _ , the dimensionless amplitude of the gw .the carrier power is maximized by giving it a double - resonance : it is first resonant in ( either of ) the fp arm cavities , and again resonant in the `` power recycling cavity '' ( prc ) formed by mirrors , , and . alternatively , the rf - sidebands ( in this standard ligo detection scheme ) are only resonant in the prc , to take advantage of the broadband amplification there ; but they are far - off - resonance in the long fp - arms , so that they may serve as a stable reference which is unaffected by gravitational waves . assuming that the carrier is held to its double resonance ( giving it a phase shift of in resonant reflection from the fp - arms ) , the rf - sidebands have their own resonance requirements , defined by the two conditions : , and . referring to fig .[ fg1 ] , represents or , , and the rf - modulation frequency is defined according to .the first ( anti - resonance ) condition is not enforced as a precise equality , in order to avoid the unwanted resonance of non - negligible second - order modulation sidebands ( at ) in the fp arm cavities .but the second ( prc - resonance ) condition must be achieved to sub - wavelength tolerances , and requires careful fine - tuning corrections due to the effects of realistically - deformed optics ( see sec .[ s3s3s2 ] ) .the heterodyne gw - signal is obtained by interfering the gw - induced carrier beam at the beamsplitter exit / signal port with the emerging sideband beams , and demodulating the resultant photodiode output signal at .efficient coupling of the rf - sidebands to the signal port is achieved by incorporating a macroscopic length asymmetry between the two arms of the prc , , thus allowing an optimal fraction of sideband power to be extracted at the exit port during each round - trip through the prc , even while the carrier is held to a dark - fringe there .this signal generation method ( see sec . [ s3s3s4 ] ) is referred to as the `` schnupp asymmetry scheme '' . 
to simplify our simulation task ,we model only those aspects of the ligo system which have a direct role in generating the gw - signals : the aforementioned carrier beam , and its rf - sidebands , resonating in the core optical system of fig .if we make the ( good ) approximation that most of the sensitivity - limiting shot noise at the beamsplitter exit port is due to these fields some carrier field power emerging ( primarily ) because of imperfect dark - fringe contrast , and rf - sideband field power being maximally channeled to the exit port and neglect contributions from various other fields used for interferometer control systems , or from other ( controlled ) noise couplings , then we can do a full calculation of the shot - noise - limited gw - sensitivity of the simulated interferometer . beyond this , rather than explicitly emulating all of the control systems of the real ligo , the simulation program uses alternative methods ( see sec . [ s3s3 ] ) for the numerous parameter adjustments ( resonance finding , etc . ) that are needed to optimize interferometer performance ; and it does so in a manner that reflects the behavior of the real system as accurately as possible . in summary ,the primary goal of our modeling efforts is to completely solve for the steady - state carrier and sideband fields that resonate in a ligo interferometer with realistic optical imperfections , which is held to its proper operating point , and is optimally configured for maximal gw - sensitivity .the full effects of optical imperfections upon the sensitivity of interferometric detectors are then determined from these simulated cavity fields .here we present an overview of the program and its operations during execution , and the additional tasks which must be done during pre- and post - processing stages in the course of performing simulation experiments .the full technical details of our work can be found in . in regards to computing platforms , the simulation program was originally written in the sparccompiler 3.0 version of fortran 77 .a complete run ( including carrier and sideband frequencies , with all interferometer fields computed and all optimizations done ) with a set of non - ideal mirrors , and with ( for example ) pixelized grid maps used for the optical computations , takes less than a day on a 2-processor sparcstation 20 .this computation time is reduced to a couple of minutes when the program is executed on massively - parallel supercomputing platforms ( e.g. 
, , ) .the runs to be presented in this paper were performed using 32 nodes of the paragon machine _ trex _ , a 512 ( compute ) node machine utilizing intel i860 processors .we use the customary approach for the grid - based modeling of the laser field wavefronts : a primary propagation direction is assumed for the collimated beam(s ) in each part of the interferometer , and a perpendicular slice can be taken anywhere along the beam propagation axis , at locations of interest .each of these beam slices is recorded on a two - dimensional ( 2d ) grid , with a pixel entry representing the complex electric field ( `` e - field '' ) amplitude at that transverse spatial position in the slice .figure [ fg2 ] is an example of such a grid - based electric field , in particular that of a hermite - gaussian tem mode .no polarization vector is currently recorded in our grids ( we can not , for example , model birefringence effects ) .the precision that can be achieved by the program depends upon how accurately it simulates the two basic physical processes which must be performed on the interferometer e - fields : _ propagations _ , and _ interactions with mirrors_. we discuss propagations first . transverse slice of a hermite - gaussian tem mode ( only the real part is shown ) , taken at the waist plane of the beam , and recorded on a pixelized grid . ]the program utilizes siegman s method for the plane - to - plane propagation of light in the paraxial approximation , performed via a three step process : a fourier transform and an inverse transform , sandwiched around a pixel - by - pixel multiplication step in spatial - frequency - space ( `` _ _ k__-space '' ) , in which the e - field slice is multiplied with a distance - dependent 2d propagator matrix of the same pixelization . for each curved , potentially jagged mirror profileto be simulated , a flat plane is defined near that side of the optic to serve as a reference plane .propagations thus translate an e - field slice through the large distance from an initial reference plane to a destination reference plane near a mirror ( or at any chosen position along the beam axis ) .the computationally intensive parts of this process ( i.e. , the transforms ) can be performed rapidly by a fast fourier transform ( fft ) routine ( e.g. , ) .all such macroscopic - distance propagations ( i.e. , ) of e - fields are done in this manner .a very important issue to deal with for propagations is the problem of _ aliasing _ , a common complication for calculational procedures that use discrete fourier transform methods .aliasing can create artifacts in the simulation results if ( relatively ) large - angle scattering sends power beyond the edge of the grid calculational window during the propagations .such power automatically re - enters the calculation from the other side of the grid , and may ( fraudulently ) be incorporated back in the physical simulation , instead of being filtered out , as it would be by absorbing baffles in a real interferometer .this problem is most significant for the long propagations through the 4 km fp arm cavities. 
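as an illustration of this three-step propagation, the following is a minimal numpy sketch of paraxial (angular-spectrum) propagation of a gridded field slice; it is not the program's fortran implementation, and the grid size, window width, cutoff, and wavelength used here are arbitrary placeholder values. the simple circular mask applied in k-space stands in for the anti-aliasing filter discussed next.

```python
import numpy as np

def propagate(field, window_size, wavelength, distance):
    """three-step propagation: fft -> multiply by a k-space propagator ->
    inverse fft.  an illustrative low-pass mask is applied in k-space as a
    stand-in for the anti-aliasing filter discussed in the text."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    # spatial frequencies (cycles per meter) along each grid axis
    fx = np.fft.fftfreq(n, d=window_size / n)
    fx2, fy2 = np.meshgrid(fx**2, fx**2, indexing="ij")
    # fresnel (paraxial) propagator in k-space
    propagator = np.exp(1j * k * distance) * \
                 np.exp(-1j * np.pi * wavelength * distance * (fx2 + fy2))
    # placeholder cutoff (not ligo's actual choice): null high spatial frequencies
    f_cut = 0.5 * fx.max()
    mask = np.sqrt(fx2 + fy2) <= f_cut
    return np.fft.ifft2(np.fft.fft2(field) * propagator * mask)

# example: propagate a gaussian beam slice down a 4 km arm (placeholder numbers)
n, window = 128, 0.70                          # pixels, window size in meters
x = (np.arange(n) - n / 2) * window / n
xx, yy = np.meshgrid(x, x, indexing="ij")
beam = np.exp(-(xx**2 + yy**2) / 0.036**2)     # ~3.6 cm spot, tem00-like
out = propagate(beam, window, 1.064e-6, 4000.0)
```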
we can reduce ( or eliminate ) such aliasing by filtering the relevant _ k_-space matrix that will multiply an e - field slice , in between the fourier transform and inverse - transform steps of a propagation .the goal is to preserve as much `` real power '' , while eliminating as much `` aliased power '' , as possible ; the distinction between them is that the former always stays within the apertures of the finite - sized mirrors , while the latter leaves the grid completely , but re - enters from the other side and comes far enough into the middle of the grid to fall once again within the mirror apertures. let be the physical side - length of the square calculational window , be the mirror aperture diameter , and be the propagation distance length .the cutoff between physically real power and `` aliasing '' power is a matter of propagation angle : power traveling at may be able to stay within the mirrors , while power with usually can not , but will sometimes be able to leave the grid _ and _ return to erroneously re - enter the mirror apertures . for a discrete grid with pixels on a side ( numbered ) ,these angles correspond to ( respectively ) the pixel numbers ] , where is the light wavelength .all of the aliasing power can be safely eliminated by nulling pixels with in the _ k_-space propagator matrix , without removing any real power , as long as .but if , then one must choose some compromise between _ keeping _ real power and _ cutting out _ aliasing power for pixels . one can , of course , force to be true by increasing the calculational window size to be much larger than the mirrors ( i.e. , ) , but this requires more pixels to be used in the grid ( e.g. , ) , which increases the computational load .making large without using an adequate number of pixels would lead to poor sampling of the laser beam itself , thus creating a new aliasing problem , due to the inadequate resolution of the discrete grids . in our runs, we use large calculational windows and many pixels whenever possible ( such as for the runs presented below ) ; but when necessary , the program has an option which applies an apodization scheme to `` trim '' the propagation operators in a way that enforces a graduated compromise between keeping real power and eliminating aliasing power .next , we consider mirror interactions .the program must carry out two basic mirror interaction operations : reflections ( ) and transmissions ( ) .mirrors will not be perfectly uniform or flat in the direction transverse to the beam ; they will have inhomogeneities in the refraction index and/or thickness of their substrates , spatially - varying surface height profiles , variations in the quality of the reflective coatings , macroscopic curvatures , tilts , etc .this leads to optical path length variations ( i.e. , spatially - varying phase delays ) across their profiles , as well as variations in the _ amplitudes _ of and , the latter mostly due to variations in the reflective side or anti - reflective ( ar ) side coatings .these phase and amplitude effects are simulated by creating complex mirror maps for all and operators , which are also recorded on 2d pixelized grids . 
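to make the mirror-map idea concrete, here is a small numpy sketch (with invented numbers rather than measured ligo data) that builds a complex reflection map from a surface-height map and applies it to a field slice as a pixel-by-pixel multiplication; normal incidence is assumed for the round-trip phase factor.

```python
import numpy as np

def reflection_map(surface_height, wavelength, amplitude_reflectivity):
    """complex reflection operator: a (possibly spatially varying) amplitude
    reflectivity times a phase factor from the surface height profile.
    at normal incidence a height error h delays the reflected wavefront by
    2h, i.e. a phase of 4*pi*h/lambda."""
    phase = 4.0 * np.pi * surface_height / wavelength
    return amplitude_reflectivity * np.exp(1j * phase)

n = 128
rng = np.random.default_rng(0)
# toy surface figure error, ~1 nm rms white noise (not a real measurement map)
surface = rng.normal(scale=1.0e-9, size=(n, n))
r_map = reflection_map(surface, 1.064e-6, np.sqrt(0.97))   # ~3% power transmission

field_in = np.ones((n, n), dtype=complex)      # flat test wavefront
field_out = r_map * field_in                   # reflection = per-pixel multiply
```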
as demonstrated by vinet _ , it is a good ( `` short distance '' ) approximation to treat each pixel of the beam as an independent little plane wave , and reflect ( or transmit ) that piece of the e - field by multiplying that pixel in the e - field map by the corresponding pixel in the relevant mirror map , so that each e - field pixel interacts only with the mirror pixel located immediately in front of it . thus each mirror reflection or transmission operationis reduced to a pixel - by - pixel multiplication step between a mirror operator map and the e - field slice on the reference plane near to it .the simulation program uses these mirror interaction operations , in combination with the propagation algorithm described above , to model the entire behavior of the laser fields in the interferometer .there are some limitations in using this methodology for mirror interactions , besides the obvious one of restricting it to operations over short distances ( i.e. , in the near - field ) . derive a requirement which specifies how small the deviations of a mirror profile can be from its idealized shape , before this pixel - by - pixel multiplication method loses a large amount of accuracy compared to the exact calculation via the huygens - fresnel integral formulation in scalar wave theory . realistically imperfect( ligo - quality ) mirrors easily satisfy this requirement . furthermore , as noted by tridgell _ , the difference in phase between one pixel in a mirror map and its neighbor must be smaller than if a continuously - varying surface displacement on the mirror is to be adequately sampled by the grid . for our simulation parameters ( cf .table [ tbl1 ] ) , this limits mirror tilts to radians ( not including the base tilt of the beamsplitter , which is handled separately ) , and mirror curvature radii to ; both of these limitations are easily satisfied in our runs . finally , we supplement these requirements with a general rule - of - thumb : a _ tiny gaussian beam _ , with a waist size equal to the width of a pixel ( _ and _ an initial propagation direction with respect to the beam axis that is defined by the e - field s wavefront curvature at the location of that pixel ) , must not expand or shift over so much so that it would relocate a significant fraction of its power onto any neighboring pixels , during the entire course of the mirror interaction . if that rule is broken , then this method for mirror interactions will be inaccurate , and the pixel - by - pixel multiplication of maps will not be sufficient for modeling reflection or transmission operations . to create a complete ( but not over - determined ) description of a mirror , we consider the full information expressed by mirror interactions , as well as physical symmetries and the conservation of energy . a 2-port mirror ( reflective- and ar - sides )must relate 2 complex input fields to 2 complex output fields .thus 4 complex ( or 8 real ) elements are needed , at each pixel location , to specify that pixel of mirror completely .these 4 complex elements correspond to the complex reflection and transmission operations performed from the two different sides of the mirror .the two transmission operations from either side barring excessive beam expansion or focusing during the transmission must in fact be the same : ( except for the beamsplitter , which requires distinct transmission maps for the inline and offline paths ) .the mirror description is thus reduced to 3 complex elements per pixel , i.e. 
, 3 complex mirror maps : the transmission map _t _ , and reflection maps from either side , and .next , by energy conservation we have ( for each mirror pixel ) : , and , where , are the losses experienced in reflection from either side of the mirror .but these conditions are not sufficient . by considering the complete ,superposed e - fields which exist on either side of the mirror , both before and after the mirror interaction ( i.e. , the incoming e - fields vs. the outgoing ones ) , and by requiring that the total power in all e - fields _ must not increase _ due to the interaction , one may obtain the following inequality ( expressed in terms of the incoming , `` before '' fields ) : for the simplest case of a loss - free mirror ( = 0 , and ) , this just reduces to the complex generalization of one of stokes relations : . assuming the transmission coefficient to be real, this generates the familiar possibilities for the phase relationships between the reflectivities on either side of a mirror : e.g. , , , etc . for the more general case of a lossy mirror, we can convert eq .( [ dispeq1 ] ) into a simpler prescription by considering variations to the relative phases and amplitudes of and , thus generating the strict energy conservation condition : this inequality ( generalized further for the beamsplitter ) must be satisfied ( at every pixel ) to guarantee conservation of energy in a mirror , given any e - fields which could be incident upon it .our simulation program enforces this condition by testing the input data maps for each mirror , and rejecting runs with unphysical specifications .given all of these requirements , the pixelized mirror maps described above are versatile tools for modeling a wide variety of optical features and imperfections , as discussed in sec .[ s1 ] .our program models a static interferometer , assumed to be held to the correct operating point for an indefinite period of time .this assumption enormously simplifies the program , and our overall simulation task , while still enabling us ( see sec .[ s3s4 ] ) to compute the frequency - dependent gw - response of a ligo interferometer .( the dynamics of ligo control systems do not necessarily come into this calculation , since the mirrors act like `` free - masses '' at the gw - frequencies that ligo is most sensitive to , and because the gw s are presumably not strong enough to interfere with the interferometer s resonant lock . )the primary task of the simulation program , therefore , is to compute the relaxed , steady - state , resonant electric fields that build up in each part of the interferometer , when it is excited by a laser beam of fixed amplitude , orientation , and frequency ( at the nominal carrier and sideband frequencies ) , entering through the power recycling mirror .the multiply - coupled - cavity nature of the interferometer , and the physical complexity of the simulation , means that the program must iteratively solve for an irreducible number of resonant e - fields ( one per cavity ) in the interferometer . for the first - generation ligo configuration ,three e - fields must be computed via relaxation : one in the prc , and one in each of the fp arm cavities ( though the precise location of each relaxed e - field in its cavity can be arbitrarily chosen ) .when the relaxation algorithm is finished , these three principal e - fields can be propagated , reflected , transmitted , and/or superposed together in order to generate the complete steady - state e - fields everywhere in the system . 
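the generalized energy-conservation inequality used by the program (eq. ( [ dispeq2 ] )) is not reproduced here, but the weaker, textbook checks on a set of mirror maps are easy to write down: per-pixel amplitude balance against the specified reflection losses, and the usual lossless stokes phase relation between the two reflection maps and the transmission map. the sketch below assumes scalar loss values and an arbitrary numerical tolerance.

```python
import numpy as np

def check_mirror_maps(r1, r2, t, loss1, loss2, tol=1e-6):
    """per-pixel sanity checks on complex mirror maps (textbook conditions,
    not the stricter inequality enforced by the simulation program):
      |r1|^2 + |t|^2 = 1 - loss1 ,  |r2|^2 + |t|^2 = 1 - loss2 ,
    and, for a lossless mirror, the stokes phase condition
      arg(r1) + arg(r2) - 2*arg(t) = +/- pi  (mod 2*pi)."""
    ok_power = np.allclose(np.abs(r1)**2 + np.abs(t)**2, 1.0 - loss1, atol=tol) and \
               np.allclose(np.abs(r2)**2 + np.abs(t)**2, 1.0 - loss2, atol=tol)
    if loss1 == 0.0 and loss2 == 0.0:
        stokes = np.angle(r1) + np.angle(r2) - 2.0 * np.angle(t)
        ok_phase = np.allclose(np.abs(np.cos(stokes / 2.0)), 0.0, atol=1e-3)
    else:
        ok_phase = True          # lossy case: only the stricter inequality applies
    return ok_power and ok_phase
```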
for each e - field that will be relaxed , there is a steady - state equation of the form : the left hand side of eq .( [ dispeq3 ] ) is the e - field to be solved for ; we display it here as a vector because it has a propagation direction : either `` forward '' or `` backward '' along the beam axis .the expression , , represents this steady - state e - field after it has gone on a round - trip through the interferometer , and has returned to its starting point .( note that this `` round - trip '' includes all possible closed - loop paths through the interferometer that do not ever pass through the same location with the same propagation direction of any of the other principal e - fields being relaxed . )lastly , represents a composite excitation e - field , which may consist of both a primary excitation e - field ( e.g. , the input laser beam , for the e - field being relaxed in the prc ) , plus any `` leak '' e - fields that arrive from the other principal fields being relaxed ( e.g. , fields leaking from the fp - arms into the prc ) .it is this leak - field part of the excitation term which is the source of many coupled - cavity effects in the interferometer ; and it is the presence of the term in eq .( [ dispeq3 ] ) which makes `` self - coupled '' , thus requiring us to use an iterative process to solve for it . specifying a relaxation convergence schemeis done by prescribing how the iterative guess for a given e - field is obtained from its iteration ( and from the iteration of the other e - fields being relaxed simultaneously ) .considering eq .( [ dispeq3 ] ) , the simplest possibility is to make the choice : this process can then be repeated for as many iterations as necessary , until the steady - state equation , eq .( [ dispeq3 ] ) , is satisfied by to within a pre - specified threshold of accuracy ( typically 1 part in in power , for our runs ) .this relaxation formula is guaranteed to succeed at smoothly converging an e - field to its correct steady - state form barring complications which may arise from changes to the interferometer caused by the parameter optimization routines ( see sec .[ s3s3 ] ) because it imitates how power actually does build up , through a sum of many bounces , in the cavity system of a real interferometer .this iteration process has been used successfully by previous researchers ( e.g. , ) . sincethis method does model the true physical buildup of power , however , it requires a great many iterations to converge , especially for coupled - cavity systems with large q - factors ( i.e. , long transient decay times ) .we have found this relaxation scheme to be forbiddingly slow for a full - ligo simulation program . therefore , as first suggested ( and implemented , for a simpler cavity arrangement ) by a ligo colleague , our simulation program uses a different approach . instead of eq .( [ dispeq4 ] ) for choosing the iteration , we use the following expression : where are unknown , complex coefficients that are solved for by minimizing the error in what the steady - state equation will become in the _ next _ round , with ( as a function of these unknown coefficients ) taking the place of in eq .( [ dispeq3 ] ) , and taking the place of ( noting that does not yet take into account the new coefficients that will be chosen for the _ other _ e - fields being iterated ) . 
the resulting expression for the steady - state error can be differentiated with respect to the six available degrees of freedom ( the real and imaginary parts of , , and ) , resulting in six simultaneous equations that are solved via matrix inversion .this relaxation method ( which essentially reverts to the simpler method if we set ) gains a huge advantage compared to the method of eq .( [ dispeq4 ] ) , by having these additional , useful degrees of freedom available for each iteration .the number of iterations necessary to achieve convergence are greatly reduced ( often by - 2 orders of magnitude ) , resulting in much faster e - field relaxation .this `` abc '' iteration algorithm does have certain drawbacks .it is more difficult to implement in code , and it requires more information to be stored ( thus using more memory ) , and more propagations to be performed ( despite what is stated in , because we use three unknown coefficients instead of two : ) , during each iteration .it is also somewhat less stable , because of the wide range of abc - coefficients that may be chosen by the error minimization algorithm ; in fact , large excursions in the iterated e - field structure and power level appear to be _ required _ during ( typically ) the first - 100 iterations , in order for the accelerated convergence scheme to function properly .we have found that the best ( straightforward ) way to enhance the convergence stability of the `` abc '' relaxation algorithm , without slowing it down to the pace of the method of eq .( [ dispeq4 ] ) , is to place hard limits upon the allowed choices of the abc - coefficients during the early stages ( the first - 200 iterations ) of runs .this allows the relaxation process to avoid failure during the initial , large e - field adjustments , after which it settles down to converge efficiently to the steady - state solution .lastly , we note that the choice of for any one field will affect future iterations of the other fields , because they are all coupled via the leak - fields . iterating them independently from oneanother ignores this coupling , and causes some slowing ( and oscillation ) of the relaxation process ( we see a temporary `` sloshing '' of power back and forth between the prc field and the fp arm cavity fields ) .this problem can be eliminated by choosing the coefficients for all three relaxed e - fields _ simultaneously _ , by calculating the 18 unknown parameters ( i.e. , 9 complex `` abc '' coefficients ) in order to minimize a specially - weighted sum of the iteration errors for all three relaxed fields .such a `` global abc '' relaxation scheme is significantly more demanding in terms of coding difficulty and computer memory usage , but it is capable of making the relaxation process more stable , and further reduces the number of iterations needed to reach convergence . though we have not yet implemented this global relaxation scheme into simulations of the first - generation ligo interferometer , we have successfully incorporated global abc relaxation ( with four relaxed e - fields needed , instead of three ) into the version of our code used for the advanced - ligo dual recycling configuration , with good results . in summary , the `` abc '' iteration method presented here is a sufficiently reliable and _ extremely fast _ relaxation scheme which greatly reduces program execution time , thus enabling us to perform full - ligo simulation runs ( with significant optical deformations ) in a reasonable amount of time . 
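as a toy illustration of the basic relaxation scheme of eq. ( [ dispeq4 ] ) (not the accelerated "abc" scheme), the following sketch iterates a single scalar cavity mode to its steady state; the mirror amplitudes and detuning are made-up numbers, and on a real grid the round-trip operator would consist of fft propagations and mirror-map multiplications rather than a single complex factor.

```python
import numpy as np

def relax_cavity(r1, r2, t1, phi, tol=1e-8, max_iter=20000):
    """simple power-buildup iteration for a two-mirror cavity, treating the
    circulating field as one complex number instead of a 2d grid:
        e_new = e_excitation + (round-trip operator) * e_old
    here the round trip is just r1 * r2 * exp(i*phi)."""
    e_exc = 1j * t1                     # field injected through the input mirror
    e = 0.0 + 0.0j
    for _ in range(max_iter):
        e_next = e_exc + r1 * r2 * np.exp(1j * phi) * e
        if abs(e_next - e) < tol:
            break
        e = e_next
    return e

e_ss = relax_cavity(r1=np.sqrt(0.97), r2=np.sqrt(0.99999), t1=np.sqrt(0.03), phi=0.0)
gain = abs(e_ss)**2                     # circulating power buildup for 1 w input
# analytic steady state for comparison: 1j*t1 / (1 - r1*r2*exp(1j*phi))
```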
at the core of our efforts to make a realistic simulation of a ligo interferometer are several procedures which bring the system into an `` optimal '' configuration for signal detection . the problem of optimization is a highly nontrivial matter : not only does the interferometer possess numerous degrees of freedom that must be optimized ( e.g. , all resonant cavity lengths , the schnupp length asymmetry , etc . ) , but each evaluation of performance ( i.e. , gw - sensitivity ) as a function of these optimizable parameters is extremely time consuming , since it requires the detailed computation of the carrier and sideband e - fields everywhere in the interferometer .it therefore appears infeasible to use the brute - force method of optimizing the interferometer s gw - sensitivity function by evaluating it for a thorough sampling of points over the entire , multidimensional parameter space . as an alternative , each key parameteris optimized separately in our program , using some error signal ( to be brought to zero ) , or some function of merit ( to optimize ) , which is strongly ( and solely ) dependent upon that individual parameter , and which is a true measure of when that parameter is well chosen for the maximization of gw - sensitivity .this strategy is aided by the fact that the interferometer s gw - sensitivity is quite insensitive to particular _combinations _ of parameter changes , such that if certain optimizable parameters are displaced from one apparently optimal point in the multidimensional parameter space , then the other parameters will adjust themselves ( via our optimization procedures ) to compensate with virtually no reduction in overall interferometer sensitivity .we note that these optimization routines in our program are performed concurrently with the e - field relaxations , so that the final results emerging from the iterative relaxation scheme are the steady - state e - fields of a _ fully - optimized _ interferometer .the following subsections give an overview of the parameters that the program optimizes , the criteria for optimizing them , and the physical considerations which underlie their significance .note that we have performed extensive empirical tests to verify that each of the optimization procedures discussed below works as desired . as discussed in sec .[ s2 ] , the carrier frequency beam must have a double resonance in the system consisting of the power recycling cavity and fabry - perot arm cavities , while being held to a dark - fringe at the exit port of the beamsplitter .our method for achieving these resonance ( and dark - fringe ) conditions is similar to that of vinet _ et al . _ , in which we null the computed `` phases '' between certain specified cavity e - fields by adjusting the various cavity lengths .( a length change will alter the phase relationship between an e - field that has taken a round - trip through an adjusted path length , and one that has not potentially bringing a cavity to resonance . ) alternatively , we have chosen not to use the method of mcclelland _ et al . 
_ , in which the e-fields are re-relaxed for each trial set of lengths until the configuration for maximum power buildup is found, since that would involve a time-consuming search over a multi-dimensional phase space of independent, controllable cavity lengths. the phase between two e-fields is defined via an overlap integral (or, more precisely, by a discrete sum over the pixelized grids), as follows:

$$\mathrm{Phase}[\vec{E}_{1},\vec{E}_{2}] \equiv \tan^{-1}\!\left[ \frac{\Im \,\langle \vec{E}_{1}|\vec{E}_{2}\rangle}{\Re \,\langle \vec{E}_{1}|\vec{E}_{2}\rangle} \right] ,
\quad \text{with: } \;
\langle \vec{E}_{1}|\vec{E}_{2}\rangle \equiv \frac{(\text{calc. window size})^{2}}{N^{2}} \cdot \sum_{i=1}^{N}\sum_{j=1}^{N} \vec{E}_{1}^{\,\ast}(i,j)\cdot\vec{E}_{2}(i,j) . \label{dispeq6}$$

the phase between an e-field and its round-tripped analogue can be driven to zero by a (sub-wavelength) path length change computed from this phase (an extra factor of two is included because the round trip doubles the phase change caused by a cavity length adjustment). note that this procedure can not simply be performed once: these phases depend in detail upon the structure of the iterated e-fields, and they must be repeatedly measured and adjusted throughout the e-field relaxation process. to achieve resonance in a particular cavity, the proper solution is to ensure that the fields of the steady-state equation (as defined in sec. [ s3s2 ]) all have zero phase between them. (this condition of mutually zero phase is given some theoretical justification in the literature, as well as being apparent from eq. ( [ dispeq3 ] ).) the phase-nulling procedure is performed in the fp arm cavities and in the prc by making microscopic adjustments to the three relevant cavity lengths: the two fabry-perot arm lengths and the "common-mode" recycling cavity length. similarly, the dark-fringe condition at the beamsplitter exit port is achieved by setting the phase between the carrier e-fields coming from the inline/offline recycling cavity arms to an odd multiple of $\pi$, by microscopically adjusting the inline and offline recycling cavity lengths via "differential-mode" corrections.

unlike procedures discussed elsewhere in the literature, we do not choose any particular spatial mode (such as the lowest-order tem mode) of the interferometer e-fields for calculating these phases. rather, we use the entire e-fields, for two reasons: first, for the dark-fringe condition, one wishes to minimize the _total_ dark-port power emerging from the beamsplitter exit port, all of which contributes to the shot noise; second, for the carrier resonance conditions, coupling between modes brings power back into the tem mode from higher modes, so that bringing the _total_ field (i.e., the "perturbed interferometer mode") to resonance _also_ maximizes the tem power used in calculations of the gw-signal. in any case, we have observed that these two methods (i.e., with or without spatial mode selection for resonance-finding) typically give very similar results.

the rf-sidebands must also satisfy important conditions, specifically resonance in the prc and near-anti-resonance in the fp arm cavities. these conditions are affected by the optics, and the sidebands must also therefore be tuned for optimum performance .
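a direct numpy transcription of eq. ( [ dispeq6 ] ) is shown below. the conversion of the measured phase into a length correction is included only as a plausible form, assuming the stated round-trip factor of two; the exact expression used by the program is not reproduced in this paper.

```python
import numpy as np

def overlap(e1, e2, window_size):
    """discrete overlap <e1|e2> over the pixelized grids, eq. (dispeq6)."""
    n = e1.shape[0]
    return (window_size**2 / n**2) * np.sum(np.conj(e1) * e2)

def phase_between(e1, e2, window_size):
    """phase[e1, e2] = atan2( im<e1|e2>, re<e1|e2> )."""
    ov = overlap(e1, e2, window_size)
    return np.arctan2(ov.imag, ov.real)

def length_correction(e_field, e_round_trip, window_size, wavelength):
    """drive the field / round-trip phase to zero with a sub-wavelength
    length change.  plausible form only (an assumption, not the program's
    exact formula): delta_l = -phase * lambda / (4*pi), the extra factor of
    two in the denominator reflecting the round trip."""
    phi = phase_between(e_field, e_round_trip, window_size)
    return -phi * wavelength / (4.0 * np.pi)
```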
as a first approximation, the cavity lengths and rf-modulation frequency are initialized through an analytic evaluation designed to simultaneously optimize carrier and rf-sideband performance. but finer adjustments are required for the prc resonance condition, and since the cavity lengths are already fixed by the resonance requirements of the carrier beam (the carrier and sideband e-fields are relaxed in separate code executions, with the carrier first), the free parameter which remains to be adjusted is the sideband frequency. in a manner analogous to that specified for cavity length changes, sideband frequency adjustments for resonance are periodically computed; though somewhat smaller frequency changes are actually made in practice, during each adjustment, to lessen the disturbances to the e-field relaxation process. the cumulative frequency change for typical runs is small (usually a few hundred hz or less), but necessary to achieve sideband prc resonance. lastly, we note that the overall results of runs for the two different rf-sidebands (i.e., the "upper" and "lower" sidebands) are usually fairly similar; we generally perform computations for only one sideband, and assume mirror-image results for the other.

the level of power buildup in the interferometer, and hence its gw-response, depends upon the reflectivities of the mirrors, and is constrained by the mirror losses. in ligo, the reflectivities of most of the mirrors have been predetermined according to a variety of auxiliary physical requirements. but the principal criterion for specifying the reflectivity of the power recycling mirror is the maximization of the gain that is achievable with power recycling. the power recycling gain, in turn, depends critically upon the losses experienced in the imperfect interferometer, which can not be precisely determined via analytical estimates. the recycling mirror reflectivity is therefore a parameter that can be optimized by the program during each run, in order to determine the best achievable recycling gain, given the particular optical imperfections being studied. (this optimization is performed specifically during _carrier_ runs, since it is the carrier which requires a high prc gain; the sidebands do not benefit from long prc storage times, and would in fact suffer less degradation with a prc storage time of zero and immediate ejection at the beamsplitter exit port.) the reflectivity of a real mirror, of course, is not something that can ordinarily be adjusted on the fly; rather, the results of this optimization routine in our simulation program have been used to help determine appropriate "design values" for the power recycling mirrors that are procured by ligo . for interferometer losses that are small , it can be shown ( e.g.
, ) that the optimal choice for the recycling mirror transmission is , where is the effective total loss in the full interferometer ( including the loss in the power recycling mirror ) .the optimal reflectivity is therefore given by : .the value of , and thus of , will be strongly affected by optical deformations .the quantity used for this optimization procedure is the _ real part _ of the integrated overlap between the immediate ( `` prompt '' ) reflection of the laser excitation beam from the ar - side of the power recycling mirror , with the complete , composite e - field that is ultimately reflected back from the interferometer .( the _ imaginary _ part of this overlap integral is related to interferometer resonance , and will essentially be zeroed if the cavity lengths are properly adjusted for carrier resonance in the interferometer , as per sec .[ s3s3s1 ] . )the real part of the overlap constitutes an error signal ( cf .[ s3s1 ] ) .our program automatically obeys the _ amplitude _ part of the condition ( i.e. , ) for the recycling mirror ; and though the _ phase _ part of it is not mandated , we impose it for virtually all of our runs , including those described below in sec .[ s4 ] . ] that can be driven to zero by changes to .the magnitude of each change in the optimization process will be proportional to this error signal , though the proportionality constant ( which we have selected through empirical tests ) is not crucial , as long as it is large enough to achieve rapid optimization ( and close convergence to the optimal value ) , while being small enough to ensure a stable optimization process .this nulling of is equivalent to minimizing the total ( `` interferometer mode '' ) power that is reflected from the system , thus maximizing the power that is dissipated ( and hence , that is circulating ) inside of it . in practice, ligo has chosen an initial value for that is slightly below such an `` optimized '' value , both to hedge ( on the safer side ) against uncertainties ( and gradual increases with time ) of the effective interferometer losses , and to provide some reflected light for length control signals .for the specific runs to be presented in this paper , the recycling mirror reflectivity has been driven all the way to ; though runs can also be performed with our program in which is held to any particular fixed value that may be preferred due to practical considerations .also noted in sec .[ s2 ] was the incorporation of a macroscopic length asymmetry ( tens of cm ) between the inline and offline paths of the prc , in order to maximally channel sideband power out through the beamsplitter exit port , for use as a local oscillator in the heterodyne gw - signal detection scheme .the maximization of this local oscillator light requires a careful balance between extracting sideband light from the interferometer promptly , before significant power is wasted due to mirror losses ; yet leaving the sidebands in the interferometer long enough ( i.e. , for enough round - trip bounces ) so that they can take full advantage of the broadband amplification provided by power recycling . 
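the impedance-matching rule described above (recycling-mirror power transmission set equal to the effective total interferometer loss) can be written as a one-line helper; the loss number below is a placeholder, not a simulation result, and the small-loss approximation is assumed.

```python
def optimal_recycling_reflectivity(total_loss):
    """impedance-matched power recycling mirror in the small-loss limit:
    set the mirror's power transmission equal to the effective total loss
    seen by the carrier, so the optimal power reflectivity is 1 - total_loss."""
    return 1.0 - total_loss

r_rec = optimal_recycling_reflectivity(total_loss=0.014)   # placeholder 1.4% loss
```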
the total phase between the inline and offline rf - sideband fields , when they meet at the beamsplitter , is analytically given as : ] ( exponentiating each pixel individually , not the whole matrix ) , we can remove the beam - weighted tilts from a given mirror deformation map by performing small angular corrections that set .this is done after first removing any piston offset from the mirror , via uniform displacements that set to zero .an important point about this procedure is that the appropriate beam spot size must be used for the modes in the overlap coefficients given above , in order for the `` beam - weighting '' of each mirror s tilt to be correct ; but since the beam spot size is different at different locations in the interferometer , one must therefore know which specific mirror a given deformation map will be used for , before its tilt can be properly removed . also , we note that this tilt - removal process will actually give the full mirror a nonzero tilt , overall ; but the _ center _ of the mirror ( i.e. , the part most sampled by the beam ) will be essentially flat ( on average ) with respect to the plane transverse to the beam propagation axis .an example of such a tilt - removed surface deformation map is shown in fig .[ fg3 ] .sample map of a mirror surface with realistic deformations , after being processed by the tilt - removal algorithm .the mirror map , with most of the border region ( lying beyond the finite - sized mirror apertures ) clipped here for visual clarity , is oriented such that the incident beam propagation axis is along the vertical direction .the width of the grid shown here is cm ( deformation heights not shown to scale vs. width ) . ]a full set of program runs results in a detailed specification of the final , steady state of the interferometer . of the large amount of output data available ( either directly or after some post - processing calculations ) , here are some of the most important quantities that we examine : the _ relaxed powers _ ( for the carrier and rf - sideband fields ) at several key locations in the interferometer .the _ complete , relaxed e - fields _ at all desired locations available for graphical display , and/or for complete _ modal decomposition _ into hermite - gaussian tem modes , which assists in the interpretation of interferometer conditions ( e.g. ,tem/tem power indicates residual tilts , tem/tem power indicates beam mismatch , etc . ) . the _ carrier contrast defect _ , which quantifieshow well a carrier dark - fringe was achieved at the beamsplitter exit port for the imperfect interferometer . defining the carrier powers emerging from the relevant beamsplitter ports as and ,the contrast defect is given as : a large contrast defect implies a substantial carrier power loss at the beamsplitter ( and thus a broadband power loss in the system ) , as well as a large carrier contribution to the shot noise at the signal port photodetector , and an excess of raw power falling on that photodetector .the _ optimized interferometer parameters _ , as described above in sec .in addition to their role in optimizing the performance of the simulated interferometer , the computed values of these parameters often have intrinsic importance in terms of ligo design considerations . 
the _ gw - strain - equivalent shot noise spectral density _ , , of a single ligo interferometer .this crucial output function allows us to directly evaluate the sensitivity of the ligo detector to astrophysical sources of gravitational waves , given interferometers with realistic optical deformations ( and a realistic heterodyne gw - detection scheme ) .consider fig .[ fg4 ] , which shows the three most significant noise sources for the first - generation ligo interferometers : _ seismic _ , _ thermal _ , and _ shot noise_. they are plotted versus gw - frequency _ f _ , as spectral densities expressed in terms of the gravitational - wave fourier amplitudes , , that would induce equivalent signals in a ligo interferometer .these particular curves are the first - generation ligo requirements for the maximum contributions that would be acceptable from each of these three main noise categories actually represents an approximate conglomeration of mirror internal vibration noise , suspension pendulum noise , and other technical noise sources . ]. the total noise envelope can be obtained from these individual curves by adding them together in quadrature ( i.e. , incoherent addition of uncorrelated noise is assumed ) .requirement curves for the primary noise sources expected to limit the gw - sensitivity of first - generation ligo interferometers . ] of these contributions , seismic and thermal noise are _ random forces _ which will push the ligo mirrors around in imitation of gw s .photon shot noise , on the other hand , is a form of _ sensing noise _, representing the quantum mechanical limit on the accuracy to which the mirror positions can be measured , given the finite amount of carrier power resonating in the fp arm cavities , and the finite amount of local oscillator sideband light available at the signal port . while the quality of interferometer optics has little effect upon the level of random force noise contributions ( other than _ radiation pressure _ noise , which should be unimportant for the first - generation ligo interferometers ) , the quality of the optics does have a direct impact upon the sensing noise limitation to ligo s gw - sensitivity .the presence of imperfect optics not only reduces the amount of circulating interferometer power available for sensing mirror positions ( and thus gw - induced mirror motions ) , but also increases the amount of unwanted power at the beamsplitter exit port such as carrier contrast defect power , and rf - sideband power in non - tem modes which contribute to the shot noise , but not to the gw - signal .we therefore focus upon the _ shot - noise - limited _ region of the ligo noise envelope in evaluating the effects of optical imperfections . for each set of output results, can be computed , and compared either to the first - generation ligo requirements ( see sec . [ s4 ] below ) , or to astrophysical predictions ( see sec . [ s5 ] ) , in order to determine the effects of optical deformations upon ligo s ability to detect gw s of reasonable , anticipated strengths .a full derivation of the formula for is given elsewhere .here we present the resulting expression , in terms of the relevant output data from our simulation program .the shot noise sensitivity limit is expressed in terms of the strength of the gw needed to produce a signal that could match this noise . 
for a monochromatic gravitational wave of the form : which is incident upon a single interferometric detector with optimal incidence angle and polarization, a gw - amplitude of would produce a unity signal - to - shot - noise ratio when sampled over unity bandwidth ( i.e. , 1 second integration time ) . with these definitions, we have : where : is the total excitation laser power ( before radio frequency modulation ) in watts , is planck s constant , is the carrier frequency , is the quantum efficiency of the photodetector at the signal port , and represent the division of laser power between the carrier and either one of its rf - sidebands ( cf .( [ dispeq8 ] ) ) , and terms like `` '' , etc . , represent the _ dimensionless _ relaxed power value ( in either the tem mode or the total in all modes , as indicated ) that is reported by the numerical simulation code for a carrier or sideband e - field in the indicated interferometer location .these `` dimensionless '' power values for the simulated e - fields are all normalized in the program to an input carrier / sideband laser beam power of 1 watt .note that we have not included both the upper and lower sidebands separately here ; in many cases it is sufficient to plug the simulation results for either of them into eq .( [ dispeq11 ] ) , and assume the interferometer performance to be fairly symmetrical about the carrier frequency for these rf - modulated fields . some remaining quantities in eq .( [ dispeq11 ] ) ( e.g. , , ) are estimated mirror ( amplitude ) reflection and transmission coefficients for the path that any gw - induced signal fields would take from the fp - arms ( where they are physically generated ) through the interferometer , until they exit at the beamsplitter signal port .finally , the quantity is the _ effective storage time _ of the gw - induced signal fields in the realistically - deformed fp arm cavities .since the explicit simulation of the buildup of these signal e - fields in the arm cavities would involve an additional set of e - field relaxation procedures that are not performed by the program , the effective storage time of these gw - induced e - fields in the ( imperfect ) arm cavities must be approximated , as follows : where is the length of the fp arm cavities , is the speed of light , and a properly - weighted average has been performed over the results for the inline and offline fp - arms .we note that the dependence of upon the gw - frequency is contained within the expression , so that it has two basic regimes separated by the `` knee '' or pole frequency , , as follows : these two regimes are evident in the plot of that is included in fig .in this section , we will demonstrate the code with a set of runs incorporating measurement maps made from very high quality mirrors .the only interferometer imperfections included are these mirror deformation maps , the finite sizes of the mirrors , and a small amount of pure loss specified for each mirror .the pre - specified mirror loss values represent absorption in the mirror substrates and coatings , as well as high - angle scattering due to roughness finer than the resolution of our grids ( for which most of the scattered power is lost beyond the finite mirror apertures , particularly in the long fp - arms ) . 
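the gw-sensitivity curves computed for these runs follow the two-regime form described above: flat below the pole frequency, and degrading in proportion to frequency well above it. the sketch below assumes the usual single-pole form and a made-up dc sensitivity; it is not the full expression of eq. ( [ dispeq11 ] ), and only the pole frequency of order 90 hz is taken from the results reported later in table [ tbl2 ].

```python
import numpy as np

def shot_noise_curve(freq_hz, h_dc, f_pole_hz):
    """schematic single-pole shot-noise-limited sensitivity:
       h_sn(f) = h_dc * sqrt(1 + (f / f_pole)^2)
    flat below the cavity pole, rising linearly with f well above it."""
    return h_dc * np.sqrt(1.0 + (freq_hz / f_pole_hz)**2)

f = np.logspace(1, 4, 400)                              # 10 hz to 10 khz
h = shot_noise_curve(f, h_dc=1.0e-23, f_pole_hz=90.4)   # h_dc is a placeholder
```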
for each individual run in this set, the simulated interferometer is completely optimized in the sense of sec .the program input parameters common to all runs are those listed in table [ tbl1 ] .first , however , we note that many tests of the simulation program have been performed to ensure its validity as a realistic model of a ligo interferometer .these tests include : comparisons with numerical simulation results in the literature , to verify basic operations like beam propagation and diffractive loss from finite mirrors ; comparisons against modal analysis methods for simple interferometer imperfections , such as mirror tilts ; and comparisons against analytical methods for computing the effects of zernike polynomial mirror surface deformations .we have implemented anti - aliasing and energy - conservation procedures ( cf .[ s3s1 ] ) to make sure that the physics of the interferometer is being properly simulated ; and we have performed numerous `` common - sense '' tests to check that the program s output results not only make physical sense , but also reasonably reflect the types of interferometer imperfections being modeled . finally , direct comparisons of our simulation results with experimental measurements for large - scale interferometers are now becoming possible ( e.g. , ) , and have thus begun to provide useful mutual feedback for both experimental and modeling efforts .the mirror maps used in these simulation runs have been derived from two measurements of real optical components : the first one , obtained by ligo from hughes - danbury optical systems , is a phase map of the reflection from the polished surface of the `` calflat '' reference flat mirror used by the axaf program ( e.g. , ) for the calibration of their extremely smooth , high - resolution conical mirrors ; and the second one is a transmission phase map of a trial ligo mirror substrate obtained from corning .both of these measurements were of uncoated , fused silica substrates .measurement maps of fully - coated mirrors were not available for the runs in this study .each of these near - ligo - quality mirror maps ( one surface reflection map and one substrate transmission map ) were extrapolated into an array of many maps , so that we could place deformed surfaces and substrates upon all of the interferometer mirrors simultaneously .one of us ( y. h. 
) used a three - step process for creating new mirror deformation maps : first , the surface ( or substrate ) source map was fourier transformed into spatial - frequency space ; then , the relative phases of its fourier components were randomized ; and finally , the application of an inverse fourier transform created a new mirror with randomized features , but with the same power spectrum of deformations .this process was performed many times , to create 15 new surface maps out of the initial calflat map , and 7 new substrate maps out of the initial corning map .test runs with various groupings of these mirrors have demonstrated to us that different members of a family of randomized mirrors have ( as expected ) very similar characteristics to one another in terms of their effects upon interferometer performance , and that swapping one for another has little effect upon the output results of the simulation program .further preparation steps for the mirror maps were taken to adapt them to the appropriate grid parameters and mirror aperture dimensions ( cf .table [ tbl1 ] ) , and also to supply related deformation maps for the beamsplitter , given its tilt angle and consequently elliptical apertures ; full details of all such preparations are given in .each of the resulting mirror maps were then tilt - removed ( using appropriate beam spot sizes ) with respect to normal - incidence laser beams , as per sec .[ s3s3s6 ] .an example of one of these surface deformation maps has been shown in fig .[ fg3 ] . as a final step ,it was necessary to create several families of surface maps with different levels of deformations , in order to help the ligo project evaluate a range of mirror polishing specifications for the procurement of the core optics , as well as to make room for the not - fully - determined effects of mirror coating deformations . to accomplish this ,the calflat - derived surface deformation height maps were uniformly multiplied by scale factors to generate the new families .the original family of surfaces which possess rms deformations of .6 nm when sampled over their central 8 cm diameters has been labeled `` '' ( with ) .the scaled - up families are labeled , , and , respectively .the mirror substrate maps ( possessing rms deformations of .2 nm over their central 8 cm diameters ) , however , were considered likely to represent the best quality of fused silica substrates obtainable for the first - generation ligo interferometers , and were not re - scaled ; all of the deformed - mirror runs discussed below were done with this same family of substrate maps .note that a direct comparison of the substrate rms value with that of the surfaces is not informative , since the substrates are typically sampled less frequently by the electric fields ( particularly for the carrier light resonating in the fp arm cavities ) , and are therefore less significant ( for the carrier fields , at least ) in their effects .given these families of mirror deformation maps , it becomes possible to examine the overall performance of a coupled - cavity ligo interferometer in the presence of realistic " mirror deformations , to comprehensively estimate its true capabilities .this study contains results from five separate simulation runs : one run with perfectly smooth mirror substrates and surfaces , and four runs with : ( i ) deformed substrate maps for all of the mirrors , plus , ( ii ) deformed surface maps for all mirrors from , respectively , the , , , or families . 
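the three-step map-randomization procedure described above lends itself to a compact sketch. the example below applies it to a toy surface map rather than the calflat or corning measurements; the random fourier phases are taken from the fft of a real white-noise field so that they carry the hermitian symmetry needed for a real-valued output map, while the power spectrum of the input is preserved.

```python
import numpy as np

def randomize_map(surface):
    """create a new deformation map with the same power spectrum as the
    input but randomized fourier phases (the three-step procedure: fft,
    phase randomization, inverse fft)."""
    rng = np.random.default_rng()
    spec_amp = np.abs(np.fft.fft2(surface))
    rand_phase = np.angle(np.fft.fft2(rng.standard_normal(surface.shape)))
    return np.fft.ifft2(spec_amp * np.exp(1j * rand_phase)).real

# toy input: a smooth nanometer-scale bump standing in for a measured map
n = 128
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x, indexing="ij")
toy_surface = 1.0e-9 * np.exp(-(xx**2 + yy**2) / 0.1)
new_surface = randomize_map(toy_surface)      # same spectrum, randomized features
```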
for all cases other than the "perfect mirrors" run, the transmission and (reflective-side) reflection maps for each 2-port mirror were constructed from one surface phase map and one substrate phase map; the program then derives the ar-side reflection map from the other maps via energy conservation (i.e., the lossless-mirror stokes condition, cf. sec. [ s3s1 ]). the beamsplitter's two reflection and two transmission operators were constructed from one surface map and two substrate maps, with the remaining map derived via its generalized energy conservation formula.

the results are summarized in table [ tbl2 ]. several of the quantities described in sections [ s3s3 ] and [ s3s4 ] are included, as well as the true ("absolute") carrier and sideband power exiting at the beamsplitter signal port. the dc values and pole frequencies of the gw-sensitivity curves calculated for each run are also given, from which one can construct the shot-noise-limited sensitivity curve for each case (cf. ( [ dispeq11 ] )-( [ dispeq13 ] )). some of these quantities have also been re-computed in the hypothetical case of an _idealized output mode cleaner_ functioning at the signal port, which would act to strip away all of the non-tem light (contributing only to noise) from the exiting beams, while passing all of the tem light (contributing all of the signal and some shot noise) through to the output photodetector. a fixed photodetector quantum efficiency was assumed for computing the absolute exit-port power values.

table [ tbl2 ]: summary of principal output results for the five simulation runs (run 1: perfect mirrors; runs 2-5: deformed substrates plus progressively rougher surface deformation families). relaxed power values are dimensionless, normalized to 1 w of input laser power.

| quantity | run 1 | run 2 | run 3 | run 4 | run 5 |
| deformed surfaces (rms in wavelengths) | zero | — | — | — | — |
| deformed substrates (y/n) | no | yes | yes | yes | yes |
| recycling mirror reflectivity | 98.61% | 98.37% | 98.07% | 97.39% | 93.90% |
| schnupp length asymmetry (cm) | 9.0 | 12.3 | 13.5 | 15.9 | 24.7 |
| tem carrier power, recycling cavity | 72.40 | 61.54 | 51.84 | 38.41 | 16.37 |
| tem carrier power, fabry-perot arm avg. | 4726.7 | 4012.0 | 3374.3 | 2491.7 | 1042.4 |
| tem carrier power, exit port | — | — | — | — | — |
| total carrier power, exit port | — | — | — | — | — |
| carrier contrast defect | — | — | — | — | — |
| tem 1-sideband power, recycling cavity | 59.08 | 28.32 | 25.01 | 20.17 | 10.66 |
| tem 1-sideband power, exit port | .9067 | .6745 | .6955 | .7344 | .8196 |
| total 1-sideband power, exit port | .9071 | .7590 | .7761 | .8082 | .8795 |
| gw-response pole frequency (hz) | 90.32 | 90.38 | 90.45 | 90.61 | 91.45 |
| modulation depth | 0.279 | 0.405 | 0.455 | 0.501 | 0.549 |
| absolute carrier exit-port power (mw) | 12.41 | 47.1 | 77.2 | 118.6 | 187.8 |
| absolute 2-sideband exit-port power (mw) | 207.1 | 358.9 | 458.3 | 571.3 | 737.7 |
| dc gw-sensitivity | — | — | — | — | — |
| with idealized output mode cleaner: | | | | | |
| modulation depth | 0.053 | 0.078 | 0.093 | 0.113 | 0.156 |
| absolute carrier exit-port power (mw) | — | — | 0.12 | 0.27 | 1.10 |
| absolute 2-sideband exit-port power (mw) | 7.7 | 12.2 | 17.9 | 28.1 | 59.6 |
| dc gw-sensitivity | — | — | — | — | — |

utilizing this output data, one noteworthy result is that several effects of deformed optics have a _quadratic_ dependence upon the deformation amplitudes. (table [ tbl2 ] even shows an _increase_ in exit-port sideband power as the deformation levels get very large. this counterintuitive behavior is due to the great effectiveness of the parameter optimizations (cf. sec. [ s3s3 ]), specifically that of the schnupp length asymmetry, which is forced to a larger value for highly deformed mirrors in order to get the sideband fields out of the degraded interferometer as soon as possible. this type of behavior for the sideband fields in the presence of highly deformed mirrors, though technically correct, may be unduly optimistic when considered in the context of a real ligo system, in which the many parameters are not as easily adjustable, if at all adjustable in the experimental system, as they are in the simulation program.) this quadratic dependence is as expected, since power scattered out of a gaussian beam by mirror roughness scales like the square of the roughness amplitude, even when the deformations are spread over a range of spatial frequencies. for example, the effects of imperfect mirrors on the fp arm cavity power buildup can be expressed in terms of an equivalent "effective mirror loss" that would analytically reproduce the same amount of circulating fp-arm power. doing this for each of the runs, we obtain an effective loss function that increases quadratically (versus mirror surface rms deformation amplitude) from the baseline value of "absorptive" loss (specified in parts per million) that is put in by hand for each mirror. similarly, the contrast defect, which comes from power coupled into non-tem beam modes by (longer spatial wavelength) optical imperfections, is also well represented by a quadratic fit, as is shown in fig. [ fg5 ]. lastly, the _optimized_ recycling mirror (power) reflectivity should decrease quadratically from its "perfect mirrors" value, since one minus this reflectivity is directly proportional to interferometer losses (cf. sec. [ s3s3s3 ]), and the dominant losses (contrast defect loss and high-angle scattering in the fp-arms) are both quadratically dependent upon mirror deformation rms. a fit of the optimized reflectivity vs. rms does indeed bear out this expected functional form (though not all the way to a surface rms of zero, since other effects, such as substrate deformations and absorptive losses, then take over).
) .the carrier contrast defect , , plotted versus rms mirror surface deformations .the dashed line is a quadratic fit to the points representing the runs performed with , respectively : `` perfect mirrors '' , then the , , , and mirrors .the horizontal lines are upper limits on the contrast defect values allowed for the first - generation ligo and enhanced - ligo interferometers , respectively . ]the most significant issue to address was whether ligo would be able to perform according to project requirements , given mirrors with realistic levels of optical deformations .that question is answered in the affirmative , as demonstrated in table [ tbl2 ] and fig s .[ fg5]-[fg6 ] , for all cases except that of the worst surfaces simulated here .first of all , the first - generation ligo interferometers are required to have a carrier power gain of at least 30 in the prc , and this target is achieved in all runs except for the ( ultra - conservative ) run performed with surface deformations .in addition , the first - generation interferometers must have a contrast defect of , a requirement that is also satisfied by all runs other then the case ( and anything better than would suffice ) .furthermore , it has been quoted that the `` enhanced '' ligo interferometers should satisfy the more stringent requirement of , which would be achieved by three of the five simulation runs here ( and anything better than would suffice ) an acceptable result , especially considering the likelihood of improved mirror quality by the time the enhanced interferometers are operational .one caveat , however , is that although the contrast defect requirements are met for most of the runs , a large amount of total power ( several hundred milliwatts ) falls upon the output photodetector in all cases , especially for runs with highly deformed mirrors in which the optimized sideband modulation depth is large , in order to help the signal compete against increased shot noise .if a signal detection apparatus that could handle this large amount of power can not be supplied , then an output mode cleaner may be needed ; table [ tbl2 ] shows that an output mode cleaner would greatly reduce the power that the photodetector must accommodate ( while also improving the gw - sensitivity by % ) , as long as it operates closely enough to an `` idealized '' performance , as described above .comparison of the shot noise curves , , computed for each of the interferometer simulation runs , against the official gw - strain - equivalent noise envelope requirement for the first - generation ligo interferometers . ]the most fundamental requirement to be satisfied is that the shot - noise - limited sensitivity curve for an interferometer with deformed mirrors ( cf .( [ dispeq11 ] ) , ( [ dispeq14 ] ) ) should fall within the bounds of the _ gw - strain - equivalent noise envelope requirement _ , which is the official total - noise limit set for the ( full 4 km baseline ) first - generation ligo interferometers . 
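the comparison of a run against the noise envelope can be mocked up directly from the tabulated quantities . the sketch below assumes the common single - pole form for the shot - noise - limited sensitivity , i.e. a curve that is flat at its dc value and rises proportionally to frequency above the gw - response pole ( the exact expression is the one referenced as eqs . ( [ dispeq11])-([dispeq13 ] ) and is not reproduced here ) ; the dc sensitivities and the envelope used below are placeholders , not the values of the actual runs :

```python
import numpy as np

def shot_noise_curve(f, h_dc, f_pole):
    """Assumed single-pole shot-noise-limited strain sensitivity:
    flat at h_dc below the GW-response pole, rising ~f above it."""
    return h_dc * np.sqrt(1.0 + (f / f_pole) ** 2)

freqs = np.logspace(1, 4, 400)                     # 10 Hz .. 10 kHz

# pole frequencies as in table [tbl2]; the dc sensitivities below are
# placeholders, since the actual run values are not listed above
runs = {
    "perfect mirrors": (1.0e-23, 90.32),
    "most deformed":   (2.0e-23, 91.45),
}

# a toy requirement envelope for the shot-noise-dominated band (placeholder)
envelope = shot_noise_curve(freqs, 3.0e-23, 90.0)

for name, (h_dc, f_pole) in runs.items():
    h = shot_noise_curve(freqs, h_dc, f_pole)
    print(f"{name:16s} stays inside the toy envelope: {bool(np.all(h <= envelope))}")
```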
in fig .[ fg6 ] , the computed curves for these runs are plotted against the data points that define the ligo requirement envelope , with all data given in terms of spectral densities .the seismic , thermal , and shot - noise - dominated regimes of the requirement envelope are apparent in the figure , and our curves can be compared to the shot - noise - dominated region .we conclude , once again , that all runs other than the case succeed in meeting the initial - ligo requirement .the overall conformity of these results indicates that there is a very clear and very strict , though achievable quality level for the core optics that must be reached in order for the first - generation ligo interferometers to achieve their target performances .to place the results of our runs into a scientifically relevant perspective , we estimate the effects of optical deformations upon ligo s ability to detect gravitational waves from anticipated astrophysical events . focusing here upon gw - sources that might be detectable by the first - generation ligo interferometers , and considering cases for which an improvement in the shot noise limit may significantly increase the number of detection events ( or enlarge the detectable range of somereasonably well - understood scientific parameter ) , we arrive at two good candidates for study : _ periodic _ gw s from non - axisymmetric pulsars , and _ burst _ gw s from the coalescence of black hole / black hole ( bh / bh ) binaries . to facilitate comparisons against theoretical signal estimates , the output data from the simulation program are used to create representative interferometer noise curves . to that end, we form the quadrature sum ( for each run ) of three functions : the seismic and thermal noise _ requirement _ curves ( from fig . [ fg4 ] ) , added to the particular shot noise curve , , that is computed ( cf .( [ dispeq11 ] ) , with data from table [ tbl2 ] ) for each of the simulation runs .these summed , spectral density noise curves must be converted into useful signal - to - noise expressions .we follow the conventions of thorne , in which each of these total - noise curves ( ) is converted to a dimensionless expression ( ) , and is compared to the `` characteristic strength '' ( ) of a gw - source . for periodic sources ,the condition means that after _ coincidence detection _ in two identical interferometers for one - third of a year of integration time , a source with strength can be extracted from the gaussian noise with a confidence level of 90% .averaging over all polarizations and orientations of the source on the sky , and treating the gw - frequency ( equal to twice the pulsar s rotation frequency ) and phase as known , eq .52a of yields : for a pulsar with _ gravitational ellipticity _ , a rotation - axis moment of inertia of , radiating at frequency at a distance from the earth , and averaging over orientation angles of the source , we have ( from eq .55 of ) : in fig .[ fg7 ] , is plotted for all of the simulation runs , and displayed with them are two curves , each representing the _ locus _ of possible gw - strengths ( plotted versus frequency , up to ) for a pulsar with given and .we have chosen , which should be below the `` breaking strain '' of neutron star crusts , yet may produce a detectable signal . 
while this value is too large for millisecond pulsars given typical limits on their rates of gw - induced spin - down , it may not be unreasonable for newly - formed pulsars , of which it has been estimated that there may be such `` new '' pulsars in the galaxy , or one every kpc .plots of characteristic gw - signal strength versus frequency , for pulsars with specified ellipticity and distance from the earth ( dashed lines ) , displayed against the dimensionless noise curves ( for periodic searches ) that are computed from the output results of the simulation runs ( solid lines ) . ] by setting at the frequency of peak sensitivity ( ) for a given curve , one obtains the rough estimate that in going from the case to ( or very nearly to ) the perfect mirrors case , the typical `` lookout distance '' to which such a pulsar is detectable increases from .57 kpc to 2.0 kpc ( while increases to hz ) .this improvement roughly increases the expected number of pulsar detections ( in a galactic disk distribution ) by , and brings the lookout distance for the detection of even a _single _ pulsar with the first - generation ligo interferometers to a more reasonable value .alternatively , for a pulsar at a given distance and gw - emission frequency , it provides a factor of leeway in the smallest detectable value of .a similar formulation is used for the analysis of burst sources .each quadrature noise sum , , is again computed ; and with appropriate angle averaging and assumptions for optimal filtering , we have ( from eq .34 of ) : for bursts , means that after coincidence detection in two identical interferometers for one - third of a year of observation time , a detection of gw - strength has a 90% probability of being a real signal , rather than an accidental conspiracy of the gaussian noise in the detectors .( coincidence operation between multiple interferometers should theoretically eliminate the false burst signals caused by non - gaussian , uncorrelated noise , which we neglect here . )we consider a bh / bh binary system with equal component masses , , located a distance from the earth , and evolving in frequency through the inspiral phase until it reaches the onset of the coalescence phase ( i.e. , merger and ringdown ) at .for these parameters , eq .46b of yields( with a cumulative factor of 2 adjustment from factor of 2 corrections to eq s . 29 and 44 of ): the proper way to interpret this formula , is that _ if _ the frequency of peak detector sensitivity is , then the _ total _ integrated inspiral signal deposited into the detector ( for comparison with ) is determined by evaluated specifically at that . in fig .[ fg8 ] , we have plotted the noise curves ( for burst searches ) that are obtained for each of the simulation runs , along with two curves ( with arrows showing time evolution ) , for coalescence events that just manage to skirt the high - detection - confidence threshold of at during the course of their inspirals .the first conclusion that one may draw from this plot is that ligo s sensitivity to these coalescence events ( at least during inspiral ) is most strongly limited by the _ low frequency _ part of the noise curves , i.e. , the seismic and thermal noise limits . 
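the distance and event - count estimates above follow from simple scaling arguments , which the short sketch below makes explicit ; the source - strength formulas themselves ( the equations of referenced above ) are not reproduced , and the binary case at the end uses an assumed noise - improvement factor rather than a simulated one :

```python
def lookout_gain(noise_before, noise_after):
    """Characteristic strength falls off as 1/r, so the maximum detectable
    distance scales inversely with the detector noise level."""
    return noise_before / noise_after

def detection_gain(distance_gain, population="disk"):
    """Expected detections grow like r**2 for a thin galactic-disk
    population (pulsars) and like r**3 for a volume-filling population
    (compact binaries)."""
    return distance_gain ** (2 if population == "disk" else 3)

# pulsar numbers quoted above: lookout distance 0.57 kpc -> 2.0 kpc
gain = 2.0 / 0.57
print(f"pulsar reach improves by        {gain:.1f}x")
print(f"expected pulsar detections by   {detection_gain(gain, 'disk'):.0f}x")

# hypothetical binary case: a 1.4x lower shot-noise floor at peak sensitivity
g = lookout_gain(1.4, 1.0)
print(f"binary event rate improves by   {detection_gain(g, 'volume'):.1f}x")
```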
nevertheless , there is a measurable benefit from improving the shot noise limit : judging from the plot , it can be estimated that in going from the case to the perfect mirrors case , the lookout distance is increased from mpc to mpc ; or equivalently , it increases the expected number of detectable events by the factor .the actual rate of bh / bh coalescence events is extremely uncertain ( even their existence is uncertain ) , but good middle - of - the - road values that one could use as benchmarks are the `` best estimates '' that have been made by phinney , and narayan _ et al . _ , which are , respectively : per year out to 200 mpc ( assuming a hubble constant of ) , and per year out to .thus , the perfect mirrors run appears to put bh / bh binary coalescence events just within the conceivable reach of detection for the first - generation ligo interferometers . perhaps even more significantly ,improving the shot noise limit could increase ligo s sensitivity to the onset of the merger phase of bh / bh binaries with masses like these ; gw - emission during the actual merger is still poorly understood , but it may involve the most powerful radiation of detectable energy during the overall coalescence process .plots of characteristic gw - signal strength as a function of the detector s peak sensitivity frequency , during the inspiral phase of 10 black hole / black hole binaries ( dashed lines ) , displayed against the dimensionless noise curves ( for burst searches ) that are computed from the output results of the simulation runs ( solid lines ) . ]we end this section by cautioning that the aforementioned numbers must be considered very rough estimates , given the highly simplified ways in which we have treated the noise curves of real interferometers , the sophisticated data analysis methods needed to extract signals from the noise , and the many inherent uncertainties about the gw - sources themselves .in fact , rather than interpreting the results of this section as indicating how ligo _ optics _ would determine initial - ligo _ physics _ , it would be more appropriate to interpret these results as a demonstration of how initial - ligo _ physics _ places requirements on the _ optics_. the firm scientific conclusion that one can draw , however , is that using optical components of the best achievable quality can indeed make a difference in whether or not the first - generation ligo interferometers have a fighting chance to detect gravitational waves from these promising astrophysical sources .we have shown our simulation program to be useful for gaining physical insight into interferometer behavior , and for roughly estimating the effects of optical deformations upon ligo science capabilities . due to the highly detailed nature of the model, it has been an effective tool for research and development in the ligo project . 
to date, it has been used to address several important design issues , providing support for technical initiatives such as : ( i ) aiding ligo in its transition from argon - ion lasers ( ) to nd : yag lasers ( ) for the main carrier beam , and demonstrating that interferometer performance is less sensitive to mirror figure deformations ( of a specified physical amplitude ) when larger - wavelength laser beams are used ; ( ii ) assisting in the selection of the schnupp length asymmetry scheme for gw - signal readout over an alternative , external modulation ( mach - zehnder ) scheme ; ( iii ) helping model the performances ( particularly the effects of optical deformations upon interferometer control systems ) of the major ligo prototype interferometers , including the fixed mass interferometer ( fmi ) and the phase noise interferometer ( pni ) at mit , and the 40-meter interferometer at caltech ; ( iv ) providing assistance in the selection of optical parameters for the long - baseline ligo interferometers , such as beam spot sizes , mirror curvatures , and aperture sizes ( particularly the perspective - foreshortened beamsplitter aperture ) ; ( v ) conducting preliminary studies of the usefulness of an output mode cleaner at the beamsplitter signal port ( cf .[ s4s2 ] ) ; ( vi ) simulating the effects of refraction index variations due to _ thermal lensing _( e.g. , ) on interferometer performance , resulting in preliminary estimates of % degradation to from mirror coating absorption values of 0.6 ppm , or ( equivalently ) from substrate bulk absorption values ( in fused silica ) of .perhaps the most important use of the program has been its involvement in the ligo `` pathfinder project '' , the initiative to set specifications and tolerances for ligo s core optical components , and to procure them through a cooperative effort of several vendors and optics metrology groups .our program has also been used in conjunction with other modeling initiatives at ligo , to create a broad - based interferometer simulation environment involving different algorithmic approaches and physical regimes of interest .the latest versions of our program continue to be used to address important questions raised by the ligo project , such as estimating the performance of advanced - ligo detectors i.e. , interferometers incorporating dual recycling , or even resonant sideband extraction in the presence of optical deformations , and participating in initiatives to set core optical specifications ( and to design the control systems ) for those advanced detectors .many related issues will undoubtedly arise in the near future , as advanced interferometer configurations ( and increasingly better optics ) become available , for which this program can be used as a primary modeling tool for ligo and its collaborating gravitational wave groups .we are grateful to jean - yves vinet and patrice hello of the virgo project for supplying us with the original code that formed the early basis of our work ; and to hughes - danbury and corning for their generosity and spirit of research in sharing their data with us .we would like to thank d. shoemaker and d. sigg for their help in the early preparation of this manuscript .the development of our simulation program was principally supported by nsf cooperative agreement phy-9210038 .d. e. mcclelland _ et al ._ , class .* 18 * , no .19 , 4121 ( 2001 ) ; d. g. blair , j. munch , d. e. mcclelland , and r. j. 
sandeman , australian consortium for interferometric gravitational astronomy , arc project , 1997 ( unpublished ) .a. j. tridgell , d. e. mcclelland , and c. m. savage , in _ gravitational astronomy : instrument design and astrophysical prospects _ , edited by d. e. mcclelland and hbachor ( world scientific , singapore , 1991 ) , p. 222 .r. e. vogt _ et al ._ , `` proposal to the national science foundation : the construction , operation , and supporting research and development of a laser interferometer gravitational - wave observatory '' , 1989 ( unpublished ) .l. schnupp , technical note , 1986 ( unpublished ) ; r. drever , technical note , 1991 ( unpublished ) ; m. w. regehr , f. j. raab , and s. e. whitcomb , ligo tech . doc .p950001 - 00-r , 1995 ( unpublished ) .( numbered ligo technical documents referenced here are available at _ http://admdbsrv.ligo.caltech.edu / dcc/_. ) w. h. press , s. a. teukolsky , w. t. vetterling , and b. p. flannery , _ numerical recipes in fortran : the art of scientific computing _ ( cambridge university press , cambridge , and numerical recipes software , 1992 ) .a. abramovici _ et al ._ , in _ proceedings of the snowmass 95 summer study on particle and nuclear astrophysics and cosmology _ , edited by e. w. kolb and r. peccei ( world scientific , singapore , 1995 ) , p. 398
we describe an optical simulation program that models a complete , coupled - cavity interferometer like those used by the laser interferometer gravitational - wave observatory ( ligo ) project . a wide variety of interferometer deformations can be modeled , including general surface roughness and substrate inhomogeneities , with no _ a priori _ symmetry assumptions about the nature of interferometer imperfections . several important interferometer parameters are optimized automatically to achieve the best possible sensitivity for each new set of perturbed mirrors . the simulation output data set includes the circulating powers and electric fields at various points in the interferometer , both for the main carrier beam and for its signal - sideband auxiliary beams , allowing an explicit calculation of the shot - noise - limited gravitational - wave sensitivity of the interferometric detector to be performed . here we present an overview of the physics simulated by the program , and demonstrate its use with a series of runs showing the degradation of ligo performance caused by realistically - deformed mirror profiles . we then estimate the effect of this performance degradation upon the detectability of astrophysical sources of gravitational waves . we conclude by describing applications of the simulation program to ligo research and development efforts .
the rate distortion function , , specifies the number of codewords , on an exponential scale , needed to represent a source to within a distortion .shannon showed that for an additive distortion function and a known discrete source that produces independent and identically distributed ( iid ) letters according to a distribution , where is the mutual information for an input distribution and probability transition matrix .sakrison studied the rate distortion function for the class of _ compound _ sources .that is , the source is assumed to come from a known set of distributions and is fixed for all time .if is the set of possible sources , sakrison showed that planning for the worst case source is both necessary and sufficient in the discrete memoryless source case .hence , for compound sources , in berger s ` source coding game ' , the source is assumed to be an adversarial player called the ` switcher ' in a statistical game . in this setup , the switcher is allowed to choose any source from at any time , but must do so in a causal manner without access to the current step s source realizations .the conclusion of is that under this scenario , where is the convex hull of . in his conclusion, berger poses the question of what happens to the rate - distortion function when the rules of the game are tilted in favor of the switcher .suppose that the switcher were given access to the source realizations before having to choose the switch positions .the main result of this paper is that under these rules , where here , the are the distributions of the sources and is the set of all probability distributions on .section [ sec : def ] sets up the notation for the paper , and is followed by a description of the source coding game in section [ sec : game ] .the main result is stated in section [ sec : mainresult ] , and an example illustrating the main ideas is given in section [ sec : example ] .the proofs are located in section [ sec : proofs ] and some concluding remarks are made in section [ sec : conclusion ] .we work in essentially the same setup as berger s source coding game , and with most of the same notation .there are two finite alphabets and . without loss of generality , is the source alphabet and is the reproduction alphabet .let denote an arbitrary vector from and an arbitrary vector from .when needed , will be used to denote the first symbols in the vector .let be a distortion measure go to zero for all .the main result would hold in this setup as well . ]( any nonnegative function ) on the product set .then define for to be let be the set of probability distributions on , the set of types of length strings from , and let be the set of probability transition matrices from to .the rate distortion function of with respect to distortion measure is defined to be where and is the mutual information in the report , but any base can be used . ] \ ] ] the only interesting domain of values for is where let be a codebook of length vectors in . 
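for a known iid source the minimization defining the rate - distortion function can be carried out numerically with the standard blahut - arimoto iteration ; the sketch below is a generic implementation of that textbook algorithm ( it is not taken from the present paper ) and traces one point of the curve per value of the slope parameter :

```python
import numpy as np

def blahut_arimoto(p, d, s, iters=500):
    """One point of the rate-distortion curve for a memoryless source p and
    distortion matrix d, via the Blahut-Arimoto iteration.
    s < 0 is the slope parameter; more negative s gives smaller distortion."""
    p = np.asarray(p, float)
    q = np.full(d.shape[1], 1.0 / d.shape[1])      # reproduction distribution
    for _ in range(iters):
        w = q * np.exp(s * d)                      # unnormalized W(y|x)
        w /= w.sum(axis=1, keepdims=True)
        q = p @ w
    dist = float(np.sum(p[:, None] * w * d))
    rate = float(np.sum(p[:, None] * w * np.log2(w / q)))
    return rate, dist

# binary source with Hamming distortion: R(D) should match h(p) - h(D)
p = [0.4, 0.6]
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for s in (-2.0, -4.0, -8.0):
    r, dd = blahut_arimoto(p, d, s)
    print(f"s={s:5.1f}  D={dd:.3f}  R(D)={r:.3f} bits")
```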
define if is used to represent an iid source with distribution , then the average distortion of is defined to be \ ] ] where let be the minimum number of codewords needed in a codebook so that .then , shannon s rate - distortion theorem ( ) says that if the source is iid with distribution , suppose as in berger s paper that a ` switcher ' is a player in a two person game with access to the position of a switch which can be in one of positions .the switch position corresponds to a memoryless source with distribution that is independent of all the other sources source , so long as they are all independent . in that sense ,the switcher has access to a _ list _ of sources , rather than a set of different distributions . ] .let be the vector of switch positions chosen by the switcher .let be the switcher s output at time and let be the output of the source at time .when needed , will denote the block of symbols for the source . the other person in the gameis called the ` coder ' .the coder s goal is to construct a codebook of minimal size to ensure the average distortion between the switcher s output and reconstruction in the codebook is at most .fix and .let denote the codebook chosen by the coder , and be the distortion between a vector and the best reproduction of in b ; in the sense of least distortion .the payoff of the game is the average distortion , which for a particular switching strategy is = \sum_{\x \in \mx^n } p_s(\x)d_n(\x;b)\ ] ] here is the probability of the switcher outputting the sequence averaged over any randomness the switcher chooses to use , as well as the randomness in the sources .let be the probability of the switcher using a switching vector and outputting a string .then , in berger s original game , the coder chooses a codebook that is revealed to the switcher .the switcher must then choose the switch position at every integer time without access to the actual letters that the sources produce at that time .the switcher , however , has access to the previous outputs of the switch .so in , an admissible joint probability rule for is of the form . in this discussion , we consider the case when the switcher gets to see the outputs of the sources and then has to output a letter from one of the letters that the sources produced .the switcher outputs a letter , , which must come from the ( possibly proper ) subset of , . hence , for this ` cheating ' switcher , allowable strategies are of the form since the sources are still iid , define the minimum number of codewords needed by the coder to guarantee average distortion as . \leq d \\\textrm{for all allowable } \\\textrm{switcher strategies } \end{array } \right\}\ ] ] we are interested in the exponential rate of growth of with .define let be the set of distributions on the switcher has access to .let be the convex hull of .then let the conclusion of is that when the switcher is not allowed to witness the source realizations until committing to a switch position .the main result is the determination of in the case when the switcher gets to see the entire block of source outputs ahead of choosing the switching sequence .let the switcher ` cheat ' and have access to the outputs of all sources before choosing a symbol for each time .then , where is defined in ( [ eqn : defnc ] ) .
here ,we have defined .the theorem s conclusion is that when the switcher is allowed to ` cheat ' , .the number of constraints in the set is exponential in the size of .depending on the source distributions , a large number of these constraints could be inactive .unfortunately , is generally not concave in for a fixed , so computation of may not be easy .qualitatively , allowing the switcher to ` cheat ' gives access to distributions which may not be .quantitatively , the conditions placed on the distributions in are precisely those that restrict the switcher from producing symbols that do not occur often enough on average .for example , let .then for every , since the sources are independent , is the probability that all sources produce the letter at a given time . in this case , the switcher has no option but to output the letter , hence any distribution the switcher mimics must have . the same logic can be applied to all subsets of . as commented in section v of , if . before giving the proof of the result ,an example is presented .suppose the switcher has access to two iid binary sources .source outputs with probability and source outputs with probability . then , since the sources are iid across time and independent of each other , for any time , similarly , hence , if at time , the switcher has the option of choosing either or ,suppose the switcher chooses with probability .this strategy is memoryless , but it is an allowable strategy for the ` cheating ' switcher .the coder then sees an iid binary source with a probability of a occurring being equal to : by using as a parameter , the switcher can produce s with a probability between and .the attainable distributions are shown in figure [ fig : example_simplex ] .this kind of memoryless , ` conditional ' switching strategy will be used for half of the proof of the main result .if the distortion measure is hamming distortion , clearly the switcher will choose and produce a bernoulli process .regardless of the distortion measure , contains all the distributions on that the switcher can mimic . ( figure [ fig : example_simplex ] : is the set of distributions the switcher can mimic without cheating , and is the set attainable with cheating . )
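the binary example above can be made concrete with a few lines of code . the sketch below assumes hamming distortion and two illustrative bernoulli parameters ( 1/3 and 1/2 , which stand in for the original values ) ; it computes the interval of output distributions the cheating switcher can mimic and the resulting worst - case rate the coder must plan for :

```python
import numpy as np

def h2(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -(x * np.log2(x) + (1 - x) * np.log2(1 - x))

def cheating_interval(p1, p2):
    """Range of Bernoulli parameters q = P(output 1) a cheating switcher can
    mimic from independent Bernoulli(p1), Bernoulli(p2) sources: forced to 1
    when both emit 1, forced to 0 when both emit 0, free otherwise."""
    return p1 * p2, 1.0 - (1 - p1) * (1 - p2)

def max_rate_hamming(p1, p2, dist):
    """Worst-case binary rate R(D) = h(q) - h(D) (valid for D < min(q, 1-q));
    the switcher pushes q as close to 1/2 as the interval allows."""
    lo, hi = cheating_interval(p1, p2)
    q = min(max(0.5, lo), hi)        # point of the interval nearest to 1/2
    return h2(q) - h2(dist)

# assumed source parameters and distortion level, for illustration only
p1, p2, D = 1.0 / 3.0, 1.0 / 2.0, 0.1
print("attainable q interval:", cheating_interval(p1, p2))
print("worst-case rate the coder must plan for:", max_rate_hamming(p1, p2, D))
```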
consider a type . by definition, there is some such that .let be the indicator function indicates the event that the switcher can not output a symbol outside of at time .then is a bernoulli random variable with a probability of being equal to .that is , we can envision as being a sequence of iid binary random variables with distribution .now for our type , we have that for all strings in the type class , .let be the binary distribution , assuming is small enough to make this a distribution ( if not , make delta small enough ) .therefore , and hence by pinsker s inequality .using standard types properties gives if we let be the event that has a type which is not in , we just sum over types not in to get now let . then , regardless of the switcher strategy , \leq d + d^\ast \cdot \exp_2 \bigg(-n \bigg ( \frac{\delta}{\ln 2 } - |\mx| \frac{\ln ( n+1)}{n}\bigg)\bigg)\ ] ] so for large we can get arbitrarily close to distortion while the rate is at most .using the fact that the rate - distortion function is continuous in gives us that the coder can achieve at most distortion on average while the rate is at most . since is arbitrary , .this section considers why .we will show that the switcher can target any distribution and produce a sequence of iid symbols with distribution .in particular , the switcher can target the distribution that yields and shannon s rate distortion theorem gives .the switcher will use a memoryless randomized strategy .let and suppose that at some time the set of symbols available to choose from for the switcher is exactly .that is .define to be the probability that at any time the switcher can choose any element of and no other symbols . then let be a probability distribution on with support ,i.e. , if , and . the switcher will have such a randomized rule for every nonempty subset of such that .let be the set of distributions on that can be achieved with these kinds of rules , so it is clear from the construction of that because the conditions in are those that prevent the switcher only from producing symbols that do not occur enough , but put no further restrictions on the switcher .so we need only show that .the following gives such a proof by contradiction .the set relation is true .suppose but .it is clear that is a convex set .let us view the probability simplex in .since is a convex set , there is a hyperplane through that does not intersect .hence , there is a vector such that for some real but . 
without loss of generality ,assume ( otherwise permute symbols ) .now , we will construct so that the resulting has , which contradicts the initial assumption .let for example , if , then and if .call the distribution on induced by this choice of .recall that .then , we have + \\ & \hspace{-.5in}\cdots & \hspace{-0.25in}+ a_{|\mx| } [ q(\{1,\ldots , |\mx|\ } ) - q(\{1,\ldots , |\mx| - 1\ } ) ] \end{aligned}\ ] ] by the constraints in the definition of , we have the following inequalities for : therefore , the difference of the objective is + \\ & & ( a_{|\mx|-1 } - a_{|\mx|})\bigg[\sum_{i=1}^{|\mx|-1 } p(i ) - q(i)\bigg ] + \\ & & \cdots + ( a_1-a_2)\bigg[p(1 ) - q(1)\bigg ] \\ & = & \sum_{i=1}^{|\mx| - 1 } ( a_i - a_{i+1})\bigg [ \sum_{j=1}^i p(j ) - \sum_{j=1}^i q(j ) \bigg]\\ & \geq & 0\end{aligned}\ ] ] the last step is true because of the monotonicity in the and the inequalities we derived earlier .therefore , we see that for the we had chosen at the beginning of the proof .this contradicts the assumption that , therefore it must be that .the rate - distortion function for the ` cheating ' switcher has been described .it is the maximization of the iid rate - distortion function over the distributions the switcher can simulate .it was assumed the switcher had access to all source outputs ahead of time , but the proof required only that the switcher had access to the source realizations for one step ahead at each time . in this paper , the sources were independent and memoryless . a minor tweak to the argumentalso gets the rate - distortion function if the sources are dependent but still memoryless .the region would just be modified to become : a more interesting problem is to consider what happens when the sources are independent but have memory .apparently , dobrushin has analyzed the case of the non - cheating switcher with independent sources with memory .one could imagine that , perhaps , giving the switcher access to all source realizations could result in the ability to simulate memoryless sources from a collection of sources with memory .similar techniques might also prove useful in considering a cheating ` jammer ' for an arbitrarily varying channel . while the problem is mathematically well defined , it seems unphysical in the usual context of jamming or channel noise .the idea may make more sense in the context of watermarking , where the adversary can try many different attacks on different letters of the input before deciding to choose one for each .the authors would like to thank the nsfgrfp for partial support of this research . also , we thank prof .michael gastpar , the students of the fall 2006 ee290s course at uc berkeley , and the reviewers for helping to refine the presentation of this work .j. wolfowitz , `` approximation with a fidelity criterion , '' in _5th berkeley symp . on math .stat . and prob ._ , vol .1.1em plus 0.5em minus 0.4emberkeley , california : university of california , press , 1967 , pp .
berger s paper ` the source coding game ' , _ ieee trans . inform . theory _ , 1971 , considers the problem of finding the rate - distortion function for an adversarial source comprised of multiple known iid sources . the adversary , called the ` switcher ' , was allowed only causal access to the source realizations and the rate - distortion function was obtained through the use of a type covering lemma . in this paper , the rate - distortion function of the adversarial source is described , under the assumption that the switcher has non - causal access to all source realizations . the proof utilizes the type covering lemma and simple conditional , random ` switching ' rules . the rate - distortion function is once again the maximization of the function for a region of attainable iid distributions .
randomized libraries are widely used to select novel proteins with specific biological and physiochemical properties .there are many protocols to create randomized libraries , such as cassette mutagenesis and error - prone pcr ( eppcr ) . in the cassette mutagenesis ,random mutagenesis is generated in a particular region or regions of the target gene through incorporation of degenerate synthetic dna sequence .usually equal molar nucleotides are used ( ratio of nucleotides ) , but on other occasions , different molar ratios of nucleotides in the mixtures are used to create predetermined physiochemical properties in the targeted region(s ) . for example , in the study of selection and characterization of small transmembrane proteins that bind and activate the platelet - derived growth factor ( pdgf ) receptor , fifteen transmembrane amino acids of e5 protein of bovine papillomavirus ( bpv ) were replaced with random sequences .the following library design was used to mimic the hydrophobic transmembrane region of bpv e5 protein : where the three ` nxr ` codons are followed by a ` caa ` codon encoding glutamine , which in turn is followed by 12 more ` nxr ` codons . for the ` nxr ` codons , ` n ` stands for an equal mix of ` a ` , ` c ` , ` g ` , and ` t ` , ` x ` is a mixture of ` t : c ` , and ` r ` is an equal mix of ` a ` and ` g ` . in order to guide and evaluate randomized library construction, the statistical properties of the libraries will be useful .one of the most asked questions is the number of unique sequences in the library ( the complexity of the library ) . in the following we are going to give some formulas to calculate the expected number of unique sequences in the library as well as its variance .our treatment is different from previous works in several ways .firstly , the previous works deal with only mixtures of equal molar ratio of the four nucleotides , while we can handle mixtures of arbitrary user - defined molar ratios , which is more useful in such situations as described above . as shown in the examples below, the different molar ratios in the nucleotide mixtures make a significant difference in the statistics of the library .secondly , we present a formula for the variance , and hence the standard deviation , of the number of unique sequences in the library .the standard deviation gives an indication of the spread of the distribution around the expected value .thirdly , by using a mathematical software library that can handle arbitrary numerical precision , we can calculate the statistics for much larger libraries .the statistics of library with mutations in more than amino acids can be calculated easily .the program can be accessed freely in the web server at http://graphics.med.yale.edu/cgi-bin/lib_comp.pl .the paper is organized as follows . in the _ theory _ section we derive the formulas for the expected number of unique sequences in the randomized library and its variance . within the assumption of the model ,the formulas are exact . for real calculations of randomized libraries , however , these formulas have to be rearranged due to the huge number of possible sequences . in _ software implementation _section we discuss several ways to make the calculation manageable while keeping the numerical accuracy of the calculation . 
in the last section two examples are given for a small library and a relatively bigger library .assume that the size of the library ( the number of transformants ) is and the total number of all possible sequences is .usually is a huge number for a large randomized library .for example , if nucleotide bases are mutated , the number of potential sequences is .we denote the probability of sequence as . for each sequence among these possible sequences , we can associate a random variable , which is either or , according as the sequence is or is not in the library . the respective probabilities of to take these two values are : [ e : xi ] the _ number of unique sequences _ in the library is given by the random variable : the properties that we are interested in are the average ( expectation ) of and its variance .the expectation gives the average of the number of distinct sequences in the library , and the variance shows the tightness of the distribution of the number of distinct sequences around its average .the average is given by the expectation of as and its variance is given by where is the _ covariance _ of and : from eq .we know that both and equal to : hence the variance of is to calculate we need , which in turn depends on .the joint probability distribution of and are given by from the joint probability the expectation of can be obtained as from which we obtain the covariance of and as putting all the pieces together we have the average and variance of as ,\ ] ] and \notag \\ & = \sum_{i=1}^n { v}_i - \left [ \sum_{i=1}^n { v}_i \right]^2 + 2 \sum_{i > j } { w}_{ij } \notag \\ & = \sum_{i=1}^n ( 1 - p_i)^{l}- \left [ \sum_{i=1}^n ( 1 - p_i)^{l}\right]^2 + 2 \sum_{i > j } ( 1 - p_i - p_j)^{l}.\end{aligned}\ ] ]when the number of possible sequences is small , the average and variance of the unique sequences in the library can be calculated directly using eqs . and .when becomes large , however , a direct calculation is not feasible . if in one position along the sequence we have possibilities ( for nucleotide usually is ) , and we have mutation in such positions , a direct calculation would have possible values of , which , as stated earlier , is too big to tackle directly .not all these , however , are unique . for a particular mixture ratio of ` x ` , we can calculate the probability of each nucleotide in that position from the nucleotide ratio , which is just the fraction of each nucleotide at positions with mixture ratio ` x ` .for example , if the ratio is for ` a ` , ` c ` , ` g ` , and ` t ` , then will take values of , , , and .all possible are given in the following multinomial expansion }{0pt}{}{b}{i_1 , i_2 , \dots , i_m } } { q}_1^{i_1 } { q}_2^{i_2 } \cdots { q}_m^{i_m } .\ ] ] for each possible , the number of such is given by the multinomial coefficient , }{0pt}{}{b}{i_1 , i_2 , \dots , i_m } } = \frac{b!}{i_1 !i_2 ! \cdots i_m ! } .\ ] ] the number of such unique probabilities , which equals the number of unique terms in the expansion of eq ., is given by which is much smaller than .hence the mean in eq . can be rewritten as and the variance in eq . as ^ 2 \\+ 2 \left [ \sum_{i=1}^h \frac{c_i ( c_i - 1)}{2 } w_{ii } + \sum_{i > j } c_i c_j w_{ij } \right ] .\end{gathered}\ ] ] furthermore , the number of the terms in eqs . and , , can be reduced if there is degeneracy among : where we combine terms of of the same value together ( there are of them ) : are just unique items among . 
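for small numbers of possible sequences the two formulas above can be evaluated directly by enumerating every sequence ; the sketch below does exactly that ( python / numpy , with made - up mixture ratios and library size ) , and the grouped form of the same sums , discussed next , is what makes larger designs tractable :

```python
from itertools import product
import numpy as np

def library_stats(site_probs, L):
    """Mean and variance of the number of unique sequences in a library of
    L independent transformants, given per-site nucleotide probabilities.
    Direct O(N^2) evaluation of the formulas above (small libraries only)."""
    # probability of each full-length sequence = product of site probabilities
    p = np.array([np.prod(combo) for combo in product(*site_probs)])
    v = (1.0 - p) ** L                                # P(sequence i absent)
    mean = np.sum(1.0 - v)
    # pairwise terms w_ij = (1 - p_i - p_j)^L summed over i > j
    w = (1.0 - p[:, None] - p[None, :]) ** L
    pair_sum = (np.sum(w) - np.sum(np.diag(w))) / 2.0
    var = np.sum(v) - np.sum(v) ** 2 + 2.0 * pair_sum
    return mean, var

# toy example: two fully random positions and one skewed 70:10:10:10 mixture,
# with a library of 50 transformants (all values are illustrative)
sites = [[0.25] * 4, [0.25] * 4, [0.7, 0.1, 0.1, 0.1]]
m, s2 = library_stats(sites, L=50)
print(f"expected unique sequences = {m:.2f}, std = {s2 ** 0.5:.2f}")
```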
in such cases the number of unique probabilities of is given by for example , if we want to mutate nucleotide acids in positions with nucleotide ratio , then , and .the brute force calculation would have to add up terms .. gives , while eq .gives , a significant reduction .the above statements apply to mutations with a single base composition , as in .they can , however , be generalized easily to handle multiple base compositions , as in .for example , the mutation in can be written as for the computation purpose . in this general situation ,the unique probabilities and its associated coefficient are given by the expansion of the product a c program has been written that uses eqs and to calculate the average and variance of the number of unique sequences in randomized libraries , where and are from the expansion of eq . .for small libraries , numerical accuracy is not a problem : standard programming languages are sufficient to give correct answers . for large libraries, careful attention has to be paid to the numerical stability , since there are many terms involved , and each term of probability is very small , as is the product of many raised to high power .standard programming languages like c usually can not handle the situation well . to overcome the issue of numerical accuracy ,the program links the library of pari / gp package , which can do numerical calculations with arbitrary precisions .furthermore , the package has the ability to do symbolic calculations , which makes some of the above mentioned calculations easier. a web server has been set up to access the program at http://graphics.med.yale.edu/cgi-bin/lib_comp.pl .the program is simple to use .the user just type in the library size and the nucleotide ratios of the mixtures , followed by the number of bases with that nucleotide ratio , separated by a space .multiple ratios of the mixtures can be handled .if one or more of the nucleotides is not included at a position , simply exclude them in the input ( although including them as zeros does not hurt : the program filters them out automatically ) .for example , to calculate the complexity for the library as described in eq .in the _ introduction _ section , user input for the nucleotide ratios and base numbers is in the format of : this section two artificial examples are given to show the effects of different molar ratios of the nucleotide mixtures on the statistics of the library .the first example is a small randomized library with mutations on two amino acids , with different nucleotide mixture ratios for the first , second , and third codon positions : .here we have and .the potential number of sequences is .the second library is relatively larger , with mutations in amino acids : .here we have and .the potential number of sequences is .three different sets of molar ratios are used for each library : set 3 , though not very common in practice , is included to show the effects of nucleotide molar ratio on the library statistics . the average and standard deviation ( square root of the variance ) of the number of unique sequences in the librariesare shown in figures [ f:2aa ] and [ f:8aa ] as a function of , the size of the library ( or the number of transformants ) . 
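the grouping described above , with exact rational arithmetic standing in for the pari / gp library , can be sketched as follows ; the mixture ratio , the number of randomized bases and the library size are made - up values , and the sketch handles a single base composition only ( the general case is the expansion of the product over such groups , as noted above ) :

```python
from fractions import Fraction
from math import factorial
from itertools import combinations_with_replacement
from collections import Counter

def grouped_probabilities(ratio, B):
    """Collapse the 4**B sequences built from B positions that share one
    nucleotide ratio into (probability, multiplicity) pairs, merging the
    degenerate terms of the multinomial expansion. Exact rationals."""
    total = sum(ratio)
    qs = [Fraction(r, total) for r in ratio if r > 0]
    groups = {}
    for counts in combinations_with_replacement(range(len(qs)), B):
        tally = Counter(counts)
        prob, mult = Fraction(1), factorial(B)
        for idx, k in tally.items():
            prob *= qs[idx] ** k
            mult //= factorial(k)
        groups[prob] = groups.get(prob, 0) + mult
    return groups            # multiplicities sum to len(qs)**B

def expected_unique(groups, L):
    """E[U] = sum_i c_i * (1 - (1 - P_i)^L), evaluated exactly."""
    return sum(c * (1 - (1 - p) ** L) for p, c in groups.items())

# toy case: 6 positions with a 70:10:10:10 mixture, 1000 transformants
groups = grouped_probabilities((7, 1, 1, 1), B=6)
print("distinct probability values:", len(groups), "instead of", 4 ** 6)
print("expected unique sequences  :", float(expected_unique(groups, 1000)))
```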
from these figureswe can see that the nucleotide ratios of the mixtures have a significant impact on the statistics .as expected , for both small and large libraries , libraries with equimolar ratios achieve a larger number of unique library members with the same library size than libraries with unequal ratios . on averageit requires more transformants for the libraries with unequal ratio mixtures to include the same number of unique sequences .for example , for the small library shown in figure [ f:2aa ] , the library with equal ratio mixtures ( set 1 ) will have an average number of unique sequences ( the theoretical limit ) at , while the library with unequal ratio mixtures set 2 needs a larger library size to get the same average number of unique sequences , and the library with a more skewed ratio mixtures set 3 needs .however , the ability to calculate precise library statistics allows one to more accurately assess the disadvantages of reduced library complexity against the advantages of selection for particular amino acids ( e.g. hydrophobic amino acids ) by using skewed nucleotide ratios .it should be mentioned that the nucleotide complexity considered here is not necessarily the same thing as amino acid complexity , due to the codon degeneracy .the standard deviation also behaves differently according to the difference in mixture ratios . for equal ratio ,the standard deviation has a sharp peak , centered around the point where the mean is about of the satuation . for unequal ratios , however ,the distributions of standard deviation is broad and multimodal , and the peaks shift to the higher . for large libraries ,the standard deviation is quite small when compared with the mean .the number of peaks in the standard deviation depends on the number of distinct sequence probabilities .in fact , if all probabilities are distinct , there will be peaks in the standard deviation . if these peaks from the left to the right are labeled from to , then the peak is associated with the fluctuation of the number of unique sequences in the transition from to as increases .when some or all sequence probabilities become degenerate , the peaks in the standard deviation will merge together . in the extreme case of equal ratios , all peaks merge into one peak and the standard deviation becomes single - modal .this work was supported by yale school of medicine .the author would like to thank dr .daniel dimaio and sara marlatt for bringing this problem to his attention .freeman - cook , l. l. , dixon , a. m. , frank , j. b. , xia , y. , ely , l. , gerstein , m. , engelman , d. m. , and dimaio , d. ( 2004 ) . selection and characterization of small random transmembrane proteins that bind and activate the platelet - derived growth factor receptor ., * 338 * , 907920 .
randomized libraries are increasingly popular in protein engineering and other biomedical research fields . statistics of the libraries are useful to guide and evaluate randomized library construction . previous works only give the mean of the number of unique sequences in the library , and they can only handle equal molar ratio of the four nucleotides at a small number of mutation sites . we derive formulas to calculate the mean and variance of the number of unique sequences in libraries generated by cassette mutagenesis with mixtures of arbitrary nucleotide ratios . computer program was developed which utilizes arbitrary numerical precision software package to calculate the statistics of large libraries . the statistics of library with mutations in more than amino acids can be calculated easily . results show that the nucleotide ratios have significant effects on these statistics . the more skewed the ratio , the larger the library size is needed to obtain the same expected number of unique sequences . the program is freely available at http://graphics.med.yale.edu/cgi-bin/lib_comp.pl .
while the origin of species has always been a central subject in evolutionary biology , the large number of recent empirical and theoretical developments has renewed the interest in the area .individual - based simulations , in particular , have been successful in fostering relevant discussions in speciation .specifically , simulations in which mating is restricted by spatial and genetic distances have been able to describe empirical patterns of species diversity and within - species genetical diversity .one of the simplest ways of introducing assortativeness in mating in a individual - based simulation is to attribute haploid genomes with biallelic loci to individuals and allow them to mate only if the genomes differ in no more than loci .this approach considers that mate choice often relies on multiple cues that are determined genetically . in the case of assortative mating, we assume that individuals have a certain tolerance to differences when choosing a mate , however if the other individual is too different , it will no longer be considered a potential mate . under these assumptions ,reproductive isolation was shown to be maintained among demes in the presence of sufficiently low migration rates .spatially explicit versions of this process have also been studied and , in particular , speciation was shown to emerge spontaneously if mating is also constrained by the spatial distance . in order to reflect the dynamics of evolving populations , most simulations need to incorporate several ingredients simultaneously , such as mutation , genetic drift , recombination , assortativeness in mating and individual s movement and spatial positioning .gavrilets proposed and analysed a number of simplified mathematical models that are closely related to these simulations , including selection , mutation , drift and population structure . these more realistic approaches to speciation do not allow for the detailed understanding of how each of the mechanisms involved contribute to the emergence and maintenance of reproductive isolation . to construct a dynamical theory that accounts for the predictions of the model described in and other similar models , it is important to understand the roles of their different ingredients and to validate their generality. it has already been shown that separation of individuals into males and females does not introduce important effects in the conditions for speciation , originally based on hermaphrodite populations . in this paperwe focus on the effect of genetic incompatibility and work out the theory for infinitely large populations with two biallelic loci ( ) without mutations .genetic incompatibilities will be implemented by allowing reproduction only if the alleles from the parents differ at most in one locus ( ) .this is the simplest system for which the genetical mechanism of interest may be implemented .we will show that this process leads to evolution by changing the allele frequencies and that it is one of the main ingredients in the process of speciation studied in . despite the changes in all allele frequencies, we will demonstrate that a certain combination of frequencies from the two loci remain constant during the evolution , revealing a strong correlation between the loci introduced by the genetic mating restriction .the paper is organized as follows . in section [ mod_reprod ]we describe the reproductive mechanism employed in the dynamics . 
in section [ no_restrictions ]we characterize the evolution of a population subjected to no mating restrictions ( random mating ) , which is similar to the hardy - weinberg ( hw ) equilibrium. the mathematical implications of the genetic restriction , including the description of equilibria and their features , are analyzed in section [ con_restr ] .finally , in section [ conclusions ] we expose our conclusions and discuss the possible evolutionary impacts of our results .mathematical technicalities not strictly essential to the discussion are included in the appendices .consider a population of hermaphrodite individuals with haplotypes , , , and ( and being the alleles at the locus 1 , and and the alleles at the locus 2 ) , whose composition at the generation is characterized by the numbers , , and ( , with and ) .all possibles encounters between members of this generation give an offspring which will be a member of the generation with a probability , and being the paternal haplotypes ( we include in both effects of compatibility of the parents and the viability of the new born individual ) . by assuming no overlap among generations , the contributions to the individuals with haplotype at generation can be inferred from table [ table1 ] . the number of individuals at time obeys thus the equation equivalent tables allow to obtain evolution equations for the remaining haplotypes in the following sections we analyze the dynamics of the haplotype frequencies in the infinite limit of the population size . for each different scenariowe specify the values of the probabilities by setting the total number of individuals constant along generations .this section summarizes the outcomes for the case of no genetic restrictions .although some of the results described in here can be found in the literature ( see for example ) , the following discussion is fundamental as a reference for comparing the results presented next .if random mating is assumed , for every encounter .substituting in equations ( [ conn_1]-[conn_4 ] ) and summing up , one obtains so that for very large populations . by introducing ,the so called linkage disequilibrium , and after some algebra , equations for the haplotype frequencies read from which one immediately sees that a sufficient condition for the equilibrium is , or .notice that the quantities representing the frequencies of the four available alleles , remain constant from the first generation .this is also the case in the hw equilibrium context , however it should be emphasized that in the present framework there are two independent allele frequencies ( because ) in contrast to the hw equilibrium where the only independent variable is the frequency of one of the two available alleles .the time dependence of the haplotype frequencies can be obtained analytically ( see appendix [ appendixa ] ) .here we just look for a relationship between the haplotype and the allele frequencies.we start calculating at time , whose solution is simply being the initial value of . 
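the random - mating recursion and the halving of the linkage disequilibrium are easy to verify numerically . the sketch below assumes that each locus of the offspring is inherited independently from either parent ( free recombination between the haploid two - locus genomes ) , which is the reading consistent with the decay toward equilibrium described above ; the initial composition is arbitrary :

```python
import numpy as np

# haplotype order: [ab, aB, Ab, AB]; under random mating with free
# recombination the allele frequencies stay fixed and D is halved each
# generation, so p'_AB = pA*pB + D/2, etc.
def step(p):
    p_ab, p_aB, p_Ab, p_AB = p
    pA = p_Ab + p_AB                       # allele frequency of A (locus 1)
    pB = p_aB + p_AB                       # allele frequency of B (locus 2)
    D = p_AB * p_ab - p_Ab * p_aB          # linkage disequilibrium
    half = D / 2.0
    return np.array([(1 - pA) * (1 - pB) + half,
                     (1 - pA) * pB - half,
                     pA * (1 - pB) - half,
                     pA * pB + half])

p = np.array([0.4, 0.1, 0.1, 0.4])         # arbitrary initial composition
for t in range(6):
    pA, pB = p[2] + p[3], p[1] + p[3]
    D = p[3] * p[0] - p[2] * p[1]
    print(f"t={t}  p={np.round(p, 4)}  pA={pA:.2f} pB={pB:.2f}  D={D:+.4f}")
    p = step(p)
```

the printout shows the allele frequencies staying fixed while the linkage disequilibrium is halved every generation , so the haplotype frequencies approach the product of the allele frequencies only asymptotically , in contrast with the one - generation hardy - weinberg equilibration .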
combining ( [ allele_a]-[allele_b ] ) and ( [ sr11]-[sr14 ] ), it is possible to deduce the following relationships (see appendix [ appendixb ]) accordingly, the haplotype frequencies reach an equilibrium asymptotically and, as in the case of the hw equilibrium, is related to the constant allele frequencies. it is important to remark on the asymptotic behavior of the haplotype frequencies toward the equilibrium (equations ( [ pab_t]-[pab_t ] )), in contrast to the hw theorem, in which the equilibrium of the genotype frequencies is attained in one generation. to mathematically describe the mating restriction imposed on individuals differing in more than one allele, we simply redefine the compatibility-viability rate as follows following the procedure of section [ no_restrictions ], we obtain with notice that is not constant, in contrast to the rate of section [ no_restrictions ], but varies along generations depending on how many incompatible encounters may take place. the more incompatible encounters, the higher the chance that a compatible encounter gives an offspring viable for the next generation. by substituting ( [ rprime ] ) in ( [ rh1h2 ] ), and ( [ rh1h2 ] ) in ( [ conn_1]-[conn_4 ] ), the equations for the haplotype frequencies reduce to in what follows, we explore the dynamics governed by equations ( [ freq_comres_1]-[freq_comres_4 ] ) on the basis of a stability analysis. equations ( [ freq_comres_1]-[freq_comres_4 ] ) display four different types of fixed points, summarized in table [ fixedpoints ]. as we will see next, only types 1 and 2 are stable.

table [ fixedpoints ] (the label and coordinate entries could not be recovered and are omitted):

* type 1 (stable): continuous sets. two compatible haplotypes have zero frequency; one allele is lost at one locus, while the other locus remains polymorphic.
* type 2 (stable): three haplotypes have zero frequency. one allele is lost at both loci.
* type 3 (unstable): two incompatible haplotypes have zero frequency.
* type 4 (saddle): equiprobable distribution.

since , it is possible to give a graphical description of the dynamics by constructing a 3-dimensional phase space. we arbitrarily chose the frequencies , and as the independent dynamic variables. the constraints , , , and give the phase space the geometry of a tetrahedron having right triangular faces (figure [ ptos_fixos ]). the top face of the tetrahedron, defined by the equation , corresponds to frequency distributions having .
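as a numerical companion to the restricted-mating dynamics, the sketch below iterates one plausible reading of the infinite-population map: random encounters, matings rejected whenever the two parental haplotypes differ at both loci, offspring inheriting each locus from either parent with equal probability, and frequencies renormalised so that the population size stays constant. the recursion is a reconstruction from the verbal description, not the paper's exact equations, and the initial condition is illustrative.

```python
import numpy as np

# haplotypes encoded as (allele at locus 1, allele at locus 2); 0/1 are the two alleles
HAPLOTYPES = [(0, 0), (0, 1), (1, 0), (1, 1)]   # AB, Ab, aB, ab

def offspring_probs(h1, h2):
    """Distribution of the offspring haplotype for a given pair of parents,
    each locus being inherited from either parent with probability 1/2."""
    probs = np.zeros(4)
    for a1 in (h1[0], h2[0]):
        for a2 in (h1[1], h2[1]):
            probs[HAPLOTYPES.index((a1, a2))] += 0.25
    return probs

def step(x, max_mismatch=1):
    """One generation of the deterministic map: encounters at random, matings
    rejected when the parents differ at more than max_mismatch loci, and the
    resulting frequencies renormalised (constant population size)."""
    new = np.zeros(4)
    for i, h1 in enumerate(HAPLOTYPES):
        for j, h2 in enumerate(HAPLOTYPES):
            mismatch = sum(a != b for a, b in zip(h1, h2))
            if mismatch <= max_mismatch:
                new += x[i] * x[j] * offspring_probs(h1, h2)
    return new / new.sum()

x = np.array([0.30, 0.30, 0.25, 0.15])           # initial (p_AB, p_Ab, p_aB, p_ab)
for _ in range(300):
    x = step(x)

print("haplotype frequencies after 300 generations:", np.round(x, 4))
print("allele frequencies: p_A=%.3f p_a=%.3f p_B=%.3f p_b=%.3f"
      % (x[0] + x[1], x[2] + x[3], x[0] + x[2], x[1] + x[3]))
```

an iteration of this kind lets the reader check directly the claims derived analytically in the following sections, in particular which haplotype and allele frequencies are driven towards zero for a given initial condition.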
implies and is represented by the origin. type 1 fixed points are located at four of the six edges of the tetrahedron (colored edges in figure [ ptos_fixos ]), the points of type 2 are the vertices of the tetrahedron (black circles), type 3 fixed points and are located at the midpoints of the edges not containing points of type 1 (orange circles) and, finally, the center of the tetrahedron houses the type 4 fixed point (brown circle).

[ figure [ ptos_fixos ], three panels. left: ( axis, purple), ( axis, blue), (diagonal on the plane, red) and (diagonal on the plane, cyan); , , and (vertices connecting the first family, black circles); and (midpoints of edges of the phase space not containing the first family, orange circles); and (center, brown circle). the shaded light brown surface represents the top face of the tetrahedral phase space of equation . middle: schematic representation of the stable fixed points. right: division of the phase space displaying the basins of attraction of type 1 fixed points. ]

we start the discussion with type 3 fixed points, for which the stability matrix is two times the identity. therefore, it has one single fully degenerate eigenvalue and both points and are unstable fixed points. the stability matrix of displays two different eigenvalues, and , the latter with degeneracy 2. accordingly, this fixed point has a saddle-like behavior, being unstable on a two-dimensional subspace and stable on a one-dimensional subspace. from a geometrical point of view, it is interesting to note that the points and are equidistantly located from along the linear subspace spanned by the stable eigenvector (figure [ ptos_fixos ]). fixed points of types 1 and 2 deserve a more detailed description. while not displaying exactly the same properties, they share common features, which makes it instructive to analyze the stability of both types at the same time. we take as an example the set of points , and its and limits, which are the points and , respectively. appendix [ appendixc ] explains how to transfer the outcomes of the following analysis to the remaining type 1 and type 2 fixed points. the stability matrix for any such point has the following eigenvalues and eigenvectors: * : . * : . * : .
in the first place ,notice that along the direction spanned by displacements are neutral .indeed , since for all points in the set , displacements from the fixed points in this direction are not amplified nor contracted .this is consistent with the fact that this direction corresponds to the axis itself , where the entire set is located .therefore , by displacing a point from a fixed point in this direction one simply moves to another fixed point and thus iterations do not evolve it further . in the directions spanned by and ,the eigenvalues show that the fixed points are stable ( points and are also stable , however the stability can not be inferred from the eigenvalues ) .notice that , properly scaled , eigenvectors and have the interesting property of connecting the fixed points ( as well as and ) with the points and , respectively .this property , illustrated in figure [ vectorsitos ] , will be used in section [ conserved ] .( green ) and ( brown ) corresponding to the points ( with ) , and . ]although the qualitative behavior of any fixed point in the set is the same ( one neutral direction and two stable directions pointing to type 3 fixed points ) , and even for the extremes and , points within the set differ from each other in the time to convergence .close to the fixed points the movement along a given eigendirection obeys so that ( being the i - th component of the initial displacement from the fixed point , for ) with this allows to compare the time constants in the directions and as the parameter varies along the set .the ratio gives accordingly , by displacing the fixed point close to the point ( ) , the time of convergence along becomes much larger compared to the time along .the opposite behavior is obtained by displacing the fixed point towards ( ) . for equal to zero , estimation ( [ time_estimation ] ) yields an infinitely slow convergence along , and an instantaneous convergence along .this is however a consequence of the attempt to linearize an equation with no linear contribution in its series expansion . since along , we can rewrite equation ( [ freq_comres_1 ] ) as whose leading order is quadratic .we write therefore for points close to , whose leading order solution reads besides demonstrating stability , this solution shows that convergence is superfast in comparison to the exponential behavior for points ( equation ( [ soluc_p1 ] ) ) . in the direction and .therefore , we rewrite equation ( [ freq_comres_2 ] ) as even by neglecting the term , this equation does not have a closed solution .yet , it is possible to extract a conclusion concerning stability and convergence rate .successive iterations of equation ( [ p01paralambda0v2 ] ) give where and .accordingly , for times and points close to the fixed point along , which again demonstrates stability , however a convergence results superslow when compared with points .numerical computations demonstrate that the right hand result is still valid for times arbitrarily large ( see appendix [ appendixe ] ) .quantities not changing in time give powerful insights in the understanding of dynamical problems . in the absence of restrictions in reproduction , allele frequencies and remain constant and this property characterizes the equilibrium ( [ constant_allele_frequencies_1]-[constant_allele_frequencies_4 ] ) . 
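returning briefly to the convergence rates discussed above, the three regimes (exponential along a generic stable eigendirection, superfast when the leading term of the map is quadratic in the deviation, and superslow, roughly 1/t, when the linear term vanishes) can be illustrated with three toy one-dimensional maps. the constants below are arbitrary and only meant to show the qualitative difference; they are not the paper's coefficients.

```python
def iterate(f, x0, n):
    """Iterate a one-dimensional map n times starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

exponential = iterate(lambda x: 0.5 * x, 0.1, 50)     # x_t ~ 0.5**t
superfast   = iterate(lambda x: x * x, 0.1, 8)        # x_t ~ x0**(2**t)
superslow   = iterate(lambda x: x - x * x, 0.1, 50)   # x_t ~ 1/t for large t

for t in (1, 5, 10, 50):
    print(f"t={t:2d}   exponential={exponential[t]:.2e}   superslow={superslow[t]:.2e}")
print("superfast after 8 iterations:", superfast[8])
```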
surprisingly , the dynamics under genetic restrictions has also a conserved quantity that , being different from the frequencies of the alleles , allows for a complete description of the dynamics and the equilibria . through equations ( [ freq_comres_1]-[freq_comres_4 ] )it can be shown that all allele frequencies obey the same evolution equation for . writing this equation for and anddividing one by the other implies that the quantity remains constant across generations .this implies that in the 3-dimensional haplotype phase space , the dynamics is constrained to the plane defined by the equation referred from now on as -plane .notice that the three aligned points , and are contained in the -plane for any . changing the value of simply rotates the -plane around the axis defined by these three points , making the description of the dynamics quite simple .specifically , the location of the -plane unambiguously determines two stable fixed points , which can be 1 . and for 2 . and for 3 . and for 4 . and for .the stable eigenvectors and , in turn , run along the borders of the -plane .the dynamics reduces therefore to a 2-dimensional hyperbolic motion with the central fixed point attracting trajectories in one direction ( corresponding to the axis ) and repelling in the other direction .the latter , unstable direction , gives rise to the unstable manifold connecting with two stable fixed points ( in any of the four combinations listed above ) .figure [ trajectories ] illustrates the picture for . from the previous paragraph results that by setting the plane of motion, initial conditions almost determine the equilibrium distribution of haplotype frequencies .there is still an ambiguity concerning which of the two stable fixed points intersected by the -plane is attained .of course , this ambiguity is solved by determining to which side respect to the the axis the initial condition is located . as we demonstrate next , a simple algorithm to determine the latter issue consists in computing initial values of ( [ allele_a]-[allele_b ] ) , and identifying the allele in the minor proportion .the right panel of figure [ ptos_fixos ] depicts a division of the haplotype space in four regions , and two planes forming the frontiers between them .these planes correspond to , and . in terms of the alleles frequencies, a straightforward calculation shows that on the -plane , whereas on the -plane . accordingly , in one and only one of the four regions , the alleles frequencies should satisfy * * but the second relation implies , which necessarily means and thus .this region of the haplotype space is therefore characterized by the fact that the allele is the allele in the minor proportion . as type 1fixed points labeled ( in purple in figure [ ptos_fixos ] ) have necessarily this property , it turns out that the region in consideration must contain all points in the phase space that are attracted to this set of fixed points .the conclusion is that points shadowed in light purple in figure [ ptos_fixos ] are the points with allele a in the minor proportion , and evolve to fixed points .similar arguments allow to conclude that the light blue region contains initial conditions having allele in the minor proportion ( evolving to fixed points ) , light red region contains initial conditions with allele in the minor proportion ( evolving to fixed points ) , and finally , light cyan region contains initial conditions with allele in the minor proportion ( evolving to fixed points ) . 
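the classification of initial conditions described above reduces to finding which of the four alleles starts in the smallest proportion; a few lines of python make the rule explicit. the haplotype ordering and the allele labels a/a and b/b follow the convention used in this section, and the example numbers are illustrative.

```python
def allele_frequencies(p_AB, p_Ab, p_aB, p_ab):
    """Allele frequencies at the two loci from the four haplotype frequencies."""
    return {"A": p_AB + p_Ab, "a": p_aB + p_ab,
            "B": p_AB + p_aB, "b": p_Ab + p_ab}

def predicted_lost_allele(p_AB, p_Ab, p_aB, p_ab):
    """The allele initially in the minor proportion, i.e. the one expected to be
    absent at equilibrium according to the basins of attraction described above."""
    freqs = allele_frequencies(p_AB, p_Ab, p_aB, p_ab)
    return min(freqs, key=freqs.get)

example = (0.30, 0.30, 0.25, 0.15)
print(allele_frequencies(*example))
print("allele expected to disappear:", predicted_lost_allele(*example))
```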
[ figure [ trajectories ] (two panels): , and a set of trajectories with initial conditions chosen close to the stable manifold of . in the right panel, a projection of the t-plane on the plane. arrows indicate the direction of motion, and shaded regions represent the basins of attraction of fixed points (cyan) and (blue) (compare with figure [ ptos_fixos ]). in green and brown, the eigenvectors of the equilibrium . notice the bending of the trajectories towards (green vector), making evident the differential rates of convergence in the two eigendirections. in the picture , which corresponds to . ]

the construction presented above allows us to predict, for an arbitrary initial condition, the asymptotic equilibrium of the population in terms of two elements. first, it is necessary to establish the -plane where the initial condition is located and, second, the allele in the smaller proportion. in the example of figure [ trajectories ], a set of trajectories is simulated taking initial conditions close to the axis and having . the -plane intersects the set for the initial conditions having in the smaller proportion and the set for initial conditions having in the smaller proportion. from the conservation of , it follows that at equilibrium the first set of trajectories converges to the point given by , and the second set of trajectories to the point given by . the practical result of this analysis is that the smallest among the initial allelic frequencies always goes to zero. this information, together with the conserved quantity , suffices to determine all frequencies. for example, if is the smallest initial frequency, in the equilibrium and, consequently, . from equation ( [ t ] ) we find and , and all haplotype frequencies have been calculated. the procedure outlined in sections [ conserved ] and [ equilibrium ] to predict the equilibrium from the initial conditions, in addition to the information provided in section [ rates ] concerning times to convergence, represents the full solution of the dynamics of the two-loci problem subject to genetically restricted mating. geometrically, the dynamics reduces to a foliation of the 3-dimensional haplotype space into planes with a very simple motion, consisting of a central hyperbolic point repelling trajectories towards two stable equilibria. initial conditions and stable equilibria remain related through the existence of a conserved quantity, which defines the planes where the hyperbolic motion takes place.
on the basis of times to convergence, stable equilibria can be divided into two categories. stable equilibria of type 1 are attained exponentially, whereas type 2 equilibria are attained at much slower rates ( , being the distance to the fixed point). it is interesting to observe that this classification has a biological counterpart. specifically, slowly attained equilibria represent monomorphic populations, whereas exponentially attained equilibria correspond to populations that are polymorphic at a single locus. double-polymorphic populations are unstable (points of type 3 and 4) or evolve to any of the former scenarios, revealing the fact that genetically restricted mating has the net effect of a selection. as pointed out in , models of incompatibility based on genetic distance have two alternative interpretations. one interpretation corresponds to sexual haploid populations with fitness assigned to pairs of individuals (here fitness is included in the rate ), and the second interpretation concerns diploid populations reproducing through random mating, where fitness is a function of individual heterozygosity. accordingly, the model studied in this work describes an evolution process that eliminates double polymorphism (first interpretation) or alternatively a selection against double heterozygotes (second interpretation). selection against heterozygotes is also known as underdominance, and explains the disruptive selection causing sympatric speciation. section [ conserved ] reveals another important aspect of restriction through genetic distance, which concerns the fact that the allele initially appearing in the smallest proportion always remains in the minor proportion, and vanishes when the equilibrium is attained. the existence of the conserved quantity , in turn, also has an interesting consequence from the biological point of view. as represents the maximum polymorphism at the first locus, can be interpreted as a measure of the monomorphism for that gene. accordingly, the fact that remains constant along the evolution establishes that the correlation between the polymorphism at the two loci does not change. in the general situation of genes and a genetic mating restriction by a distance , stable equilibria are expected to be of different types, in the form of fully monomorphic populations, polymorphic populations at a single locus, polymorphic populations at only two loci, etc., up to polymorphic populations at the loci. accordingly, such scenarios can be related to an elimination of -uple to -uple polymorphic populations, or alternatively to a selection against -uple to -uple heterozygotes. these results will be demonstrated in a subsequent publication. in a spatially structured population it might happen that different regions converge to different equilibria, resulting in reproductively isolated species as obtained in . as the case studied here exhibits reproductive isolation only in the trivial way accomplished by monomorphic species (for instance, populations and are isolated with genetic distance within the populations), it is of special interest to explicitly consider the case and . in this situation, populations and are reproductively isolated with . moreover, the existence of a third single-polymorphic species, or even the monomorphic species , may create a ring structure, revealing the richness of scenarios that can be realized through this simple arrangement.
from the analysis of the times to convergence of section [ rates ] , it is also expected that times to convergence for will behave in the same way even for , displaying an exponential behavior for single polymorphic species , and a superslow convergence for monomorphic species . these times to convergenceshould eventually be compared with the time to fixation driven by random drift . as the model studied here assumes infinite size populations ,the model should be modified to take finite populations into account . a possible way to estimate the time to fixation driven by random driftwould be a moran approach .the case and , on the other hand , represents also an interesting issue to investigate , as it is expected to display three different time scales of convergence , corresponding to three types of stable equilibria. the scenarios described in the previous paragraph , as well as the influences of mutations on the results of section [ con_restr ] , will be the subject of a future work .nevertheless , we stress the importance of the study accomplished so far , as it reveals aspects of the dynamics that necessarily help to undertake the analysis in more complex frameworks. + acknowledgments .we thank yaneer bar - yam for helpful comments and discussions .this work was partly supported by fapesp ( fundao de amparo pesquisa do estado de so paulo ) and cnpq ( conselho nacional de desenvolvimento cientfico e tecnolgico ) .+ 32 j.a coyne and h.a .orr , 2004 .speciation . 1 ed .sinauer associates , inc .s. gavrilets , 2004 .fitness landscapes and the origin of species ( mpb-41 ) .princeton university press .butlin et al . , trends ecol .* 27 * ( 2012 ) 27 .p. nosil , 2012 .ecological speciation .oxford university press , oxford ; new york .higgs and b.derrida , j. phys .a. * 24 * , l985 ( 1991 ) . p.g . higgs and b.derrida ,evol . * 35 * , 454 ( 1992 ) .kondrashov , m. shpak , proc .lond r * 265 * ( 1998 ) 2273 .u. dieckmann , m. doebeli , nature * 400 * ( 1999 ) 354 .s. gavrilets , the am .* 154 * ( 1999 ) 1 .hoelzer , r. drewes , j. meier and r. doursat , plos comput . biol . * 4 * e1000126 ( 2008 ) . g.s .v. doorn , p. edelaar , and f.j .weissing , science * 326 * ( 2009 ) 1704 .fitzpatrick , j.a .fordyce , s. gavrilets , j. evol .* 22 * ( 2009 ) 2342 .de aguiar , m.a.m . , m. baranger , e.m .baptestini , l. kaufman , and y. bar - yam , nature * 460 * ( 2009 ) 384 d. ashlock , e.l .clare , t.e .von konigslow , w. ashlock , j. theor . biol .* 264 * ( 2010 ) 1202 .m. kopp , bioessays * 32 * ( 2010 ) 564 .c. j. melian , c. vilas , f. baldo , e. gonzalez - ortegon , p. drake and r. j. williams , ad .* 45 * ( 2011 ) 225 .p. desjardins - proulx , d. gravel , the am .* 179 * ( 2012 ) 137 a.b .martins , m.a.m . de aguiar , and y. bar - yam , pnas * 110 * ( 2013 ) 5080 .s. gavrilets , h. li , m.d .vose , proc .* 54 * ( 2000 ) 1126 .u. candolin , biological reviews * 78 * ( 2003 ) 575 . e.m .baptestini , m.a.m. de aguiar , y. bar - yam , j. theor . biol .* 335 * ( 2013 ) 51 .schneider , e. do carmo , y. bar - yam , m.a.m .de aguiar , phys .e * 86 * ( 2012 ) 041104 .j.f crow and m. kimura , 1970 .an introduction to population genetics theory , the blackburn press .ewens _ mathematical population genetics i. theoretical introduction _ series : biomathematics , vol. 9 ( new york : springer verlag , 1979 ) .moran , proc . cam .* 54 * 60 ( 1958 ) .de aguiar and y. bar - yam , phys .e * 84 * ( 2011 ) 031901 .the equation is a particular case of the logistic map , having analytical solutions only for , and .j. 
m. smith , the am . natur .* 100 * ( 1966 ) 637 .in this appendix we solve equations ( [ sr11]-[sr14 ] ) , performing explicit calculations for the expression ( [ sr11 ] ) .the remaining solutions can be obtained in an equivalent way .we write explicitly the time dependence or on the other hand , using the result ( [ d_de_t ] ) yields for long times , we obtain the equilibrium solutions , we demonstrate the product relationship between the allele and the haplotype frequencies when mating is not restricted by genetic distance .given the definitions of the allele frequencies , we calculate , for instance , the product using the result ( [ d_de_t ] ) leads to the equilibrium corresponds to the asymptotic limit of the previous equation , and is approached after a small number of generations strictly speaking , of the equilibrium value in less than ten generations .in this appendix we extend the results of the stability analysis of section [ stab_anal ] for the fixed points , and .we start with the eigenvalues and eigenvectors of the stability matrices of the different fixed points .* stability of points ( purpple line , * * : * : . * : . * stability of points ( red line , , ) * * : . * : * : . * stability of points ( cyan line , , ) * * : . * : * : .stable eigenvectors corresponding to points of type 1 and type 2 , properly scaled , connect the fixed points with type 3 fixed points . in figure[ vectorsitos_1a - c ] we expose this important property for some specific points at each set , including the points analyzed in section [ stab_anal ] ..49 in green , and in brown).,title="fig : " ] [ p1a ] .49 in green , and in brown).,title="fig : " ] [ p1c ] .49 in green , and in brown).,title="fig : " ] [ p1d ] .49 in green , and in brown).,title="fig : " ] [ p1d ] rates of convergence are inferred from the relation where denotes the type 3 fixed point ( or ) to which the eigenvector points . accordingly , going through the set from to ( see figure [ recorrido_bordes_estables_del_tetrahedro ] ) , makes to decreases .this time becomes almost zero at the point ( superfast convergence ) , and it starts increasing again by going to through the set ( cyan line in figure [ recorrido_bordes_estables_del_tetrahedro ] ) . at this becomes infinite ( superslow convergence ) , which means an equivalent behavior to that corresponding to . on the other hand , as , it turns out that , the time to convergence along the directions spanned by , displays the opposite behavior . finally , the picture is completed by observing that the remaining branch of the cycle ( along and along ) is an exact repetition of the branch described above . ]here we give a brief summary of the fitting process employed to solve equation ( [ p01paralambda0v2 ] ) . by iterating the map for different initial conditions and fitting the results , one obtains where is a function that diverges as when ( see figure [ adedelta_y_recta ] ) .accordingly , for very small values we can write
we study the evolution of allele frequencies in a large population where random mating is violated in a particular way that is related to recent works on speciation. specifically, we consider non-random encounters in haploid organisms described by biallelic genes at two loci and assume that individuals whose alleles differ at both loci are incompatible. we show that evolution under these conditions leads to the disappearance of one of the alleles and substantially reduces the diversity of the population. the allele that disappears, and the other allele frequencies at equilibrium, depend only on their initial values, and so does the time to equilibration. however, certain combinations of allele frequencies remain constant during the process, revealing the emergence of a strong correlation between the two loci promoted by the epistatic mechanism of incompatibility. we determine the geometrical structure of the haplotype frequency space and solve the dynamical equations, obtaining a simple rule to determine the equilibrium solution from the initial conditions. we show that our results are equivalent to selection against double heterozygotes for a population of diploid individuals and discuss the relevance of our findings to speciation.
nowadays , many different areas of science and engineering require high performance computing ( hpc ) to perform data and computationally intensive experiments .advances in transport phenomena , thermodynamics , material properties , machine learning , chemistry , and physics are possible only because of large scale computing infrastructures . yet , over the last decadethe definition of high performance computing changed drastically .the introduction of computing systems made from many inexpensive processors became an alternative to the large and expensive supercomputers .in the introduction of a practical way to link many personal computers in a computing cluster represented cost - effective and highly performance model .today , the beowulf concept represents a recognized subcategory within the high - performance computing community .this vision , although implemented in certain way , is not entirely realized and many challenges are still open . for instance , the cost of energy consumption of a single server during its life cycle can surpass its purchasing cost . furthermore , at least of the energy expenses in large - scale computing centers are dedicated to cooling .need for energy preservation have forced many hpc centers to relocate near power stations or renewable energy location and to utilize natural / green resources like sea water or outside air in naturally cold areas in the world .idle resources account for a portion of the energy waste in many centers , and therefore load balancing and workload scheduling are of increased interest in high performance computing . furthermore , depending on the specific computational tasks , it is more cost effective to run more nodes at slower speed .in other cases exploring parallelism on as many processors as possible provides a more energy efficient strategy , . as a consequence from the different computational demands a competition for resources emerges .subsequently , the scientific and engineering communities are turning their interests to the possibility of implementing energy - efficient servers utilizing low - power cpus for computing - intensive tasks .taking under consideration the above factors , in this paper we present a novel approach to scientific computation based on the beowulf concept , utilizing single board computers ( sbc ) .our goal is two fold .firstly , we want to show a low - energy consumption architecture capable to tackle heavily demanding scientific computational problems .second , we want to provide a low cost personal solution for scientists and engineers .recently this architecture has been presented at linux conference europe 2015 .the structure of this paper is as follows . in the next sectionwe describe the specific hardware we have used to build our cluster .then we outline the software packages the server runs on .we provide several benchmark tests to assess the performance of our suggested architecture .furthermore , we carry out two benchmark tools involving practical tcad for physicist and engineers in the semiconductor industry in order to assess the reliability of this machine in real life situations . in more detailwe run two well - known gnu packages - archimedes and nano - archimedes which respectively deal with classical ( cmos technology ) and quantum electron transport ( post - cmos technology ) .we conclude the paper with possible future directions .the single board computer is a complete computer architecture on a single circuit board . 
compared to its low cost, the board provides relatively high computational power, low power consumption and requires little storage space. in the last few years over different sbc boards of varying architectural and computational capabilities have been introduced. consequently, several teams have considered and implemented sbc boards as beowulf clusters , . while some projects concentrate on large centers and research institutions, others have demonstrated smaller setups; a beowulf cluster consisting of raspberry pi boards was described in . several factors need to be taken into consideration when developing such a cluster: computing power, memory size and speed, energy consumption levels and, in our specific case, mobility of the cluster. despite the success of the iridis-pi project, it is rather expensive, since it had to use so many raspberry pi boards to reach the required computational power and memory availability. furthermore, expanding the number of boards leads to an increase in energy consumption and a need for a cooling implementation. those reasons hinder the mobility of the cluster. finally, most projects focus on providing a proof of concept, applying it as a web server or for testing distributed software or distributed updates. the following subsections describe the most important hardware parameters of the radxa rock board, the cluster architecture and the software we have run on it. the reader should keep in mind that the type of boards, architecture and software are not necessarily limitations, and other specifications can be implemented as well. in this section, we present the most relevant hardware parameters of the radxa rock board (in the context of our project). the selection of this specific board depended on several primary factors: computing power, memory size and speed, communication speed. the size of the boards and the case we have built are also considered, since heat dissipation is an essential factor for the necessary cooling strategies and therefore for the operational costs of the machine. we have concentrated on developing an architecture that is mobile and easy to transport but at the same time able to perform meaningful science / engineering computational tasks. in terms of usability, determining the software needs and the specific packages to be employed has also been taken into consideration.
our high performance parallel personal cluster ( ) consists of radxa rock pro nodes (see fig . [ fig : radxacomp ]), with a plug-and-play model (specifically developed for our purpose) for the addition of more boards when we require more computational power. the radxa board itself has a rockchip cpu architecture with cores running at . the board is equipped with of ddr3 ram (at ), which has considerably more bandwidth compared to, for instance, the raspberry pi or the beaglebone boards , . currently, the system-on-chip boards are not designed to extend the physical ram, but since we are using the distributed memory provided by all boards, this disadvantage is mitigated to a certain level. the non-volatile storage on the board is a of nand flash memory. currently, this space is enough to install the required operating system and all the necessary software in order to perform highly sophisticated quantum computations (see section [ compute ]). furthermore, the storage capacity can be extended through the available microsd card slot, which can be used to increase the virtual memory of the cluster or to add more storage capacity. in the current work, we use a generic sdhc flash memory with a minimum write speed of . we have used part of the additional storage as virtual memory and the rest as storage space. the purpose of the latter is to reduce the size of the software that is needed by each node and instead provide an option to compile it dynamically. therefore, we have implemented a samba share to provide access to all client nodes in the system. the decision to choose samba is _ by no means _ a limiting factor and one might instead use nfs or rsync in a similar manner. in this way, we provide access to the software and data to all available nodes. additionally, we can implement upgrades and changes without increasing workload time or wasting memory on every board. if specific libraries (such as mpi) are needed for certain computations, they are installed on the external drive, with hard links from every node pointing to their storage location. the board also provides two usb ports, an ethernet adapter, a built-in _ wifi _ module at with support for the protocol and a module. the graphical processing unit is a mali- running at , with one hdmi output.
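as a small illustration of how a job can be spread over the boards, the sketch below gathers the hostname and reported memory of each participating rank. it assumes mpi4py has been installed on the shared storage and linked from every node, as in the setup described above; the script name and hostfile in the comment are hypothetical.

```python
# probe_nodes.py -- launch e.g. with: mpirun --hostfile hosts -np 20 python probe_nodes.py
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# total RAM in kB, read from the first line of /proc/meminfo (Linux-specific)
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])

info = (rank, socket.gethostname(), mem_kb)
gathered = comm.gather(info, root=0)

if rank == 0:
    for r, host, mem in sorted(gathered):
        print(f"rank {r:2d} on {host}: {mem / 1024:.0f} MiB RAM")
```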
.the authors would like to thank prof .i. dimov for the valuable conversations .this work has been supported by the project ec acomin ( fp7-regpot-20122013 - 1 ) , as well as by the bulgarian science fund under grant dfni i02/20 .becker , t. sterling , d. savarese , j.e .dorband , u.a .ranawak , c.v . packer , beowulf : a parallel workstation for scientific computation , in proceedings , international conference on parallel processing , vol .( 1995 ) .freeh , f. pan , n. kappiah , d.k .lowenthal , r. springer , exploring the energy - time tradeoff in mpi programs on a power - scalable cluster , in parallel and distributed processing symposium , proceedings .19th ieee international , pp .ieee , ( 2005 ) .n. rajovic , p.m. carpenter , i. gelado , n. puzovic , a. ramirez , m.r .valero , supercomputing with commodity cpus : are mobile socs ready for hpc ?, international conference in high performance computing , networking , storage and analysis ( sc ) , pp . 1 - 12 .ieee , ( 2013 ) . *kristina g. kapanova * is from the institute of information and communication technologies at the bulgarian academy of sciences , where she is currently working to obtain her phd degree .her research interests include numerical methods , artificial neural networks , optimization , system on a chip development to enable science and engineering research on a smaller scale .* jean michel sellier * is currently an associate professor at the bulgarian academy of sciences .he is also the creator and maintainer of several gnu packages for the simulation of electron transport in cmos devices ( archimedes ) , and the simulation of single- and many - body quantum systems occurring in the field of quantum computing , spintronics , quantum chemistry and post - cmos devices ( nano - archimedes ) .
today , many scientific and engineering areas require high performance computing to perform computationally intensive experiments . for example , many advances in transport phenomena , thermodynamics , material properties , computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures . yet many challenges are still open . the cost of energy consumption , cooling , competition for resources have been some of the reasons why the scientific and engineering communities are turning their interests to the possibility of implementing energy - efficient servers utilizing low - power cpus for computing - intensive tasks . in this paper we introduce a novel approach , which was recently presented at linux conference europe 2015 , based on the beowulf concept and utilizing single board computers ( sbc ) . we present a low - energy consumption architecture capable to tackle heavily demanding scientific computational problems . additionally , our goal is to provide a low cost personal solution for scientists and engineers . in order to evaluate the performance of the proposed architecture we ran several standard benchmarking tests . furthermore , we assess the reliability of the machine in real life situations by performing two benchmark tools involving practical tcad for physicist and engineers in the semiconductor industry . parallel computing , system on a chip , beowulf , performance assessment , technology computer aided design
though privacy has been defined as _ the claim of individuals , groups , or institutions to determine for themselves when , how , and to what extent information about them is communicated to others _ , the right of individuals , rather than companies , to protect their personal data has so far been the focus of most privacy studies .this is especially true in the context of marketplaces , where consumers may release , deliberately or not , details about themselves and the items they purchase .algorithms and platforms have been devised to enforce customers privacy requirements ( see , e.g. , ) , but little attention has been paid to the other side of trading , i.e. , companies selling their products .however , a company may wish to select the level of information it provides to its customers , but the widespread adoption of e - shops divulges a lot of details about company s operations , not just to prospective customers but to everyone accessing the e - commerce platform , including competitors .for example , companies may wish to keep their level of stock for a given product secret .the issue is even more delicate when traders are actually prosumers , who , acting as sellers , may reveal personal data ; an example of such a context is that of smart grids . generally speaking , individuals actingas suppliers may wish not to be profiled and rather keep secret the products they happen to own . the definition of a marketplace where suppliers can sell their products while retaining privacy is then a relevant issue .such a definition has been proposed in , where the information to be protected is the identity of the sellers and their level of stock , and the role of a broker is envisaged .it was shown that such a marketplace may be set up with benefits for all the stakeholders ( a broker / producer , privacy - aware suppliers , and end customers ) through the use of differential privacy mechanisms and option contracts subscribed by end customers .a general formula for option pricing has been derived in , where nothing at all is known about the actual availability of items by the suppliers . in this paper , we embrace the marketplace definition provided in and consider the case where the broker , though it does nt know the actual number of available items , may adopt a hypothesis concerning their probability distribution .in particular , we consider the cases where the stocks owned by suppliers exhibits either full or null correlation ( i.e. , respectively perfectly correlated stocks and independent suppliers ) and a third case where a uniform distribution applies ( representing a mild correlation ) .for each case we derive a formula for the option price .we show that the using such an option price allows the broker to transfer its risk to end customers .the paper is organized as follows . in section [ market ]we define the marketplace and the stakeholders , while the option contract between the broker and end customers is described in section [ opzioni ] . 
in section [ ava ]we describe the three models for the availability of items , which are used to derive the option pricing formulas in section [ formulas ] .let s consider a database of suppliers where information can be obtained about the availability of a set of items , but suppliers are somewhat screened .suppliers could be vendors whose typical line of business does not include those products or who wish to get rid of some remainders , or individuals ( prosumers ) who happen to have those products in their availability .for example , the database could contain the number of items available for sale at each supplier , so that the vertical sum across all suppliers included in the database would tell us the overall number of items available .such a database , providing statistics about the entities included in it , is called a statistical database . however , in a statistical database , releasing statistical information may compromise the privacy of individual contributors . butsuppliers may wish not to divulge those information ; for example they do not want competitors ( who could access the database ) to know their level of stock , or , as individuals , they do not wish to be profiled about the items they own . if suppliers wish to be screened , a curator may sit between the users , posing the query , and the database .the responses to these queries may be modified by the curator in order to protect the privacy of the contributors , for example so as not to tell us exactly either which supplier can provide us with those items or how many items in the set are available . instead of providing the exact number , the database provides us with an obfuscated number , which is more or less close to the exact figure .a mechanism to achieve differential privacy is the use of noisy sums : the response to a counting query is the sum of the true figure and some noise .the use of a statistical database plus the use of noisy sums may therefore protect the private information of suppliers .when end customers demand for a number of items , the uncertainty surrounding the actual availability of those items does nt allow to close deals . in the presence of such privacy constraints , we postulate that a market can develop through the introduction of a broker / producer and the use of option contracts .let s consider the case where end customers demand for items .a broker commits to provide them with the number of items required .in fact , the broker may procure those items either by producing them itself ( at a unit production cost ) or by resorting to _ privacy - aware suppliers _ , whose availability is known through the statistical database previously mentioned . asalready said , privacy - aware suppliers do not release full information about the availability of their products , but release instead an obfuscated number , which is generally different from the true number of items that they can provide ( though the broker may obtain a refined estimate of the true number through bayesian analysis ) .the privacy enjoyed by privacy - aware suppliers is reflected in the price they advertise .prices set by privacy - aware suppliers depend on the level of obfuscation ( i.e. privacy protection ) : the higher the level of obfuscation ( embodied by the variance of the added noise ) , the lower the price . 
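a minimal sketch of the noisy-sum mechanism described above: the curator answers a counting query with the true sum plus laplace noise, and the noise scale plays the role of the obfuscation level (in the notation used later, a smaller parameter means wider noise and hence stronger privacy). the numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_count(true_count, lam):
    """Laplace mechanism for a counting query: return true_count + Laplace noise
    with density (lam/2) * exp(-lam * |z|); smaller lam gives stronger obfuscation."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / lam)

true_availability = 37   # illustrative true number of items across all suppliers
for lam in (0.05, 0.2, 1.0):
    print(f"lam={lam:4.2f}: declared availability = {noisy_count(true_availability, lam):7.1f}")
```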
assuming , the broker has a real advantage to procure as many items as it can through privacy - aware suppliers at the reduced price , and transfer part of that benefit to end customers by setting a lower end price .if the availability of items is not enough to satisfy the demand ( ) , the broker / producer produces the remaining items ( but does not enjoy the full benefit of the reduced price ) . in order to exploit the offer by privacy - aware suppliers ,the broker submits a query to the statistical database containing information about the availability of items and pays a fixed amount and receives the noisy response .it commits to buy all the items available , though they may exceed the actual demand .when the actual number of available items is disclosed ( at delivery ) , it pays the privacy - aware suppliers the overall amount .if the demand is fully met ( ) the broker does not have to produce any item ; otherwise , the broker has to produce items at the unit cost .the resulting supply chain is shown in [ fig : sup ] .it is to be noted that both curator and the broker in the end know the exact number of items available by the suppliers , but they have ( different ) reasons to keep it private .in fact , the database curator does not have a direct contact with end customers and is not in the business of retailing .instead , the broker s business relies on the privacy of those data for its intermediary role .in addition , the roles of the curator and the broker may have to be kept separate due to regulatory constraints .however , such a procedure is not free of risks for the broker / producer , which , on the one hand commits to provide its customers with the required items , but on the other hand is subject to the uncertainty determined by the unknown availability of items delivered by privacy - aware suppliers , with the risks deriving from the commitments to buy all the items available and , if required , to produce the remaining ones at a higher cost .the broker / producer has therefore to hedge against such risks .a way we suggest is to resort to option contracts , which are described in the next section .as seen in the previous section , the broker / producer undergoes a risk when resorting to privacy - aware suppliers in order to meet customers demand at a reduced cost , a benefit which it transfers to end customers through a reduced price .it needs however to hedge against such a risk . in this sectionwe describe a mechanism , based on option contracts , by which it can achieve such protection . since the stakeholders that ultimately benefit from resorting to privacy - aware suppliers are end customers , the broker / producer may transfer some of that risk to them , asking them to pay a price to get the right to buy the desired number of items at a predetermined lower price ( i.e. , a booking fee ) .in the language of financial markets , this is a _ call _ option , since it endows the end customer with the right to buy .end customers are then required to subscribe a call option to be sure to get the right number of items they wish at a reduced price .a critical issue in option contract is setting the right price . in the typical scenario ,the amount to be paid for the option contract is expected to depend on the current value of the items for sale , the predetermined price to be paid if the option is exercised , and the expected behaviour of the item s value in the period from the option contract underwriting to the exercise time. 
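before turning to the pricing formula, the broker's commitments described earlier translate into a simple cost structure, sketched below: a fixed query fee, all items declared available bought from the privacy-aware suppliers at the reduced unit price, and any shortfall produced in-house at the higher unit cost. prices and quantities are illustrative, and the assumption that the supplier price is below the production cost follows the discussion above.

```python
def broker_cost(demand, available, c_s, c_p, query_fee):
    """Total procurement cost for the broker: query fee, plus *all* available items
    bought from privacy-aware suppliers at unit price c_s (even beyond demand),
    plus any shortfall produced in-house at unit cost c_p (with c_s < c_p)."""
    return query_fee + available * c_s + max(demand - available, 0) * c_p

demand, c_s, c_p, fee = 10, 1.0, 1.5, 2.0
for k in (4, 10, 16):
    print(f"true availability k={k:2d}: cost = {broker_cost(demand, k, c_s, c_p, fee):5.1f}")
```

the last case shows the extra-cost that the option contract is meant to cover: the broker pays for items in excess of the demand.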
a simple form of pricing is given by the black - scholes formula , but a form tailored for the context is to be derived here . the general expression for the option price is ,\ ] ] which considers the risk due to the extra - cost of buying the excess items provided by privacy - aware suppliers . in , the risk of having to buythe items exceeding the demand has been analysed , and the following pricing formula has been derived for the case where laplacian noise is added to form the noisy sum : ,\ ] ] where is the shape parameter of the laplace distribution : the smaller , the greater the differential privacy . in that formula , the declared value considered as the best estimate of the actual availability . however ,if we have some _ a priori _ information about the actual availability , we can gain a better estimate for it and obtain a more accurate formula for the option price . in , we have shown that bayesian analysis mat be used to obtain a better estimate of the figure previously obfuscated through the addition of laplace noise .here we exploit the same mechanism to obtain a better estimate of the actual availability . in the next sectionwe describe three models that may provide the _ a priori _ information we need to apply bayes estimation .the extra - cost incurred by the broker to get the items through privacy - aware suppliers depends on the number of items actually available .this is unknown to the broker till the disclosure by those suppliers when setting the deal .however , models may be adopted for the envisaged availability that allow us to obtain a formula for the option price once the declared availability is made known . in this section ,we describe three models , which represent three paradigmatic situations : unit correlation , independent suppliers , and uniform distribution ( which represents a mild correlation ) .the case of unit correlation applies when either all the suppliers have the item or none of them have it .their behaviour shows therefore full correlation , hence the name given to the model .the number of availabile items may therefore take just either of two values : 0 or .the probability associated to the two cases is & = 1-p\\ \mathbb{p}[k = n ] & = p. \end{split}\ ] ] the case opposite to full correlation is that of no correlation at all , where the availability of the item by any supplier is independent of any other suppliers .if we now indicate the probability of any supplier to have the item by , the probability of the number of available items follows a binomial distribution with parameters and : = \binom{n}{i } p^{i}(1-p)^{n - i } \qquad i=0,1,\ldots , n.\ ] ] finally , as an intermediate case between those of no correlation and unit correlation , we can consider the case of uniform distribution = \frac{1}{n+1 } \qquad i=0,1,\ldots , n.\ ] ]after defining the three paradigmatic models for the availability of items , in this section we derive the pricing formulas for the three cases . 
in the case of unit correlation , the number of actually available items may take either of two values and , as defined in equation ( [ probunit ] ) .the extra - cost is then & = c_{\textrm{s}}\ { \mathbb{p}[k=0\vert \hat{k}=x](0-k^{*})^{+ } + \mathbb{p}[k = n\vert \hat{k}=x](n - k^{*})^{+}\}\\ & = c_{\textrm{s}}\mathbb{p}[k = n\vert \hat{k}=x](n - k^{*})\\ \end{split}\ ] ] this expression is still dependent on the conditional probability $ ] , which we may obtain through bayes theorem & = \frac{\mathbb{p}[k = n]\mathbb{p}[\hat{k}=x\vert k = n]}{\mathbb{p}[\hat{k}=x]}\\ & = \frac{\mathbb{p}[k = n]\mathbb{p}[\hat{k}=x\vert k = n]}{\mathbb{p}[\hat{k}=0]\vert\mathbb{p}[\hat{k}=x\vert k=0 ] + \mathbb{p}[\hat{k}=n]\vert\mathbb{p}[\hat{k}=x\vert k = n]}\\ & = \frac{p\frac{\lambda}{2}e^{-\lambda \vert n - x \vert}}{(1-p)\frac{\lambda}{2}e^{-\lambda \vert x \vert } + p\frac{\lambda}{2}e^{-\lambda \vert n - x \vert } } \\ & = \frac{1}{1+\frac{1-p}{p}e^{-\lambda[\vert x \vert - \vert n - x \vert ] } } \end{split}\ ] ] by replacing the expression ( [ bayesunicor ] ) in the extra - cost expression ( [ eunicorr ] ) we finally obtain = \frac{c_{\textrm{s}}(n - k^{*})}{1+\frac{1-p}{p}e^{-\lambda[\vert x \vert - \vert n - x \vert]}}\ ] ] if we assume that the declaration falls within the limits of the range of suppliers , i.e. , , we have }}\\ & = \frac{c_{\textrm{s}}n(1-k^{*}/n)}{1+\frac{1-p}{p}e^{-\lambda ( 2x - n ) } } \end{split}\ ] ] if we consider the price as a function of the declared number of available items , we see that the price switches quite abruptly between two values . when , the normalized price is instead , when we are on the opposite side of declared values , , we have the turning point between the two values is a set of price curves are shown in [ fig : prixcorr ] for suppliers , , and . as we can see the normalized option price is practically zero when the declared availability is and nearly when the declared availability is .the only parameters that actually impact the price are the demand and the declared availability .the probability of the suppliers having all the items plays a negligible role , just in the small range around . in practice , this means that the end customer switches from paying nothing ( when the declared availability is less than half the number of suppliers ) to paying almost the full price of the items available but not required ( when the declared availability is more than half the number of suppliers ) . in the latter casethere is practically a complete transfer of risk . 
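the closed form above is straightforward to evaluate; the function below returns the option price for the fully correlated model when the declared availability lies between 0 and n, using the bayesian posterior derived above. parameter values are illustrative.

```python
import math

def unit_correlation_price(x, n, k_star, p, lam, c_s=1.0):
    """Option price under the fully correlated availability model:
    c_s * (n - k_star) * P[K = n | declared value x], with
    P[K = n | x] = 1 / (1 + ((1-p)/p) * exp(-lam * (2x - n))) for 0 <= x <= n."""
    posterior_all = 1.0 / (1.0 + (1.0 - p) / p * math.exp(-lam * (2 * x - n)))
    return c_s * (n - k_star) * posterior_all

n, k_star, p, lam = 100, 40, 0.5, 0.2
for x in (10, 45, 50, 55, 90):
    print(f"declared x={x:3d}: price = {unit_correlation_price(x, n, k_star, p, lam):6.2f}")
```

the abrupt switch around a declared value of n/2 reproduces the behaviour discussed above: below the threshold the end customer pays essentially nothing, while above it the price approaches the full value of the items available in excess of the demand.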
in the case of independent suppliers ,each one having the item with probability , the number of actually available items follows a binomial distribution with parameters and , as described in section [ ava ] .since the broker will incur an extra - cost for the items that it has to buy from the suppliers in excess of the actual demand , the extra - cost suffered by the broker is = \sum_{i = k^{*}+1}^{n } c_{\textrm{s}}(i - k^{*})\mathbb{p}[k = i\vert x<\hat{k}<x+dx]\ ] ] again by bayes theorem we have & = \frac{\mathbb{p}[k = i]\mathbb{p}[x<\hat{k}<x+dx \vert k = i]}{\mathbb{p}[x<\hat{k}<x+dx]}\\ & = \frac{\binom{n}{i } p^{i}(1-p)^{n - i}\frac{1}{2}\lambda e^{-\lambda \vert x - i \vert}}{\sum_{j=0}^{n}\binom{n}{j } p^{j}(1-p)^{n - j}\frac{1}{2}\lambda e^{-\lambda \vert x - j \vert } } , \end{split}\ ] ] that , when replaced in equation ( [ ind1 ] ) , provides we now examine the dependence of the option price on the following parameters : * declared availability * demand * probability of individual availability we first plot the normalized option price for three different values of demand ( again , with suppliers , , and ) in [ fig : prixind1 ] .we see again the option price transitioning from a very low value to a high one as the suppliers declare a higher availability . from equation ( [ ind2 ] ) the low and high value result but there are two important differences with respect to the case of perfectly correlated suppliers : * the low value is not always practically zero , but rises over zero when the demand is low ; * the transition from low to high is quite smooth rather than abrupt as in the cases of perfect correlation . in order to examine the impact of the probability of individual availability , we can consider the set of curves in [ fig : prixind2 ] , where , , and .we observe a similar behaviour as in the previous curves , where now higher values of the individual probability push the price up . summing up, we can conclude that low demand , high declared availability , and high individual availability probability lead to higher option prices .the third model we consider is the uniform distribution , which is tantamount to assuming that we have no specific hypothesis for the availability of items . 
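for priors without a closed-form price, the posterior and the expected extra-cost can be evaluated numerically. the sketch below does so for an arbitrary prior over the availability and is shown with the binomial prior of this section; the uniform prior just introduced can be passed in exactly the same way. the (lam/2) factor of the laplace likelihood cancels in the normalisation and is therefore omitted, and the numbers are illustrative.

```python
import math

def option_price(x, n, k_star, prior, lam, c_s=1.0):
    """E[c_s * (K - k_star)^+ | declared value x] for an arbitrary prior P[K = i],
    i = 0..n, with Laplace(lam) obfuscation noise on the declared availability."""
    weights = [prior[i] * math.exp(-lam * abs(x - i)) for i in range(n + 1)]
    z = sum(weights)
    posterior = [w / z for w in weights]
    return c_s * sum((i - k_star) * posterior[i] for i in range(k_star + 1, n + 1))

n, k_star, lam, p = 100, 40, 0.2, 0.4
binomial = [math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
uniform  = [1.0 / (n + 1)] * (n + 1)

for x in (20, 50, 80):
    print(f"x={x:3d}   binomial prior: {option_price(x, n, k_star, binomial, lam):7.2f}"
          f"   uniform prior: {option_price(x, n, k_star, uniform, lam):7.2f}")
```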
as in the previous two models ,we proceed to define the extra - cost = \sum_{i = k^{*}+1}^{n } c_{\textrm{s}}(i - k^{*})\mathbb{p}[k = i\vert x<\hat{k}<x+dx],\ ] ] and to evaluate the conditional probability through bayes theorem ( in this case the uniform distribution is a non informative prior ) & = \frac{\mathbb{p}[k = i]\mathbb{p}[x<\hat{k}<x+dx \vert k = i]}{\mathbb{p}[x<\hat{k}<x+dx]}\\ & = \frac{\frac{1}{n+1}\frac{1}{2}\lambda e^{-\lambda\vert x - i \vert}}{\sum_{j=0}^{n}\frac{1}{n+1}\frac{1}{2}\lambda e^{-\lambda\vert x - j \vert}}\\ & = \frac { e^{-\lambda\vert x - i \vert}}{\sum_{j=0}^{n } e^{-\lambda\vert x - j \vert } } , \end{split}\ ] ] which , replaced in equation ( [ uni1 ] ) , provides us with the final expression of the extra - cost now the only parameters are the demand and the declared availability .we plot three sample curves for the normalized price in [ fig : prixunif ] , again for suppliers and .we see that we obtain a piecewise linear curve with a knee in , that can be approximated by the formula in this case the risk transfer may be partial or excessive .if the noise added is negative , so that the suppliers declare an availability lower than real ( ) , the price of the option is so that not the whole risk is transferred to end customers .the opposite case occurs when , which may make end customers pay more than the actual risk .the issue of option contracts in a privacy - aware market has been analysed , where the identity and level of stock of suppliers are kept hidden from end customers and potential competitors through a differential privacy scheme by the addition of laplace noise .the scheme employs a broker that acts as an intermediary between suppliers and end customers .pricing formulas for the option have been derived under three different models for the availability of items , which respectively assume a perfect correlation between suppliers , their independence ( hence , perfect uncorrelation ) , or a uniform distribution ( hence , a mild correlation ) .the option contract allows the broker to transfer part of its risk to end customers .naldi , m. , dacquisto , g. : option contracts for a privacy - aware market . in : ki 2015 , 38th german conference on artificial intelligence , workshop on privacy and inference .dresden ( sept 21 , 2015 ) , http://arxiv.org/abs/1509.06524 shoshani , a. : statistical databases : characteristics , problems , and some solutions . in : proceedings of the 8th international conference on very large data bases .. 208222 .morgan kaufmann publishers inc .
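for completeness , the same computation under the non - informative uniform prior reads as follows ; this is a sketch mirroring the previous one , and the piecewise - linear approximation quoted above can be checked against it numerically .

```python
import math

def uniform_prior_price(x, n, k_star, lam, c_s=1.0):
    """Posterior-weighted extra-cost as above, with a uniform (non-informative)
    prior on the number of available items."""
    weights = [math.exp(-lam * abs(x - i)) for i in range(n + 1)]
    z = sum(weights)
    return sum(c_s * (i - k_star) * weights[i] / z for i in range(k_star + 1, n + 1))

n, k_star, lam = 10, 4, 1.0
print([round(uniform_prior_price(x, n, k_star, lam), 3) for x in range(n + 1)])
```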
a marketplace is defined where the private data of suppliers ( e.g. , prosumers ) are protected , so that neither their identity nor their level of stock is revealed to end customers , while they can still sell their products at a reduced price . a broker acts as an intermediary : it provides the items missing to meet the customers demand and allows end customers to take advantage of reduced prices through the subscription of option contracts . formulas are provided for the option price under three different probability models for the availability of items . option pricing allows the broker to partially transfer its risk to end customers .
computer vision techniques have advanced video processing and intelligence generation for several challenging dynamic scenarios , research in computer vision for maritime is still in nascent state and several challenges remain open in this field .this paper presents some of the challenges unique to the maritime domain . a simple block diagram for processing of maritime videos is given in fig . [ fig : simpleblock ] , where the objective is to track foreground objects and generate intelligence and situation - awareness .foreground objects are the objects anchored , floating , or navigating in water , including sea vessels , small personal boats and kayaks , buoys , debris , etc .air vehicles , birds , and fixed structures , such as in ports , qualify as outliers or background .also , wakes , foams , clouds , water speckle , etc .qualify as background .the first four blocks form the core of video processing and the performance of these blocks directly affect the attainment of the objective .the challenges specific to these four blocks are discussed in the sections [ sec : horizon ] to [ sec : foreground ] , respectively .the challenges due to weather are discussed in section [ sec : weather ] .we use 3 datasets from three different sources to illustrate the challenges .two datasets are from the external sources , buoy dataset and mar - dct dataset .the camera in the buoy dataset is mounted on a floating buoy which is subject to significant amount of motion from one frame to another .the camera used in mar - dct dataset is mounted on a stationary platform on - shore .sometimes , zoom operations are used while capturing the videos .the third dataset singapore - marine - dataset is created by the authors using canon 70d camera .videos are acquired in two scenarios , namely at sea ( videos captured on - board a vessel in motion ) and on - shore ( videos captured with camera on a stationary platform on - shore ) .the details of the datasets are presented in table [ tab : datasets ] .we represent horizon using two parameters , the vertical position of the center of the horizon from the upper edge of the image , and the angular position made by the horizon with the horizontal axis .this is illustrated in fig .[ fig : horizonrep ] . in the case of cameras mounted on mobile platform ,the vertical and angular position is subject to large amount of motion , as noted in table [ tab : datasets ] . in table[ tab : datasets ] , e( ) and e( ) represent the mean values of and for a video . the ground truth for horizonis generated for each frame of these videos manually using independent volunteers .& buoy & & mar - dct + number & 10 & 11 & 28 & 9 + of videos & & & & + number & 998 & 2772 & 12604 & 7410 + of frames & & & & + + min(-e( ) ) & -281.68 & -436.30 & -13.54 & -52.32 + ( pixels ) & & & & + max(-e( ) ) & 307.82 & 467.86 & 9.95 & 35.69 + ( pixels ) & & & & + std . dev .& 107.98 & 145.10 & 1.52 & 9.98 + of ( pixels ) & & & & + min(-e( ) ) & -15.72 & -26.34 & -0.99 & -1.25 + ( degree ) & & & & + max(-e( ) ) & 20.72 & 12.99 & 0.51 & 1.75 + ( degree ) & & & & + std .dev . & 4.40 & 1.11 & 0.04 & 0.22 + of ( degree ) & & & & + + min number & 0 & 0 & 0 & 1 + of objects & & & & + max .number & 3 & 10 & 20 & 2 + of objects & & & & + and . 
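as a small illustration of the horizon representation described above , the following sketch converts a horizon line given by two endpoints in pixel coordinates into its vertical position at the image centre column and its angle with the horizontal axis ; the endpoint - based input format and the variable names are assumptions for illustration .

```python
import math

def horizon_parameters(x1, y1, x2, y2, image_width):
    """Convert a horizon line given by two endpoints (y measured downward
    from the upper edge of the image) into (vertical position at the image
    centre column, angle with the horizontal axis in degrees)."""
    slope = (y2 - y1) / (x2 - x1)
    centre_x = image_width / 2.0
    vertical_pos = y1 + slope * (centre_x - x1)
    angle = math.degrees(math.atan(slope))
    return vertical_pos, angle

print(horizon_parameters(0, 420.0, 1919, 402.0, 1920))
```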
]we discuss two state - of - the - art methods , which we succinctly refer to as fgsl ( abbreviation derived from the first alphabets of the authors names ) and eniw ( abbreviation derived from the first alphabets of the authors names ) , in the context of the present datasets .they are chosen as they both use a combination of two main approaches used for horizon detection , as discussed next .one popular approach is to detect the most prominent line feature through parametric projection of edges in the image space to the parametric space of line features , such as hough transform ( ht ) .this approach assumes that horizon appears as a long line feature in the image .we note that this approach uses projective mappings and parametric space and is different from another line of research on line fitting on edge maps .although we do not exclude the utility of dominant point detection and line fitting for horizon detection in the on - board maritime detection problems , we note that no research work on horizon detection has so far employed these techniques .the second popular approach is to select a candidate horizon solution that maximizes the statistical distances between the color distributions of the two regions created by the candidate solution .this approach assumes that sea and sky regions have color distributions with large statistical distance between them and that the candidate solution separates the regions into sea and sky regions . while they are similar in using statistical distribution as the main criterion and using prominent linear features as candidate solutions , they are different in the choice of statistical distance measures . + & + eniw & 0.92 & 71.82 & 15.30 & 1.38 + fgsl & 0.72 & 72.06 & 7.30 & 4.29 + & + eniw & 1.93 & 117.81 & 115.25 & 37.43 + fgsl & 1.59 & 118.14 & 115.25 & 198.58 + + & + eniw & 0.24 & 0.47 & 0.18 & 0.26 + fgsl & 0.20 & 0.49 & 0.18 & 0.64 + & + eniw & 0.46 & 1.10 & 0.38 & 1.18 + fgsl & 0.38 & 1.19 & 0.35 & 1.00 + + the performance of these methods is presented in table [ tab : horizonresults ] .it is seen that the methods perform extremely well for buoy dataset but perform poorly for the other datasets in terms of the vertical position of the horizon . in fig .[ fig : horizon ] , we show that the assumption behind the statistical approach used by both methods may not apply .we present one image from each dataset ( 3rd row ) , the horizon ground truth ( red solid line ) , the most prominent ht candidate ( green dashed line ) , and the color distributions of the regions created by them in fig . [ fig : horizon ] .for the first image , it is seen that the ht candidate for the horizon matches the ground truth and indeed the color distributions corresponding to the sea and sky regions match well . 
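to make the two ingredients of these methods concrete , here is a toy sketch that ranks prominent hough - transform line candidates by the statistical distance between the colour distributions of the two regions each candidate creates . it is not a re - implementation of either published method ( which use different distance measures ) ; the thresholds , the per - channel bhattacharyya distance and the opencv - based pipeline are assumptions for illustration .

```python
import cv2
import numpy as np

def detect_horizon(bgr, hough_threshold=150, n_candidates=10):
    """Toy horizon detector: Hough line candidates ranked by the colour
    distribution distance between the two regions each candidate creates."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, hough_threshold)
    if lines is None:
        return None
    h, w = gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    best, best_dist = None, -1.0
    for rho, theta in lines[:n_candidates, 0]:
        # pixels on one side of the candidate line x*cos(theta) + y*sin(theta) = rho
        above = (xs * np.cos(theta) + ys * np.sin(theta)) < rho
        if above.sum() < 100 or (~above).sum() < 100:
            continue
        dist = 0.0
        for c in range(3):  # per-channel Bhattacharyya distance, summed
            h1 = cv2.calcHist([bgr], [c], above.astype(np.uint8) * 255, [64], [0, 256])
            h2 = cv2.calcHist([bgr], [c], (~above).astype(np.uint8) * 255, [64], [0, 256])
            cv2.normalize(h1, h1)
            cv2.normalize(h2, h2)
            dist += cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)
        if dist > best_dist:
            best, best_dist = (float(rho), float(theta)), dist
    return best
```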
however , for the other three images , the hough transform candidates do not match with the ground truth .let us first consider the upper regions created by the ground truth and the hough candidates .for the singapore - marine dataset , the upper region created by the hough candidate includes the sky region and part of the sea region .this causes some change in the color distribution at lower color values .nevertheless , the distribution is clearly dominated by sky and statistical distance metrics may not be effective in distinguishing sea and sky regions effectively .for example , the mean values ( shown using vertical lines in the color distribution plots ) of the distributions corresponding to the incorrect horizon are not significantly different from the mean values of the distributions corresponding to the ground truth .numerically , the means show the same shift for all the color channels between the incorrect horizon and the ground truth .the maximum shift of 25 value ( between 0 to 256 digital values ) is observed for the third image for the sky region .the shift is caused by the inclusion of part of the sea in the upper region .in the other cases , the typical shift is 0 value to 5 values .the same observation applies to the example from mar - dct dataset as well , however with the shift observed in the bottom region .further , we note some frames from singapore - marine - dataset in fig . [fig : horichallenging ] , which are challenging due to reasons such as absence of line features of horizon , presence of competing line features ( such as through ships and vegetation ) , adverse effects of conditions such as haze and glint , etc .for all these images , we show below them their edge maps where red edges are long edges and green edges are the edges of medium length .the dearth of line features representing horizon is evident in these edge maps .also notable is that in conditions such as haze , the color distributions of sea and sky regions may be practically inseparable . the statistical distance between sea and sky distributionsmay be increased by adding extra spectral channels and abstract statistical distance metrics may be used through machine learning techniques . however, these approaches require sensor modification or their performance depends upon the diversity of the training dataset . +registration refers to the situation where different frames in a scene correspond to the same physical scene with matching world coordinates . in marine scenario , especially for sensors mounted on sea vessels and buoys , the unpredictable motion of the sensors often result in a complicated registration problem where even the consecutive frames are not registered and may have a large angular and positional shift , as noted in table [ tab : datasets ] .the angular difference between the two consecutive frames may have all the three angular components , viz .yaw , roll , and pitch .if the horizon is present , roll and pitch can be significantly corrected for since they result in the change of angle and position of the horizon , respectively .however , yaw can not be corrected for .this is illustrated using two consecutive frames from a video in the buoy dataset are used in fig .[ fig : registrationprobelm3 ] .it is seen that horizon based registration does reduce the differences ( see middle row , 3rd image ) but the zoom - ins shown in the bottom row clearly indicate that the boat and cloud have unequal horizontal difference between them . 
in this scenario , it is impossible to say if the cloud was stationary and the boat moved , or the boat was stationary and the cloud moved , or both of them moved .+ in order to correct for the yaw , we need some additional features that allow the detection of the horizontal staggering between two consecutive frames . the availability and possibility to detecting the stationary features is important for yaw correction .buildings , landmarks , and terrain features may serve this purpose , if they are present in the scene .for example , we consider two consecutive frames in fig .[ fig : registrationprobelm2 ] taken from another video in buoy dataset which does have stationary features .the result of registration using horizon only is shown in the middle row . however , using just a few manually selected stationary points on the shoreline , accuracy in registration is significantly enhanced , as seen in the third row .+ notably , although a ship may be stationary and can be easily detected , it is difficult to conclude whether the ship is stationary or not .also , it is discussed in that the line features in a scene with moving vessels and absence of stationary cues may enable registration only if the vessels in the scene are not rotating .thus , for a general maritime scenario , registration of frames is still a challenge . strictly speaking ,the best possible way of dealing with this scenario is the use of the ship s motion sensors and gyro sensors .nevertheless , some help can be derived from texture - based features for registration across frames , assuming that the generalized shapes of texture boundaries might not change significantly over few consecutive frames .another related approach is used in for registration , where a narrow horizontal strip is taken around the horizon in both the images and the shift at which the two images have maximum correlation is determined .this shift is used for registration .an example is shown in fig .[ fig : fefilatyev ] .optical flows may also be useful , although at significant computation cost . .( b ) the difference image after horizontal shift of 48 pixels , identified as the peak of the cross - correlation function.,title="fig : " ] +there are several useful surveys on the topic of background suppression in video sequences .water background is more difficult than other stationary as well as dynamic backgrounds because of several reasons .one reason is that water background is continuously dynamic both in spatial and temporal dimensions due to waves , whereas the background subtraction methods typically address dynamic backgrounds that where dynamics are either spatially restricted ( such as rustle of trees ) or temporally restricted ( such as a parked car ) .second reason is that waves have a high spatio - temporal correlations while the dynamic background subtraction methods implicitly infer high spatio - temporal correlations as patterned ( i.e. 
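the strip - correlation registration idea mentioned above can be sketched as follows ; the strip extraction , the normalised cross - correlation and the parameter names are illustrative assumptions .

```python
import numpy as np

def horizontal_shift_from_strips(strip_a, strip_b, max_shift=100):
    """Estimate the horizontal stagger between two frames from a narrow
    grayscale strip taken around the (roll/pitch-corrected) horizon in each
    frame: return the shift of strip_b relative to strip_a that maximises
    the normalised cross-correlation."""
    a = strip_a.astype(np.float64).mean(axis=0)   # collapse the strip rows
    b = strip_b.astype(np.float64).mean(axis=0)
    a -= a.mean()
    b -= b.mean()
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            x, y = a[s:], b[:len(b) - s]
        else:
            x, y = a[:len(a) + s], b[-s:]
        denom = np.linalg.norm(x) * np.linalg.norm(y)
        if denom == 0:
            continue
        corr = float(np.dot(x, y) / denom)
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift
```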
non - random ) movement of foreground objects .an associated difficulty in marine background detection is that the electro - optical sensor mounted on a mobile platform is subject to a lot of motion .most background learning methods learn background by assuming that a pixel remains background or foreground for at least a certain period of time .thus , background modelling depends upon the accuracy of registration , which is a challenging problem as discussed in the previous section .third reason is that wakes , foams , and speckle in water are inferred as foreground by typical background detection method whereas they are background in the context of maritime object detection problem . to illustrate the need for new algorithms addressing maritime background, we applied the 34 algorithms that participated in the change detection competition .this competition was conducted in 2014 as a part of a change detection workshop at a prestigious computer vision conference .it used a dataset of 51 videos comprising of about 140,000 frames separated into 11 categories of background challenges such as dynamic background , camera jitter , intermittent object motion , shadows , infrared videos , snow , storm , fog , low frame rate , night videos , videos from pan - tilt - zoom camera , and air air turbulence .since the dataset addressed several background challenges encountered in maritime videos as well and the submitted algorithms represented the state - of - the - art for these challenges , we tested their performance on singapore - marine - dataset . here , we show in fig . [fig : background ] the result of three methods for one frame of a video from on - shore singapore - marine dataset .the three methods are gaussian mixture model ( gmm ) , which models background s color distribution as mixture of gaussian distributions , gaussian background model of pfinder , which models the intensity at each background pixel as a single gaussian function and then clusters these gaussian functions as representing the background , and the self - balancing sensitivity segmenter ( subsense ) , which uses local binary similarity patterns at pixel levels for modeling background .it is seen that these methods are ineffective through producing false positives in the water region or through producing false negatives while suppressing water background .[ sec : foreground ] even with proper dynamic background subtraction , such that wakes , foams , clouds , etc . are suppressed , it is notable that further foreground segmentation can result in detection of mobile objects only . however , as noted in table [ tab : datasets ] , there are several stationary objects as well in the videos . 
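as a concrete baseline of the kind evaluated above , the following sketch runs opencv s gaussian - mixture background subtractor ( mog2 ) on a maritime video ; it stands in for the gmm - style methods discussed here and is not the exact implementation used in the competition .

```python
import cv2

def run_mog2(video_path, history=500, var_threshold=16):
    """Run OpenCV's MOG2 Gaussian-mixture background subtractor on a video;
    on water scenes it typically leaves wakes, foam and glint as
    false-positive foreground, which is the failure mode discussed above."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    varThreshold=var_threshold,
                                                    detectShadows=False)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        masks.append(subtractor.apply(frame))
    cap.release()
    return masks
```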
in table[ tab : datasets ] , the ground truth for stationary and dynamic objects have been generated for each video manually by independent volunteers .the segmented background has to be further analysed for detecting the static foreground objects .since the general dynamic background subtraction and foreground tracking problems do not require the detection of static objects , no integrated approaches exist that can simultaneously detect the stationary and mobile foreground objects .this is an open challenge for the maritime scenario .research for the problem of object detection in images may be applied for detection of objects in individual images , thus catering for both static and mobile objects .however , the complicated maritime environment with potential of occlusion , orientation , scale , and variety of objects make it computationally challenging .further , complicated motion patterns imply that frame to frame matching of objects for tracking is challenging if detection is performed independently for each frame .a maritime scene is subjected to a vast variety of weather and illumination conditions such as bright sunlight , twilight conditions , night , haze , rain , fog , etc .further , the solar angles induce different speckle and glint conditions in the water .tides also influence the dynamicity of water . the situations that affect the visibility influence the contrast , statistical distribution of sea and water , and visibility of far located objects .effects such as speckle and glint create non - uniform background statistics which need extremely complicated modelling such that foreground is not detected as the background and vice versa .also , the color gamuts for illumination conditions such as night ( dominantly dark ) , sunset ( dominantly yellow and red ) , and bright daylight ( dominantly blue ) , and hazy conditions ( dominantly gray ) also vary significantly . as a consequence ,the suitable methods and models for one weather and illumination condition is not effective for other conditions .seamless selection of approaches and transition between one approach to another with varying conditions is important for making maritime processing practically useful .as discussed above , maritime video processing problem poses challenges that are absent or less severe in other video processing applications .it needs unique solutions that address these challenges .it also needs algorithms with better adaptability to the various conditions encountered in maritime scenario .thus , the field is rich with possibilities of innovation in maritime video processing technology .we hope that the discussion here motivates the researchers to pursue maritime video processing challenges with enthusiasm and vigour . 10 d. k. prasad , d. rajan , l. rachmawati , e. rajabaly , and c. quek , `` video processing from electro - optical sensors for object detection and tracking in maritime environment : a survey , '' _ intelligent transportation systems , ieee transactions on _ , 2017 .d. d. bloisi , l. iocchi , a. pennisi , and l. tombolini , `` argos - venice boat classification , '' in _ advanced video and signal based surveillance ( avss ) , 2015 12th ieee international conference on _ , 2015 , pp . 16 .d. k. prasad , m. k. leung , c. quek , and m. s. brown , `` deb : definite error bounded tangent estimator for digital curves , '' _ ieee transactions on image processing _ ,23 , no .42974310 , 2014 .d. cheng , d. k. prasad , and m. s. 
brown , `` illuminant estimation for color constancy : why spatial - domain methods work and the role of the color distribution , '' _ josa a _ , vol . 31 , no . 5 , pp . 10491058 , 2014 .s. fefilatyev , v. smarodzinava , l. o. hall , and d. b. goldgof , `` horizon detection using machine learning techniques , '' in _ international conference on machine learning and applications _ , 2006 , pp .1721 .d. dusha , w. boles , and r. walker , `` attitude estimation for a fixed - wing aircraft using horizon detection and optical flow , '' in _ digital image computing techniques and applications _ , 2007 , pp .485492 .s. y. elhabian , k. m. el - sayed , and s. h. ahmed , `` moving object detection in spatial domain using background removal techniques - state - of - art , '' _ recent patents on computer science _, vol . 1 , no . 1 ,pp . 3254 , 2008 .a. sobral and a. vacavant , `` a comprehensive review of background subtraction algorithms evaluated with synthetic and real videos , '' _ computer vision and image understanding _ , vol .122 , pp . 421 , 2014 .y. wang , p .-jodoin , f. porikli , j. konrad , y. benezeth , and p. ishwar , `` cdnet 2014 : an expanded change detection benchmark dataset , '' in _ proceedings of the ieee conference on computer vision and pattern recognition workshops _ , 2014 , pp .387394 . c. r. wren , a. azarbayejani , t. darrell , and a. p. pentland , `` pfinder : real - time tracking of the human body , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol. 19 , no . 7 , pp . 780785 , 1997 .st - charles , g .- a .bilodeau , and r. bergevin , `` subsense : a universal change detection method with local adaptive sensitivity , '' _ image processing , ieee transactions on _ , vol .24 , no . 1 ,359373 , 2015 .d. bloisi , l. iocchi , m. fiorini , and g. graziano , `` automatic maritime surveillance with visual target detection , '' in _ proc . of the international defense and homeland security simulation workshop _, 2011 , pp . 141145 .
this paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras . even well documented problems such as horizon detection and registration of frames in a video become very challenging in maritime scenarios , and more advanced problems such as background subtraction and object detection in video streams are harder still . the dynamic nature of the background , the unavailability of static cues , the presence of small objects against distant backgrounds , and illumination effects all contribute to these challenges , as discussed here .
global constraints are one of the distinguishing features of constraint programming .they capture common modelling patterns and have associated efficient propagators for pruning the search space . for example , is one of the best known global constraints that has proven useful in the modelling and solving of many real world problems .a number of efficient algorithms have been proposed to propagate the constraint ( e.g. ) . whilst there is little debate that is a global constraint ,the formal definition of a global constraint is more difficult to pin down .one property often associated with global constraints is that they can not be decomposed into simpler constraints without impacting either the pruning or the efficiency of propagation .recently progress has been made on the theoretical problem of understanding what is and is nt a global constraint .in particular , whilst a bound consistency propagator for the constraint can be effectively simulated with a simple decomposition , circuit complexity lower bounds have been used to prove that a domain consistency propagator for can not be polynomially simulated by a simple decomposition . in this paper, we turn to a strict generalization of the constraint .counts the number of values used by a set of variables ; the constraint ensures that this count equals the cardinality of the set . from a theoretical perspective , the constraint is significantly more difficult to propagate than the constraint since enforcing domain consistency is known to be np - hard .moreover , as is a generalization of , there exists no polynomial sized decomposition of which achieves domain consistency .nevertheless , we show that decomposition can simulate the polynomial time algorithm for enforcing bound consistency on but with a significant space complexity .we also prove , for the first time , that range consistency on can be enforced in the same worst case time complexity as bound consistency .this contrasts with the constraint where range consistency takes time but bound consistency takes just time . the main value of these decompositions is theoretical as their space complexity is equal to their worst case time complexity .when domains are large , this space complexity may be prohibitive . in the conclusion ,we argue why it appears somewhat inevitable that the space complexity is equal to the worst case time complexity .these results suggest new insight into what is and is nt a global constraint : a global constraint either provides more pruning than any polynomial sized decomposition or provides the same pruning but with lower space complexity .there are several other theoretical reasons why the decompositions studied here are interesting .first , it is technically interesting that a complex propagation algorithm like the bound consistency propagator for can be simulated by a simple decomposition .second , these decompositions can be readily encoded as linear inequalities and used in linear programs . in fact , we will report experiments using both constraint and integer linear programming with these decompositions . 
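the simple decomposition ( [ dec1])([dec3 ] ) can be written down directly in a modern cp solver . the sketch below uses or - tools cp - sat purely to illustrate the encoding ; the solver s own propagation is not the bound - consistency analysis studied here , and the variable names are assumptions .

```python
from ortools.sat.python import cp_model

def simple_nvalue_decomposition(model, xs, n_var, d):
    """Encode the simple decomposition: boolean b[j] is 1 iff value j is
    used by some x_i, and n_var equals the sum of the b[j]."""
    # direct (value) encoding of each x_i: lit[i][j-1] <-> (x_i = j)
    lit = []
    for i, x in enumerate(xs):
        row = [model.NewBoolVar(f"x{i}_is_{j}") for j in range(1, d + 1)]
        model.Add(sum(row) == 1)                                  # exactly one value
        model.Add(x == sum(j * row[j - 1] for j in range(1, d + 1)))
        lit.append(row)
    b = []
    for j in range(1, d + 1):
        bj = model.NewBoolVar(f"b_{j}")
        model.AddMaxEquality(bj, [lit[i][j - 1] for i in range(len(xs))])
        b.append(bj)
    model.Add(sum(b) == n_var)                                    # counting constraint
    return b

model = cp_model.CpModel()
d = 4
xs = [model.NewIntVar(1, d, f"x{i}") for i in range(5)]
n_var = model.NewIntVar(1, d, "n")
simple_nvalue_decomposition(model, xs, n_var, d)
solver = cp_model.CpSolver()
status = solver.Solve(model)
print(status in (cp_model.OPTIMAL, cp_model.FEASIBLE))
```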
since global constraints are one of the key differentiators between constraint and integer programming , these decompositions provide us with another tool to explore the interface between constraint and integer programming .third , the decompositions give insights into how we might add nogood learning to a propagator .a constraint satisfaction problem ( csp ) consists of a set of variables , each with a finite domain of values , and a set of constraints .we use capitals for variables and lower case for values .we assume values are taken from the set 1 to .we write for the domain of possible values for , for the smallest value in , for the greatest , and for the interval ] ensures that .this generalizes several other global constraints including ( which ensures that the number of values taken by a set of variables equals the cardinality of the set ) and ( which ensures a set of variables take more than one value ) .enforcing domain consistency on the constraint is np - hard ( theorem 3 in ) even when is fixed ( theorem 2 in ) .in fact , just computing the lower bound on is np - hard ( theorem 3 in ) . in addition , enforcing domain consistency on the constraint is not fixed parameter tractable since it is [2]-complete .however , several polynomial propagation algorithms have been proposed that achieve bound consistency and some closely related levels of local consistency .global constraints can often be decomposed into simpler , more primitive and small arity constraints .for example , the constraint can be decomposed into a quadratic number of binary inequalities .however , such decomposition often hinders propagation and can have a significant impact on the solver s ability to find solutions .we can decompose the constraint by introducing 0/1 variables to represent which values are used and posting a sum constraint on these introduced variables : note that constraint [ dec3 ] is not a fixed arity constraint , but can itself be decomposed to ternary sums without hindering bound propagation .unfortunately , this simple decomposition hinders propagation .it can be bc whereas bc on the corresponding constraint detects disentailment .bc on is stronger than bc on its decomposition into ( [ dec1 ] ) to ( [ dec3 ] ) .* proof : * clearly bc on is at least as strong as bc on the decomposition . to show strictness , consider , , for , and . constraints ( [ dec1 ] ) to ( [ dec3 ] ) are bc .however , the corresponding constraint has no bound support and thus enforcing bc on it detects disentailment .we observe that enforcing dc instead of bc on constraints ( [ dec1 ] ) to ( [ dec3 ] ) in the example of the proof above still does not prune any value . to decompose without hindering propagation, we must look to more complex decompositions .our first step in decomposing the constraint is to split it into two parts : an and an constraint .,n) ] holds iff .consider a constraint over the following variables and values : suppose we decompose this into an and an constraint .consider the constraint .the 5 variables can take at most 4 different values because , and can only take values and .hence , there is no bound support for .enforcing bc on the constraint therefore prunes .consider now the constraint .since and guarantee that we take at least 2 different values , there is no bound support for . hence enforcing bc on an constraint prunes .if , or , or then any complete assignment uses at least 3 different values . 
hence there is also no bound support for these assignments .pruning these values gives bound consistent domains for the original constraint : to show that decomposing the constraint into these two parts does not hinder propagation in general , we will use the following lemma . given an assignment of values , denotes the number of distinct values in .given a vector of variables , and . [ nvalue : prop_1 ]consider , n) ] , then the bounds of have bound supports . * proof : * let be an assignment of in with and be an assignment of in with . consider the sequence where is the same as except that has been assigned its value in instead of its value in . because they only differ on . hence, for any ] .therefore , and have a bound support .we now prove that decomposing the constraint into and constraints does not hinder pruning when enforcing bc .[ t : decom_nvalue ] bc on , n) ] and on , n) ] . by lemma [ nvalue : prop_1 ], the variable is bound consistent .consider a variable / bound value pair .let be a bound support of in the constraint and be a bound support of in the constraint .we have and by definition of and .consider the sequence where is the same as except that has been assigned its value in instead of its value in . because they only differ on .hence , there exists with .we know that and belong to because they belong to bound supports . thus , and is a bound support for on , n) ] , and `` pyramid '' variables , with domains ] . to constrain these introduced variables , we post the following constraints : & \ \ \\forall \ ; 1 \leq i \leq n , 1 \leq l \leq u \leq d \label{eqn::firstatmostnvalue } \\ & a_{ilu } \leq m_{lu } & \ \ \ \forall \ ; 1 \leq i \leq n , 1 \leq l \leq u \leq d \label{eqn::lb_atmostnvalue } \\ & m_{1u } = m_{1k } + m_{(k+1)u } & \ \\ \forall \ ; 1 \leq k < u\leq d \label{eqn::pyram_atmostnvalue}\\ & m_{1 d } \leq n & \label{eqn::lastatmostnvalue}\end{aligned}\ ] ] consider the decomposition of an constraint over the following variables and values : observe that we consider that value 5 for has already been pruned by , as will be shown in next sections .bound consistency reasoning on the decomposition will make the following inferences . as , from we get . hence by , .similarly , as , we get and . now . by and , , , , , .since , we deduce that and hence .this gives . by , . finally ,from , we get and .this gives us bound consistent domains for the constraint .we now prove that this decomposition does not hinder propagation in general .[ thm : atmost - bc ] bc on constraints ( [ eqn::firstatmostnvalue ] ) to ( [ eqn::lastatmostnvalue ] ) is equivalent to bc on , n) ] be the corresponding ordered set of disjoint ranges of the variables in .it has been shown in that .consider the interval \in i_y ] are greater than or equal to and constraints ( [ eqn::pyram_atmostnvalue ] ) ensure that the variable is greater than or equal to the sum of lower bounds of variables , ] are disjoint . therefore , the variable is greater than or equal to and it is bound consistent . we show that when is bc and , all variables are .take any assignment such that .let ] because only one variable has been flipped .hence , any assignment with is a bound support . contains such a value by assumption .the only case when pruning might occur is if the variable is ground and . 
constraints ( [ eqn::pyram_atmostnvalue ] ) imply that equals the sum of variables .the lower bound of the variable is greater than one and there are of these intervals .therefore , by constraint , the upper bound of variables that correspond to intervals outside the set are forced to zero .there are constraints ( [ eqn::firstatmostnvalue ] ) and constraints ( [ eqn::lb_atmostnvalue ] ) that can be woken times down the branch of the search tree .each requires time for a total of down the branch .there are constraints ( [ eqn::pyram_atmostnvalue ] ) which can be woken times down the branch and each invocation takes time .this gives a total of .the final complexity down the branch of the search tree is therefore .the proof of theorem [ thm : atmost - bc ] also provides the corollary that enforcing range on consistency on constraints [ eqn::firstatmostnvalue ] enforces range consistency on .note that theorem [ thm : atmost - bc ] shows that the bc propagator of is not algorithmically global with respect to time , as bc can be achieved with a decomposition with comparable time complexity .on the other hand , the space complexity of this decomposition suggests that it is algorithmically global with respect to space .of course , we only provide upper bounds here , so it may be that is not algorithmically global with respect to either time or space .there is a similar decomposition for the constraint .we introduce 0/1 variables , to represent whether uses a value in the interval ] to count the number of times values in ] exceeds the number of values in ] for ,from we get for .hence , by , . by , , .since we deduce that . finally , from and the fact that , we get .this gives us bound consistent domains for the constraint .we now prove that this decomposition does not hinder propagation in general .[ thm : atleastnvalue - bc ] bc on the constraints to is equivalent to bc on , n) ] into a set of maximal saturated intervals \}]such that .let \mid j \in [ 1\ldots k ] \} ] is smaller than ] , ] such that | < \sum_{i \in x \setminus x_i } min(a_{ibc}) ] than variables whose domain is contained in ] , so that is the number of values inside the interval ] is greater than or equal to . the total number of variables inside the interval ] is .therefore , we obtain the inequality or . by construction of , , otherwise the intervals in that are subsets of ]. the value can not be in any interval in , because all values in \in i ] .in addition , can not be in an interval ] be the assignment where the value of in has been replaced by , one of the bounds of .we know that )\in [ card(s)-1 , card(s)+1 ] = [ card_\uparrow(x)-1 , card_\uparrow(x)+1] ] are forced to 0 as well .thus , by constraints and , all variables in have their bounds pruned if they belong to a hall interval of other variables in .this is what bc on the constraint does .there are constraints that can be woken times down the branch of the search tree in , so a total of down the branch .there are constraints which can be propagated in time down the branch for a .there are constraints which can be woken times each down the branch for a total cost in time down the branch .thus a total of .the final complexity down the branch of the search tree is therefore .the complexity of enforcing bc on can be improved to in a way similar to that described in section [ sec : atmost : faster ] and in . as with, enforcing rc on constraints ( [ eqn::atleastnvalue-1 ] ) enforces rc on , but in this case we can not reduce the complexity below . 
similarly to , theorem [ thm : atleastnvalue - bc ] shows that the bound consistency propagator of is not algorithmically global with respect to time and provides evidence that it is algorithmically global with respect to space .we can improve how the solver handles this decomposition of the constraint by adding implied constraints and by implementing specialized propagators .our first improvement is to add an implied constraint and enforce bc on it : this does not change the asymptotic complexity of reasoning with the decomposition , nor does it improve the level of propagation achieved . however , we have found that the fixed point of propagation is reached quicker in practice with such an implied constraint .our second improvement decreases the asymptotic complexity of enforcing bc on the decomposition of section [ sec : atmost ] .the complexity is dominated by reasoning with constraints which channel from to and thence onto ( through constraints ) .if constraints are not woken uselessly , enforcing bc costs per constraint down the branch . unfortunately ,existing solvers wake up such constraints as soon as a bound is modified , thus giving a cost in .we therefore implemented a specialized propagator to channel between and efficiently . to be more precise , we remove the variables and replace them with boolean variables .we then add the following constraints these constraints are enough to channel changes in the bounds of the variables to .there are constraints , each of which can be propagated in time over a branch , for a total of .there are clausal constraints and each of them can be made bc in time down a branch of the search tree , for a total cost of . since channeling dominates the asymptotic complexity of the entire decomposition of section [ sec : atmost ] , this improves the complexity of this decomposition to .this is similar to the technique used in to improve the asymptotic complexity of the decomposition of the constraint .our third improvement is to enforce stronger pruning by observing that when , we can remove the interval ] .the following constraint ensures that . ) \label{eq : domainbitmap}\end{aligned}\ ] ] clearly we can enforce rc on this constraint in time over a branch , and for all variables .we can then use the following clausal constraints to channel from variables to these variables and on to the variables .these constraints are posted for every and integers such that : the variable , similarly to the variables , is true when ] are removed from the domains of all variables .suppose ] .then , by ( [ eq : channel - range - first])([eq : channel - range - last ] ) , we get and . from and ( [ eq : channel - b - first])([eq : channel - b - last ] )we get , , , and by , the interval ] to be removed from , so \cup \ { 10 \} ] constraint .the value belongs to iff there exists an edge in the queen s graph or . for , all minimum dominating sets for the queen s problem are either of size or .we therefore only solved instances for these two values of .we compare our decomposition with the simple decomposition of the constraint in ilog solver and ilog cplex solvers .the simple decomposition is the one described in section [ sec : nvalue : simple ] except that in constraint , we replace `` '' by `` '' .we denote this decomposition and in ilog solver and cplex , respectively . 
to encode this decomposition into an integer linear program , we introduce literals , ] .we use the direct encoding of variables domains to avoid using logic constraints , like disjunction and implication constraints in cplex .the default transformation of logic constraints in cplex appears to generate large ilp models and this slows down the search .the bc decomposition is described in section [ sec : atmost ] , which we call and in ilog solver and cplex , respectively . in ilog solver , as explained in section [ sec : atmost : faster ] , we channel the variables directly to the pyramid variables to avoid introducing many auxiliary variables and we add the redundant constraint to the decomposition to speed up the propagation across the pyramid .we re - implemented the ternary sum constraint in ilog for a 30% speedup .to encode the bc decomposition into an integer linear program , we use the linear encoding of variables domains .we introduce literals for the truth of , and the channeling inequalities of the form .we again add the redundant constraint . finally , we post constraints as lazy constraints in clpex .lazy constraints are constraints that are not expected to be violated when they are omitted . these constraints are not taken into account in the relaxation of the problem and are only included when they violate an integral solution . .[t: t1 ] backtracks and rumtime ( in seconds ) to solve the dominating set problem for the queen s graph . [ cols="^,^,>,>,>,>,>,>,>,>",options="header " , ] results of our experiments are presented in table [ t : t1 ] . our bc decomposition performs better than the decomposition , both in runtime and in number of backtracks needed by ilog solver or cplex .cplex is slower per node than ilog solver .however , cplex usually requires fewer backtracks compared to ilog solver .interestingly cplex performs well with the bc decomposition .the time to explore each node is large , reflecting the size of decomposition , but the number of search nodes explored is small .we conjecture that integer linear programming methods like cplex will perform in a similar way with other decompositions of global constraints which do not hinder propagation ( e.g. the decompositions we have proposed for and ). finally , the best results here are comparable with those for the bounds consistency propagator in .bessiere _ et al . _ consider a number of different methods to compute a lower bound on the number of values used by a set of variables .one method is based on a simple linear relaxation of the minimum hitting set problem .this gives a propagation algorithm that achieves a level of consistency strictly stronger than bound consistency on the constraint .cheaper approximations are also proposed based on greedy heuristics and an approximation for the independence number of the interval graph due to turn .decompositions have been given for a number of other global constraints .for example , beldiceanu _ et al . _ identify conditions under which global constraints specified as automata can be decomposed into signature and transition constraints without hindering propagation . as a second example , many global constraints can be decomposed using and which can themselves be propagated effectively using simple decompositions . as a third example, the and constraints can be decomposed without hindering propagation . 
as a fourth example , decompositions of the constraint have been shown to be effective .most recently , we demonstrated that the and constraint can be decomposed into simple primitive constraints without hindering bound consistency propagation .these decompositions also introduced variables to count variables using values in an interval .for example , the decomposition of ensures that no interval has more variables taking values in the interval than the number of values in the interval . using a circuit complexity lower bound , we also proved that there is no polynomial sized sat decomposition of the constraint ( and therefore of its generalizations like ) on which unit propagation achieves domain consistency .our use of `` pyramid '' variables is similar to the use of the `` partial sums '' variables in the encoding of the constraint in .this is related to the cumulative sums computed in .we have studied a number of decompositions of the constraint .we have shown that a simple decomposition can simulate the bound consistency propagator for with comparable time complexity but with a much greater space complexity .this supports the conclusion that the benefit of a global propagator may often not be in saving time but in saving space .our other theoretical contribution is to show the first range consistency algorithm for , that runs in time and space .these results are largely interesting from a theoretical perspective .they help us understand the globality of global constraints .they highlight that saving space may be one of the important advantages provided by propagators for global constraints .we have seen that the space complexity of decompositions of many propagators equals the worst case time complexity ( e.g. for the , , , , , and constraints ) . for global constraints like, the space complexity of the decompositions does not appear to be that problematic .however , for global constraints like , the space complexity of the decompositions is onerous .this space complexity seems hard to avoid .for example , consider encodings into satisfiability and unit propagation as our inference method .as unit propagation is linear in time in the size of the encoding , it is somewhat inevitable that the size of any encoding is the same as the worst - case time complexity of any propagator that is being simulated .one other benefit of these decompositions is that they help us explore the interface between constraint and integer linear programming .for example , we saw that an integer programming solver performed relatively well with these decompositions. * acknowledgements .* nicta is funded by the department of broadband , communications and the digital economy , and the arc .christian bessiere is supported by anr project anr-06-blan-0383 - 02 , and george katsirelos by anr unloc project : anr 08-blan-0289 - 01 .we thank lanbo zheng for experimental help .bessiere , c. , hebrard , e. , hnich , b. , kiziltan , z. , walsh , t. : filtering algorithms for the nvalue constraint . in : proc .2nd int . conf . on integration of ai and or techniques in constraint programming for combinatorial optimization problems .( 2005 ) beldiceanu , n. : pruning for the minimum constraint family and for the number of distinct values constraint family . in : proc . of 7th int .conf . on principles and practice of constraint programming ( cp2001 ) , ( 2001 ) 211224 ohrimenko , o. , stuckey , p. , codish , m. : propagation = lazy clause generation . in : proc . of 13th int .conf . 
on principles and practice of constraint programming ( cp-2007 ) , ( 2007 ) beldiceanu , n. , carlsson , m. , debruyne , r. , petit , t. : .constraints * 10 * ( 2005 ) 339362 brand , s. , narodytska , n. , quimper , c.g . , stuckey , p. , walsh , t. : encodings of the sequence constraint . in : proc . of 13th int .conf . on principles and practice of constraint programming ( cp-2007 ) , ( 2007 ) van hoeve , w.j . ,pesant , g. , rousseau , l.m . , sabharwal , a. : revisiting the sequence constraint . in : proceedings of the 12th int .conf . on principles and practice of constraint programming ( cp-2006 ) , ( 2006 )
we study decompositions of the global constraint . our main contribution is theoretical : we show that there are propagators for global constraints like this one which decompositions can simulate with the same time complexity but with a much greater space complexity . this suggests that the benefit of a global propagator may often lie not in saving time but in saving space . our other theoretical contribution is to show , for the first time , that range consistency can be enforced with the same worst - case time complexity as bound consistency . finally , the decompositions we study are readily encoded as linear inequalities , so we are also able to use them in integer linear programs .
we use the term `` varentropy '' as an abbreviation for `` variance of the conditional entropy random variable '' following the usage in . in his pioneering work, strassen showed that the varentropy is a key parameter for estimating the performance of optimal block - coding schemes at finite ( non - asymptotic ) block - lengths .more recently , the comprehensive work by polyanskiy , poor and verd ' u further elucidated the significance of varentropy ( under the name `` dispersion '' ) and rekindled interest in the subject . in this paper , we study varentropy in the context of polar coding . specifically , we track the evolution of average varentropy in the course of polar transformation of independent identically distributed ( i.i.d . )bdes and show that it decreases to zero asymptotically as the transform size increases . as a side result, we obtain an alternative derivation of the polarization results of , .our setting will be that of binary - input memoryless channels and binary memoryless sources .we treat source and channel coding problems in a common framework by using the neutral term `` binary data element '' ( bde ) to cover both .formally , a bde is any pair of random variables where takes values over ( not necessarily from the uniform distribution ) and takes values over some alphabet which may be discrete or continuous .a bde may represent , in a source - coding setting , a binary data source that we wish to compress in the presence of some side information ; or , it may represent , in a channel - coding setting , a channel with input and output . given a bde , the information measures of interest in the sequel will be the _ conditional entropy random variable _ the _ conditional entropy _ and , the _ varentropy _ throughout the paper , we use base - two logarithms .the term _ polar transform _ is used in this paper to to refer to an operation that takes two _ independent _ bdes and as input , and produces two new bdes and as output , where , , and .the notation `` '' denotes modulo-2 addition .the main result of the paper is the following .[ theoremvarentropy ] the varentropy is nonincreasing under the polar transform in the sense that , if , are any two independent bdes at the input of the transform and , are the bdes at its output , then with equality if and only if ( iff ) either or . for an alternative formulation of the main result ,let us introduce the following notation : theorem [ theoremvarentropy ] can be reformulated as follows .theoremvarentropy[theoremvarentropy2 ] the polar transform of conditional entropy random variables , , produces positively correlated output entropy terms in the sense that with equality iff either or .this second form makes it clear that any reduction in varentropy can be attributed entirely to the creation of a positive correlation between the entropy random variables and at the output of the polar transform . showing the equivalence of the two claims and is a simple exercise .we have , by the chain rule of entropy , hence , . since and are independent , ; while .thus , the claim , which can be written in the equivalent form is true iff holds . a technical question that arises in the sequel is whether the varentropy is uniformly bounded across the class of all bdes .this is indeed the case .[ lemma : bound ] for any bde , .it suffices to show that the second moment of satisfies the given bound . 
\le \max_{0\le x\le 1 } [ x\log^2(x)+(1-x)\log^2(1-x)]\\ & \le 2\max_{0\le x\le 1 } [ x\log^2(x ) ] = 8 e^{-2}\log^2(e ) \approx 2.2434.\end{aligned}\ ] ] ( a numerical study shows that a more accurate bound on is , but the present bound will be sufficient for our purposes . ) this bound guarantees that all varentropy terms in this paper exist and are bounded ; it also guarantees the existence of the covariance terms since by the cauchy - schwarz inequality we have .we will end this part by giving two examples in order to illustrate the behavior of varentropy under the polar transform .the terminology in both examples reflects a channel coding viewpoint ; although , each model may also arise in a source coding context .[ examplebsc ] in this example , models a binary symmetric channel ( bsc ) with equiprobable inputs and a crossover probability ; in other words , and take values in the set with fig .[ figurebscvarentropy ] gives a sketch of the varentropy and covariance terms defined above , with denoting the common value of and ) .( formulas for computing the varentropy terms will be given later in the paper . )the non - negativity of the covariance is an indication that the varentropy is reduced by the polar transform .[ examplebec ] here , represents a binary erasure channel ( bec ) with equiprobable inputs and an erasure probability . in other words , takes values in , takes values in , and in this case , there exist simple formulas for the varentropies . the covariance is given by .the corresponding curves are plotted in fig .[ figurebecvarentropy ] .the rest of the paper is organized as follows . in section [ representations ] , we define two canonical representations for a bde that eliminate irrelevant details from problem description and simplify the analysis . in section [sect : correlation ] , we review some basic facts about the covariance function that are needed in the remainder of the paper . section [ sectionproof ] contains the proof of theorem [ theoremvarentropy2 ] .section [ higherorder ] considers the behavior of varentropy under higher - order polar transforms and contains a self - contained proof of the main polarization result of . throughout, we will often write to denote for a real number . for , we will write to denote the convolution .the information measures of interest relating to a given bde are determined solely by the joint probability distribution of ; the specific forms of the alphabets and play no role .we have already fixed as so as to have a standard representation for .it is possible and desirable to re - parametrize the problem , if necessary , so that also has a canonical form .such canonical representations have been given for binary memoryless symmetric ( bms ) channels in .the class of bdes under consideration here is more general than the class of bms channels , but similar ideas apply .we will give two canonical representations for bdes , which we will call the -representation and the -representation .the -representation replaces with a canonical alphabet ] ; it is `` lossy '' , but happens to be more convenient than the -representation for purposes of proving theorem [ theoremvarentropy2 ] . 
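both examples can be reproduced numerically from the joint distribution alone . the sketch below computes the conditional entropy and the varentropy of a bde directly from p(x , y ) and instantiates it for the bec and bsc of the two examples ; it is an illustration , not the representation - based formulas used later in the paper .

```python
import math

def conditional_entropy_and_varentropy(joint):
    """joint[(x, y)] = P(X=x, Y=y).  Returns (H(X|Y), V(X|Y)) in bits."""
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    h_given_y, H = {}, 0.0
    for y, py in p_y.items():
        h = 0.0
        for (x, yy), p in joint.items():
            if yy == y and p > 0:
                q = p / py
                h -= q * math.log2(q)
        h_given_y[y] = h
        H += py * h
    V = sum(py * (h_given_y[y] - H) ** 2 for y, py in p_y.items())
    return H, V

def bec(eps):    # uniform input, erasure probability eps
    return {(0, 0): 0.5 * (1 - eps), (0, 'e'): 0.5 * eps,
            (1, 1): 0.5 * (1 - eps), (1, 'e'): 0.5 * eps}

def bsc(delta):  # uniform input, crossover probability delta
    return {(0, 0): 0.5 * (1 - delta), (0, 1): 0.5 * delta,
            (1, 0): 0.5 * delta, (1, 1): 0.5 * (1 - delta)}

print(conditional_entropy_and_varentropy(bec(0.3)))   # H = 0.3, V = 0.3 * 0.7
print(conditional_entropy_and_varentropy(bsc(0.11)))  # V = 0: h(X|Y=y) is the same for both outputs
```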
given a bde , we associate to each the parameter and define .the random variable takes values in the set , which is always a subset of ] , is the binary entropy function .likewise , the varentropy is given by ^ 2 \label{eq : varentropya},\ ] ] where and finally , we note that .thus , all information measures of interest in this paper can be computed given knowledge of the distribution of .although the -representation eliminates much of the irrelevant detail from , there is need for an even more compact representation for the type of problems considered in the sequel .this more compact representation is obtained by associating to each the parameter we define the -representation of as the random variable .we denote the range of by and note that ] .as it is evident from , the conditional entropy random variable can not be expressed as a function of . however ,if the cdf of is known , we can compute and by the following formulas that are analogous to and : ^ 2.\ ] ] to see that is less than a `` sufficient statistic '' for information measures , one may note that is not determined by knowledge of alone .for example , for a bde with , we have , independently of . despite its shortcomings, the -representation will be useful for our purposes due to the fact that the binary entropy function is monotone over ] .thus , the random variable is a monotone function of over the range of , but is not necessary so over the range of .this monotonicity will be important in proving certain correlation inequalities later in the paper .table [ table : classification ] gives a classification of a bde in terms of the properties of .the classification allows an erasing bde to be extreme as a special case ..classification of bdes [ cols="<,<",options="header " , ] [ table : extremepolar ] the following proposition states more precisely the way the -parameters evolve when one of the input bdes is extreme .[ prop : polarextreme ] if is extreme , then the -parameters at the output are given by if is extreme , then and hold after interchanging and .suppose ( perfect ) , then can only take the values and , and we obtain from that thus , , completing the proof of the first case in .we skip the proof of the remaining three cases since they follow by similar reasoning .in this part , we collect some basic facts about the covariance function , which we will need in the following sections .the first result is the following formula for splitting a covariance into two parts .[ lemmacovariancedecomposition ] let , be jointly distributed random vectors over and , respectively .let be functions such that ] .partial and conditional expectations and covariances will be denoted by , , , , etc .due to the 1 - 1 nature of the correspondence between and , expectation and covariance operators such as and will be equivalent to and , respectively .we will prefer to use expectation operators in terms of the primary variables and rather than the secondary ( derived ) variables such as , , , to emphasize that the underlying space is .we note that , due to the independence of and , and are independent ; likewise , and are independent . 
as the first step of the proof of theorem [ theoremvarentropy2 ], we use the covariance decomposition formula to write for brevity , we will use the notation to denote the two terms on the right hand side of .our proof of theorem [ theoremvarentropy2 ] will consist in proving the following two statements .[ proposition1 ] we have , with equality iff either or is an erasing bde .[ proposition2 ] we have .we note that iff , of the two bdes and , either one is extreme or both are pure .we note this only for completeness but do not use it in the paper .the rest of the section is devoted to the proof of the above propositions. for ] with equality iff or .we use to write where and .thus , instead of proving , it suffices to prove for .in fact , using , it suffices to prove for . assuming , it is straightforward to show that thus , if we write out the expression for , as in with in place of , we can see easily that each of the four factors on the right hand side of that expression are non - negative .more specifically , the logarithmic term is non - negative due to the first inequality in and the bracketed term is non - negative due to the second inequality in .this completes the proof that for all ] simplifies to substituting this in the preceding equation and writing out the sum over explicitly, we obtain .\end{aligned}\ ] ] expressing each factor on the right side of the above equation in terms of , , we see that it equals .taking expectations , we obtain .the alternative formula follows from the fact that due to the symmetries .proposition [ proposition1 ] now follows readily .we have since for all ] .these functions will be used to give an explicit expression for .first , we note some symmetry properties of the two functions . for , we have we omit the proofs since they are immediate . [ cov2formula ] we have , for , these results follow from , , and .we compute as follows . for the second term , we use the entropy conservation . the second form of the formulas in terms of follow from the symmetry properties . as a corollary to lemma [ cov2formula ] ,we now have .\ ] ] in order to prove that , we will apply lemma [ lemma : chebyshev ] to .first , we need to establish some monotonicity properties of the functions and .we insert here a general definition .[ def : increasing ] a function is called _ nondecreasing _ if , for all , whenever for all . [ cov2monotonicity ] ^ 2 \to \r^+ ] for fixed ] and consider as a function of ] is a strictly concave non - negative function , symmetric around , attaining its minimum value of 0 at , and its maximum value of at .it is readily verified that , for any fixed ] is nondecreasing . again , since , it suffices to show that is nondecreasing in ] .recall that . exclude the constant term andfocus on the behavior of over ] .since and , it follows from the convexity property that is decreasing in ] since and are nondecreasing functions of when is fixed .likewise , chebyshev s inequality implies that since and are , as a simple consequence of lemma [ cov2monotonicity2 ] , nondecreasing functions of .the covariance inequality is an immediate consequence of and propositions [ proposition1 ] and [ proposition2 ] .we only need to identify the necessary and sufficient conditions for the covariance to be zero . 
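before turning to the zero-covariance conditions , we record the correlation inequality that was invoked twice just above , in the form we read lemma [ lemma : chebyshev ] : for a real random variable $T$ and functions $f , g$ that are both nondecreasing ,
\begin{align}
\mathbb{E}\big[f(T)\,g(T)\big] \;\ge\; \mathbb{E}\big[f(T)\big]\,\mathbb{E}\big[g(T)\big],
\qquad\text{i.e.}\qquad
\operatorname{cov}\big(f(T),\,g(T)\big)\;\ge\;0 .
\end{align}
the same conclusion holds for vector-valued $T$ with independent components and coordinatewise nondecreasing $f , g$ in the sense of definition [ def : increasing ] ( the harris / fkg form ) ; in particular the covariance vanishes when either $f(T)$ or $g(T)$ is almost surely constant .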
for brevity , let us define the present goal is to prove that the proof will make us of the decomposition that we have already established .let us define and note that appears in proposition [ proposition1 ] as the necessary and sufficient conditions for to be zero .note also that implies since `` extreme '' is a special instance `` erasing '' according to definitions in table [ table : classification ] .we begin the proof of with the sufficiency part . in other words , by assuming that holds .since implies , is sufficient for . to show that is sufficient for , we recall proposition [ prop : polarextreme ] , which states that , if is true , then either or is extreme . to be more specific , if or is p.r ., then and ; if or is perfect , then and . (the notation `` '' should be read as `` equals with probability one '' . ) in either case , .this completes the proof of the sufficiency part . to prove necessity in, we write as where denotes the complement ( negation ) of .the validity of follows from . to prove necessity, we will use contraposition and show that implies . note that .if is true , then either or is true. if is true , then by proposition [ proposition1 ] .we will complete the proof by showing that implies .for this , we note that when one of the bdes is erasing , there is an explicit formula for .we state this result as follows .[ lemma : beccov2 ] let be erasing with erasure probability and let be arbitrary with .then , this formula remains valid if is erasing with erasure probability and is arbitrary with .we first observe that now , the claim is obtained by simply computing the covariance of these two random variables .the second claim follows by the symmetry property .returning to the proof of theorem [ theoremvarentropy2 ] , the proof of the necessity part is now completed as follows .if holds , then at least one of the bdes is _ strictly _ erasing ( has erasure probability ) and the other is non - extreme . by proposition [ prop :extreme ] , the conditional entropy of a non - extreme bde is strictly between 0 and 1 .so , by lemma [ lemma : beccov2 ] , we have .this completes the proof .in this part , we consider the behavior of varentropy under higher - order polar transforms .the section concludes with a proof of the polarization theorem using properties of varentropy . for any , there is a polar transform of order .a polar transform of order is a mapping that takes bdes , as input , and produces a new set of bdes , where and is a subvector of , which in turn is obtained from by the transform the sign `` '' in the exponent denotes the kronecker power .we allow to take values in some arbitrary set , , which is not necessarily discrete .we assume that , , are independent but not necessarily identically - distributed .( an alternate form of the polar transform matrix , as used in , is , in which is a permutation matrix known as _ bit - reversal_. the form of that we are using here is less complex and adequate for the purposes of this paper .however , if desired , the results given below can be proved under bit - reversal ( or , any other permutation ) after suitable re - indexing of variables . 
)the first result in this section is a generalization of theorem [ varentropycontraction0 ] to higher order polar transforms .[ varentropygeneral ] let for some .let , , be independent but not necessarily identically distributed bdes .consider the polar transform and let , , be the bdes at the output of the polar transform .the varentropy is nonincreasing under any such polar transform in the sense that the next result considers the special case in which the bdes at the input of the polar transform are i.i.d .and the transform size goes to infinity .[ varentropyasymptotic ] let , , be i.i.d .copies of a given bde .consider the polar transform and let , , be the bdes at the output of the polar transform .then , the average varentropy at the output goes to zero asymptotically : we will first bring out the recursive nature of the polar transform by giving a more abstract formulation in terms of the -parameters of the variables involved .let us recall that a polar transform of order two is essentially a mapping of the form where and are the -parameters of the input bdes and , and and are the -parameters of the output bdes and . alternatively , the polar transform may be viewed as an operation in the space of cdfs of -parameters and represented in the form where and are the cdfs of and , respectively .let be the space of all cdfs belonging to random variables defined on the interval ] is a compact metric space .this follows from a general result about probability measures on compact metric spaces .theorem 6.4 in states that , for any compact metric space , the space of all probability measures defined on the -algebra of borel sets in is compact .our definition of above coincides with the with ] .[ lemma : continuous ] the mapping ] and \to [ 0,m] ] , to prove , let ^ 2 \to [ 0,1] ] is an open set since the function is a continuous and ( ii ) the product measure converges weakly to ( * ? ? ?* , thm .3.2 ) ; so , again by the portmanteau theorem , since the proof is complete . the second condition can be proved in a similar manner .we will sketch the steps of the proof but leave out the details .the relevant form of the density evolution equation is now we define and , and write next , we note that , by a general result on the preservation of weak convergence ( * ? ? ?* thm . 5.1 ) , ( the important point here is that the functions and are uniformly continuous and bounded over the domain ^ 2 ] .hence , by a general result about continuity ( ) , is closed ; and , being a subset of the compact set $ ] , it is compact ( ) .since is continuous and is compact , the `` inf '' in is achieved by some ( ) : . since , is not extreme , so by theorem [ theoremvarentropy2 ] , .this work was supported in part by the european commission in the framework of the fp7 network of excellence in wireless communications newcom # ( contract n.318306 ) .part of this work was done while the author was visiting the simons institute for the theory of computing , uc berkeley .s. h. hassani and r. urbanke , `` polar codes : robustness of the successive cancellation decoder with respect to quantization , '' _ proc .2012 ieee int .inform . theory _ , cambridge , usa , pp . 1962 - 1966 , july 1 - 6 , 2012 .m. alsan and e. telatar , `` a simple proof of polarization and polarization for non - stationary channels , '' _ proc .2014 ieee int .inform . theory _ , honolulu , usa , pp .301 - 305 , 29 june - 4 july , 2014 .
we consider the evolution of variance of entropy ( varentropy ) in the course of a polar transform operation on binary data elements ( bdes ) . a bde is a pair consisting of a binary random variable and an arbitrary side information random variable . the varentropy of is defined as the variance of the random variable . a polar transform of order two is a certain mapping that takes two independent bdes and produces two new bdes that are correlated with each other . it is shown that the sum of the varentropies at the output of the polar transform is less than or equal to the sum of the varentropies at the input , with equality if and only if at least one of the inputs has zero varentropy . this result is extended to polar transforms of higher orders and it is shown that the varentropy decreases to zero asymptotically when the bdes at the input are independent and identically distributed . polar coding , varentropy , dispersion .
integral field ( or 3d ) spectroscopy is a modern technique in astrophysical observing that was proposed by georges courts in the late 60 s .the idea is to get a spectrum for every point in the field of view of a spectrograph .several instrumental approaches in the optical domain ( as well as nir and near - uv ) exist : scanning fabry - perot interferometry , image slicing and transforming two - dimensional field of view into a slit using integral - field unit ( see review in pcontal - rousset et al . , 2004 for a description of different image slicing techniques ) . at present ,nearly all large telescopes in the world are equipped with 3d spectroscopic devices , and rapidly growing volume of data produced by them pose a number of questions regarding the data discovery and retrieval . in this paperwe demonstrate how 3d data are handled in a framework of the international virtual observatory .all 3d spectroscopic observations result in datasets having both spatial and spectral information .they are usually referred as `` datacubes '' , though sometimes ( in case of ifu ) they are not regularly gridded in spatial dimensions .there are three cornerstones for the 3d data support in the virtual observatory : 1 . data model an abstract , self - sufficient and standardised description of the data 2 .data access services archives , providing access to fully reduced science - ready datasets 3 .client applications data - model aware software that is able to search , retrieve , and display 3d data , as well as to give a possibility for sophisticated scientific data analysis all these blocks became available , and we will review them in the forthcoming sections .an abstract , self - sufficient and standardised description of the astronomical data is known as a data model . such a description is constructed in a way to become sufficient for any sort of data processing and analysis .the data modeling working group ( dm wg ) of the international virtual observatory alliance ( ivoa ) is responsible for definition of data models for different types of astronomical data sets , catalogues , and more general concepts e.g. `` quantity '' . to describe 3d spectroscopic data we use characterisation data model ( louys et al .one of the most abstract data models developed by the dm wg , it gives a physical insight to the dataset , i.e. describes where , how extended and in which way the observational or simulated dataset can be described in a multidimensional parameter space , having the following axes : * spatial * , * time * , * spectral * , * observed * ( e.g. flux , polarimetric ) , as well as other arbitrary axes . for every axisthe three characterisation properties are defined : * coverage * , * resolution * , and * sampling*. there are four levels of details in the description of the dataset : ( 1 ) * location * or * reference value * average position of the data on a given parameter axis ; ( 2 ) * bounds * , providing a bounding box ; ( 3 ) * support * , describing more precisely regions on a parameter axis as a set of segments ; * map * , providing a detailed sensitivity map . details about applying characterisation data model to the 3d spectroscopic datasets are given in chilingarian et al .the algorithm for the characterisation metadata computation is described there as well .we have developed two data archives providing access to fully - reduced `` science - ready '' ifu and ifp datasets : aspid - sr and giraffe archive . 
for both archives the ivoa simple spectral access ( ssa , tody et al . 2007 ) interfaces are provided . aspid stands for the `` archive of spectral , photometric , and interferometric data '' . it provides the world 's largest collection of raw 3d spectroscopic observations of galactic and extragalactic sources . aspid - sr ( chilingarian et al . 2007 ) is a prototype of an archive of heterogeneous science - ready data , fed by aspid , where we try to take full advantage of the ivoa characterisation data model . multi - level characterisation metadata is provided for every dataset . the archive provides a powerful metadata querying mechanism ( zolotukhin et al . 2007 ) with access to every data model element , vital for the efficient scientific usage of a complex informational system . aspid - sr is one of the reference implementations of the ivoa characterisation data model . the datasets are provided in several formats : stacked spectra , regularly - gridded data cubes , and euro3d fits . a high level of integration between the archive web interface and existing vo tools is provided ( see next section ) . the giraffe archive ( royer et al . , this conference ) contains fully reduced data obtained with the flames / giraffe spectrograph at the eso vlt . data obtained with all three observing modes of giraffe : medusa ( multi - object spectroscopy ) , ifu ( multi - ifu spectroscopy ) , and argus ( single ifu ) are provided . raw datasets are taken from the eso archive after the end of their proprietary period and reduced in an automatic way using the giraffe data processing pipeline . it is possible to access individual extracted 1d spectra from the multi - object spectroscopic observations , as well as full datasets in the euro3d fits format . presently , there are a number of vo tools available that deal with images ( such as cds aladin ) and 1-d spectra ( esa vospec , specview , splat ) . however , none of them is able to handle ifu datasets . in the framework of the vo paris project ( simon et al . 2006 ) we have developed the vo paris euro3d client specifically to deal with datasets in the euro3d fits format in a vo context . this tool interacts with cds aladin to display the positions of the fibers ( or slit ) on the sky and displays individual extracted spectra in esa vospec . a catalogue of the positions of fibers ( or slit pixels ) can be exported as a votable . vo paris euro3d client is an open - source java package , including basic functions for euro3d fits i / o and a graphical user interface . individual or co - added spectra can be extracted from the euro3d fits file and exported as a votable serialization of the ivoa spectrum data model 1.0 ( mcdowell et al . ) . all the interaction between applications is done using plastic ( platform for astronomical tool interconnection ) , a prototype of the vo application messaging protocol . presently the vo paris euro3d client is used as the integrated data visualisation software of aspid - sr , the science - ready data archive at the special astrophysical observatory of the russian academy of sciences . in fig . [ figaspidsrplastic ] we demonstrate how the interaction between the vo client applications and the aspid - sr archive interface is implemented . there are several stages : 1 .
querying the characterisation metadata using the web interface ( see louys et al . , this conference ) 2 . a light - weight java applet is integrated into the html pages containing the query response ; it detects a plastic hub , connects to it , and checks whether the other tools ( aladin , vospec , vo paris euro3d client ) are registered with it . if the applications are not detected , they are started using javascript and java webstart . 3 . as soon as all the required applications have been started and registered with the plastic hub , a small script is sent to cds aladin to display the dss2 image of the area corresponding to the position of the ifu spectrograph . at the same time , the ifu dataset in the euro3d fits format is loaded into vo paris euro3d client . positions of the ifu fibers are sent from vo paris euro3d client to cds aladin and overplotted on the dss2 image . the user can interactively select either groups of fibers or individual ones using cds aladin . an extracted spectrum ( or the co - added spectra of several fibers ) is sent to esa vospec via plastic by clicking on the corresponding button in the user interface of vo paris euro3d client . in chilingarian et al . ( 2006 ) we concluded that `` all the necessary infrastructural components exist for building vo - compliant archives of science - ready 3d data and tools for dealing with them '' . since then there has been substantial progress in vo standards and protocols , and we are now able to provide access to the first two such vo - compliant archives . these are not only `` proofs - of - concept '' , but services that can be used for real scientific purposes . another important conclusion is that the present state of vo standards ( including plastic , a prototype of the vo application messaging protocol ) is entirely sufficient for dealing with complex datasets in a vo framework , without the need to develop new client applications for every particular kind of data . ic is grateful to esac and vo - spain for providing financial support to attend the workshop . special thanks to john taylor ( astrogrid ) and isa barbarisi ( esac ) for help with many technical points related to the plastic implementation in vo paris euro3d client .
three cornerstones for the 3d data support in the virtual observatory are : ( 1 ) data model to describe them , ( 2 ) data access services providing access to fully - reduced datasets , and ( 3 ) client applications which can deal with 3d data . presently all these components became available in the vo . we demonstrate an application of the ivoa characterisation data model to description of ifu and fabry - perot datasets . two services providing ssa - like access to 3d - spectral data and characterisation metadata have been implemented by us : aspid - sr at sao ras for accessing ifu and fabry - perot data from the russian 6-m telescope , and the giraffe archive at the vo paris portal for the vlt flames - giraffe datasets . we have implemented vo paris euro3d client , handling euro3d fits format , that interacts with cds aladin and esa vospec using plastic to display spatial and spectral cutouts of 3d datasets . though the prototype we are presenting is yet rather simple , it demonstrates how 3d spectroscopic data can be fully integrated into the vo infrastructure .
because they are such natural and beautiful structures , lattices and crystals appear throughout physics and mathematics , in many different ( and often unexpected ) ways . herewe introduce the idea of a choreographic crystal , a type of configuration that can be much more symmetrical than is revealed by a snapshot of it at any given time .we study this idea from several different angles , and suggest experimental diffraction signatures to identify and characterize such choreographic systems in the lab , whether they are naturally occuring or artificially engineered .let us start with a simple and beautiful example of a choreographic lattice .we can find our way to this example by comparing the following two elementary problems .the first problem is static : what is the most symmetrical arrangement of four points on the 2-sphere ?the solution is well known : the four points are the four vertices of a regular tetrahedron .we can express the positions of these four points neatly in cartesian coordinates as follows : we start from the 8 vertices of the cube and select the four vertices with an even number of minus signs ; in other words , the positions of the four points ( ) have cartesian components ( ) given by where is the kronecker delta function .the second problem is a natural dynamical analogue of the first : let us now imagine that we let the points flow along the geodesics of the sphere ( _ i.e _ the great circles ) , with angular velocities that are constant in time and all have equal magnitudes , like satellites in circular orbit around the sun what is the most symmetrical configuration of four such satellite trajectories ?once again , the answer may be neatly summarized in cartesian coordinates : we choose the four satellites to have trajectories with cartesian components given by where is given by eq .( [ q_hat ] ) .this solution has the following geometrical interpretation .each of the four satellites is orbiting in a different orbital plane : the trajectory is a circular orbit with its angular momentum in the direction ; in other words , there is one satellite orbiting around each of the four 3-fold symmetry axes of the regular tetrahedron .furthermore , to achieve maximal symmetry , the relative phases of the four orbits ( or , equivalently , the initial positions ) have been carefully chosen : for example , note that whenever the four satellites degenerate into a common plane [ which happens 6 times per orbit , whenever , for odd , they always form a perfect square containing the origin at its center .since the first problem is static , the corresponding solution ( [ q_hat ] ) has a static sort of symmetry : a group ( the full tetrahedral group ) of 24 spatial rotations and reflections that carry the configuration into itself .since the second problem is dynamical , the corresponding solution ( [ p_hat ] ) has a dynamic sort of symmetry . from a static standpoint , a still photograph " of ( [ p_hat ] ) at some instant will , at most , be symmetric under 16 rotations and reflections ( namely , the symmetry group of the square prism , at the special times described above ) ; but , from a dynamic standpoint , we can recall that the satellites four angular momenta point along the four diagonals of the cube with vertices , and note that any rotation or reflection that leaves this cube invariant ( there are 48 in total ) also leaves the 4-satellite orbit ( [ p_hat ] ) invariant , _ when combined with an appropriate overall translation and/or reflection in time_. 
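a minimal numerical check of the static configuration ( [ q_hat ] ) described above : the four cube vertices with an even number of minus signs are mutually equidistant , so they form a regular tetrahedron , and each of them points along one of the 3-fold ( body-diagonal ) axes about which the four satellites of ( [ p_hat ] ) orbit . the variable names are ours .

```python
import itertools
import numpy as np

# the four vertices of the cube {-1,+1}^3 that have an even number of minus signs
verts = [np.array(v, dtype=float)
         for v in itertools.product([1, -1], repeat=3)
         if list(v).count(-1) % 2 == 0]

# all six pairwise distances coincide, so the four points form a regular tetrahedron
dists = sorted(np.linalg.norm(a - b)
               for a, b in itertools.combinations(verts, 2))
print("vertices :", [tuple(int(c) for c in v) for v in verts])
print("distances:", [round(d, 6) for d in dists])   # six equal values, 2*sqrt(2)
```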
in both solutions , ( [ q_hat ] ) and ( [ p_hat ] ) , the four particles are equivalent to one another , in the sense that any particle may be mapped into any other by one or more of the symmetries .while the symmetries of ( [ q_hat ] ) are intuitively clear , the symmetries of ( [ p_hat ] ) are considerably more subtle yet , as we have seen , ( [ p_hat ] ) is actually more symmetrical than ( [ q_hat ] ) !just as ( [ q_hat ] ) represents one of the simplest examples of a static lattice on the sphere , ( [ p_hat ] ) represents one of the simplest examples of a choreographic lattice on the sphere .solution ( [ q_hat ] ) was known in ancient times ; but , as far as we can tell , ( [ p_hat ] ) is new .we believe that choreographic crystals share a beauty and naturalness with ordinary crystals , and we hope that they may be of similarly broad interest and importance .in the previous section , we introduced choreographic crystals via an example based on a symmetrical configuration of four satellite orbits . in this section , we explain how to construct and classify _ all _ symmetrical satellite configurations .( for earlier work on symmetric satellite configurations , see . )in addition to being intrinsically ( and technologically ) interesting , this problem will set up our more general treatment of choreographic crystals in the subsequent section .let us start by establishing some notation , terminology and conventions . in this section, we can imagine for concreteness that each individual satellite moves on a keplerian ( elliptical ) orbit with unit period .the time translation operator maps each orbit to the orbit ( it shifts all of the satellites forward along their orbits by a common phase ) ; the time reversal operator maps each orbit to the orbit ( it reverses all velocities and angular velocities , so the satellites move backward along their orbits ) ; and an element of the orthogonal group maps each orbit to the ( rotated and/or reflected ) orbit .we can also combine these operations : _ e.g. _ or .[ to illustrate , consider two combinations ( i ) a time delay combined with a rotation around the axis , or ( ii ) zero time delay combined with a reflection through the plane : either combination leaves a circular orbit in the plane invariant , but acts non - trivially on an orbit which is tipped out of the plane . ]we refer to a set of satellite orbits as a swarm " .a symmetry " ( or symmetry operation " ) of the swarm is a combined transformation that leaves invariant : ; and a " of the swarm is a combined transformation that leaves invariant : . in other words ,a involves time reversal , while a symmetry does not .[ for example , a swarm of circular orbits has a symmetry consisting of a time shift by half a period combined with spatial inversion through the origin ( ) ; and a single elliptical orbit has two -symmetries : one which combines a time reversal with a rotation around the periapse direction , and another which combines a time reversal with a reflection in the plane spanned by the periapse and angular momentum directions .] let be a finite subgroup of : is -symmetric " if , for every , has a symmetry of the form ; and is - " if , for every , has a symmetry of the form or a of the form .any - swarm is also an -symmetric swarm , where is an index-2 subgroup of obtained by restricting to the symmetries of that do not involve time reversal .the most basic type of -symmetric swarm is a primitive -symmetric swarm . 
"every primitive -symmetric swarm may be constructed in the following two steps .first , choose a one - dimensional representation of ; _ i.e. _ a function that maps each element to a complex phase , and satisfies .second , choose an integer and a fiducial satellite orbit and construct the set of orbits =\{u_{[\tau(g)+m]/n}^{}g\bar{x}|g\in g , m\in\mathbb{z}_{n}\} ] as the union of two sets : /n}h\bar{x}|h\in h , m\in\mathbb{z}_{n}\} ] .if we take the union of two or more primitive - swarms based on the same , , , , and ( but different fiducial orbits , , ) , we obtain another - swarm , and any such swarm may be obtained this way .for example , the 4-satellite orbit described in the previous section is a primative - swarm , where is the achiral octahedral group ( _ i.e. _ the full symmetry group of the cube , including rotations and reflections ) ; the index-2 subgroup is ( the pyritotetrahedral group ) ; and the one - dimensional representation of is ( in mulliken notation ) .a primitive -symmetric or - swarm is generated by acting on a fiducial satellite orbit with distinct operations ( where is the order of ) ; this process will generically produce a swarm with satellites .but if the fiducial orbit and representation are chosen carefully , then will be invariant under some subgroup of the these operations , and we will instead generate a primitive -invariant swarm in which the number of satellites is only .such orbits , in which an especially small number of satellites manage to represent an especially large number of symmetries , are of special interest and importance : the natural figure of merit here is or , equivalently , the total number of symmetries divided by the total number of satellites in the -invariant swarm .we call this number the choreography " of the swarm : a swarm with large is like a delicately choreographed dance .since the finite subgroups ( _ i.e. _ the 3-dimensional point groups " ) have been completely classified , and the one - dimensional representations of these groups are all known , it is straightforward to systematically sift through all possible swarms : we have done this , and found that that the swarm of highest choreography is precisely the 4-satellite configuration ( [ p_hat ] ) introduced in the previous section , with choreography .- lattice where is the full octahedral group , is the pyritohedral group , , and , where is one of the two non - trivial 1d representations of , and is the non - trivial 1d representation of . ]the symmetric satellite swarms in the previous section are a special case of a more general class of object with choreographic order .it is natural to generalize the previous section in two different ways . _ choreographic crystals on various geometries . _ just as it is interesting to study static lattices , crystals and tilings on a wide variety of different spaces ( the 2-sphere , the 3-sphere , 2d euclidean space , 3d euclidean space , ) , it is interesting to study choreographic order on a wide variety of spaces .choose an underlying space or space - time ( _ i.e. _ an underlying riemannian or pseudo - riemannian geometry ) with isometry group ; choose a discrete subgroup ; choose a one - dimensional unitary representation of : ( ) ; and choose a fiducial curve in , where is a parameter along .( arguably the most natural case is when is a geodesic of , and is an affine parameter , but we need nt restrict ourselves to this case . 
)if forms a closed loop , with varying over a finite range , we rescale it so that ; and if is an infinite curve , with extending from to , we distribute an infinite number of points along the curve ( evenly spaced in ) and again rescale so that the spacing between successive points is unity . finally , we generate the primitive -symmetric choreographic lattice " =\{u_{[\tau(g)+m]/n}g\bar{x}|g\in g , m\in\mathbb{z}_{n}\} ] as the union of the two sets /n}h\bar{x}|h\in h , m\in\mathbb{z}_{n}\} ] .the union of two or more primitive - choreographic lattices based on the same , , , , , and ( but different fiducial orbits , , ) is another - lattice , and any such lattice may be obtained this way . in this generalized context, we calculate the choreography as follows . in a -symmetric ( or - ) lattice ,the isometry group folds " the underlying geometry down to an irreducible patch or orbifold ; or , in the other direction , the images of under the action of give a natural tiling of .the choreography is the number of orbifold tiles per point in ( or , equivalently , , where is the stabilizer of ) .this definition continues to be well - defined even when and the number of points in the are infinite .an example should help bring the preceding formalism to life .if we focus on the simple case where the background geometry is the two - dimensional euclidean plane , and the fiducial trajectory is a geodesic in the plane ( a straight line , with a particle moving along it at constant velocity ) , then the top panel in fig .( [ hexfigaandb ] ) shows what we believe to be the choreographic lattice of highest choreography ( ) , while the bottom panel shows another choreographic lattice with . _generalized choreographic order . _the choreographic lattices we have constructed thus far have come from simultaneously letting the isometry group of the background geometry act on the orbits in two different ways directly " ( ) and via a one - dimensional unitary representation ( ) and taking advantage of the interplay between these two actions .there are other interesting possibilities in this direction which make use of other higher dimensional representations . as a first example , imagine a swarm in which the individual particles are not featureless satellites , but rather spin particles : then it would be natural to consider systems generated by letting the isometry group simultaneously act in three different ways : via the direct action ( on the orbit s orientation , ) , via a one dimensional unitary representation ( that acts on the particle s orbital phase , ) , and via a dimensional representation ( that acts on the particle s spin ) . from a quantum mechanical standpoint, one might also consider -dimensional representations of that entangle collections of particles by mixing them ( or their associated wave functions ) at the same time as the isometry group acts on the underlying geometry directly .it seems that many interesting forms of generalized choreographic order may be possible here .an important open question is whether any actual many - body systems exhibit choreographic order , either in their ground state , or when appropriately prepared and/or driven .whether or not such systems occur naturally , it should be possible to engineer them in the lab . in either case, they should exhibit distinctive signatures in diffraction experiments , as we shall now explain . _ modified bragg law . 
_to get the idea , first recall that , in ordinary bragg diffraction ( from a static crystal ) , the diffraction peaks obey two rules : ( i ) the difference between the initial and final wave vector is a point in the crystal s reciprocal lattice , while ( ii ) the difference between the initial and final frequency vanishes . in the case of choreographic order, the crystal may be divided into congruent sub - lattices , each moving with a different velocity .( for example , the lattices in fig. [ hexfigaandb ] may be decomposed into three sub - lattices with different velocities . )we can calculate the diffraction separately for each sub - lattice , and then superpose the results .the diffraction due to sub - lattice has the following properties : is a point in the sub - lattice s reciprocal lattice , while is _ non - vanishing _ , and given by , where is the velocity of the sub - lattice .let us see the effect of this modified bragg law in two standard experimental configurations : von laue diffraction and powder diffraction . _von laue diffraction and powder diffraction ._ in von laue diffraction , a beam of particles of mass ( with a range of energies , but a single fixed direction ) is scattered off a single crystal of fixed orientation .first focus on a particular diffraction peak due to sub - lattice ( corresponding to a particular point in its reciprocal lattice ) .if sub - lattice were at rest , this peak would lie in the direction , with wavenumber and frequency ; but when we give the sub - lattice a small velocity , the direction and frequency of the peak changes ( to first order in ) as follows : the perturbed peak still lies in the unperturbed scattering plane spanned by and , but the scattering angle shifts from its unperturbed value to the perturbed value , while the frequency shifts by .when we superpose the diffraction pattern from the different sub - lattices , we see that for the most part , the peaks from different sub - lattices do not overlap , but instead group into -tuplets with small angular and frequency splittings described by the preceding formulae .but for certain values of , it can happen that and are the same , for two different sub - lattices and : in this case , these two peaks will interfere with one another , with the interference phase , where and are arbitrarily chosen points in sub - lattices and , at some arbitrary time . in powder diffraction, a beam with a single fixed energy and direction is scattered off a powder made up of crystals with all possible orientations . in this case , when we give sub - lattice a small velocity , the scattering angle is shifted from its unperturbed value by to $ ] , while the unperturbed frequency is shifted by .otherwise , the story is the same as in the von laue case : the peaks group into -tuplets , with small splittings given by the preceding formulae ; and interference when , with interference phase .the possible crystals in 2d and 3d euclidean space , and their corresponding diffraction patterns , will be explored further in subsequent work . 
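collecting the modified bragg conditions used in this section into explicit formulas ( the symbols are ours ) : for the sub-lattice $j$ moving with velocity $\mathbf{v}_j$ , and writing $\Delta\mathbf{k}$ and $\Delta\omega$ as initial minus final throughout , one consistent reading is
\begin{align}
\Delta\mathbf{k} \;\equiv\; \mathbf{k}_i - \mathbf{k}_f \;\in\; \Lambda_j^{*},
\qquad
\Delta\omega \;\equiv\; \omega_i - \omega_f \;=\; \Delta\mathbf{k}\cdot\mathbf{v}_j ,
\end{align}
where $\Lambda_j^{*}$ denotes the reciprocal lattice of the $j$-th sub-lattice ; the overall sign of the second relation flips if the opposite convention for $\Delta\mathbf{k}$ is used , and the ordinary ( static ) bragg law is recovered for $\mathbf{v}_j=0$ , where the frequency shift vanishes .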
in the future, it will be interesting to reconsider the vibrational modes of an ordinary crystal , by thinking of them as a special type of choreographic crystal ; or to explore the possible connections with previous work on generating higher harmonics of laser fields ( which also relies crucially on the combined space - time symmetries of the system in question ) ; or to consider the possibility of choreographic _ quasi_crystals .it would , of course , be wonderful to engineer an example of a choreographic crystal in the lab , and even more wonderful to find a condensed matter system that has intrinsic choreographic order .we thank dmitry abanin , paul steinhardt and xiao - gang wen for discussions , and the anonymous prl referees for their valuable comments .research at the perimeter institute is supported by the government of canada through industry canada and by the province of ontario through the ministry of research & innovation .lb also acknowledges support from an nserc discovery grant .a. shapere and f. wilczek , phys . rev. lett . * 109 * , 160402 ( 2012 ) [ arxiv:1202.2537 [ cond-mat.other ] ]. f. wilczek , phys .* 109 * , 160401 ( 2012 ) [ arxiv:1202.2539 [ quant - ph ] ]. t. li , z .- x .gong , z .- q . yin et al .* 109 * , 163001 ( 2012 ) [ arxiv:1206.4772 [ quant - ph ] ] .
in this paper , we introduce a natural dynamical analogue of crystalline order , which we call choreographic order . in an ordinary ( static ) crystal , a high degree of symmetry may be achieved through a careful arrangement of the fundamental repeated elements . in the dynamical analogue , a high degree of symmetry may be achieved by having the fundamental elements perform a carefully choreographed dance . for starters , we show how to construct and classify all symmetric satellite constellations . then we explain how to generalize these ideas to construct and classify choreographic crystals more broadly . we introduce a quantity , called the choreography " of a given configuration . we discuss the possibility that some ( naturally occurring or artificial ) many - body or condensed - matter systems may exhibit choreographic order , and suggest natural experimental signatures that could be used to identify and characterize such systems .
qualitative understanding in physics usually comes from the same calculations which give quantitative knowledge , but in the case of special relativity this connection is weakened .lorentz invariance enables computation of many results with little or no consideration of microscopic details , hence there is usually no practical benefit ( and often great difficulty ) in analyzing a given scenario in more detail .complete reliance on lorentz invariance can , nevertheless , leave the uncomfortable sensation of a gap in understanding .generally one expects concrete effects to have concrete causes , yet in relativity one finds very concrete - sounding effects ( slowing clocks , shortening objects ) whose cause is variously attributed to an abstract principle ( invariance of the speed of light ) or to an abstract entity ( spacetime ) or to conventions ( how things are measured , how simultaneity is defined ) .we do nt claim that these explanations are incorrect but it seems reasonable to look also for more ordinary physical explanations and try to understand how they connect to the abstract notions .adding to the sense of abstraction in special relativity is the lack of straightforward experimental demonstration for some of the central phenomena .time dilation has long been observed directly in experiments using moving clocks, and mass / energy equivalence has been similarly verified, but length contraction and the synchronization discrepancy for separated clocks are more challenging . forthese effects one must appeal to indirect observations ( e.g. , the michelson - morley negative result ) and the overall consistency of the framework .the `` dynamical '' approach to relativity aims to shrink these gaps by tracing relativistic effects to their underlying physical mechanisms , at least qualitatively .this helps to flesh out the abstract arguments and makes the hard - to - demonstrate effects more believable by showing that they have straightforward causes rooted in familiar physics .we emphasize that seeking mechanistic accounts for relativistic effects does not mean postulating a preferred reference frame or medium .the point is rather that one can , in principle , compute any relativistic quantity ( e.g. , the size of a moving molecule ) directly from the underlying theories of matter without invoking relativity at all . in practicethis is very difficult but a mechanistic analysis still helps to show qualitatively how the effects arise .the mechanisms most relevant to relativity are those of waves and fields .relativity developed simultaneously with electromagnetism , the first fundamental theory based on fields and waves , and this paradigm now extends to all known matter ( in the `` standard model '' of particle physics ) .many relativistic phenomena that at first seem strange or opaque become quite natural when viewed in a field / wave context .it is worth noting that many elements of the dynamical viewpoint pre - date relativity . indeed, many relativistic phenomena were first discovered through dynamical considerations , starting over two decades before einstein s work .fitzgerald , lorentz , larmor , and thompson were all motivated by dynamics in introducing the seminal notions of length contraction , time dilation , and relativistic mass. 
following einstein and minkowski the dynamical view languished , displaced by the principle - based treatment that was more efficient and also did not require a microscopic understanding of matter , which was not available at that time .the viewpoint was revived by j.s .bell in his 1976 essay `` how to teach special relativity , '' which , however , had little impact on pedagogy , possibly because it proposed rather opaque numerical computations . in the 1990 sthe baton was picked up by h.r .brown , often in collaboration with o. pooley , who mounted a vigorous philosophical defense of bell s viewpoint and extended it to cover general relativity as well. fully constructive examples have been presented by d.j .miller , who also makes the suggestion , correct in our view , that the primary aim of the dynamical approach should be to supplement the customary one with increased qualitative understanding. the dynamical viewpoint has also been presented to laypeople , somewhat briefly by n. d. mermin, and more fully by the present author. the main aim in what follows is to catalog and illustrate some of the principle mechanisms that underlie relativistic phenomena .our focus is narrower than that of brown and pooley in that we do not discuss general relativity nor ( in any detail ) the historical development of the theories ; also , we have tried to avoid taking positions on philosophical questions such as the primacy of spacetime .our approach differs from that of miller ( and bell ) in that we do not attempt a full constructive derivation but instead emphasize qualitative behavior .the manuscript is organized as follows .section [ sec : motive ] describes in more detail what the dynamical view means , at least in the approach taken here .section [ sec : mechanism ] enumerates mechanisms that give rise to relativistic effects and shows models to demonstrate them .section [ sec : frames ] discusses further how mechanistic explanations connect to the more customary approach , and sec .[ sec : conc ] provides a brief conclusion .the dynamical viewpoint aims to connect the phenomena of relativity to underlying physical aspects of the universe as currently understood .the phrase `` underlying physical aspects '' could be interpreted many ways , but we mean here the generic characteristics of current state - of - the - art theories , namely field theories , _ excluding _ lorentz invariance . 
in the most simplistic ( but still useful ) formulationthis means starting with a stipulation that everything in the world is `` made from waves.'' particles are really wave packets , and composite objects consist of wave packets moving under the influence of other fields whose effects are also transmitted by propagating waves .the main goal is then to translate generic wave and field knowledge into intuition about relativistic phenomena .taking a dynamical view it is natural to think not just about lorentz - invariant theories but also about related theories that share the same dynamical mechanisms seen in our universe .a generic change in the parameters of a lorentz - invariant field theory leads to a theory that is still a field theory , hence shares the same types of mechanisms and the same `` relativity - like '' effects , but which is no longer precisely lorentz - invariant .for example , one might alter the wave speeds of the different fields to be direction - dependent and/or unequal to each other .considering this wider context of theories helps to illuminate lorentz invariance by contrast , much as one understands rotation invariance in mechanics by considering both symmetric and non - symmetric potentials . we certainly do not wish to suggest that either lorentz invariance or minkowski space are not fundamental ; the point is merely to provide a broader picture of where these concepts come from and why the prior newtonian picture , in which motion _ per se _ entails no real effects ,is not compatible with a world consisting of fundamental fields .we describe some of the main mechanisms giving rise to relativistic effects and show elementary models that illustrate them .although the models do sometimes produce quantitatively correct relativistic answers , the intent is never to independently prove lorentz invariance but rather to illustrate generic mechanisms that create the _ possibility _ of lorentz invariance in a world composed of fields and waves . for the momentwe take a naive view of concepts such as motion , reference frame , and measurement , defering a deeper discussion to sec .[ sec : frames ] .we will sometimes refer to relatively - moving observers as `` moving '' and `` stationary , '' however , these are meant merely as convenient labels .one immediate consequence of the field / wave paradigm is non - rigidity of objects .rigid objects can exist in newtonian theories because forces propagate at infinite speed , but in a field theory universe all forces must propagate via waves , and waves inherently move at finite speed . for everyday objects in our world , which are built from atoms , the forces binding them are of course mainly electrical and the waves electromagnetic . butfinite speed of force propagation means that all objects will deform under an applied force , simply because one part starts to move before the other parts experience any force at all .this fundamental non - rigidity is independent of the strength or organization of bonds within the object .this certainly does not prove lorentz contraction , and additional study is needed to see whether a deformation persists after the acceleration stops ( see sec .[ lcdpf ] below ) .nevertheless , the failure of rigidity does at least create the possibility of contraction. 
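a toy illustration of the point just made , in ordinary newtonian mechanics : a one-dimensional chain of masses and springs is pushed from the rear , and because the disturbance travels at the finite spring-wave speed , the front does not move at first and the chain is compressed while the push propagates . the parameters and variable names are ours ; nothing relativistic is being claimed here .

```python
import numpy as np

N, k, m, dt = 50, 100.0, 1.0, 1.0e-3   # masses, spring constant, mass, time step
x = np.arange(N, dtype=float)          # rest positions with unit spacing
v = np.zeros(N)
F = 5.0                                # constant external push on the rear mass

for step in range(1, 5001):
    stretch = np.diff(x) - 1.0         # extension of each spring beyond its rest length
    a = np.zeros(N)
    a[:-1] += k * stretch / m          # each spring pulls its left mass forward ...
    a[1:]  -= k * stretch / m          # ... and its right mass backward
    a[0]   += F / m                    # the external force acts only on the rear mass
    v += a * dt                        # semi-implicit euler step
    x += v * dt
    if step % 1000 == 0:
        print(f"t={step*dt:4.1f}  length={x[-1] - x[0]:7.4f}  "
              f"v_rear={v[0]:6.3f}  v_front={v[-1]:6.3f}")
```

the spring-wave speed here is $\sqrt{k/m}$ lattice spacings per unit time , so the front only begins to move after the signal has crossed the chain , and the printed length shrinks in the meantime ; whether any deformation survives once the acceleration stops is exactly the question taken up in sec . [ lcdpf ] .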
not only do waves propagate at finite speed but wave speed is also generically independent of the motion of the source .the speed of sound waves from a jet does nt depend on the speed of the jet ; the speed of water waves in a wake does nt depend on the speed of the boat .we stress that this is _ source - independence _observer - independence _, the vastly stronger assertion that implies complete lorentz invariance .source - independence is a generic feature of wave physics , while observer - idependence is an assertion not about the waves themselves but about complex physical effects occurring within every possible apparatus that could be used to measure wave speed .source - independence of wave speeds causes changes to the tick rates of moving clocks , as demonstrated most clearly within the venerable `` light clock . ''the light clock consists of a light pulse bouncing back and forth between two mirrors , as shown in fig .[ fig : lightclock](a ) .if we now imagine the clock placed on board a spaceship with glass walls and flown by us at high speed , it will look as in fig .[ fig : lightclock](b ) ( taking the clock to be oriented transverse to the motion of the ship ) .the light pulse now travels a longer distance for each cycle , hence the tick rate is slower .source - independence prevents the clock from making any kind of automatic adjustment to preserve its rate when moving ; source motion alters the spatial pattern of the waves ( doppler effect ) but this does not help the clock to maintain its rate .all clocks will be affected by this effect to some degree because their subcomponents can only interact through wave transmission . in this casea simple calculation using the pythagorean theorem does give the correct lorentz time dilation factor. it should , however , be recognized that this calculation relies on additional implicit assumptions . indeed , fig .[ fig : lightclock ] could also be drawn within a non - lorentz - invariant theory having , for example , different wave speeds in different directions , and the naive calculation would then be wrong . what remains generically true , however , is that the moving clock will change its rate due to source - independence of wave speed .another possibility is that shape changes as discussed above ( and in sec .[ lcdpf ] below ) could counteract the rate change due to wave propagation .if the light clock housing were to shrink , thus reducing the vertical travel distance , then the rate could remain unchanged ; however , there is no reason to expect the effects to conspire in this way. the relativistic mass increase where , seems ad hoc when introduced in the context of particle mechanics .it is difficult to understand why an elementary and indivisible piece of matter should become harder to accelerate when moving faster .the effect is , however , quite natural for particles viewed as wave packets , as in modern quantum theories .close analogies to such `` matter waves '' can be constructed using simple wave - on - string models , allowing the effect to be understood in a simple setting . for simplicitywe start by considering matter moving only in the -direction .we then suppose that the matter is actually described by a wave obeying the standard wave equation for transverse waves on a string : where we have set all the constants to one for simplicity .all traveling waves in this model move with fixed speed regardless of frequency , hence the model corresponds to `` massless '' waves such as light . 
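with all constants set to one as stated , the massless string equation and its dispersion relation read ( in our notation , with $\xi(x,t)$ the transverse displacement )
\begin{align}
\frac{\partial^{2}\xi}{\partial t^{2}} \;=\; \frac{\partial^{2}\xi}{\partial x^{2}} ,
\qquad
\xi \propto e^{i(kx-\omega t)} \;\Rightarrow\; \omega^{2} = k^{2} ,
\end{align}
so every fourier component , and hence every wave packet , travels at the same unit speed ; this is the sense in which the model describes `` massless '' waves .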
to model massive particles one needs an extra restoring force at each pointthis can be visualized as placing hooke s law springs on either side of the the string at frequent intervals , hence we will refer to the model as `` . ''the equation we will use is the continuous limit in which the restoring force acts at each point : here the restoring force constant is labeled as anticipating that will be the mass of a `` particle '' in this model .the massive case with differs qualitatively from the massless case ( ) in two important ways .first , the presence of the springs obstructs the waves and slows them down ( e.g. , a single spring attached to a string will reflect some fraction of incident waves ) .second , the massive waves can approximately sit still , because they can oscillate in place under the spring restoring force .the massless waves can not sit still because their only restoring force comes from neighboring parts of the string .the elementary traveling wave solutions take the usual form of and , where the angular velocity and wavenumber satisfy these solutions extend over all space and do nt look much like particles , but this can be remedied by building a `` wave packet''a superposition of waves having wavenumbers in a narrow range , such as looking first at , we see that the waves are all in phase at , but they interfere increasingly destructively away from this point .this creates a localized packet having approximate location and approximate wavenumber ( and corresponding angular velocity ) . as changes , the location of the in - phase maximum moves , and with it the wave packet . by substituting into eq .( [ packet ] ) , and using eq .( [ omegak ] ) , one sees that the packet moves with approximate velocity given by the `` group velocity '' we henceforth drop the bar notation and use and for the packet s central values .the model is not so far from the real description of particles in modern quantum theories .the non - particle - like extended solutions do exist in nature , but they are converted to more localized wave packets through interaction with other clumps of matter ( such as measuring devices). quantization also crucially prevents the waves from dissipating away to zero .a wave packet , like a particle , can be accelerated by a potential field .for example , letting the potential be one can add a coupling term to eq .( [ m>0 ] ) ; this corresponds to letting the `` spring tension '' vary with position , which is essentially the action of the standard model higgs field. the qualitative origin of relativistic mass can be seen immediately since the ( absolute value of ) group velocity satisfies for all values of .the packets can never attain the `` spring - free '' speed because their propagation is hindered by the springs . as the limiting speed is approached , energy applied to accelerate a packet goes instead into vibrations of the field . indeed the energy _always _ goes into field vibrations and the packet acceleration is merely a side effect that occurs for low speeds . to see this in more detail we start with the standard expressions for energy and momentum of a vibrating string , with the hooke s law energy added: \\ \mathcal{p } & = -\int dx\ , \dot{\xi}\xi^\prime . \label{p}\end{aligned}\ ] ] here , the dot and prime indicate derivatives with respect to time and space . evaluating these for a wave packet that is narrow in space gives to good accuracy where is the squared norm: . 
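a quick numerical check of the group-velocity statements above , using the dispersion relation $\omega^{2}=k^{2}+m^{2}$ that follows for the continuous spring model with the constants set to one : each fourier mode is evolved exactly by its phase $e^{-i\omega t}$ ( a complex packet is used purely for convenience ) , and the measured speed of the packet's intensity centroid is compared with $d\omega/dk = k/\omega$ , which is always below the spring-free speed of one . the parameter values are arbitrary choices of ours .

```python
import numpy as np

m, k0 = 1.0, 0.75               # spring parameter and central wavenumber (our choices)
L, N = 400.0, 4096              # periodic box and grid size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
omega = np.sqrt(k**2 + m**2)    # dispersion relation of the string-with-springs model

# gaussian packet of width 10 centred at x = 0 with mean wavenumber k0
psi0_k = np.fft.fft(np.exp(-x**2 / (2.0 * 10.0**2)) * np.exp(1j * k0 * x))

def centroid(t):
    """centre of the packet's intensity profile after exact evolution to time t."""
    psi = np.fft.ifft(psi0_k * np.exp(-1j * omega * t))
    w = np.abs(psi)**2
    return np.sum(x * w) / np.sum(w)

t = 100.0
print("measured packet speed :", (centroid(t) - centroid(0.0)) / t)
print("group velocity k/omega:", k0 / np.sqrt(k0**2 + m**2))   # both well below 1
```

increasing the spring parameter $m$ with $k_0$ fixed lowers both numbers together , which is the slowing-down effect described above .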
to go further in the program of constructing particles out of wave packets one has to decide what value of constitutes a single particle .not just any arbitrary convention will do , but it should be preserved , at a minimum , under slowly - varying ( `` adiabatic '' ) conditions . under adiabatic conditions the potential fields vary weakly in space and time , creating only small forces that change slowly .a single particle moving in such a weak field should remain as a single particle , although its amplitude may change .similar questions were studied in the early days of quantum mechanics and it was shown that certain quantities are invariant under adiabatic changes .the most well - known occurs in the harmonic oscillator and takes the form . this invariant also applies to wave packets because the vibrating string is just a collection of harmonic oscillators , one for each , as can be seen by fourier - transforming eq .( [ m>0 ] ) in the spatial variable . making use of eq ., we see that the wavepacket norm will evolve such that and the single - particle normalization definition should be consistent with this .we choose the simplest option , which is also the normalization arrived at through quantization .applying eq .( [ nomega ] ) to eqs .( [ ebar ] ) and , one finds for the single - particle energy and momentum which are the well - known relations proposed by einstein for photons and by de broglie for matter waves ( in units with ) .a force arising from some potential by definition acts to change the wave packet momentum by and hence from eq .( [ pbar2 ] ) which is exactly as expected for a force acting on the relativistic mass eq .( [ relmass ] ) , since eqs . andimply and thus , the relativistic mass and its associated force law are embedded in the physics of a vibrating string , which also ( with many additional complications ) is the physics of the actual fields giving rise to `` particles '' in nature .we should recognize that eqs .( [ e ] ) and give a rather oversimplified description of a real string ; indeed , strictly transverse mechanical waves can not carry longitudinal momentum. eqs .( [ e ] ) and apply to small - amplitude oscillations where the string is assumed not to be stretched by the wave ( in which case the wave can not be strictly transverse ) .the momentum of a field as defined by a formula like eq .( [ p ] ) really represents energy flux , and it only becomes connected to the velocity of an object through the construction of wave packets .we note for future reference that factors of in wave packet expressions are directly proportional to relativistic factors , as seen from eq .( [ omegagamma ] ) .we note also that the results extend to packets moving in two or three dimensions by simply replacing with a vector . for two dimensions the modelcan still be visualized reasonably easily as a vibrating sheet .the relativistic mass effect creates a second important mechanism contributing to time dilation . 
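before turning to that mechanism , it is convenient to collect the single-particle relations derived above into one place ( units with $\hbar$ and the spring-free wave speed set to one ; the rearrangement and symbols $E , p , v , m$ are ours ) :
\begin{align}
E=\omega ,\quad p=k ,\quad \omega^{2}=k^{2}+m^{2}
\;\;\Longrightarrow\;\;
v \;=\; \frac{d\omega}{dk} \;=\; \frac{k}{\omega} \;=\; \frac{p}{E},
\qquad
E^{2}-p^{2}=m^{2}
\;\;\Longrightarrow\;\;
E=\frac{m}{\sqrt{1-v^{2}}},\quad
p=\frac{m\,v}{\sqrt{1-v^{2}}} ,
\end{align}
so a force $dp/dt$ acts through $p=mv/\sqrt{1-v^{2}}$ on the effective inertia $m/\sqrt{1-v^{2}}$ , which is the relativistic mass quoted earlier .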
as an object accelerates in one direction , the relativistic mass increase of its subcomponents results in a slowing of transverse motions within the object .this phenomenon can be understood directly in terms of wave packets .the velocity of a wave packet is its group velocity , eq .( [ gv ] ) , which depends on the overall frequency .but the frequency measures overall energy , eq .( [ ebar2 ] ) , and hence is changed by an applied force .acceleration of the wave packet in one direction increases its frequency and this reduces the group velocity of the packet in transverse directions .the different components of velocity in a wave packet are thus interrelated in a way that would seem quite unintuitive for a pure point particle .a simple example is an orbiting particle accelerated slowly perpendicular to the orbital plane ( other orbital orientations become very complex , hence bell s original suggestion to study them through numerical simulation). there is no tangential force , hence the orbital momentum does nt change , but since the effective mass does increase one finds that the orbital speed is reduced : implying , or assuming that the orbital radius ( or shape , if not circular ) stays the same, this result implies that the orbital period increases by the time dilation factor .this calculation was slightly oversimplified since the particle s -factor differs from that of the overall system ; however , taking this into account leads to the same result. also we note that the slower orbit implies a reduced centripetal force , so the field providing the central force needs to behave accordingly .this is not trivial , e.g. a transverse electric field actually grows with velocity ( cf .[ fig : movingcharge ] below ) , but then a magnetic field also emerges whose lorentz force more than offsets it .a similar case is the massive analog of the light clock , namely a``bouncing ball '' clock in which a ball bounces between two plates ( with bounce direction again oriented transverse to the motion of the ship that carries the clock ) . as the ship accelerates perpendicular to the bounce directions ,the same acceleration must be applied to the ball to keep it bouncing between the two plates .the clock s caretakers on board the ship must do this without applying any force parallel to bounce directions , because that would invalidate the system s timekeeping function even as seen by themselves .hence they will maintain the clock through small impulses perpendicular to its bounce direction , leading to the same slowing effect seen with the accelerated orbit .the frequency - changing mechanisms described here and in sec . [ wptd ] above will affect almost every type of clock , but they are certainly not an exhaustive list .most clocks will also be affected by the shape changes described in sec .[ wpr ] above and sec .[ lcdpf ] below ; some clocks will also be affected by changes in macroscopic field values , e.g. , the electric and magnetic fields within an lc circuit .length contraction more correctly , shape deformation first occurred to fitzgerald upon seeing heaviside s solution showing the deformed electric field about a moving charged particle ( see fig .[ fig : movingcharge]). this famous result showed that the moving electric field becomes `` compressed '' along the direction of motion , which certainly suggests that any objects constructed using such forces should undergo at least _ some _ shape change when moving . 
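The missing algebra for the transversely accelerated orbit can be filled in as follows. This is a sketch under the same simplification the text itself notes: the orbital speed is taken small compared with the boost speed, so a single γ factor for the boost suffices.

```latex
% No tangential force => orbital momentum is unchanged, but the effective
% mass of the orbiting packet grows from m to gamma*m under the boost:
p_{\mathrm{orb}} \;=\; m\,v_{\mathrm{orb}} \;=\; (\gamma m)\,v_{\mathrm{orb}}'
\;\;\Longrightarrow\;\;
v_{\mathrm{orb}}' \;=\; \frac{v_{\mathrm{orb}}}{\gamma}.
% With the orbital radius (or shape) unchanged, the period is dilated:
T' \;=\; \frac{2\pi R_{\mathrm{orb}}}{v_{\mathrm{orb}}'} \;=\; \gamma\,T .
```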
.field lines indicate field strength on a circle about the charge center , as measured in a stationary frame ., width=321 ] rather than reproduce this computation we note a qualitative way to understand why such changes are inevitable .the field of a moving particle establishes itself through the emission of electromagnetic waves during acceleration .these waves are doppler - shifted like any other , having a spatial pattern that is asymmetrical about the charge center .the asymmetry in wave pattern then results in an asymmetrical final field . for a scalar fieldthis is the only effect of steady motion , but for the electromagnetic field a magnetic field is also produced . to go further and show that the altered fields actually lead to contraction is worthwhile but we focus here on the qualitative lesson that _some _ shape change is inevitable in a field theory .indeed we assert , with harvey brown , that `` shape deformation produced by motion is far from the proverbial riddle wrapped in a mystery inside an enigma.'' another way to see the role of internal fields in length contraction is to think of how the shape of an object changes as it undergoes acceleration . for concretenesswe imagine a barbell with two identical weights connected by a rod , being accelerated in the direction along the rod . as it accelerates it also contracts , so the distance between the weights decreases .hence the acceleration of the two weights is not identical ; the rear weight accelerates slightly faster than the forward one .this in turn implies that the two weights feel slightly different forces during the acceleration .what is the origin of this force difference ?it can only arise from the changing intermolecular forces within the connecting rod , which occurs due to the mechanism of fig .[ fig : movingcharge]. mass / energy equivalence implies that the electric field within a capacitor adds to its inertia ( makes the capacitor harder to accelerate ) , but how does this come about ? if one applies force to the capacitor , the force acts on the atoms forming the plates and housing , not on the electric field , so how does the electric field also contribute to the inertia ?likewise when an atom emits a photon and one electron drops to a lower energy level , the atom must become lighter and hence easier to accelerate .part of this change is due to the reduced electrical field energy inside the atom , but how does this changed field translate into reduced difficulty accelerating the atom? the answer is back - reaction , the process by which a field acts back on its source to ( usually ) resist the acceleration of the source .when the relevant field is electromagnetic , this often amounts to ordinary self - induction .it is worth noting that almost all of the mass of everyday matter actually arises through back - reaction , as manifested in the strong nuclear field. back - reaction provides a good illustration of the strengths and weaknesses of both dynamic and symmetry viewpoints . 
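For reference, the deformed field of a uniformly moving point charge that Heaviside found, and which the figure presumably illustrates, has the standard textbook form below; quoting it here is an addition for the reader's convenience, not part of the original text. Here β = v/c and θ is the angle between the velocity and the line from the charge's present position to the field point.

```latex
\vec{E}(\vec{r}) \;=\; \frac{q}{4\pi\varepsilon_{0}}\,
\frac{(1-\beta^{2})\,\hat{r}}{r^{2}\bigl(1-\beta^{2}\sin^{2}\theta\bigr)^{3/2}},
\qquad
\vec{B} \;=\; \frac{\vec{v}\times\vec{E}}{c^{2}} .
```

The compression of the field along the direction of motion is visible in the (1−β² sin²θ)^{3/2} factor: the field is weakened fore and aft and strengthened transversely.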
using the mass - energy formula immediately gives the mass contributed by the electric field of a capacitor to be where is the standard energy of the electric field between the plates .this , however , provides no understanding of how the field actually contributes to the inertia .viewing it mechanistically one sees that self - induction provides at least part of the answer , because accelerating the charges on the plates induces a changing field which in turn creates an field that acts back on the charges to resist the acceleration .students can easily calculate this for simple cases such as a parallel - plate capacitor or uniformly charged sphere , but a small problem appears : the results do nt match the relativistic prediction .indeed the dynamically computed inertia not only disagrees but ( in the case of a capacitor ) depends strongly on the direction of acceleration. the problem is that one must also include the fields inside the material of the plates , since it is these fields that contact the charges directly and exert the back - reaction .one must also then consider the motion of the charges inside the material , including those bound within atoms . attempting to dothis leads to the even more daunting problem of infinite self - fields of the particles .ultimately one can not complete the dynamical calculation except using fully renormalized quantum electrodynamics .nevertheless the naive calculations do provide valuable insight into the inertia of field energy ; indeed this is how it was first discovered , by j.j .thomson in 1881. particle decay lifetimes are the most commonly observed manifestations of time dilation , and the essence of their mechanism can be captured using models .in fact the crucial factor is already visible in the driven harmonic oscillator we consider a resonant driving force that turns on at ; namely , , where is the step function . using this driving force , eq .has solution , showing that the rate of amplitude increase is damped by a factor .the same factor is seen more generally in the green function the suppression carries over to particle decays because , as noted above eq .( [ nomega1 ] ) , the particle fields can be viewed as collections of harmonic oscillators , one for each .decays occur when one field drives one or more other fields at resonance , and the factor of contains the relativistic dilation factor for moving wavepackets , as noted at the end of sec .[ wpgvrm ] above .( by resonance here is meant that both and should satisfy eq . for the field being driven . )the most tractable example is not a true decay but rather oscillation between two particles having the same mass .this can be modeled by taking two parallel , identical systems [ eq . ] and attaching them to each other by additional springs running between them .we start with a wave packet only on one of the strings , moving at its group velocity .the packet will then oscillate between the strings and the oscillation frequency will be `` time dilated , '' i.e. , it will become slower for faster - moving packets .letting and be the displacements of the two strings , the springs running between them create a hooke s law interaction energy . expanding this , we find that the and terms just shift the mass of each string , leaving the effective interaction energy . 
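As a small worked example of the mass–energy statement at the start of this passage: for an ideal parallel-plate capacitor the field contribution to the inertia follows directly from the field energy. This is the "relativistic prediction" against which the naive back-reaction calculation is then compared; fringing fields are neglected, A is the plate area and d the separation.

```latex
U_{E} \;=\; \frac{\varepsilon_{0} E^{2}}{2}\,(A\,d)
\qquad\Longrightarrow\qquad
m_{\mathrm{field}} \;=\; \frac{U_{E}}{c^{2}}
\;=\; \frac{\varepsilon_{0} E^{2} A d}{2c^{2}} .
```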
the coupled equations of motion then take the form ( using the shifted value of ) a basis of solutions is found by taking and exactly in or out of phase : .the out - of - phase modes stretch the springs connecting the two strings , and hence have higher oscillation frequencies than the in - phase modes ; one finds where the last line is the approximation to first order in .a packet that starts out only on the string can be built by combining identical packets made with the in - phase and out - of - phase modes .the packets initially cancel on the string , but because they have slightly different angular velocities the initial `` particle '' will oscillate between the two strings with angular velocity equal to the difference : the last line uses eq .( [ omegagamma ] ) and shows the time dilation effect : faster - moving packets oscillate more slowly .[ the two packets will also separate over time due to their differing group velocities , however , this effect is of order . ]true multiparticle decays can also be understood along these lines but the analysis is more involved. the relativity of simultaneity is one of the more persistently confusing pieces of the relativity puzzle , owing perhaps to its nonlocal character , connecting observations made at separated locations . herewe give a mechanistic account that builds on mechanisms already shown .the model is again the transversely - oriented light clock , but this time we consider two of them .the two clocks start out together at the back end of a moving spaceship , and they are synchronized . due to their close proximitythey are seen to be synchronized both by observers onboard the ship and also by external `` stationary '' observers ( who we assume , as usual , to have some way to view the bouncing beams inside the clocks ) .now a scientist onboard the ship carries one of the clocks to the front of the ship .this is done very slowly in order to avoid disrupting the clock s function .after this `` slow transport '' is complete the onboard observers possess two separated clocks which they can presumably consider to be synchronized. this , however , is not how it appears to the stationary observer , as can be seen by geometrical analysis similar to that of fig .[ fig : lightclock ] .we note first that the mirror reflections have no effect on the calculation , or equivalently one may give the light clock a height such that it completes exactly one upward bounce during the time taken to carry it the distance .the situation as seen by the stationary observer is then as shown in fig .[ fig : lctrans ] .the line tilted at angle shows the light pulse of a clock aboard the ship that stays in the same place ( is not carried ) .it completes one vertical pulse of height , traversing a distance as seen from the stationary frame . the line with additionaltilt shows the light pulse of the carried clock , which begins at the same location as the non - carried clock , but covers an additional horizontal distance , as measured in the stationary frame , and traverses total distance , also as seen the stationary frame .the slow - transport limit is then the limit with and held constant , and the question is whether the difference in pulse times goes to zero , or equivalently whether the extra distance goes to zero . 
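A compact way to see the 1/γ factor claimed here is the following sketch, written with an assumed coupling constant ε for the springs joining the two strings (the article's own symbol was lost in extraction); only the leading order in ε is kept.

```latex
% In-phase and out-of-phase normal modes of the two coupled massive strings:
\omega_{\mathrm{in}}^{2} \;=\; k^{2} + m^{2},
\qquad
\omega_{\mathrm{out}}^{2} \;=\; k^{2} + m^{2} + 2\varepsilon .
% Beat (inter-string oscillation) frequency of a packet started on one string:
\Delta\omega \;=\; \omega_{\mathrm{out}} - \omega_{\mathrm{in}}
\;\approx\; \frac{\varepsilon}{\omega}
\;=\; \frac{\varepsilon}{\gamma m},
```

so a faster-moving packet (larger γ) oscillates between the two strings more slowly, which is the advertised time-dilation effect.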
from the figure one sees that the limiting value is in fact , so it approaches zero only when the ship s speed is also zero .the carried clock gets out of synch with the non - carried clock , as seen by the external observer , no matter how slowly it is carried .the virtue of the slow - transport derivation is that it shows that any mechanism causing motion - dependent rate change also creates motion - dependent synchronization differences .if a given clock has rate change factor when moving at speed , then one shows easily that slow - carrying in the same direction as the base motion leads to a synchronization difference , where is the carry distance seen by the stationary observer .one might have thought that the extra effect would vanish for a clock carried extremely slowly , but this expectation fails because the slower the clock is carried the more time there is for the rate difference to accumulate. comparing this derivation to the more customary `` einstein train '' thought experiment , one sees that _ for the light clock _the mechanisms are the same. in both cases the cause of the synchronization discrepancy is source - independence of wave speed .however , the clock - carrying derivation also extends to other types of clocks whose rate variation arises from different mechanisms , and it also makes sense within theories that have no massless fields at all available for signaling . for these reasonswe feel that it captures the underlying mechanism of simultaneity discrepancies between observers , and deserves greater emphasis .one of the most common questions asked about relativity is why nothing can exceed the speed of light .it is difficult to answer this question in a concise and satisfying way .one standard answer is that superluminal travel combined with lorentz invariance implies time travel , and hence is paradoxical .however , this reasoning is quite formal and one would hope for a more physical understanding .a second answer is that the relativistic mass effect makes it impossible to accelerate objects to light speed , let alone beyond .this explanation is more physical but still begs the question of why mass acts this way , and also does not address massless objects .if one accepts the premise that everything is `` made from waves '' then one can give a more elementary answer .waves simply can not be sped up by applied forces .attempting to push on a wave , which one can easily try at a beach or pool , does nt make the wave go any faster but only creates more waves .speaking more forcefully does not create faster sound waves but only louder ones .waves can be slowed down by obstacles that hinder their motion , such as the springs studied in sec .[ wpgvrm ] above , but they can not be sped up . in a universe constructed from fields and waves a cosmic speed limit is inevitable by the very nature of wave motion .the mechanisms shown above make it clear that in a wave - based world the behavior of moving objects can not follow the expectations of newtonian physics . 
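The quantitative statement sketched here can be filled in as follows (units with c = 1). The rate factor R(v) = √(1−v²) is the standard one, D is the carry distance measured in the stationary frame, and the carried clock moves at the small extra speed u relative to the ship.

```latex
% Carry the clock at extra speed u for stationary-frame time t = D/u.
% Accumulated reading difference relative to the un-carried clock:
\Delta\tau \;=\; \frac{D}{u}\,\bigl[R(v+u) - R(v)\bigr]
\;\xrightarrow{\;u\to 0\;}\; D\,R'(v)
\;=\; -\,\frac{v\,D}{\sqrt{1-v^{2}}}
\;=\; -\,\gamma v D .
```

The offset is finite for any v ≠ 0 no matter how slowly the clock is carried, and it agrees with the usual relativity-of-simultaneity lag vL₀/c² once D is converted to the proper separation L₀ = γD.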
moving objects will generically change shape , while processes within moving objects will not occur at the same rate as when stationary .these changes affect all objects and processes , including those used for measurement .this means that observers in different states of motion will generically measure different values for almost every quantity, which could produce an extremely complicated situation .indeed , behaviors could be so complex that neither distance nor time , nor any other customary physical quantities can even be meaningfully defined ( observers probably could not exist under these conditions either ) .such generic , very complex field theories still formally possess one time and three space coordinates , but it could well be impossible to relate those coordinates to usable operational measurements made within those systems. hence the generic field theory , although still exhibiting relativity - like mechanisms arising from wave behavior , is not likely to be physically interesting .what is needed is a subset of these theories having some degree of regularity , say , enough for life to evolve .if one began with knowledge of field theory but not of lorentz invariance one might well have looked for theories in which the motion - dependent effects are organized in such a way that arbitrary movements ( e.g. , orbital or galactic motion ) are not fatally disruptive . in view of the variety and pervasiveness of the mechanisms described above, this is no small order. looked at this way it is really rather surprising that there does exist a class of field theories in which the effects are beautifully organized and tuned in exactly such a way that not only can observers exist , but moving observers can not even tell they are moving .these are , of course , the lorentz - invariant field theories . in this very restrictive class of theoriesone has concise and operationally meaningful definitions of time , distance , mass , energy , and momentum , and their relationships are captured in the elegant formalisms of minkowski spacetime and relativistic kinematics .hence the relationship between mechanism and symmetry has something of a chicken - and - egg character .the lorentz symmetry can ( apparently ) not be realized without the wave- and field - based mechanisms described above , and yet a generic universe built upon these mechanisms would likely be barren and uninteresting without the symmetry. time is obviously limited and one may question whether it is productive to spend time on discussions that mainly add qualitative understanding .we feel that in the case of relativity it should be seriously considered in view of the absolute centrality of the concepts involved .there is no other part of the curriculum that deals primarily with the core concepts of distance , time , energy , mass , and measurement that permeate all physical thinking .the surprising ease of deriving the lorentz transformation can act , paradoxically , as a barrier to full understanding .abstract explanations based solely on postulates or symmetry hide the true complexity of the underlying processes and do not provide a complete foundation for reasoning about the fundamental concepts involved .many students are left with lingering sensations of circularity or incompleteness in the derivations as well as serious uncertainties about what the theory covers , what the alternatives to lorentz invariance are , and how effects such as length contraction relate to more familiar physical effects . 
consideration of the mechanisms of relativity unifies it more closely to other areas of physics and should help to forestall these sorts of confusions .the author gratefully acknowledges helpful communications with and suggestions from dean welch , larry hoffman , michael lennek , francis everitt , bryan galdrikian , shirley pepke and kirk mcdonald .h. r. brown , `` michelson , fitzgerald and lorentz : the origins of special relativity revisited , '' e - print pitt - phil - sci 00000987 ; < http://philsci-archive.pitt.edu/987/1/michelson.pdf>[<http://philsci-archive.pitt.edu/987/1/michelson.pdf > ] .h. r. brown , o. pooley,``the origins of the spacetime metric : bell s lorentzian pedagogy and its significance in general relativity , '' in _physics meets philosphy at the planck scale _ , edited by c. callender and n. huggett ( cambridge university press , cambridge , 2001 ) , p. 256 .( arxiv : gr - qc:9908048 ) one can still define rigidity within relativity as a contrast to the malleability of , e.g. , a box of gas . physically speaking a `` rigid '' objectis one whose shape is determined by a quantized wavefunction which is separated from others by a significant energy gap ; cf .also < http://en.wikipedia.org/wiki/born_rigidity>[<http://en.wikipedia.org/wiki/born_rigidity > ] .of course there is also no reason _ a priori _ to expect that the effects could fit together to produce the lorentz transformation .ultimately it is simply a mathematical fact that some field theories are lorentz invariant , while none ( to the author s knowledge ) exist in which effects conspire to cancel rate changes .another possibility for the coupling involves one time derivative , e.g. , which is similar to the coupling of an electric potential .a coupling with two time derivatives would be analogous to a gravitational potential .for a recent treatment see c. wells and s. siklos , `` the adiabatic invariance of the action variable in classical mechanics , '' eur. j. phys .* 28 * , 105112 .also goldstein , _ classical mechanics _ , 2nd ed .( addison wesley , 1980 ) , ch . 11 - 7. an alternate form of the calculation is described in v. p. dmitriyev , `` the easiest way to the heaviside ellipsoid , '' am .* 70 * , 717718 ( 2002 ) and b. y. hu , `` comment on ` the easiest way to the heaviside ellipsoid ' by valery p. dmitriyev , '' am .* 71 * , 281 ( 2003 ) ideas for calculating explicit shape change can be found in ref . and ref .( the latter proposes numeric calculations of orbiting systems undergoing acceleration ) .calculations to lowest order in are also feasible .the other part of the mass change is actually an increase , because the electron in a lower energy state moves faster and hence has a larger relativistic mass .the virial theorem guarantees that this increase is more than offset by the decrease in field energy .one can question whether slow carrying is a valid method of synchronization , but if it is not , then there is no clear reason to believe that observers have _ any _ useful way to define synchronization ( a real possibility in non - lorentz - invariant scenarios , cf .section iv ) . for einstein s trainsee , e.g. , ref . , ch . 5 ;< http://en.wikipedia.org/wiki/relativity_of_simultaneity>[<http://en.wikipedia.org/wiki/relativity_of_simultaneity > ] ; a. 
einstein , _ relativity : the special and general theory _ , springer , 1916 , available in the public domain at < http://en.wikisource.org/wiki/relativity:_the_special_and_general_theory>[<http://en.wikisource.org/wiki/relativity:_the_special_and_general_theory > ] . for further discussion of these `` perspectival ''changes between observers , see ref . .the fact that formal quantities , especially the metric , may not have their expected operational meaning within the theory , has also been discussed by brown ( ref . ) and brown and pooley ( ref . ) .we do nt claim that no other interesting classes of field theory exist ; examples are lorentz invariant theories with background field values .lorentz violation is an active topic of research , e.g. t. jacobsen , s. liberati and d. mattingly , `` lorentz violation at high energy : concepts , phenomena , and astrophysical constraints , '' ann .* 321 * , 150196 ( 2006 ) .this helps understand the question of conventionality of the definition of simultaneity ( see , e.g. , ref . ) . if lorentz invariance holds then there is a preferred definition of simultaneity and little reason to consider any other ; otherwise one will need to consider conventions , butprobably no really ideal convention will exist . in no casecan a field / wave world be converted back to newtonian behavior by means of conventions .
Arguments are reviewed and extended in favor of presenting special relativity, at least in part, from a more mechanistic point of view. A number of generic mechanisms are catalogued and illustrated, with the goal of making relativistic effects seem more natural by connecting them with more elementary aspects of physics, particularly the physics of waves.
one of the fundamental conceptual difficulties in modern physics bears the name quantum measurement problem , and comprises the result of a measurement when quantum effects are taken into account . in the first decades of the xx centurythe feasibility of analyzing this problem in the experimental realm was an impossible task , but the technology in the 1980s finally allowed the possibility of performing repeated measurements on single quantum systems , in which some of the most striking features of quantum measurement theory could be tested [ 1 ] . in the context of the orthodox quantum theory a measurementis described by von neumann s reduction postulate , nevertheless , as we already know , this proposal shows conceptual difficulties [ 2 ] . in order to solve these problems some formalisms , which are equivalent to each other ,have been proposed [ 3 ] . in this directionsome works , stemming from the aforementioned proposals , have been done .they render theoretical predictions that could be tested against the experiment [ 4 , 5 , 6 ] .neverwithstanding , all these theoretical predictions have been done in the realm of the so called quantum demolition ( qd ) measurements , in which an absolute limit on the measurability of the measured quantity is always present .this limit is a direct consequence of heisenberg s uncertainty principle [ 1 ] .as we already know , there are another kind of measuring processes , in which this standard quantum limit can be avoided , they are called quantum nondemolition ( qnd ) measurements [ 1 ] .the idea here is to measure a variable such that the unavoidable disturbance of the complementary one does not disturb the evolution of the chosen variable , this idea was pioneered by braginsky , vorontsov , and khalili [ 7 ] .the relevance of the understanding of the measurement problem has not only academic importance , it possesses also practical importance , for instance , it plays a relevant role in the comprehension of the measurement of the position of the elements of a gravitational wave antenna [ 1 ] . another point in connection withthe quantum measurement problem is its possible relation with gravity , namely it could also shed some light upon another fundamental problem in modern physics , i.e. , the question around the validity at quantum level of the equivalence principle [ 8 , 9 ] . in this workwe find a family of qnd variables for the case of a particle immersed in an inhomogeneous gravitational field .afterwards , its propagator , when these qnd variables are measured , is calculated , and the probabilities , corresponding to different measurement outputs , are deduced .finally , we compare this case with the results of a quantum demolition situation [ 6 ] , namely to the case in which the position of our quantum particle is continuously monitored .consider the lagrangian of a particle , with mass , moving in the earth s gravitational field , and located a distance from the center of it the corresponding hamiltonian reads at this point we introduce an approximation , namely the particle is located at a distance above the earth s surface , where the condition is fulfilled , here denotes the radius of the earth . 
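The dropped expressions for the Lagrangian, the Hamiltonian and the near-surface expansion can plausibly be reconstructed as follows, with g = GM/R² and z the height above the Earth's surface. This is a sketch consistent with the hyperbolic functions and the √(2g/R) frequency that appear later in the paper, not a verbatim restoration.

```latex
L \;=\; \frac{m\,\dot{\vec{r}}^{\,2}}{2} \;+\; \frac{GMm}{r},
\qquad
H \;=\; \frac{\vec{p}^{\,2}}{2m} \;-\; \frac{GMm}{r}.
% Setting r = R + z with |z| << R and expanding:
H \;\approx\; \frac{\vec{p}^{\,2}}{2m} \;-\; \frac{GMm}{R}
\;+\; m g z \;-\; \frac{m g}{R}\,z^{2},
\qquad g \equiv \frac{GM}{R^{2}} .
% Writing the quadratic term as (m/2)\omega^2 z^2 gives
\omega^{2} \;=\; -\,\frac{2g}{R}
\quad\text{(an inverted, i.e. complex-frequency, oscillator).}
```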
the intention with this approximation is to obtain a path integral , and to be able to calculate it analytically , even if we consider the effects of a measuring process .mathematically this simplification is important , since as is already known [ 10 ] , even the path integral ( here path integral means feynman s original time sliced formula ) of a quantum particle moving in a coulombian potential ( without the inclusion of a measuring process ) shows mathematical problems , for instance , the so called path collapse .this aforementioned simplification allows us to avoid this problem , and also to find in a , more or less , simple way the analytical expression for the corresponding propagator .therefore , in a first approach to this problem we consider the case in which the particle remains near the earth s surface . under these conditionsthe hamiltonian becomes where , , and the constant , that appears in ( 2 ) , has been chosen equal to .let us now introduce the operator where .the differential equation that determines if this last operator defines a qnd variable reads [ 11 ] defining , we find that expression ( 5 ) becomes its solution reads here is a constant .hence .\ ] ] the solution given by expression ( 8) defines a family of functions , and each element of it is a qnd variable .indeed , any choice for our function renders a qnd variable , and this function is not determined by the differential equation and remains as a free parameter in our model .let us now introduce a measuring process , namely the observable related to will be measured continuously . in order to analyze the propagator of our system we will resort to the so called restricted path integral formalism , in which the measuring process is taken into account through a weight functional in the path integral [ 11 ] .this means that now we must consider a weight functional , which has to take into account the effect of the measuring device upon the measured system .mathematically , this weight functional restricts the integration domain , i.e. , the integration is performed only over those paths that match with the measurement output .the problem at this point is to choose a particular expression for this weight functional . hereour choice is a gaussian form .a justification for this election stems from the fact that the results coming from a heaveside weight functional [ 12 ] and those coming from a gaussian one [ 13 ] coincide up to the order of magnitude . 
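Returning to eqs. (4)–(8) above: the defining condition for a QND variable, which the extraction dropped from eq. (5), is presumably the standard one used in the restricted-path-integral literature [11]. It is written out here for clarity; the explicit solution (8) is then a time-dependent combination of position and momentum whose coefficients involve the hyperbolic functions that reappear in the propagator below.

```latex
% A continuously monitored observable A(t) is QND if its total (Heisenberg)
% time derivative vanishes, so that measurements at different times commute:
\frac{d\hat{A}}{dt}
\;=\; \frac{\partial\hat{A}}{\partial t}
\;+\; \frac{i}{\hbar}\,[\hat{H},\hat{A}]
\;=\; 0
\qquad\Longrightarrow\qquad
[\hat{A}(t),\hat{A}(t')] \;=\; 0 \quad \forall\, t,t'.
```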
at this pointit is noteworthy to add that even the possibility of having measuring processes with this kind of functionals has already been discussed [ 14 ] .this kind of weight functional has been employed to analyze the response of a gravitational wave of weber type [ 11 ] , the measuring process of a gravitational wave in a laser interferometer [ 15 ] , or even to explain the emergence of the classical concept of time [ 16 ] .therefore , in a first approach to this topic we may consider a measuring device whose weight functional has a gaussain form .hence under these conditions the propagator becomes }(z '' , z ' ) = \int_{z'}^{z''}d[l]d[p]\exp{\big[}{i\over\hbar}\int_{\tau ' } ^{\tau ' ' } ( { \vec{p}~^2\over 2 m } - mgl - { m\omega^2\over 2}l^2)dt{\big ] } \times\nonumber\\ \exp{\big ( } -{1\over t\delta a^2}\int _ { \tau ' } ^{\tau ' ' } [ a(t ) - a(t)]^2dt{\big)},\end{aligned}\ ] ] where is the experimental error associated with the measuring of , and denotes the time that the experiment lasts .expression ( 9 ) may be rewritten as follows }(z '' , z ' ) = \exp\bigl[-{1\over t\delta a^2}\int _ { \tau ' } ^{\tau ' ' } a(t)^2dt\bigr]\times\nonumber\\ \int_{z'}^{z''}d[l]d[p]\exp{\bigg(}{i\over\hbar}\int_{\tau ' } ^{\tau ' ' } { \bigg[}({1\over 2 m } + { i\hbar\over t\deltaa^2}\sigma^2)\vec{p}~^2 \nonumber\\ - 2{i\hbar\sigma\over t\delta a^2}\bigl(a + m\sigma\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau')]l\bigr)p{\bigg]}dt{\bigg)}\times\nonumber\\ \exp{\bigg(}{i\over\hbar}\int_{\tau ' } ^{\tau ' ' } { \bigg[}\bigl(2{i\hbar m^2\sigma^2g\over tr\delta a^2}\tanh^2[\sqrt{{2g\over r}}(t - \tau ' ) ] - { m\omega^2\over 2}\bigr)l~^2\nonumber\\ + \bigl(2{i\hbar am\sigma\over t\delta a^2}\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau ' ) ] - mg\bigr)l{\bigg]}dt{\bigg)}.\end{aligned}\ ] ] the path integral on the momenta is readily calculated [ 17 ] \exp{\bigg(}{i\over\hbar}\int_{\tau ' } ^{\tau ' ' } { \bigg[}({1\over 2 m } + { i\hbar\over t\delta a^2}\sigma^2)\vec{p}~^2 \nonumber\\ - 2{i\hbar\sigma\over t\delta a^2}\bigl(a + m\sigma\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau')]l\bigr)p{\bigg]}dt{\bigg)}=\nonumber\\ \exp{\bigg(}{4i\hbar m\overt\delta a^2(t\delta a^2 + 2im\hbar)}\int_{\tau ' } ^{\tau ' ' } { \bigg[}a^2 + m^2\sigma^2{2g\over r}\tanh^2[\sqrt{{2g\over r}}(t - \tau')]l^2\nonumber\\ + 2ma\sigma\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau')]l{\bigg]}{\bigg)}.\end{aligned}\ ] ] this last result allows us to rewrite the propagator as follows }(z '' , z')= \exp{\bigg(}-{1\overt\delta a^2}{(t\delta a^2 - 2im\hbar)^2\over ( t\delta a^2)^2 + ( 2m\hbar)^2}\int _ { \tau ' } ^{\tau ' ' } a(t)^2dt{\bigg)}\times\nonumber\\ \int_{z'}^{z''}d[l]\exp{\bigg(}{i\over\hbar}\int_{\tau ' } ^{\tau ' ' } { \bigg[}\bigl(-2{m^2\sigma^2g\over tr\delta a^2}\tanh^2[\sqrt{{2g\over r}}(t - \tau ' ) ] - { im\omega^2\over 2\hbar}\nonumber\\ + { 8i\hbar m^3g\sigma^2\over tr\delta a^2(t\delta a^2 + 2im\hbar)}\tanh^2[\sqrt{{2g\over r}}(t - \tau')]\bigr)l^2\nonumber\\ + \bigl(-2{am\sigma\over t\delta a^2}\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau ' ) ] - { img\over\hbar } \nonumber\\ + { 8i\hbar m^2a\sigma\overt\delta a^2(t\delta a^2 + 2im\hbar)}\sqrt{{2g\over r}}\tanh[\sqrt{{2g\over r}}(t - \tau')]\bigr)l{\bigg]}{\bigg)}.\end{aligned}\ ] ] once again we find a gaussian path integral , which can be easily calculated } = \exp{\bigg(}\int _ { \tau ' } ^{\tau ' ' } { \bigg[}-\alpha a^2(t ) + { { 4a^2(t)gm^2\alpha^2\over r } - { m^2g^2\beta^2\over 2\hbar^2 } + { 2ia(t)m^2\alpha\beta 
g\sqrt{{2g\over r}}\over\hbar}\over { 4m^2g\alpha\over r } + i{m\omega^2\beta^2\over \hbar}}{\bigg]}dt{\bigg)}.\end{aligned}\ ] ] in this last expression , in order to simplify it , we have introduced two definitions , namely , } ] .employing ( 13 ) we may calculate the probability ( it is related to the total probability of ) , }(z '' , z')\vert^2 ] , in other words , all the possible measurement outputs have the same probability .this is a peculiar situation , indeed , if we remember how the effects of the measuring device have been introduced , we may notice that we have considered a weight functional , which , as has already been mentioned , restricts the integration domain [ 11 ] , i.e. , only those paths matching with the measurement output are taken into account in the path integral .nevertheless , if the resolution of the measuring device satisfies the condition , then all paths have the same probability . in this sensethis case is as if there were no measuring process .indeed , in the situation in which there is no measurement , all the paths have the same probability [ 18 ] .this quantum feature appears only in connection with qnd measurements , clearly , in the case of qd measurements the effects of the weight functional do not disappear in the expression for the probability [ 6 ] .as we already know , a qnd measurement could be classified as a classical measuring process , in the sense that there is no standard quantum limit [ 11 ] .but as it has already been shown , there are qnd cases in which all the possible trajectories have the same probability , a situation which does not match with a classical case , in which only one trajectory has non vanishing probability. this feature should be no surprise , if we consider a one dimensional harmonic oscillator ( see page 106 , equation ( 6.32 ) of [ 11 ] ) , then considering the limit of zero experimental resolution we obtain , once again , this behaviour .the contribution of the present work comprises the statement that this characteristic appears , not only if the resolution vanishes , but also when it satisfies a certain relation , the one involves the duration of the measuring process , and also the mass of the particle .looking at ( 14 ) we may notice that the mass of the test particle , , appears once again in the probability , i.e. , gravity is at quantum level not a purely geometric effect [ 19 ] .another point comprises the fact that this mass appears but not always in the relation , as happens in the collela , werner , and overhauser experiment [ 20 ] , or in the case of a qd experiment [ 6 ] . clearly, we may notice that there is no standard quantum limit , in other words , we may measure with an arbitrarily small error , and in consequence all the necessary information can be extracted . another feature that we may notice from expression ( 15 ) involves the fact that this probability does not depend upon the measurement output , i.e. , upon .we have two experimental preparations in which all the trayectories have the same probability , i.e. , if , or if .in other words , if we know that in a certain experiment all the trayectories have the same probability , it could not be determined , without further infomation , if the experiment was carried out with a vanishing experimental error , or with . 
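The passage above refers to a condition on the experimental resolution, involving the duration T of the measurement and the mass m, that was lost in extraction. Judging only from the prefactor of eq. (12), one consistent possibility is sketched below; the full condition must also cancel the a-dependence generated by the path integral itself, so this reconstruction should be checked against the original paper.

```latex
\bigl|U_{[a]}\bigr|^{2} \;\propto\;
\exp\!\Bigl(
-\frac{2}{T\Delta a^{2}}\,
\frac{(T\Delta a^{2})^{2}-(2m\hbar)^{2}}{(T\Delta a^{2})^{2}+(2m\hbar)^{2}}
\int_{\tau'}^{\tau''}\! a(t)^{2}\,dt \;+\;\dots\Bigr),
% whose explicit a-dependence in the prefactor disappears when
T\,\Delta a^{2} \;=\; 2m\hbar
\quad\Longleftrightarrow\quad
\Delta a^{2} \;=\; \frac{2m\hbar}{T}.
```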
we could have a very small experimental error , but the case is an idealization , and this limit has to be understood in the sense that if the experimental resolution is much smaller that all the relevant physical variables , then we could expect to have a probability independent of the measurement outputs . experimentally this case could be a very difficult one , consider , for example , the current experimental resolution in the case of paul or penning traps [ 21 ] , which lies very far from this idealization . at this pointit is noteworthy to comment what else can be learned from the case .it is readily seen from expression ( 15 ) that this expression contains information about the potential , namely , about the coupling constant ( newton s constant ) , , the source of the gravitational field , , and also about the geometry of this source , . in other words ,if we had , instead of a gravitational field , an electric field , then we would have an expression containing the source of the electric field , of the coupling constant between the involved charges , and also about the radius of the source of this field .an additional interesting characteristic of expression ( 15 ) comprises the fact that it does not include planck s constant , , but it does include the mass of the test particle , .here we have a radical difference with respect to the situation in which we measure a qd variable [ 6 , 19 , 20 ] . indeed , in the qd measurement case , mass and planck s constant appear always as a function of . regarding this last remarkwe must add that in order to see nontrivial quantum mechanical effects of gravity it is not neccesary , as was believed [ 19 ] , to study effects in which appears explicitly . at mostwe could assert that this is necessary in the case of qd experiments , but not if the corresponding measurements are carried out in the realm of qnd proposals . at this pointa word must be said concerning the operational meaning of the concept of continuous quantum measurement . as we know, measurements can not be considered instantaneous ( they have a finite duration ) , and this characteristic endows the idea of continuous measurement with an important physical meaning .a more elaborated model comprises a continuous measurment as a series of instantaneous measurements with the interval between them tending to zero [ 22 ] .nevertheless , this is a controversial issue , but the work in the direction of paul and penning traps [ 21 ] could be , in a near future , a way to perform this kind of experiments [ 23 ] . to finish ,let us comment an additional characteristic of expression ( 14 ) .we know that there are some qnd measurements in which an absolute limit may appear .for instance , if the linear momentum of a free particle is monitored , this aforementioned limit may emerge , when the instantaneous measurement of position of the test particle , before and after the monitoring of the linear momentum , is done ( see page 99 of reference [ 11 ] ) . clearly , position and linear momentum are canonical conjugate variables to each other , and that is why this absolute limit appears . at this pointwe may wonder why ( 14 ) has no absolute limit , in our case , position , before and after monitoring of , has also been instantaneously measured , i.e. 
, and are present from the very begining in our mathematical expressions , see ( 9 ) .the answer to this question stems from the fact that in our case the measured quantity , $ ] , is not the canonical conjugate variable of position , .we may understand better this point noting that in the present case we deal with a one dimensional harmonic oscillator ( it has a complex frequency but this feature plays in this discussion not role at all ) subject to a measuring process . in the case of a one dimensional harmonic oscillator ,the canonical conjugate variable of , is , and not [ 19 ] .if we choose , then we obtain a very similar situation to a one dimensional harmonic oscillator case .of course , trigonometric functions have to be substituted with the corresponding hyperbolic ones , but this change is only a consequence of the fact that in the case of a particle moving in an inhomogeneous gravitational field ( with the approximation done after ( 2 ) ) the frequency of the emerging harmonic oscillator is complex . these last remarks explain why expression ( 14 ) contains no absolute limit , it can emerge only if the canonical conjugate variable associated to were measured , before and after the monitoring of . to make this statement more precise ,let us choose , therefore an absolute limit could appear in our case if the variable were measured , instantaneously , before and after the monitoring of .at this point we must underline that the appearance of this absolute limit does not mean that is a qd variable [ 11 ] .the author would like to thank a. a. cuevas sosa and a. camacho galvn for their help , and d .- e .liebscher for the fruitful discussions on the subject .the hospitality of the astrophysikalisches institut potsdam is also kindly acknowledged .this work was supported by conacyt ( mxico ) posdoctoral grant no. 983023 .m. b. mensky , class .quantum grav .* 7 * , 23172324 ( 1990 ) ; a. camacho , decoherence and time emergence , in `` proceedings of the international seminar on mathematical cosmology , potsdam 1998 '' , h. j. schmidt and m. rainer eds . , world scientific co. , singapore ( 1998 ) ; a. camacho and a. camacho galvn , nuovo cimento * b114 * , 923938 ( 1999 ) .l. a. orozco , in `` laser cooling and trapping of neutral atoms '' , in latin american school of physics xxxi elaf : new perspectives on quantum mechanics , s. hacyan , r. juregui , and r. lpez pea , eds ., aip conference proceedings , american institute of physics , new york ( 1999 ) .
_Dedicated to Heinz Dehnen in honour of his birthday._ In this work we obtain a family of quantum nondemolition variables for the case of a particle moving in an inhomogeneous gravitational field. Afterwards, we calculate the corresponding propagator and deduce the probabilities associated with the possible measurement outputs. The comparison with the case in which the position is being monitored allows us to find the differences with respect to a quantum demolition measuring process.
quantum entanglement is a basic resource in quantum information processing .while its role in quantum communication tasks is quite well understood the same can not be said about quantum computation . while it is generally believed that entanglement is necessary to achieve an exponential speedup of a quantum algorithm over a classical algorithm the exact mechanism by which this may happenis unclear .in fact , so far it has not been proven strictly whether entanglement is really necessary for an exponential speedup .generally , the argument for the power of quantum computation relies on the assumption that any algorithm that simulates the time evolution of a quantum system will be exponential in the number of quantum bits involved , if it explores the whole state space .the reason is that the dimension of the total state space of a quantum system grows exponentially with the number of subsystems . for pure states, this argument implies that the quantum system needs to evolve into an entangled state to be difficult to be simulated . if it is always in a product state , then the number of parameters required to describe it grows only polynomially with the number of subsystems . for mixed states ,however , the situation changes significantly .the set of disentangled states , i.e. the separable states , has the same dimension as the set of all states , although its relative size with respect to the total state space decreases rapidly .therefore , one could imagine that there are dynamics that always leave the system in a separable state , but which are nevertheless difficult to simulate on a classical computer simply by the fact that the number of parameters that is required to describe the quantum system grows exponentially . as a consequence it may conceivably be possible to have efficient quantum computations on separable states as the algorithm is able to efficiently simulate itself .furthermore this points to the possibility that efficient quantum computation is possible on mixed states .recently knill and laflamme investigated the power of quantum computations using one pure qubit and a supply of maximally mixed qubits and were able to construct a problem that could be solved more efficiently using these resources than any known classical algorithm . 
also schulman and vazirani were able to show that given a supply of thermal states one could produce a single pure qubit together with many maximally mixed qubits .the latter could then be discarded and the pure qubit combined with other pure qubits in a quantum algorithm .indeed nmr quantum computation does start with initially thermally mixed qubits , although the computation efficiency falls off exponentially with the number of qubits .however , we still have the possibility that these mixed qubits could be used in a useful computation like that of knill and laflamme .it is therefore of interest to explore this idea further and see what degree of mixing a quantum computer can tolerate before it loses it s efficiency .this paper is an exploration of this idea .we study the efficiency of shor s algorithm when the quantum computer is in a highly mixed state .we arrive at the conclusion that shor s algorithm can be run on extremely mixed states without significant loss of computational efficiency .nevertheless , it turns out that despite the significant degree of mixedness shor s algorithm runs through some weakly entangled states , leaving the question open as to whether a quantum computer really requires entanglement to be efficient .the sections of this paper are organized as follows : in section i we give an outline of shor s algorithm together with possible gate layouts and interpretations ; in section ii we examine what is known about simulating quantum algorithms ; section iii looks at entanglement measures for mixed states and section iv multipartite entanglement and it s quantification ; sections v and vi give details of the simulations used in this work and some of the results obtained and finally section vii gives concluding remarks .shor s algorithm for factoring an integer , where and are prime , relies on finding the period , , of the function , where is some integer less than and coprime to chosen at random .then , with a sufficiently high probability , at least one of the unknown factors of is given by which can be calculated efficiently using euclid s algorithm .classically all known algorithms are unable to solve the period finding problem in time polynomial in , the length of the number being factorized . 
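The formula for the factors elided above ("at least one of the unknown factors of N is given by ... which can be calculated efficiently using Euclid's algorithm") is presumably the standard gcd construction of Shor's algorithm. The short Python sketch below shows that classical post-processing step; the function name and the example values are illustrative, not taken from the paper.

```python
from math import gcd

def factor_from_period(a, r, N):
    """Classical post-processing step of Shor's algorithm (a sketch).

    Given the period r of f(x) = a**x mod N, try to extract a non-trivial
    factor of N via gcd(a**(r/2) +- 1, N).  This succeeds when r is even
    and a**(r/2) != -1 (mod N); otherwise a new random a must be chosen.
    """
    if r % 2 != 0:
        return None                     # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                     # trivial square root of 1: retry
    for candidate in (gcd(y - 1, N), gcd(y + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

# Example: N = 15, a = 7 has period r = 4, giving the factor 3 (or 5).
print(factor_from_period(7, 4, 15))
```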
the quantum period finding algorithm works as follows :two quantum registers are required whose state spaces are of size at least and respectively .as we will be using qubits we will require and of these two level systems for the two registers respectively .we initially prepare the first register in an equal superposition of all possible states by preparing each of the qubits of the register in state .the second register is prepared in state , where all the individual qubits are in state except the first which is in state .we now unitarily transform the two registers with the transformation so that where .now , an inverse quantum fourier transform on the first register of eq .[ u ] yields the state and the first register now contains information about the period of the function .we access this information by simply measuring the first register in the number , or _ computational _ basis obtaining the result , say .it was then shown in that the fraction is , with a sufficiently high probability , most closely approximated ( using the continued fractions method ) by a fraction ( with ) which in lowest terms has , the period we are trying to find and will therefore sufficiently often give us a factor of .these are the essential details of shor s algorithm as it was first formulated .we must now , of course , be sure that the algorithm runs in _ time _ polynomial in for general as well as using polynomial space as has been shown above . the time taken to perform the algorithmis generally assessed by counting the number of basic operations involved . to do thiswe must have a decomposition of the and transformations into elementary gates acting on a small numbers of qubits , usually one , two or three .one condition on these elementary gates is that each of them is reversible , that is , given the output states we could work out the input states ( and obviously vice versa ) .the condition is imposed by the unitarity , and therefore the reversibility of the transformations and .examples of such reversible gates are cnot s ( 2 qubits ) , toffoli s ( 3 qubits ) and an infinite variety of 1-qubit gates and 2-qubit gates where a single qubit transformation is controlled by a second qubit .detailed polynomially efficient gate layouts for the modulo exponentiation transformations can be found in .they highlight one other complication : the efficient decomposition of general unitary transformations into small basic gates seems to require auxiliary qubits .these are used during the transformation to store quantum information temporarily but are left in their initial states at the end .some of these must be prepared in a known state ( , say ) , others may be prepared in a completely unknown , or maximally mixed state which may or may not be entangled to other systems outside the computer .either way they must be returned to their initial state after use during the computation .if they are not returned to their initial states ( or some other known state that is disentangled from the rest of the computer ) these qubits will be holding information about the states of the non - auxiliary qubits so that the transformation as viewed on the _ non - auxiliary qubits alone _ can not be unitary or reversible and quantum information will have leaked out of the quantum computer . 
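The continued-fraction recovery of the period from a measurement outcome c, described in prose above, can be illustrated with a short sketch; the function name and the example numbers are again illustrative only, and the first-register size is assumed to satisfy the usual requirement q = 2^t ≥ N².

```python
from fractions import Fraction

def period_from_measurement(c, t_qubits, N):
    """Continued-fraction step of Shor's algorithm (a sketch).

    c        : integer measured in the first register
    t_qubits : number of qubits in the first register, so q = 2**t_qubits
    N        : number being factored (the period r satisfies r < N)

    Returns the denominator of the best rational approximation d/r to c/q
    with denominator below N; with sufficiently high probability this is
    the period r (or a divisor of it, in which case the run is repeated).
    """
    q = 2 ** t_qubits
    approx = Fraction(c, q).limit_denominator(N - 1)
    return approx.denominator

# Example: for N = 15, a = 7 (period 4), an 8-qubit first register (q = 256)
# yields outcomes concentrated near multiples of q/r = 64, e.g. c = 192.
print(period_from_measurement(192, 8, 15))   # -> 4
```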
a polynomially efficient gate decomposition of the ( inverse ) fouriertransform into 1-qubit hadamard transformations and 2-qubit controlled phase rotations can be found in fig .[ diag1 ] and is all the gates to the right of , and including , the first hadamard transform ( h ) .it requires 1- and 2-qubit gates to perform the transformation and is therefore again polynomially efficient in time .the careful ( or experienced ) reader will notice that for increasing the conditioned phase rotations are of increasingly small values ( controlled transformations are used ) which would require that the accuracy of the gate implementation is exponential in .this would require exponential resources ( in terms of time / energy etc . ) but it clear that the fourier transform implemented without performing the controlled phase shifts to such high accuracy does not affect the transformation too much and so does not reduce the efficiency too much . in return, however , in a practical situation involving decoherence it is an advantage not to carry out these small phase shifts as the computation will then suffer less errors due to it s shorter computation time . in fig .[ diag1 ] the controlled modulo exponentiation of the classical number has been decomposed into successive controlled modulo multiplications by .we will write these as , where modulo multiplications can be written in this way , as powers of the gate , because multiplication by modulo is equivalent to multiplying by modulo , times . actually performing the modulo multiplications in this waywould of course require exponentially many repetitions of the basic gate and is therefore a highly inefficient method but each of the controlled modulo multiplications can be performed in time polynomial in after classical precalculation of the numbers .= 6.20truein 2 viewing the algorithm as such leads us to another interpretation of the period finding algorithm that is just a change of basis away from shor s original formulation , as outlined above .the operation has a set of eigenstates ( ) with eigenvalues .applying a controlled- gate to the state ( aside form normalization ) kicks the acquired phase onto the control qubit : we can not prepare the eigenstate as this would require knowledge of .however , using the result gives a measurement of the control qubit , in some chosen basis , will now yield information about the fraction for some selected at random , although only one bit of information will be acquired .more information can be acquired about the phase if we perform the controlled- gates , using different control qubits for each , the inverse fourier transform on the control qubits and a projective measurement on each control qubit .this will sufficiently often allow us to obtain as described above : by finding the fraction closest to where c is the result of the measurements on the control qubits .shor s algorithm , then , can be seen in terms of the production and measurement of relative phase information which is related to . in was shown that also has other sets of eigenstates ( ) with eigenvalues and that in fact nearly all of these ( at least of them ) have . 
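For completeness, the eigenstate and phase-kickback relations that the extraction dropped from this passage take the standard phase-estimation form; these are well-known identities rather than anything specific to this paper.

```latex
U_{a}\,|y\rangle \;=\; |\,a y \bmod N\,\rangle ,
\qquad
|u_{s}\rangle \;=\; \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1}
e^{-2\pi i s k/r}\,|\,a^{k}\bmod N\,\rangle ,
\qquad
U_{a}\,|u_{s}\rangle \;=\; e^{2\pi i s/r}\,|u_{s}\rangle .
% The computational state |1> is an equal superposition of the eigenstates,
\frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}|u_{s}\rangle \;=\; |1\rangle ,
% so a controlled-U_a^{2^j} gate kicks the phase e^{2\pi i s 2^{j}/r}
% onto its control qubit.
```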
consequently the lower qubits ( the 2nd register ) in fig .[ diag1 ] need not be prepared in the initial state but can be prepared in the completely unknown state ( here we are equating ) .this mixed state algorithm is run exactly as before and the period is found at least times as efficiently as the original pure state algorithm , this factor approaching unity as .this also , in fact , means that any randomly selected state , whether pure or mixed , entangled or not , may be used as an input state for the lower qubits and , on average , the algorithm will run efficiently .these lower qubits may also be mixed because they are entangled to systems outside the computer . in terms of the number of pure qubits that are needed in the algorithmwe should once again address the matter of the decomposition of the controlled- transformations into basic gates ( section [ abcdecomp ] ) .it is well known that polynomially efficient decompositions of these transformations do exist but they require auxiliary qubits which it seems need to be prepared in a pure state . with some alterations to these decompositions , however , it has recently been shown that , of those auxiliary qubits that can not be removed from the algorithm , only one need be prepared in a pure state ( this is not to be confused with the ` one ' pure qubit of the abstract and the next section ) .the rest can be prepared in maximally mixed states , although most can no longer be considered as being auxiliary to the computation . for simplicity, however , we will not consider these auxiliary qubits or this altered form of the controlled- transformations for the rest of this work .one further modification to the algorithm is to use a _ semi - classical fourier transform _ - notice that the gates of the fourier transform , both one and two qubit , occur sequentially on the qubits .we could thus replace all of the first register of qubits with a single control qubit and perform the gate operations as follows ( see fig .[ diag2 ] ) : we implement the first controlled- gate and the hadamard transformation and measure the control qubit ; after resetting the state of this qubit to we implement the next controlled- gate and replace the 2-qubit controlled phase shift with a _single _ qubit phase shift _ if the result of the first measurement was _ ; we continue in this manner with a hadamard transformation , single qubit phase shifts given the results of _ all previous measurements _ , and another measurement and resetting . at the end of the algorithm we have a set of measurement results that have an identical probability distribution to the algorithm using control qubits in the first register , as in fig .[ diag1 ] .= 6.50truein 2we saw in the previous section how a quantum algorithm consisting of quantum gates on quantum systems can factorize an integer in time and number of qubits polynomial in .the question then arises as to whether or not we can turn this into a classical algorithm by writing out the effect of the gates and measurements on a classical system such as a computer or piece of paper .if we then find that we can do this efficiently ( in time polynomial in the number of qubits ) then we have an efficient classical algorithm for factoring . 
as no efficient algorithmis known we might be fairly certain that no such efficient simulation is possible and that all simulations of shor s algorithm will be difficult .of course we already have a simulation of shor s algorithm as we have written down the equations ( [ u ] ) and ( [ final ] ) but we can not in general derive any of the properties of the computer , or indeed the probability distribution of the measurement results , without writing out the density matrix for the whole system ( or doing something else equivalently difficult ) at the relevant point .we would like to look at the system after certain gates or sets of gates .we can easily write out the effect of a single - qubit gate on a single qubit , whether pure or mixed , by writing down the unitary matrix for and working out its effect on the state vector or density matrix : however , we can only do this if the qubit is completely disentangled from other systems which we may later wish to use for information processing .this is because we clearly can not , for example , simulate the operation of two single qubit gates on two qubits in an entangled state by tracing out the opposite qubit , implementing the gates on each and taking the combined state after the operations as the tensor product of the two states ( this is not even the case if the two operations are the identity ) .that is , there are entries in the density matrix of the combined system which refer to the combined system rather than to the individual systems themselves .consider some examples in shor s algorithm .the single - qubit gates in the fourier transform are likely to be acting on qubits which are entangled to other qubits in the computer ( possibly many of them ) so to simulate the algorithm correctly we must not consider just the state of the single qubit ( which will be mixed in general as it is entangled to other qubits ) .in contrast we noted in section [ ssmix ] that we may use mixed states in the lower register of shor s algorithm and that these may be mixed because they are entangled to systems _ outside _ the computer . in this casewe _ can _ address only those qubits which are within the computer , even though they may be entangled to outside systems , because we will not later be concerned with these outside systems .let us consider the problem in more detail , considering pure states first .the state vector of an -qubit system in a general entangled state requires complex numbers to be written down and stored .if we wish to classically simulate the application of the single - qubit gate to a qubit that is entangled to other qubits one method of simulation would be to apply the 2 by 2 matrix representing the gate to each of the pairs of amplitudes in state vector corresponding to different states of the remaining qubits .this involves 2 by 2 matrix multiplications and therefore requires operations so to just simulate the effect of a single - qubit transformation in general takes classical resources exponential in , if entanglement exists across the qubits .of course this argument also follows for mixed states - we can not simulate the effect of even one single - qubit gate efficiently if it is entangled to many other qubits within the computer . 
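The pairing of amplitudes described above can be written out directly. The sketch below applies a 2x2 unitary to one qubit of an n-qubit state vector; it is a straightforward numpy illustration of why even this single gate touches all 2^n amplitudes (the qubit-ordering convention and function name are our own).

```python
import numpy as np

def apply_single_qubit_gate(psi, gate, target, n):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit state vector.  The
    2**n amplitudes are reshaped so the target qubit is one tensor axis and the
    gate acts on that axis alone; this is still an O(2**n) operation."""
    psi = psi.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)       # restore the original axis order
    return psi.reshape(2 ** n)

# Hadamard on the first qubit of |00> gives (|00> + |10>)/sqrt(2) in this ordering
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi0 = np.zeros(4); psi0[0] = 1.0
print(apply_single_qubit_gate(psi0, H, 0, 2))
```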
using a method similar to that described above we can perform the 2 by 2 matrix multiplication on blocks ( pre - multiplication by and post - multiplication by ) of the density matrix for the whole computer , thereby requiring operations .but the situation here is more complicated : for pure states it is relatively easy to see if entanglement exists in a simulated algorithm ( although it is by no means trivial ) and whether therefore the above general method needs to be used to simulate gates acting just on parts of the computer . for mixed states ,however , it is harder to determine which qubits are separable . for two qubits to be separable there must _ exist _ a decomposition into pure states where the pure states are all separable : for almost all states there will be a decomposition into non - separable states .finding a separable decomposition however , and indeed finding if it exists , is a difficult task .a mixed state algorithm , then , may be seen as a mixture of pure state algorithms and even if at each stage these pure state algorithms are entangled there may exist other sets of pure state algorithms which are not entangled which mix to give the same mixed state algorithm . andthese disentangled sets of pure algorithms will be different after each gate so we can not easily preclude one existing .let us first deal with how we would measure entanglement between two quantum systems whose combined state is pure .many entanglement measures in some way view entanglement as a resource .the process of teleportation is an important example of a phenomenon that requires entanglement to be observed at all , as it is required in many other applications of quantum information processing including entanglement swapping , dense coding , precision measurements and hiding classical information and in the violation of bell s inequalities . to do perfect teleportation of an unknown single qubit staterequires one of the four bell states ( or epr pairs ) ( eq .[ bellstate1 ] and [ bellstate2 ] ) to be shared between the two separated parties 1 and 2 : any other state of two qubits ( which can not be transformed into one of the above by unitary transformations performed by the two parties separately ) can not be used to perform teleportation perfectly . to quantify the entanglement of a general state could look at how well they perform the teleportation process ( with some sort of fidelity measure between the input state and the output state or the maximum probability for perfect teleportation ) .in fact , we would rather ask how many bell states can be obtained from the given state using only local quantum operations ( such as transformations , addition and removal of local separable systems and measurements ) and classical communication ( locc for short ) .the bell states , therefore , have entanglement of value 1 because you can only obtain one bell state from each bell state supplied ( you can not on average increase the number of bell states with locc ) . 
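As a small numerical companion to the discussion of Bell states, the sketch below writes down the four Bell states and verifies that tracing out either party leaves the maximally mixed single-qubit state, which is one way of seeing that each of them carries exactly one unit of entanglement (the state labels and basis ordering are our own choices).

```python
import numpy as np

def bell_states():
    """The four Bell states in the ordered basis |00>, |01>, |10>, |11>."""
    s = 1 / np.sqrt(2)
    return {"phi+": np.array([s, 0, 0, s]), "phi-": np.array([s, 0, 0, -s]),
            "psi+": np.array([0, s, s, 0]), "psi-": np.array([0, s, -s, 0])}

for name, psi in bell_states().items():
    m = psi.reshape(2, 2)
    rho1 = m @ m.conj().T                     # reduced state of the first party
    print(name, np.round(rho1, 3).tolist())   # always the maximally mixed I/2
```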
for other states of two qubits ,say ( with ) let us consider first what we can do with just one copy of the state .the most efficient method of obtaining a bell state from this state is to use the procrustean method .this only creates a bell state with probability , the rest of the time creating a completely separable state .we can do better than this efficiency if we are supplied with more copies , say , of the state held by the separated parties .now we can increase the number of bell states obtained between the parties _ per copy of _ _ held _ by allowing each of the separated parties to perform _ joint _ operations on those parts of the entangled states each holds .so say is the number of bell pairs obtained , then on average , and with a suitable method , .the exact results for any finite number of copies are known and asymptotically , in the limit of large n ( provided we allow an arbitrarily small probability of error ) , the average number of bell pairs obtained per copy for pure states is given by where , is the von neumann entropy of the density matrix , and and are the partial density matrices of the first and second of the entangled particles . for the state of eq .( [ genpsi ] ) this gives .these , and future results also hold for two party systems composed of individual systems of more than two levels . in the asymptotic limit ( only ) the process is also true in reverse : given bell pairs and taking we can create from them , using locc , copies of the state .we say , then , that for pure states the asymptotic _ entanglement of formation _ , ( the number of bell states we need to pay per copy of state we get in return ) , is the same as the asymptotic _distillable entanglement _ ( the number of bell pairs we can distill out of the state per copy of paid ) . for mixed states the situation is far less clear . and can be defined in the same way but for general mixed states they are not equal - you can not in general get as many bell pairs out of a state as you would put in ( you certainly can not obtain more or you would be able to locally create entanglement _ ad infinitum _ ) . this is due to the fact that a mixed state is to some extent unknown and the randomness inserted on the formation of the state from bell states can not be eliminated unless extra information about the state is obtained .another problem with and defined in this way is that at present there are no analytical methods for calculating either for general mixed states .the only example where an analytical expression does exist for more than some specific subclasses of states is the entanglement of formation of a single two - qubit system .these entanglement measures are not the only possibilities we could produce .others exist such as the relative entropy .so what are the conditions for a mathematical object to be called an entanglement measure ?one set of conditions that are generally accepted as being sensible for an entanglement measure of a state are as follows : \(i ) if is separable .\(ii ) local unitary transformations on leave invariant , i.e. . 
\(iii ) can not , on average , increase under local operations and classical communication .we may also wish to add one more condition to this list .this says that \(iv ) the entanglement measure should be equal to the pure state entanglement measure ( eq .( [ entdef ] ) ) for all pure states .of course not all entanglement measures need obey this condition , indeed the one we will be using below does not .there are many candidates for entanglement measures and all these measures will not agree with each other for mixed states .more importantly , the measures will put a different _ order _ on the states , that is , according to one measure of entanglement the state may be more entangled that but according to another measure of entanglement the state could be more entangled than : indeed , it was shown in that any two entanglement measures that agree for pure states but not for mixed states _ must _ put a different order on the states .we must accept , then , that entanglement measures may put different orders on states .the only alternative is to declare one entanglement measure as the correct one , which immediately prevents us from examining how we might prepare entanglement and how we might use it .for the present work we need a measure that is easy to calculate and has an analytical form for general mixed states . we will call this measure the _ logarithmic negativity _ .it is defined as follows : first denote the matrix elements of the density matrix _ in some tensor product basis _ by the _ partial transpose _ with respect to system 2 is then defined in this notation as ( the labels and have swapped places ) .it has been shown that the positivity of this new matrix ( that is , the positivity of all its eigenvalues ) is a necessary condition for the state to be separable .therefore , is now defined as the log of the sum of the absolute values of the eigenvalues of the new matrix or ( the eigenvalues and therefore this measure are also independent of the particular tensor product basis in which the state is considered ) .this can be written in more compact form as as mentioned above this measure does not agree with the pure state entanglement measure of eq .( [ entdef ] ) .however , one particularly useful property of is that it is an upper bound for : what is more important is that if we can be sure that the two party system does not have distillable entanglement , although it is not known whether the reverse statement is true or not : we can not say that distillable entanglement does exist if .also the fact that does not mean that the state is separable ( there exist states with that are inseparable , which are known as _ bound entangled states _ as the entanglement can not be distilled into bell states but is somehow bound from us ) .entanglement does not only exist between two - party ( bipartite ) systems , it can also exist between three or more parties .one example is the three - party ghz state : one particular property of this state is that it has no bipartite entanglement in the sense that if we trace out the third party , say , the remaining bipartite state is given by the entanglement in this state is of different nature to the one contained in an epr state because it is impossible to inter - convert ghz states and epr pairs reversibly . 
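The two quantities introduced above, the pure-state measure of eq. ([entdef]) and the logarithmic negativity, are easy to compute numerically. The sketch below evaluates both for a partially entangled pure state of two qubits and illustrates that they indeed disagree, with the logarithmic negativity lying above the entropy of the reduced state, consistent with its role as an upper bound on distillable entanglement (the function names, and the choice of taking the partial transpose with respect to the second subsystem, are ours).

```python
import numpy as np

def entropy_of_entanglement(psi, dim_first):
    """Von Neumann entropy S(rho_1) of the reduced state of the first
    subsystem of a pure bipartite state, i.e. the pure-state measure."""
    m = psi.reshape(dim_first, psi.size // dim_first)
    evals = np.linalg.eigvalsh(m @ m.conj().T)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def logarithmic_negativity(rho, dims=(2, 2)):
    """log2 of the sum of the absolute values of the eigenvalues of the
    partial transpose with respect to the second subsystem."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2).transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)
    return float(np.log2(np.sum(np.abs(np.linalg.eigvals(r)))))

# cos(t)|00> + sin(t)|11>: the two measures coincide at t = 0 and t = pi/4
# but differ in between, e.g. at t = pi/8
t = np.pi / 8
psi = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])
rho = np.outer(psi, psi)
print(entropy_of_entanglement(psi, 2))    # ~0.60
print(logarithmic_negativity(rho))        # ~0.77, upper bound on the distillable part
```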
while one can create a ghz state from two epr pairs , one can only ever obtain one epr pair from one ghz state .what is not known for three party systems is whether ghz states and epr pairs are the only different kinds of entanglement .what is known is that there are in fact more types of multipartite entanglement for systems of _ more _ than three parties .three - party entanglement , then , may contain two - party entanglement and under certain conditions we can locally reversibly ( under locc ) transform between the two .how then do we measure the entanglement of a multipartite system ?because of the different types of entanglement involved having one general measure for all these types is difficult ( the relative entropy suitably redefined for multipartite systems is perhaps one exception although there are still many problems ) .our approach will be to use a bipartite entanglement measure to verify the existence of entanglement between all the different qubits in the computer .let us suppose that we have an entanglement measure for bipartite systems , or a way of verifying whether a state is separable or not between the two parts of the bipartite system .we now use this measure on all possible bipartite partitionings of an system .for example , for a system of 4 qubits labeled as 1 , 2 , 3 , and 4 the possible bipartite partitionings are where means system 1 is considered as being partitioned from systems 2 , 3 and 4 . for general are such partitionings . clearly ,if there is entanglement between the two sides for at least one partitioning , then the state is entangled .the question remains as to whether the converse is true , that is , if all bipartite partitionings are separable is the state completely separable i.e. the multipartite state can be written as a mixture of product states ?for pure states it is clear that this is also true ( in fact , you need only look at some particular subset of the possible partitionings ) .but for mixed states it is _ not _ true - there are states that are separable across all bipartite partitionings ( and therefore have positive partial transpose ( ppt ) ) but that are not completely separable .however this entanglement can not be distillable entanglement - if we could distill it into pure multipartite entanglement by local operations it could in turn be changed into bipartite entanglement between qubits . our method in the simulations , then , will be to use the _ logarithmic negativity _measure across all bipartite partitionings of the qubits in the algorithm .this will tell us if any distillable bipartite entanglement exists in the algorithm but will also show us if distillable multipartite entanglement of any form exists .this follows from the fact that this measure in effect verifies whether the state is ppt and is therefore not distillable across any bipartite partitioning .if this is the case then no multipartite distillable entanglement can exist ( we can not distill the entanglement into pure state entanglement ) otherwise we would again be able to distill this into ( pure ) bipartite entanglement of some form .so , we can indeed verify in this way whether any distillable entanglement of any form exists although we can not preclude that there exist non - distillable entangled states .let us first introduce the basic method of our simulations .the states and gates are all stored in matrix form , the former as density matrices ( because we will in general be using mixed states ) and the gates as specific unitary transformations . 
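Before describing the simulation itself, the bookkeeping of cuts used in this entanglement test can be sketched as follows; the function enumerates every bipartite partitioning of the qubits exactly once (there are 2^(n-1) - 1 of them), reproducing the seven partitionings listed above for four qubits.

```python
from itertools import combinations

def bipartite_partitionings(n):
    """All splits of qubits 0..n-1 into two non-empty parts, each listed once;
    the entanglement measure is then evaluated across every such cut."""
    qubits = list(range(n))
    cuts = []
    for size in range(1, n // 2 + 1):
        for left in combinations(qubits, size):
            right = tuple(q for q in qubits if q not in left)
            if size == n - size and left > right:
                continue                     # do not count a half/half split twice
            cuts.append((left, right))
    return cuts

for cut in bipartite_partitionings(4):       # prints the 7 partitionings of 4 qubits
    print(cut)
```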
qubits therefore require density matrices and unitary transformations .after inputting the initial state of the computer we simulate the effect of each gate by pre- and post - multiplying the density matrix by the unitary transformation and its hermitian conjugate representing the gate to obtain the new state of the computer .we will not simulate the effect of all single- , two- or three - qubit gates but will only simulate the algorithm as it appears in figure [ diag2 ] where we have convenient points at which to examine the computer i.e. we will only simulate the controlled modulo multiplication gates as a single gate and the gates of the fourier transform .we will also be using this version of the algorithm as it reduces the number of qubits needed ( by about 2/3 ) which will result in a great decrease in time and space resources the algorithm requires to be simulated .it was noted in section [ ssimalg ] that , in general , because of the potential entanglement across all the qubits in the algorithm , the simulation of even a single qubit gate requires an exponential number of operations .for example , if the single qubit hadamard transform is acting on the first qubit in a computer consisting of 2 qubits then the unitary transformation required is and if it is acting on the second qubit to simulate the effect of these gates on the 2 qubit state by straight matrix multiplication however is not optimal . from the form of the above transformations it is clear that it is better to act with the hadamard transform on blocks . in the case of acting on the first qubitthe density matrix written in block form evolves as here of course ( saving us some calculation time ) as it is a density matrix and also for the hadamard transformation .the same is true for acting on the second qubit except the elements in the 2 by 2 blocks upon which acts ( within the 4 by 4 density matrix ) are separated as is shown diagrammatically here : ( lower case letters correspond to elements of blocks ) . for a general by density matrixthe simulation of a single qubit gates thus requires operations . for gates acting on this number rises to operations .first of all we may need to trace out one ( or more ) of the qubits ( labeled by ) of the computer to leave the density matrix in matrix form this tracing step effectively involves adding two by sub - matrices together and zeroing the rest of the by matrix .thus the tracing step requires operations ( although these operations are additions rather than multiplications and are therefore considerably quicker ) .we can simulate single - qubit projective measurements in a similar way to that of simulating gates .we will assume , without loss of generality , that all measurements are done in the computational basis , as to perform a measurement in a different basis ( even an entangled basis ) we can unitarily transform the system ( with entangling gates if necessary ) and measure in the computational basis .first of all we must calculate the probabilities of a measurement on the qubit yielding the result ( ) or ( ) .this is done by summing those elements on the diagonal of the density matrix for the whole computer which correspond to the measurement result . for a monte carlo simulation of measurement resultswe may now generate a random number , , from a uniform linear distribution between 0 and 1 , and if take the simulated measurement result to be , otherwise it is .the measurement changes the state of the computer . 
to simulate thiswe must use two projection operators and act on the state with either ( if the measurement result was ) or ( if the result was ) just as we would act with a single qubit gate .this will , of course , result in just setting 3/4 of the element of the density matrix to zero .this gives us the ( subnormalised ) state of the measurement collapsed computer , including the ( partial or total ) collapse of any qubits that are entangled to the measured qubit .we must then be sure to renormalize the state of the whole computer .this can be done easily by dividing each entry of the density matrix for the collapsed computer by the trace of the density matrix ( or equivalently the measurement probabilities , or ) .the simulation of the measurement therefore takes operations but notice that the quantum computer does this in linear time , as it only needs to find qubit and measure it . for the full monte - carlo type simulation we must of course repeat the simulation of the algorithm a number of times and average any properties of the system we obtain during each simulation .we can also re - prepare the measured qubit in the required state using this method by another application of single qubit gates . if we wish to prepare the state thenif the measurement result was we apply and if it was we apply a state flip followed by .we can now simulate any algorithm we wish with any set of gates and any type of measurement ( a positive operator valued measure ( povm ) can be simulated by adding auxiliary systems , performing unitary transformations on these systems together with the computer and measuring the auxiliary systems ) .we are mainly interested in the degree on entanglement at each stage .as mentioned in section [ ssmixentmeas ] the degree of entanglement does not change under local unitary operations so we need only examine the entanglement after operations on more than one qubit . from fig .[ diag2 ] this will be after the controlled- operations ( where entanglement may increase of decrease ) as well after the measurements ( where the entanglement may also increase or decrease but _ on average _ it should never increase - local projective operations can never increase the amount of entanglement on average ) .as in section [ smulti ] we will use the logarithmic negativity measure of section [ ssmeasmix ] , calculated and averaged across all possible bipartite partitionings of the system .if we are using a monte carlo simulation these must of course be averaged across repeated simulations of the algorithm as different entanglements will be observed with different measurement results .let us now check the time efficiency ( or lack of it ) of shor s algorithm simulated by the above method : \(i ) to form those gates acting on all qubits requires steps and there are of these giving steps altogether \(ii ) for these gatesthe matrix multiplications for the simulation of the gate require operations .there are of these gates giving operations in all .\(iii ) each application of a single qubit gate ( including measurement projections ) takes steps and there are of these , which is operations altogether .\(iv ) at of the steps we wish to examine the entanglement times .this requires steps each for the partial transposition , for the numerical eigenvalue routine and steps for the remaining calculation of our entanglement measure . 
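The two housekeeping operations just described, tracing a qubit out of the full density matrix and simulating a projective measurement with renormalisation, look as follows in a minimal numpy sketch (qubit 0 is taken as the most significant index; the names and conventions are ours).

```python
import numpy as np

def trace_out_qubit(rho, target, n):
    """Reduced density matrix after tracing qubit `target` out of an n-qubit
    state; as noted in the text, this just adds two sub-matrices together."""
    r = rho.reshape([2] * (2 * n))
    r = np.trace(r, axis1=target, axis2=n + target)
    return r.reshape(2 ** (n - 1), 2 ** (n - 1))

def measure_qubit(rho, target, n, rng):
    """Monte Carlo computational-basis measurement of one qubit: read p(0)
    off the diagonal, draw the outcome, project and renormalise the whole
    computer.  Returns the outcome and the collapsed state."""
    bit_of = (np.arange(2 ** n) >> (n - 1 - target)) & 1
    p0 = np.real(np.diag(rho))[bit_of == 0].sum()
    outcome = 0 if rng.random() < p0 else 1
    P = np.diag((bit_of == outcome).astype(float))
    rho_new = P @ rho @ P
    return outcome, rho_new / np.trace(rho_new)

# measuring one qubit of a Bell state collapses the other qubit as well
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
print(trace_out_qubit(rho, 1, 2))                       # maximally mixed qubit
print(measure_qubit(rho, 0, 2, np.random.default_rng(0)))
```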
from the most inefficient process ( numerical eigenvalue calculation )this gives operations for the whole .this then is the most important stage of the simulation in terms of the efficiency of the algorithm . in the above accounting we have not included any contribution from the fact that we have to repeat the whole simulation many times for the monte carlo method ( section [ ssssimmeas ] ) .let us examine this more carefully .if we wish to estimate the probability distribution with two possibilities to a certain accuracy , , we must repeat the monte carlo simulation approximately some number times . at the measurement of the algorithmthere would be states the computer could be in at the point ( from the previous possible measurement results ) . for each of these we would need to estimate the probability distribution giving simulations we need to perform , this number getting exponentially worse at each step .so , given a fixed number of simulations the accuracy of the probability estimate and therefore the estimate of any properties of the system can get exponentially worse at each measurement step .this leads us to an alternative simulation method , a tree type simulation where every possibility of each measurement is considered .we have measurements during the algorithm and at each one the algorithm is given two possible paths to use _ for each of the paths already available_. so the computer can take two different paths ( be in two different states ) at the first measurement , four different paths at the second and ( ) different paths after the last measurement .this gives us possible states of the computer through the algorithm .and we want to sample the entanglement in the computer at two stages between each measurement , after the measurement and after the controlled modulo multiplication gates .we can calculate the probability of each of the measurement results at each stage of the algorithm and find the probability for each possible path leading to that stage .having calculated these we can calculate exact averages for any properties of the system at any stage if we know the properties we wish to examine at each branch of the tree .this gives us , of course , another exponential overhead in our simulation but , as we saw , this was the same for the monte carlo simulation . 
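The difference between the two simulation styles can be illustrated with a toy skeleton of the tree method: every measurement branch is followed with its exact probability, so properties are averaged exactly instead of being estimated from repeated Monte Carlo runs. The skeleton below is a deliberately simplified sketch (full 2^n by 2^n unitaries, a single measured qubit per round), not the actual simulator used for the results.

```python
import numpy as np

def tree_average(rho, round_unitaries, target, n, prop):
    """Follow both outcomes of every measurement, weight each branch by its
    probability, and return the exact average of prop(state) after the final
    round; no repetition of the simulation is needed."""
    bit_of = (np.arange(2 ** n) >> (n - 1 - target)) & 1
    projs = [np.diag((bit_of == b).astype(float)) for b in (0, 1)]

    def walk(state, k, weight):
        if k == len(round_unitaries):
            return weight * prop(state)
        state = round_unitaries[k] @ state @ round_unitaries[k].conj().T
        total = 0.0
        for P in projs:
            branch = P @ state @ P
            p = np.real(np.trace(branch))
            if p > 1e-12:
                total += walk(branch / p, k + 1, weight * p)
        return total

    return walk(rho, 0, 1.0)

# toy run: two rounds of a Hadamard on qubit 0 of |00><00|, averaging the purity
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))
rho0 = np.zeros((4, 4)); rho0[0, 0] = 1.0
print(tree_average(rho0, [U, U], 0, 2, lambda r: np.real(np.trace(r @ r))))
```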
the high susceptibility of quantum computers to noise from the environment in which the computer is run has been known for some time .this noise comes from entanglement of the computer with the environment suffered during the execution of the algorithm .further sources of error are inaccuracies in the measurements and implementation of the gates .fortunately it has been shown that _error - correcting codes _exist .these encode the state of a qubit into the joint ( entangled ) state of many qubits such that random errors occurring independently on the qubits below a certain ( reasonable ) threshold can be corrected back to the correct state using measurements and unitary transformations based on the measurement results .it has also been shown that these codes can be used in _fault tolerant _ quantum error correcting schemes , that is , the measurements and transformations that implement the error correcting code itself need not be implemented perfectly .the fact that shor s algorithm can be implemented with noisy mixed states leads us to asking if this in any way increases the algorithm s robustness to noise , or whether the coherence within the pure state decomposition of the mixed states needs to be preserved .it will be interesting , then , to simulate the effects of random noise injected into the computer and it s effect on the efficiency with which the number is factorized .there are many ways of simulating noise and many types of noise we could use in the simulation .we have selected two types , noise by measurement and noise by random pauli operations .it should be noted , however , that different types of noise , whether in quantum or classical scenarios , tend to have similar qualitative effects andso we need not worry too much about the precise nature of each . here, we address each qubit in turn and with a given probability apply a measurement on that qubit in the computational basis ( although it need not be in this basis for general noise ) . the probability for the two outcomesis governed by the state of the particle , as described in section [ ssssimmeas ] .this collapses the state of the computer in some way .we will , with the given probability , apply the noise step to every qubit after each gate during the algorithm . of course, the controlled modulo multiplication gate consists of many gates acting on small numbers of qubits so would have more time to be effected by noise .we should also carefully note that when running the algorithm we would not know the result of the measurement taken by the measurement noise and the algorithm would at that stage become more mixed ( the computer becomes a weighted mixture of the two states post measurement ) as we do not know which measurement result was found .in fact we would not even know if a noise step had been applied . here, again addressing each qubit in turn , with a certain probability we will apply one of the three pauli operators ( ) , or the identity operator ( ) : to the qubit , these four operations occurring with equal probability . again we should be careful to point out that a noise step would leave the computer in a equal mixture of the four states that result after the application of the four pauli operations .thus the qubit will have its state completely randomized ( although it could still be entangled to other qubits ) . 
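Both noise models translate directly into operations on the density matrix: since neither the result of the spurious measurement nor the choice of Pauli operator is known, the state becomes the corresponding mixture. A minimal sketch of the two channels acting on one qubit of the computer follows (the embedding, parameter names and qubit ordering are our own).

```python
import numpy as np

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def embed(op, target, n):
    """Single-qubit operator on qubit `target`, padded with identities."""
    full = np.eye(1, dtype=complex)
    for q in range(n):
        full = np.kron(full, op if q == target else np.eye(2))
    return full

def pauli_noise(rho, target, n, p):
    """With probability p the qubit suffers one of I, X, Y, Z chosen uniformly;
    not knowing which, the state becomes the equal mixture of the four results."""
    hit = sum(embed(P, target, n) @ rho @ embed(P, target, n).conj().T
              for P in PAULIS) / 4
    return (1 - p) * rho + p * hit

def measurement_noise(rho, target, n, p):
    """With probability p the qubit is measured in the computational basis and
    the unread result leaves a mixture of the two projected states."""
    bit_of = (np.arange(2 ** n) >> (n - 1 - target)) & 1
    P0 = np.diag((bit_of == 0).astype(complex))
    P1 = np.diag((bit_of == 1).astype(complex))
    return (1 - p) * rho + p * (P0 @ rho @ P0 + P1 @ rho @ P1)

# both channels degrade the coherence of |+><+| as p grows
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(np.round(pauli_noise(rho, 0, 1, 0.5), 3))
print(np.round(measurement_noise(rho, 0, 1, 0.5), 3))
```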
as we want to examine the entanglement in the computer it will be interesting to investigate ways of reducing the entanglement and seeing how this affects the computer .we have already introduced mixed states into part of the quantum computer and seen that it does not affect the efficiency very much .but we could also mix the state of the control qubit .this must affect the efficiency of the algorithm ( a totally random algorithm can not be any use to us ) and it will be useful to compare this to the change in entanglement .we will do this by preparing ( and re - preparing ) the control qubit in the state where and . using the methods of our quantum computer simulatorthis tensor product operation is most conveniently done as follows : we assume that the control qubit is in state ( if it is in the state after measurement it can be flipped easily ) ; if we denote the state of the whole computer ( including the control qubit ) by we calculate the temporary density matrix where denotes the single qubit flip operator ( section [ sssotherproc ] ) ; we can then mix this with the original density matrix for the computer in the required proportions : and a final hadamard transformation on the control qubit gives us the required state of the computer .changing the parameter between 0 and allows us to decrease or increase the mixedness of the computer .firstly we will look at the entanglement in the pure and mixed state single control qubit algorithms ( calculated using the tree simulation method ( section [ sstree ] ) ) .we will look at the average bipartite entanglement as measured by the logarithmic negativity entanglement measure ( section [ ssmeasmix ] ) averaged across all possible bipartite boundaries at each stage of the algorithm ( section [ smulti ] ) .we will also average across all possible algorithms factoring numbers of 4 binary digits and 5 binary digits .these results are shown in figs .[ entn5 ] and [ entn6 ] .these two sets of results have a very similar form , in particular the entanglement in the mixed state algorithms closely mirrors the mixedness of the quantum computer ( the von neumann entropy of the state of the whole computer ) at each stage . also note that the amount of entanglement increases towards then end of the algorithm where the most significant ( i.e. 
the highest ) bits of the number are decided and the least significant bits of the period are found .we can see immediately that in all algorithms , both pure and mixed , according to our entanglement measure , entanglement exists although it is up to three times lower in fig .[ entn5 ] and [ entn6 ] for the mixed state algorithm .the mixed state algorithms are , however , at least around half as efficient as the pure state algorithms for the values of considered ( which can be seen by calculating for each ) .next we examine the effect of noise on particular algorithms , namely for , ( figs .[ measnoise ] and [ paulinoise ] ) which has period and , ( figs .[ n6measnoise ] and [ n6paulinoise ] ) which has period .both measurement and pauli noise were implemented as described in section [ ssnoise ] .additionally we considered the situation where the noise was not allowed to act on the control qubit .for the case when noise is allowed to act on all qubits we see an exponential drop off in the efficiency of the algorithm with increasing noise level for both types of noise .however , when noise is not allowed to act on the control qubits the efficiency only falls to the efficiency level of the mixed state algorithm which , as has been noted , is efficient enough .the is due to the way the noise is implemented and the fact that the period is 4 . for algorithms with period of the form , for some integer , each of the controlled modulo multiplication gates can be applied on any state independently of the state of the system up to that point .the reason is as follows : when the period is of the form the first controlled modulo multiplication gates are in fact the identity operation ( note , however , that individual operations within the gate decomposition of the controlled modulo multiplications will not be the identity operation so noise would then greatly affect the efficiency of the algorithm ) and the measurement result after each will be .the next gate is now the only relevant gate .if the measurement result after this gate is the period will be found correctly . in this case ? = 0 or 1 .now , the fraction will have denominator , independent of the results of these remaining measurements .noise in the pure state algorithm applied to all but the control qubit , then , prepares these qubits in some random pure state ( i.e. a mixed state ) .the first stages of the algorithm do nothing but we will correctly find the period if the next measurement result is , which will occur with the same probability as for the mixed state algorithm .this is not the case for algorithms of other periods of course as the initial operations are not identity operations .an example is the , algorithm for which the effects are noise are shown in figs .[ n6measnoise ] and [ n6paulinoise ] these results show us that for general algorithms we can not increase the mixing of algorithms during its running by re - preparing another maximally mixed state in the lower register of qubits after each measurement without reducing the efficiency of the algorithm considerably - the previous measurements have prepared a state which must not be significantly altered before the next stage .if the period is of the form we can do this ( although having a period of this form is presumably exponentially unlikely for increasing ) but this will have no effect on the entanglement until the last steps . 
for other algorithms this will reduce the entanglement but will also dramatically reduce the efficiency .let us now look at the effect of mixing the control qubit , as described in section [ ssmixing ] .we will do this for both the pure state algorithm ( where all other qubits are initially pure ) and the mixed state algorithm .[ mixq1pure ] and [ mixq1mix ] show the average bipartite entanglement throughout the entire algorithm versus the mixing parameter for the pure and mixed state algorithms with and respectively .also shown is the probability that each algorithm correctly finds the period , .notice in particular how the entanglement in the mixed state algorithm approaches zero before the algorithm becomes entirely random . for the pure state algorithmit does not do so until is reached .the point at which the average entanglement is zero ( to machine precision ) in the mixed state algorithm is around for the , case .this illustrates how the randomness in mixed states can completely mask the distillable entanglement even before the algorithm becomes entirely random . for the pure state case we have mixed two pure state algorithms ( with orthogonal control qubit states ) , both of which do produce entanglement , and found that the entanglement does not disappear until we mix them maximally . for the mixed state algorithm we have also mixed two orthogonal algorithms which both contain entanglement butthe distillable entanglement has been lost _before _ we lose all information about which algorithm is running . compare these with similar results for the algorithms with and in figs .[ mixq1pure2 ] and [ mixq1mix2 ] .again we see that the entanglement in the mixed state algorithm is zero before the control qubit is maximally mixed , although this point occurs at the higher value of around .finally we combine the results above to plot the probability of finding against the average entanglement .this is shown in figs .[ mixq1pvse ] and [ mixq1pvse2 ] . 
for each particular algorithm ,then , we see a definite trend of an increase in entanglement giving an increase in the probability of correctly finding the period , although initially increasing the entanglement from zero does not produce such a large increase in this probability .we also see that the entanglement required for a given probability is lower for the mixed state algorithm .the maximal probability of correctly finding the period is of course lower for the mixed state algorithm but in the limits this is negligibly so .it has been shown that it is possible to efficiently factorize with shor s algorithm using only one initially pure qubit and a supply of initially maximally mixed qubits .we have also seen that for algorithms with small numbers of qubits the mixing of the algorithm remains high throughout .it is then a natural question to see whether any entanglement is involved in the execution of the quantum algorithm .we find that the high degree of mixing , however , does not preclude the existence of entanglement and indeed in these example algorithms entanglement does appear to exist , even when the state of the computer starts and remains in a highly mixed state .conversely , if we try to reduce this entanglement by introducing further mixing into the control qubit we do reduce the entanglement in the computer but at the expense of a reduction in efficiency of the computation .the mixing of the control qubit also sheds light on the nature of entanglement itself , that is , we do not need to lose all information about the nature of an entangled state before we are completely unable to extract entanglement from it , as we have seen when mixing two orthogonal entangled mixed state algorithms .we have also seen that shor s algorithm operated on mixed states is nevertheless susceptible to noise of different kinds and the algorithm does not in general appear to have any increased robustness to noise ( except where the noise model on particular algorithms is too simplified to be accurate ) even though this algorithm can be run using highly mixed states .what remains an open question is as to how the above effects behave for algorithms of increasing numbers of qubits .presumably the entanglement does remain for algorithms of large numbers of qubits and the algorithm remains highly susceptible to noise but because of the exponential nature of simulating the algorithms verifying this is an extremely difficult task .the authors would like to thank r. jozsa , s. virmani , v. kendon and w. j. munro for helpful comments .this work is supported by the united kingdom engineering and physical sciences research council epsrc , the leverhulme trust , two eu tmr - networks erb 4061pl95 - 1412 and erb fmrxct96 - 0066 and the eu project equip .r. jozsa , ` geometric issues in the foundations of science ' ed .s. huggett et .1997 ) , quant - ph/9707034 ; a. ekert and r. jozsa , phil .( lond . ) 1998 , proceedings of royal society discussion meeting ` quantum computation : theory and experiment ' , november 1997 , quant - ph/9803072 .bennett and s. wiesner , phys .lett . * 69 * , 2881 ( 1992 ) ; t. hiroshima , lanl e - print quant - ph/0009048 ; s. bose , m.b .plenio and v. vedral , j. mod. opt . * 47 * , 291 ( 2000 ) ; d.j .wineland , j.j .bollinger , w.m .itano and d.j .heinzen , phys .a * 50 * , 67 ( 1994 ) ; s.f .huelga , c. macchiavello , t. pellizzari , a.k .ekert , m.b .plenio , and j.i .cirac , physical review letters * 79 * , 3865 ( 1997 ) ; r. 
jozsa , d.s .abrams , j.p .dowling , c.p .williams , phys .85 * , 2010 ( 2000 ) ; s. bose , s.f .huelga and m.b .plenio , phys .a * 63 * , 32313 ( 2001 ) ; a.m. childs , j. preskill , j. renes , j. mod . opt . *47 * , 155 - 176 ( 2000 ) ; p. kok , a. n. boto , d. s. abrams , c. p. williams , s. l. braunstein , j. p. dowling , lanl e - print quant - ph/0011088 ; b.m .terhal , d.p .divincenzo , and d. leung , lanl e - print quant - ph/0011042 .v. vedral , m.b .plenio , m. rippin , and p.l .knight , phys .a * 78 * , 2275 ( 1997 ) ; v. vedral and m.b .plenio , phys .a * 57 * , 1619 ( 1998 ) ; m.b .plenio , s. virmani and p. papadopoulos , j. phys .a * 33 * , 193 ( 2000 ) .d. m. greenberger , m. a. horne and a. zeilinger , going beyond bell s theorem , in _bell s theorem , quantum theory and conceptions of the universe _ , edited by m. kafatos ( kluwer academics , dordrecht , the netherlands , 1989 ) , pp 73 - 76 ; d. bouwmeester , j .- w .pan , m. daniell , h. weinfurter and anton zeilinger , phys . rev. lett . * 82 * , 1345 ( 1999 ) d.p .divincenzo , science * 270 * , 255 ( 1995 ) , m.b .plenio and p.l .knight , phys .a * 53 * , 2986 ( 1996 ) ; m.b .plenio and p.l .knight , proc .a * 453 * , 2017 ( 1997 ) ; m.b .plenio and p.l .phys . * 70 * , 101 - 144 ( 1998 ) .shor , phys .a * 52 * , 2493 ( 1995 ) ; r. calderbank and p.w .shor , phys .a * 54 * , 1098 ( 1996 ) ; a. steane , proc .452 * , 2551 ( 1996 ) ; a. ekert and c. macchiavello , phys .* 77 * , 2585 ( 1996 ) ; m.b .plenio , v. vedral and p.l .knight , physical review a * 55 * , 67 ( 1997 ) ; a.m. steane ; phil .soc a * 356 * 1739 - 1757 ( 1998 ) .shor , in 37th symposium on foundations of computing , ieee computer society press , 1996 , pp .56 - 65 also in lanl e - print quant - ph/9605011 ; d.p .divincenzo and p.w .shor , phys . rev. lett . * 77 * , 3260 ( 1996 ) ; j. preskill , proc .a * 454 * 385 ( 1998 ) ; a. steane , phys .lett . * 77 * , 793 ( 1996 ) ; m.b .plenio , v. vedral and p.l .knight , physical review a * 55 * , 4593 ( 1997 ) ; d. gottesman , phys .a * 57 * 127 - 137 ( 1998 ) ; a. steane , nature * 399 * , 124 ( 1999 ) .
we demonstrate that , in the case of shor s algorithm for factoring , highly mixed states allow efficient quantum computation ; indeed , factorization can be achieved efficiently with just one initial pure qubit and a supply of initially maximally mixed qubits ( s. parker and m. b. plenio , phys . rev . lett . * 85 * , 3049 ( 2000 ) ) . this leads us to ask how the mixing affects the entanglement in the algorithm . we therefore investigate the behavior of entanglement in shor s algorithm for small numbers of qubits by classical computer simulation of the quantum computer at different stages of the algorithm . we find that entanglement is an intrinsic part of the algorithm and that the entanglement throughout the algorithm appears to be closely related to the amount of mixing . furthermore , if the computer is in a highly mixed state , any attempt to remove entanglement by further mixing of the algorithm results in a significant decrease in its efficiency .
the most famous algorithmic problem dealing with online assignment is arguably online bin packing . in this problem ,known since the 1970s , items of size between and arrive in a sequence and the goal is to pack these items into the least number of unit - sized bins , packing each item as soon as it arrives .online bin stretching , which was introduced by azar and regev in 1998 , deals with a similar online scenario .again , items of size between and arrive in a sequence , and the algorithm needs to pack them as soon as each item arrives , but it has two advantages : ( i ) the packing algorithm knows , the number of bins that an optimal offline algorithm would use , and must also use only at most bins , and ( ii ) the packing algorithm can use bins of capacity for some .the goal is to minimize the stretching factor .while formulated as a bin packing variant , online bin stretchingcan also be thought of as a semi - online scheduling problem , in which we schedule jobs in an online manner on exactly machines , before any execution starts .we have a guarantee that the optimum offline algorithm could schedule all jobs with makespan .our task is to present an online algorithm with makespan of the schedule being at most .* motivation .* we give two of applications of online bin stretching . _ server upgrade ._ this application has first appeared in . in thissetting , an older server ( or a server cluster ) is streaming a large number of files to the newer server without any guarantee on file order .the files can not be split between drives .both servers have disk drives , but the newer server has a larger capacity of each drive .the goal is to present an algorithm that stores all incoming files from the old server as they arrive . _shipment checking ._ a number of containers arrive at a shipping center .it is noted that all containers are at most percent full .the items in the containers are too numerous to be individually labeled , yet all items must be unpacked and scanned for illicit and dangerous material . after the scanning , the items must be speedily repackaged into the containers for further shipping . in this scenario ,an algorithm with stretching factor can be used to repack the objects into containers in an online manner .* history . * online bin stretchingwas proposed by azar and regev . already before this , a matching upper and lower bound of for two bins had appeared .azar and regev extended this lower bound to any number of bins and gave an online algorithm with a stretching factor .the problem has been revisited recently , with both lower bound improvements and new efficient algorithms . on the algorithmic side , kellerer and kotov achieved a stretching factor and gabay et al . have achieved .the best known general algorithm with stretching factor was presented by the authors of this paper in . in the case with only three bins , the previously best algorithm was due to azar and regev , with a stretching factor of . on the lower bound side , the lower bound of surpassed only for the case of three bins by gabay et al . , who show a lower bound of , using an extensive computer search .the preprint was updated in 2015 to include a lower bound of for four bins . * our contributions . * in section [ sec : three118 ] we present an algorithm for three bins of capacity .this is the first improvement of the stretching factor of azar and regev . 
in section [ sec :lowerbound ] , we present a new lower bound of for online bin stretchingon three bins , along with a lower bound of on four and five bins which is the first non - trivial lower bound for four and five bins .we build on the paper of gabay et al . but significantly change the implementation , both technically and conceptually .the lower bound of for four bins is independently shown in .a preliminary version of this work appeared in waoa 2014 and sofsem 2016 .* related work . * the np - hard problem bin packingwas originally proposed by ullman and johnson in the 1970s .since then it has seen major interest and progress , see the survey of coffman et al . for many results on classical bin packing and its variants . while our problem can be seen as a variant of bin packing , note that the algorithms can not open more bins than the optimum and thus general results for bin packingdo not translate to our setting .as noted , online bin stretchingcan be formulated as the online scheduling on identical machines with known optimal makespan .such algorithms were studied and are important in designing constant - competitive algorithms without the additional knowledge , e.g. , for scheduling in the more general model of uniformly related machines . for scheduling , also other types of semi - online algorithms are studied .historically first is the study of ordered sequences with non - decreasing processing times .most closely related is the variant with known sum of all processing times studied in and the currently best results are a lower bound of and an algorithm with ratio , both from .note that this shows , somewhat surprisingly , that knowing the actual optimum gives a significantly bigger advantage to the online algorithm over knowing just the sum of the processing times ( which , divided by , is a lower bound on the optimum ) .* definitions and notation . * our main problem , online bin stretching , can be described as follows : * input : * an integer and a sequence of items given online one by one .each item has a _ size _ ] , where is also the capacity of the bins which the optimal offline algorithm uses . the online algorithm for online bin stretchinguses bins of capacity , .the resulting stretching factor is thus .we scale the input sizes by .the stretched bins in our setting therefore have capacity and the optimal offline algorithm can pack all items into three bins of capacity each .we prove the following theorem .[ thm:3binsalgo ] there exists an algorithm that solves online bin stretchingfor three bins with stretching factor .the three bins of our setting are named , , and .we exchange the names of bins sometimes during the course of the algorithm .a natural idea is to try to pack first all items in a single bin , as long as possible . in general , this is the strategy that we follow .however , somewhat surprisingly , it turns out that from the very beginning we need to put items in two bins even if the items as well as their total size are relatively small .it is clear that we have to be very cautious about exceeding a load of 6 .for instance , if we put 7 items of size 1 in bin , and 7 such items in , then if two items of size 16 arrive , the algorithm will have a load of at least 23 in some bin .similarly , we can not assign too much to a single bin : putting 20 items of size 0.5 all in bin gives a load of 22.5 somewhere if three items of size 12.5 arrive next .( starting with items of size 0.5 guarantees that there is a solution with bins of size 16 at the end . 
) on the other hand , it is useful to keep one bin empty for some time ; many problematic instances end with three large items such that one of them has to be placed in a bin that already has high load . keeping one bin free ensures that such items must have size more than 11 ( on average ) , which limits the adversary s options , since all items must still fit into bins of size 16 .deciding when exactly to start using the third bin and when to cross the threshold of 6 for the first time was the biggest challenge in designing this algorithm : both of these events should preferably be postponed as long as possible , but obviously they come into conflict at some point . before stating the algorithm itself , we list a number of _ good situations _ ( gs ). these are configurations of the three bins which allow us to complete the packing regardless of the following input .it is clear that the identities of the bins are not important here ; for instance , in the first good situation , all that we need is that _ any _ two bins together have items of size at least 26 .we have used names only for clarity of presentation and of the proofs .a _ partial packing _ of an input sequence is a function that assigns a bin to each item from a prefix of the input sequence .[ lem : gs1 ] given a partial packing such that and is arbitrary , there exists an online algorithm that packs all remaining items into three bins of capacity .since the optimum can pack into three bins of size , the total size of items in the instance is at most .if two bins have size , all the remaining items ( including the ones already placed on ) have size at most .thus we can pack them all into bin .[ lem : gs2 ] given a partial packing such that ] , otherwise we reach 2 .[ lem : gs3 ] given a partial packing such that and either ( i ) or ( ii ) and is arbitrary , there exists an online algorithm that packs all remaining items into three bins of capacity .\(i ) we have , so we are in 1 on bins and or on bins and .( ii ) we pack arriving items into .if at any time , we apply 1 on bins and .thus we can assume and we can not continue packing into any further .this implies that an item arrives such that . as , we pack into it and apply 1 on bins and .[ lem : gs4 ] given a partial packing such that , , and , there exists an online algorithm that packs all remaining items into three bins of capacity .let be the value of when the conditions of this good situation hold for the first time .we run the following algorithm until we reach 1 or 3 : if at any time an item is packed into ( where it always fits ) , then and we reach 1 . in the event that no item is packed into , we reach 3 ( with in the role of ) whenever the algorithm brings the size of to or above 15 .the only remaining case is when throughout the algorithm and several items with size in the interval arrive .these items are packed into .note that and that the lower bound of may decrease during the course of the algorithm .the first two items with size in will fit together , since .with two such items packed into , we know that the load is at least and we have reached 1 , finishing the analysis .[ lem : gs5 ] given a partial packing such that an item with is packed into bin , , and is empty , there exists an algorithm that packs all remaining items into three bins of capacity .pack all incoming items into as long as it possible . if , we have 4 , and so we assume the contrary. 
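A small code check makes the first good situation concrete. The sketch below assumes the scaled capacities consistent with the numbers used in these proofs, namely offline bins of size 16 (so the total input volume is at most 48) and stretched bins of size 22; with these values the threshold is the 26 mentioned above. The concrete capacities are inferred by us and should be read as assumptions of the sketch.

```python
def in_good_situation_1(loads, offline_bin=16, online_bin=22):
    """GS1: if two bins together already hold at least 3*offline_bin - online_bin
    (= 26 here), everything that can still arrive fits into the remaining
    stretched bin, so the algorithm can finish greedily."""
    a, b, _ = sorted(loads, reverse=True)
    return a + b >= 3 * offline_bin - online_bin

print(in_good_situation_1([14, 12, 0]))   # True: at most 22 can still arrive
print(in_good_situation_1([10, 9, 3]))    # False
```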
therefore , and an item arrives which can not be packed into .place into .if , we apply 3 .we thus have and .continue with on bins , , and in this order .we claim that 1 is reached at the latest after has packed two items , and , on bins other than .if one of them ( say ) is packed into bin , this holds because and already before this assignment enough for 1 .if both items do not fit in , they are both larger than 10 , since .we will show by contradiction that this can not happen .as from our previous analysis , we note that .we therefore have three items with and an item from our initial conditions .these four items can not be packed together by any offline algorithm into three bins of capacity 16 , and so we have a contradiction with .[ lem : gs6 ] if , and , there exists an algorithm that packs all remaining items into three bins of capacity .pack all items into , until an item does not fit . at this point .if fits on , we put it there and reach 1 because . otherwise , definitely fits on because . by the condition on , we have , and we are in 1 again . [lem : gs7 ] suppose , .if and for a new item we have , then there exists an online algorithm that packs all remaining items into three bins of capacity .we have and . placing on we increase the size of to at least and we reach 6 . throughout our algorithm, we often use a special variant of which tries to reach good situations as early as possible .this variant can be described as follows : [ dfn : gsff ] let be a list of bins where each bin has an associated capacity satisfying .( good situation first fit ) is an online algorithm for bin stretching that works as follows : for example , checks whether either , or is a partial packing of any good situation .if this is not the case , the algorithm packs into bin provided that . if , the algorithm packs into bin with capacity .if can not be placed into , halts and another online algorithm must be applied to pack and subsequent items . in a way ,any algorithm for online bin stretching for three bins must be designed so as to avoid several _ bad situations : _ the two most prominent ones being either two items of size or three items of size , where is the volume of the remaining items .our algorithm especially steps and are designed to primarily evade such bad situations , while making sure that no good situation is missed .this evasive nature gives it its name .let us start the analysis of the algorithm in step , where the algorithm branches on the size of the item . throughout the proof , we will need to argue about loads of the bins , , before various items arrived .the following notation will help us in this endeavour : * notation .* suppose that is a bin and is an item that gets packed at some point of the algorithm ( not necessarily into ) .then will indicate the set of items that are packed into just before arrived .we first observe that our algorithm can be in two very different states , based on whether or .note that the case ] is excluded due to 2 .the case is also excluded , as this would imply 5 with in .since , the only remaining possibility is .even though does not fit into , if we were to pack into , we can use the first point of this observation and get , enough for 4 . 
the algorithm in stepwill notice this possibility and will pack into , where it will always fit , as by 4 .we now split the analysis based on which branch is entered in step : * case 1 : * item fits into bin ; we enter step .we first note that , else we are in 4 since is still empty .this inequality also implies that , otherwise we have via observation [ obs : nothingonc ] .we continue with step until we reach a good situation or the end of input .suppose three items arrive such that none of them can be packed into and we do not reach a good situation .we will prove that this can not happen .we make several quick observations about those items : 1 .we have because or we reach 4 . the item is packed into .2 . at any point, contains at most one item , otherwise , reaching 1 .we have because by 1 . the item is packed into .4 . the bin contains also at most one item , similarly to .again , we have similarly to .the item does not fit into any bin . from our observationsabove , we get , , . therefore ,at least two of the items are of size at least . however , both items and have size at least , and there is no way to pack and the two items larger than into three bins of capacity , a contradiction .* case 2 : * item does not fit into bin .the choice of gives us .item is placed on .the limit gives us an upper bound on the volume of small items in , namely .an easy argument gives us a similar bound on , namely if , then .indeed , we have , the first inequality implied by not reaching 1 . in case 2 ,it is sufficient to consider two items that do not fit into or .we have : 1 . using , we have and .2 . none of the items fits into .if say did fit , then we use the fact that does not fit into and get and we reach 1 .the choice of the limit on implies and .4 . since at all times by 1, we have and .5 . the items and not fit together into , or we would have .this implies .from the previous list of inequalities and using , we learn that no two items from the set can be together in a bin of size .again , this is a contradiction with the assumptions of online bin stretching . from now on, we can assume that , is packed into and step of is reached .recall that by observation [ obs:1a ] , , and there is exactly one item either in , or in ; we denote this item by .we repeat the steps done by in the standard case : recall that by observation [ obs:1a ] . assuming that no good situation is reached before step , we observe the following : [ obs:6-b ] in step , as long as is empty , packing any item of size at least leads to a good situation .thus while is empty , all items that arrive in step and are not put on have size in . 
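For readers who prefer code over prose, the GSFF subroutine used throughout this case analysis can be summarised by the following sketch: for each item it first asks whether placing the item into some bin reaches a good situation, and otherwise places it first-fit subject to the per-bin limits, halting when neither is possible. The good-situation predicate is passed in as a parameter, and the concrete limits in the example run are illustrative only.

```python
def gsff(items, capacities, is_good_situation):
    """Good situation first fit: returns the bin loads, the number of items
    handled, and whether a good situation was reached.  If it halts because an
    item fits nowhere, the remaining items must be handled by another routine."""
    loads = [0.0] * len(capacities)
    for idx, s in enumerate(items):
        for b in range(len(capacities)):
            trial = list(loads)
            trial[b] += s
            if trial[b] <= capacities[b] and is_good_situation(trial):
                return trial, idx + 1, True       # hand over to the good-situation strategy
        for b in range(len(capacities)):
            if loads[b] + s <= capacities[b]:
                loads[b] += s
                break
        else:
            return loads, idx, False              # GSFF halts on this item
    return loads, len(items), False

# toy run with the GS1 predicate sketched earlier (threshold 26)
gs1 = lambda loads: sorted(loads)[-1] + sorted(loads)[-2] >= 26
print(gsff([9, 9, 9, 9], [6, 14, 22], gs1))       # ([0, 9, 18], 3, True)
```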
any item with size in \cup[4,6] ] to the first , second and third bin , creating up to three triples that need to be added to .we make sure that we do not add a triple several times during one step , we mark its addition into a auxilliary array .note that the queue needs only and the item ] and and are arbitrary , there exists an online algorithm that packs all remaining items into three bins of capacity .[ lem : newgs3 ] given a bin configuration such that $ ] and either ( i ) and is arbitrary or ( ii ) , there exists an online algorithm that packs all remaining items into three bins of capacity .[ lem : newgs4 ] given a bin configuration such that , , and , there exists an online algorithm that packs all remaining items into three bins of capacity .[ lem : newgs5 ] suppose that we are given a bin configuration such that an item with is present in the multiset and the following holds : .then there exists an algorithm that packs all remaining items into three bins of capacity .table [ table : results ] summarizes our results .the paper of gabay , brauner and kotov contains results up to the denominator 20 ; we include them in the table for completeness .results after the denominator 20 are new .note that there may be a lower bound of size even though none was found with this denominator ; for instance , some lower bound may reach using item sizes that are not multiples of ..results produced by our minimax algorithm , along with elapsed time . the column _l. b. found _ indicates whether a lower bound was found when starting with the given granularity .fractions lower than and higher than are omitted .results were computed on a server with an amd opteron 6134 cpu and 64496 mb ram .the size of the hash table was set to with chain length . in order to normalize the speed of the program , the algorithm only checked for a lower bound and did not generate the entire tree in the * yes * cases . [ cols="^,<,<,<",options="header " , ] we give a compact representation of our game tree for the lower bound of for , which can be found in appendix [ sec : appendix ] .the fully expanded representation , as given by our algorithm , is a tree on 11053 vertices . for our lower bounds of for and , the sheer size of the tree ( e.g. 4665 vertices for )prevents us from presenting the game tree in its entirety .we therefore include the lower bound along with the implementations , publishing it online at http://github.com / bohm / binstretch/. we have implemented a simple independent c++ program which verifies that a given game tree is valid and accurate . while verifying our lower bound manually may be laborious , verifying the correctness of the c++ program should be manageable .the verifier is available along with the rest of the programs and data .with our algorithm for , the remaining gap is small . for arbitrary ,we have seen at the beginning of section [ sec : bigm ] that a significantly new approach would be needed for an algorithm with a better stretching factor than .thus , after the previous incremental results , our algorithm is the final step of this line of study .it is quite surprising that there are no lower bounds for larger than the easy bound of .* acknowledgment . *the authors thank emese bittner for useful discussions during her visit to charles university .10 s. albers and m. hellwig .semi - online scheduling revisited ., 443:19 , 2012 .j. aspnes , y. azar , a. fiat , s. plotkin , and o. waarts . on - line load balancing with applications to machine scheduling and virtual circuit routing . 
, 44:486504 , 1997 .y. azar and o. regev . on - linebin - stretching . in _randomization and approximation techniques in computer science _ , pages 7181 .springer , 1998 .y. azar and o. regev . on - linebin - stretching . 268(1):1741 , 2001 .p. berman , m. charikar , and m. karpinski .on - line load balancing for related machines ., 35:108121 , 2000 .lower bounds for online bin stretching with several bins .student research forum papers and posters at sofsem 2016 , ceur wp vol-1548 , 2016 .m. bhm , j. sgall , r. van stee , and p. vesel .better algorithms for online bin stretching . in approximation and online algorithms ( pp .23 - 34 ) .springer international publishing , 2014 .m. bhm , j. sgall , r. van stee , and p. vesel .the best two - phase algorithm for bin stretching .arxiv preprint arxiv:1601.08111 , 2016 .e. coffman jr . , j. csirik , g. galambos , s. martello , and d. vigo .bin packing approximation algorithms : survey and classification , in p. m. pardalos , d .- z .du , and r. l. graham , editors , _ handbook of combinatorial optimization _ , pages 455531 .springer new york , 2013 .t. ebenlendr , w. jawor , and j. sgall .preemptive online scheduling : optimal algorithms for all speeds ., 53:504522 , 2009 .m. gabay , n. brauner , v. kotov . computing lower bounds for semi - online optimization problems : application to the bin stretching problem .hal preprint hal-00921663 , version 2 , 2013 .m. gabay , n. brauner , v. kotov . improved lower bounds for the online bin stretching problem .hal preprint hal-00921663 , version 3 , 2015 .m. gabay , v. kotov , n. brauner .semi - online bin stretching with bunch techniques .hal preprint hal-00869858 , 2013 . r. l. graham .bounds on multiprocessing timing anomalies ., 17:263269 , 1969 .d. johnson . .massachusetts institute of technology , project mac .massachusetts institute of technology , 1973 .h. kellerer and v. kotov .an efficient algorithm for bin stretching . , 41(4):343346 , 2013 .h. kellerer , v. kotov , m. g. speranza , and z. tuza .semi on - line algorithms for the partition problem . , 21:235242 , 1997 . j. ullman . . , 1971zobrist , albert l. a new hashing method with application for game playing .icca journal 13.2 : 69 - 73 .lower bound , scaled so that and .the vertices contain the current loads of all three bins , and a string ` n : i ` with being the next item presented by the adversary .if there are several numbers after ` n : ` , the items are presented in the given order , regardless of packing by the player algorithm .the coloured vertices are expanded in later figures . ]
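to make the computer search behind these bounds concrete, the following is a minimal sketch of the minimax recursion it implements: the adversary proposes the next item size (subject to the offline optimum still fitting), the algorithm player answers with a bin, and states are memoized on the sorted bin loads together with the item multiset. the granularity and target capacity below are illustrative assumptions only, and the brute-force offline check makes this sketch far too slow for the denominators in the table; the actual implementation published at the repository above is written in c++ and relies on pruning, a large hash table and zobrist hashing.

```python
from functools import lru_cache

# minimal sketch of the adversarial lower-bound search for online bin
# stretching on three bins.  sizes are integers: the offline optimum is
# scaled to capacity DENOM and the adversary tries to force some online
# bin above TARGET.  the constants are illustrative assumptions only.

DENOM = 14      # assumed granularity: offline bins have capacity 14
TARGET = 19     # adversary wins if some online bin is forced above 19/14
BINS = 3

def offline_packable(items):
    """brute-force check that `items` still fit into BINS offline bins of
    capacity DENOM -- the adversary may only send such inputs."""
    def rec(rest, loads):
        if not rest:
            return True
        x, rest2 = rest[0], rest[1:]
        return any(rec(rest2, loads[:i] + (loads[i] + x,) + loads[i + 1:])
                   for i in range(BINS) if loads[i] + x <= DENOM)
    return rec(tuple(sorted(items, reverse=True)), (0,) * BINS)

@lru_cache(maxsize=None)
def adversary_wins(loads, items):
    """true iff, from sorted online loads `loads` with `items` already sent,
    the adversary can force every online algorithm above TARGET."""
    for size in range(1, DENOM + 1):
        new_items = tuple(sorted(items + (size,)))
        if not offline_packable(new_items):
            continue                          # offline optimum must stay feasible
        placements = [i for i in range(BINS) if loads[i] + size <= TARGET]
        if not placements:
            return True                       # the item cannot be placed at all
        if all(adversary_wins(tuple(sorted(loads[:i] + (loads[i] + size,) + loads[i + 1:])),
                              new_items)
               for i in placements):
            return True                       # every reply keeps the adversary winning
    return False

if __name__ == "__main__":
    # True would mean that no online algorithm keeps all bins within TARGET,
    # i.e. the stretching factor at this granularity must exceed TARGET / DENOM.
    print(adversary_wins((0, 0, 0), ()))
```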
online bin stretching is a semi - online variant of bin packing in which the algorithm has to use the same number of bins as an optimal packing , but is allowed to slightly overpack the bins . the goal is to minimize the amount of overpacking , i.e. , the maximum size packed into any bin . we give an algorithm for online bin stretching with a stretching factor of for three bins . additionally , we present a lower bound of for online bin stretching on three bins and a lower bound of for four and five bins that were discovered using a computer search .
game theory is a powerful mathematical tool to analyze various natural and social phenomena . after the publication of meyer ,there has been a great deal of effort to extend the classical game theory into the quantum domain , and it has been shown that quantum games may have significant advantages over their classical counterparts .the classical game theory includes a fatal drawback , namely , the multiplicity of equilibria .battle of sexes , chicken game , and stag hunt are famous examples of games with multiple equilibria . for a game possessing multiple equilibria, the classical game theory can say nothing about the predictability of the outcome of the game : there is no particular reason to single one out of these equilibria . until the present ,several quantum extensions are considered to resolve this problem , e.g. , battle of sexes , chicken game , and stag hunt .we also attack this problem by analyzing a quantum extension of a game which describes market competition . in economics, many important markets are neither perfectly competitive nor perfectly monopolistic , that is , the action of individual firms affect the market price .these markets are usually called _oligopolistic _ and can be analyzed based on game theory .recently , li et al . investigated the quantization of games with continuous strategic space , a classic instance of which is the cournot duopoly , in which firms compete on the amount of output they will produce , which they decide on independently of each other and at the same time . showed that the firms can escape the frustrating dilemma like situation if the structure involves a maximally entangled state . a key feature in li et al . is the linearity assumption , that is , both the cost function and the inverse demand function are linear . it is well known that linear cournot games have exactly one equilibrium . on the other hand , in nonlinear settings , there may be multiple equilibria , and hence we may not predict the market price . a natural question is whether the uniqueness of equilibria is guaranteed in the quantum cournot duopoly ? we are trying to answer this question in this paper . to quantize the model, we apply li et al.s `` minimal '' quantization rules to cournot duopoly in a nonlinear setting , where there are one symmetric equilibrium and two asymmetric equilibria in the zero entanglement case .we observe the transition of the game from purely classical to fully quantum , as the game s entanglement increases from zero to maximum .we show that if the entanglement of the game is sufficiently large , then all asymmetric equilibria vanish and there remains one symmetric equilibrium . furthermore , similar to li et al . , in the maximally entangled game , the unique symmetric equilibrium is exactly pareto optimal . in other words , the multiplicity of equilibria as well asthe dilemma like situation in the classical cournot duopoly is completely resolved in our quantum extension .we consider cournot competition between two firms , firm and firm .they simultaneously decide the quantity and , respectively , of a homogenous product they want to put on the market .let be the inverse demand function , where .each firm has the common cost function .then the firm s profit can be written as we assume that where . given any , we have .thus , to maximize her profit , firm chooses such that , that is , similarly , given any , firm chooses such that a pair is a nash equilibrium iff it solves eq .( [ foc1 ] ) and ( [ foc2 ] ) . 
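as a concrete illustration of solving such first-order conditions numerically, the sketch below locates cournot-nash equilibria by best-response iteration on a quantity grid. the linear inverse demand and cost used here are placeholders for readability; the nonlinear specification considered in this paper, which admits one symmetric and two asymmetric equilibria, is not reproduced.

```python
import numpy as np

# hedged numerical sketch: locate cournot-nash equilibria by best-response
# iteration on a quantity grid.  the linear inverse demand p(q) = 100 - q and
# cost c(q) = 10 q below are illustrative placeholders only.

A, C = 100.0, 10.0
grid = np.linspace(0.0, A, 2001)

def profit(qi, qj):
    return qi * max(0.0, A - (qi + qj)) - C * qi

def best_response(qj):
    """firm i's profit-maximising quantity against the rival's output qj."""
    payoff = grid * np.maximum(0.0, A - (grid + qj)) - C * grid
    return grid[int(np.argmax(payoff))]

def nash_candidate(q1, q2, iters=200):
    """iterate best responses; a fixed point solves both first-order conditions."""
    for _ in range(iters):
        q1, q2 = best_response(q2), best_response(q1)
    return q1, q2

if __name__ == "__main__":
    q1, q2 = nash_candidate(5.0, 40.0)
    print(f"equilibrium candidate: q1={q1:.2f}, q2={q2:.2f}, "
          f"profits=({profit(q1, q2):.2f}, {profit(q2, q1):.2f})")
```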
then , there are three equilibria , and . at these equilibriathe profits are however , these equilibria fail to be pareto optimal .the reason why they fail is that both firms can be better off by jointly decreasing their outputs , since and at equilibria . on the other hand ,if the two firms can cooperate and restrict their quantities to where and , then they can maximize their joint profit ( obviously is pareto optimal ) .for example , for .thus , the joint profit at is greater than that of any equilibrium . with regard to the asymmetric equilibria ,the situation is similar to that of chicken game if we correspond the equilibria respectively to the equilibria in chicken game .below we will see that , as the measure of entanglement goes to infinitely large in a quantum form of cournot competition , the unique equilibrium comes to be optimal , as if the unique cooperative equilibrium is attained in chicken game .to model cournot duopoly on a quantum domain , we follow li et al.s `` minimal '' extension , which utilizes two single mode electromagnetic fields , of which the quadrature amplitudes have a continuous set of eigenstates . the tensor product of two single mode vacuum states is identified as the starting state of the cournot game , and the state consequently undergoes a unitary entanglement operation , in which and ( and ) are the annihilation ( creation ) operators of the electromagnetic field modes .the operation is assumed to be known to both firms and to be symmetric with respect to the interchange of the two field modes .the resultant state is given by .then firm and firm execute their strategic moves via the unitary operations and , respectively , which correspond to the quantum version of the strategies of the cournot game .the final measurement is made , after these moves are finished and a disentanglement operation is carried out .the final state prior to the measurement , thus , is .the measured observables are and , and the measurement is done by the homodyne measurement with an infinitely squeezed reference light beam ( i.e. , the noise is reduced to zero ) . when quantum entanglement is not present , namely ,this quantum structure faithfully represent the classical game , and the final measurement provides the original classical results : and . otherwise , namely when quantum entanglement is present , the quantities the two firms will produce are determined by note that the classical model can be recovered by choosing to be zero , since the two firms can directly decide their quantities . on the other hand , both and determined by and when .it leads to the correlation between the firms . substituting into eq .( [ clapay ] ) provides the quantum profits for firm : where and similar to the classical game , we also have for any . to maximize her profit ,thus , firm chooses such that , that is , solving eq .( [ foc3 ] ) for both firms provides the quantum nash equilibria , and the symmetric one is uniquely given as where . 
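the dependence of the symmetric equilibrium on the entanglement can likewise be explored numerically. the sketch below uses the li-du-massar mapping q1 = x1 cosh(g) + x2 sinh(g), q2 = x2 cosh(g) + x1 sinh(g) and scans the entanglement parameter g; demand and cost are again the linear placeholders from the previous sketch rather than the paper's nonlinear specification, so the printed numbers are only indicative of the qualitative behaviour derived above.

```python
import numpy as np

# hedged sketch of the li-du-massar "minimal" quantization: firms choose
# x1, x2 and the measured outputs are
#   q1 = x1*cosh(g) + x2*sinh(g),   q2 = x2*cosh(g) + x1*sinh(g),
# with entanglement parameter g.  demand and cost are the same linear
# placeholders as in the previous sketch, not the paper's nonlinear ones.

A, C = 100.0, 10.0
xgrid = np.linspace(0.0, 60.0, 2401)

def best_response_x(xj, g):
    qi = xgrid * np.cosh(g) + xj * np.sinh(g)
    qj = xj * np.cosh(g) + xgrid * np.sinh(g)
    payoff = qi * np.maximum(0.0, A - (qi + qj)) - C * qi
    return xgrid[int(np.argmax(payoff))]

def symmetric_equilibrium(g, iters=1000):
    x = 10.0
    for _ in range(iters):
        x = best_response_x(x, g)
    q = x * (np.cosh(g) + np.sinh(g))        # per-firm output at the fixed point
    u = q * (A - 2.0 * q) - C * q            # per-firm profit
    return q, u

if __name__ == "__main__":
    # classical benchmarks under these placeholders: cournot q = 30 (profit 900),
    # collusive q = 22.5 (profit 1012.5); q* should drift towards the latter as g grows.
    for g in (0.0, 0.5, 1.0, 2.0):
        q, u = symmetric_equilibrium(g)
        print(f"g = {g:3.1f}: q* = {q:6.2f}, profit = {u:8.2f}")
```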
as easily seen from eq .( [ symnash ] ) , the quantity produced by each firm in the equilibrium , equals to , monotonically increases and approaches to the pareto optimal one , as the entanglement increases .in fact , given and , we have , and thus , as we have observed above , in addition to the symmetric one there are two asymmetric equilibria in the classical model .hence , it is expected that the quantum model also possesses asymmetric equilibria at least as far as the entanglement is not too large .in fact , we can see that this conjecture is valid for the case of as follows . by substituting with , eq .( [ foc3 ] ) can be rewritten as is a locus of quantities , which is determined by firm s best response strategy to the opponent s strategy .each intersection of and represents quantities produced in some equilibrium .[ figbr ] depicts and for , respectively .[ cols="^ " , ] fig . [ figbr ]displays that the number of equilibria varies as changes .[ figbr](i ) corresponds to the classical model , where there are three equilibria .[ figbr](ii ) , in which five equilibria exist , shows the possibility that the number of equilibria increases by the existence of the entanglement . in fig .[ figbr](iii ) , asymmetric equilibria disappear and the only one symmetric equilibrium remains . evidently from these figures, there is the possibility of multiple equilibria even if the entanglement exists .however , we can prove that asymmetric equilibria vanish when goes large .for a sufficiently large , is the unique equilibrium .it is worth pointing out that the quantifier `` sufficiently large '' in the proposition is not so restrictive by the following reason . to obtain the proposition , we use the fact that as .since converges very quickly , the lower bound for , which guarantees the uniqueness of equilibria , is not so large .for instance , any asymmetric equilibrium can not exist for when ( as we will see below in fig .[ figpayoff ] ) .finally , we consider the transition of equilibria of the game from purely classical to fully quantum , as increases from zero to infinity .[ figpayoff ] depicts the transition process for the case of and : the number of equilibria changes , as grows large , from 3 to 5 , from 5 to 3 , and from 3 to 1 at last .more precisely , there are two thresholds , namely and : for , there are three equilibria ; for , there are five ; for , there are three ; and for , the symmetric ( and unique ) one remains .the horizontal line at 11.75 ( ) represents the half of the maximum joint profit , and the line at 10 represents the profit at the symmetric nash equilibrium of the classical cournot game . as easily seen from the figure , asymmetric equilibria vanish and the unique symmetric equilibrium monotonically approaches to the optimal one as goes large . the profits at quantum nash equilibria as a function of the entanglement parameter , where and ., width=302 ]given a sufficiently large , let be equilibrium outputs .suppose to the contrary , .then , eq . 
( [ br ] ) implies : where .subtracting eq .( [ foc4 ] ) from eq .( [ foc5 ] ) , we have where .it implies since we assume that , , that is , it is necessary for eq .( [ delta ] ) having a real solution that which implies that must be sufficiently close to zero since as .the solution of eq .( [ delta ] ) is given by on the other hand , eq .( [ foc4 ] ) implies that since and as , we obtain which implies that is bounded away from zero , a contradiction .thus , for a sufficiently large , any equilibrium must be symmetric .however , is the unique symmetric solution of eq .( [ foc3 ] ) . gibbons r 1992 _ game theory for applied economists _( princeton , nj : princeton univ . press ) axelrod r 1984 _ the evolution of cooperation _( new york : basic books ) maynard smith j 1982 _ evolution and the theory of games _ ( cambridge : cambridge univ press ) meyer d a 1999 _ phys .* 82 * 1052 eisert j , wilkens m and lewenstein m 1999 _ phys .lett . _ * 83 * 3077 benjamin s c and hayden p m 2001 _ phys . rev . _a * 64 * 030301 arfi b 2005 _ theory and decision _ * 59 * 127 benjamin s c 2000 _ phys .lett . _ a * 277 * 180 du j , li h , xu x , shi m , zhou , x , han r. 2001 working paper , arxiv : quant - ph/0103004v1 du j , xu x , li h , zhou , x , han r. 2001 working paper , arxiv : quant - ph/0010050 marinatto l weber t 2000 working paper , arxiv : quant - ph/0004081 , marinatto l weber t 2000 _ phys .lett . _ a * 277 * 183 nawaz a , toor , a h 2004 _ j. phys . a : math .gen . _ * 37 * 4437 eisert j , wilkens m 2000 _ j. modern optics _ * 4 * 2543 flitney a p , hollenberg l c l _ phys .lett . _ a * 363 * 381 ichikawa t , tsutsui i 2007 _ ann ._ * 322 * 531 ichikawa t , tsutsui i , cheon t 2008 _ j. phys . a : math .theor . _ * 41 * 135303 toyota n 2003 working paper , arxiv : quant - ph/0307029v1 tirole j 1988 _ the theory of industrial organization _( cambridge ; mit press ) li h , du j , massar s 2002 _ phys .lett . _ a * 306 * 73 cournot a 1897 _ researches into the mathematical principles of the theory of wealth _( new york : macmillan ) du j , li h , ju c 2003 _ phys ._ e * 68 * 016124 du j , ju c , li h 2005 _ j. phys . a : math .* 38 * 1559 lo c f , kiang d 2004 _ phys .a * 321 * 94 qin g , chen x , sun m , du j 2005 _ j. phys . a : math .gen . _ * 38 * 4247 lo c f , kiang d 2003 _ phys .lett . _ a * 318 * 333 lo c f , kiang d 2005 _ phys .lett . _ a * 346 * 65 lo c f , kiang d 2003 _ europhys .lett . _ * 65 * 529 .
a quantum cournot game whose classical form has multiple nash equilibria is examined . although the classical equilibria fail to be pareto optimal , the quantum equilibrium exhibits the following two properties : ( i ) if the measure of entanglement between the strategic variables chosen by the competing firms is sufficiently large , the multiplicity of equilibria vanishes , and ( ii ) the more strongly the strategic variables are entangled , the more closely the unique equilibrium approaches the optimal one . pacs numbers : 03.67.-a , 02.50.le
quantum cryptographic protocols have garnered much acclaim in the last two decades for their ability to provide unconditional security , which is not practically assured by their classical counterparts .commercial availability of quantum infrastructure in the last decade has placed even more emphasis on developing methodologies to ascertain the reliability of protocols in practice .even though , protocols are theoretically secure , our experience with classical protocols has shown that security can be compromised during implementation .since modelling , analysing and verifying classical protocols have worked so well , developing techniques along these lines seems prudent for quantum cryptographic protocols as well .the cornerstone of quantum cryptographic protocols is the inherent probabilistic nature .unlike classical protocols which accommodates a passive eavesdropper , wherein the eavesdropper can copy the bits and analyse them later , quantum protocols mandate an active eavesdropper .this constraint is promulgated by the no - cloning theorem which handicaps the eavesdropper from copying qubits . to extract information from the qubitsan eavesdropper will inevitably resort to measuring them in a basis which might be different from the encoding basis and thereby alters the state of the qubit .this action is probabilistic in nature .moreover , quantum protocols also involve both classical and quantum channels .therefore we need a language that is capable of modelling probabilistic phenomenon and also takes into account both classical and quantum communications . communicating quantum processes ( cqp) is a language developed with the expert purpose of modelling quantum protocols .cqp uses the communication primitives of pi - calculus and has capabilities for applying unitary operators , performing measurements , and a static type system that differentiates between classical and quantum communications .hence cqp seems an obvious choice for modelling quantum protocols .reasoning along the same lines , prism allows us to model probabilistic transitions , as we show later , this allows to seamlessly translate a cqp model into a prism model . previous work on analysis of bb84 by papanikolaou has reasoned about the probability of detecting an eavesdropper and corroborates the claim made by mayers in his proof of unconditional security of bb84 . however , this work does not model bb84 in cqp .we first model bb84 in cqp , conver the cqp model into prsim and check the validity of the observations made by papanikolaou .we then proceed to show that b92 s eavesdropping detection capabilities can be reasoned along the same lines . to ensure brevity we have refrained from explaining quantum mechanical primitives like unitary operators , measurements and no - cloning theorem .one good resource is nielsen and chuang s work .also , we have only provided an elementary introduction to cqp , only to the extent to which we use it in this paper . a better and complete resource would be thimothy davidsons doctoral thesis .we are going to briefly explain quantum measurement , and working of bb84 and b92 protocols .it is inherent with any quantum mechanical system that any measurement done on the system will induce some irreversible disturbances .we are going to rely on this property of qubits heavily in any quantum cryptographic protocols .+ any quantum system can be represented as a vector in an dimensional complex hilbert space . 
measuring this quantum systemcan only give a set of priviliged results namely those associated with the basis vectors of the state space .+ for example , consider a _2-dimensional _ complex hilbert spcae with and as basis vectors .lets say the vector describes the system .if we try to measure the system in the basis , then the system changes to a new state , either or permanently .it has a probability of changing into and a probability of changing into .also , .we can also measure the system in whichever basis that we choose .lets measure the system in another basis , where + and , then the quantum state can be represented as .+ measuring this system in the basis will yield and with probability and respectively .a and b want to establish a secret for secure communication .a sends the encoding of some bits in the , basis to b on the quantum channel .b then chooses a random sequences of bases and measures the qubit sent by a in that basis .if the basis of alice and bob are equal then the b obtains the classical bit chosen by alice other wise she randomly gets .a and b then use the classical channel to exchange the basis and the corresponding measurements of qubits to decide upon a shared key or to detect the presence of an eavesdropper . unlike bb84 where each classical bit has two different encoding depending on the basis used , b92 has only one . in other wordsthere is a one to one correspondence between the classical bits and qubits exchanged . if alice wants to send a classical bit to bob she sends if she wants to send she sends .the rest of the steps involved are the same as in bb84 .as mentioned earlier , whenever eve measures the qubits that are in transit to bob from alice , she makes a permanent change to the state of qubits if she does nt use the same basis as that of alice . in bb84 protocolif on some qubits both alice and bob use the same basis to encode and measure but bob decodes a classical bit different from what alice encoded , suggests the presence of eve . in b92as well , alice and bob should obtain the opposite results when the encoding basis is the same , then an attacker is present .we are assuming the qubit channel shared by all the participants noiseless .a brief overview of cqp calculus is provided and then we proceed to formalise both the protocols in cqp .an example of bb84-bit commitment protocol in cqp was give by simon and gay and our formalisation uses the same techniques .a protocol at any given point of time has multiple participants , like _ alice _ and _ bob _ which are legitimate entities involved and also adversaries like .these entities are collectively known as _agents_. agents communicate with each other via communication channels to exchange information .the working of the agents is encapsulated by _processes_. every agent has more than one process , and at any given time its possible that more than one process is in action .these processes can be reasonably thought of as _ states _ in finite state automatons and every process transitions to another or terminates .cqp allows us to impose a probabilistic distribution across these transitions .also processes in cqp can be parametrised .+ 1 . 
channels are declared by the keyword .+ for example to declare a new qubit channel , we write ( qubitchannel:^ [ ] ) , where is the data type qubitchannel is constrained to and `` ^ '' identifies it as a channel .variables can be declared within a process like so , ( q ) .process output : _.p_{i+1}} ] to receive along channel _ c _ and then proceed with process .process action : _ evaluates expression _ e _ and then proceeds with process 6 ._ process decision : _ if the expression evaluates to then proceed with process else 7 . _ terminate : _ the process terminates after .we identify that __ bob _ are the primary agents of the protocol and to analyse the effects of an eavesdropper _ eve _ becomes an agent of the system as well .as described above channels can only transport messages of a particular type .we have to transport qubits , for integers and , , for bits .technically one bit channel would suffice .however having two different channels that are used at two different stages in the protocol helps us to convert the cqp - model into prism as will be elaborated in the next section .we have also made use of type , with its associated functions of , , and for reading the first element , dropping the first element , an empty list and placing data at the tail of the list respectively .the use of these functions is demonstrated by gay et al. .* _ system _ is parameterized by a , which constitutes the classical ( see figure 1 ) .bits that need to be exchanged between _ alice _ and _ bob _ * _ random _ agent creates a random bit and sends it via the * _ alice _ first sends the length of the number of bits to be exchanged with _ bob _ , i.e the length of . * upon sending the length of the bit list , _alice _ continues with the process _alicesend_. this is a recursive process which terminates after sending all the bits in ._ alicesend _ first receives a random bit from , if the value received is equal to zero then the is encoded in the rectilinear basis else it is encoded in the diagonal basis . creates a new qubit initialised to .hence an operation of on to create and or to convert it into and respectively ._ alicesend _ then sends the qubit via to be received by _bob_. the random bits are stores in to be used later when both the entities decide upon the key . *_ bob _ receives the length of the and then continues with _ bobreceive _ process . like _alicesend _ , this is a recursive process which terminates after receiving all the bits . _bobreceive _ then uses a random bit from , if this bit is zero then _ bob _ measures the received qubit in the rectilinear basis else in the diagonal basis .we used a list that stores a couplet , where we store the random bit and the corresponding measurement . * after exchanging the qubits , _ alice _ and _ bob _ continue with and respectively .sends the basis that she used for encoding via the . 
upon receiving thisbasis elements checks whether the basis he measured in the same as of that of _ alice _ in which case , he sends an acknowledgement via to _alice _ and the corresponding bit he measured ._ alice _ checks if the measurement that _ bob _ made is the same as that of the intended bit .since we are dealing with channels without any noise , if the measurement _ bob _ made does not match , _straight away confirms the presence of an attacker and sends an flag to _since and only differ in how they encode the qubits , we can modify the cqp formalisation of for ( see figure 2 ) ._ alicesend _ does not encode the qubit in a random basis .if the element is equal to zero then she sends else if the element in equal to one then is exchanged . with few modifications to _ alicesend _ in bb84, we can adopt it model b92 .these modifications are presented in _conversion from cqp to prism is a step by step process .this conversion for a subset of commands has been done by ware in his master s thesis .we are going to use the same procedure ( see appendix for the prism models ) . in the previous sectionwe have mentioned that we have used type .unfortunately a parallel for this type does not exist for . to overcome this handicapwe will have to modify the model , in both the protocols the public discussion starts after both the parties have exchanged all the qubits . instead in the prism model after every qubit exchange , both the parties proceed to exchange the encoding basis and measured bit to establish the validity of the qubit . this way we can ensure that the original characteristics of the protocol remain intact . *all the channels in the cqp model are defined as global variables in the model . *the model constitutes of three modules representing the different agents in the cqp modelling * on the the messages to be exchanged are limited to ] , i.e , we start to find these probabilities starting from one qubit being exchanged to twenty . and are to be expressed in pctl . is the pctl formula corresponding to when the eavesdropper is detected . from the prism model for bb84( in _ appendix a _ ) , whenever an eavesdropper is detected _ alice _ is in _ alicestate=15 _ and _ bob _ is in state _ bobstate=10_. the corresponding expression for and their property expression in prism : similarly for which gives the probability of eavesdropper measuring more than half of the exchanged qubits correctly is .probability of detecting eavesdropper for bb84-qkd [ cols="^,^,^",options="header " , ] after using the curve fitting algorithm to approximate the results to an equation we have : and we make the following obeservations : . . like the inferences made for bb84 , the chances of detecting an eavesdropper increases with the number of qubits exchanged and also the number of correct measurements that an eavesdropper can make decreases exponentially with the number of qubits exchanged .but unlike in bb84 , for b92 we have , hence the probability of eavesdropper detection is higher during intercept - resend than in random substitution .quite strangely we observe that with respect to random substitution type of attack , both the protocols perform identically .this is substantiated by the equations and however with respect to intercept resend style attacks they differ markedly , as evidenced by fig .12 and fig . 
13 .b92 performs better in terms of eavesdropper detection as the probability approaches unity faster than b92 and in terms of decreased number of correct measurements that can be made by the eavesdropper .we have successfully modelled bb84 protocol in cqp , showed the process in which we have created prism models from the cqp models and analysed the properties using pctl .we also corroborate the observations made in earlier research with our analysis .we then extended the technique to b92-qkd protocol and compare the performance of the two .we infer that b92 is more resilient against an eavesdropper , with its ability to take fewer qubits than bb84 in identifying an eavesdropper and then potentially reducing the number of correct measurements the eavesdropper can make .i am deeply indebted to anish mathuria , professor , dhirubhai ambani institute of information and communication technology , under whose supervision my bachelor thesis research was conducted , his mentorship after my graduation , his invaluable advice and support that helped me finish this paper .1 bennett , charles h. , gilles brassard , and n. david mermin .`` quantum cryptography without bell s theorem '' physical review letters 68.5 ( 1992 ) : 557 .bennett , charles h. `` quantum cryptography using any two nonorthogonal states . ''physical review letters 68.21 ( 1992 ) : 3121 .mayers , d. `` shor and preskill s and mayers s security proof for the bb84 quantum key distribution protocol . ''the european physical journal d - atomic , molecular , optical and plasma physics 18.2 ( 2002 ) : 161 - 170 .quan , zhang , and tang chaojing .`` simple proof of the unconditional security of the bennett 1992 quantum key distribution protocol . ''physical review a 65.6 ( 2002 ) : 062301 .gay , simon , rajagopal nagarajan , and nikolaos papanikolaou . `` probabilistic model checking of quantum protocols . ''arxiv preprint quant - ph/0504007 ( 2005 ) .ware , christopher j. modeling and analysis of quantum cryptographic protocols .university of victoria , 2008 .nielsen , michael a. , and isaac l. chuang .quantum computation and quantum information .cambridge university press , 2010 .davidson , timothy as . formal verification techniques using quantum process calculus .university of warwick , 2012 .milner , robin . communicating and mobile systems : the pi calculus .cambridge university press , 1999 .gay , simon j. , and rajagopal nagarajan .`` communicating quantum processes . ''acm sigplan notices .1 . acm , 2005 .buzek , vladimir , and mark hillery .`` quantum copying : beyond the no - cloning theorem . ''physical review a 54.3 ( 1996 ) : 1844 .
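as a companion to the detection analysis above, the following monte-carlo sketch reproduces the intercept-resend statistics for bb84 on a noiseless channel; it is a plain simulation of the underlying probabilities, not a translation of the cqp or prism models, and the trial counts and qubit numbers are illustrative.

```python
import random

# hedged monte-carlo sketch of an intercept-resend attack on bb84 over a
# noiseless channel.  on every sifted qubit (matching bases) the attack causes
# a disagreement with probability 1/4, so detection over m sifted qubits has
# probability 1 - (3/4)^m, i.e. 1 - (7/8)^n over n exchanged qubits.

def bb84_detects_eve(n_qubits, rng):
    for _ in range(n_qubits):
        bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)              # 0: rectilinear, 1: diagonal
        basis, value = alice_basis, bit              # ideal encoding, no noise

        eve_basis = rng.randint(0, 1)                # intercept-resend attack
        value = value if eve_basis == basis else rng.randint(0, 1)
        basis = eve_basis                            # eve resends what she measured

        bob_basis = rng.randint(0, 1)
        bob_bit = value if bob_basis == basis else rng.randint(0, 1)

        if bob_basis == alice_basis and bob_bit != bit:
            return True                              # disagreement on a sifted bit
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    trials = 20000
    for n in (2, 5, 10, 20):
        hits = sum(bb84_detects_eve(n, rng) for _ in range(trials))
        print(f"n = {n:2d} exchanged qubits: empirical detection prob = {hits / trials:.3f}")
```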
while a proof of security theoretically establishes the strength of a cryptographic protocol and the constraints under which it can perform , it does not take into account the overall design of the protocol . in the past , model checking has been successfully applied to classical cryptographic protocols to weed out design flaws which would have otherwise gone unnoticed . quantum cryptographic protocols differ from their classical counterparts in their ability to detect the presence of an eavesdropper . although unconditional security has been proven for both the bb84 and b92 protocols , in this paper we show that identifying an eavesdropper s presence is constrained by the number of qubits exchanged . we first model the protocols in cqp and then explain the mechanism by which we have translated this into a prism model . we mainly focus on the protocols ability to detect an active eavesdropper and the extent to which an eavesdropper can retrieve the shared key without being detected by either party . we then conclude by comparing the performance of the protocols .
a hyperspectral image ( aka . hyperspectral cube ) consists of two spatial dimensions and a spectral dimension .the latter contains information pertinent to the intrinsic material properties of the object .this spectral information in hsi makes them well suited for classification tasks , such as scene recognition , 3d reconstruction , saliency detection , pedestrian detection , material classification , cultural heritage and many more . in this paper, we propose a framework for hyperspectral image classification , where each band in the hsi is considered as a separate image .treating the problem at the image - level allows us to exploit high - level information , like shapes , that help to improve the classification performance .we use traditional feature descriptors ( i.e. sift , hog , lbp ) for image - level feature extraction and classification ( see fig . [fig : frontpage ] for illustration ) . for the demonstration of our method, we investigate into face recognition using wavelengths ranging from the visible ( ) to the near - infrared ( ) spectrum .the availability of more bands than the usual three rgb bands has been shown advantageous in disambiguating objects . in literature , the common tendency to exploit the hyperspectral data has been addressed at pixel - level . as the spatial and spectral dimensions in hsi increase ,it is difficult to separate hyperspectral data at the pixel - level for large classes dataset using statistical methods . however , in our method classification is done at the image - level , where from each image we extracted a limited number of features , far fewer than the number of pixels . with such smaller numbers of features at hand, we can afford to not further reduce the number of spectral bands that are considered .this allows us to exploit the entire input space without losing information for feature extraction .we show that the entire feature space information leads to significantly improved performance in a face recognition task .recently , deep learning ( dl ) a learning - based representation has outperformed traditional descriptors for deriving distinctive features in image classification task , but dl has a major shortcoming : it requires many samples for the training process and an insufficient number of training samples quickly leads the network to overfitting . as we already know, in hsi the spectral bands are very unique and discriminative .thus , we believe traditional feature descriptors can effectively exploit this spectral information and help to generate powerful feature representation of their content that characterize the object better . in this work ,we have explored the hand - crafted hsi features in v - nir images and also have shown improvement in the classification performance by a significant margin for a face recognition task .the rest of the paper is organized as follows . in section [ sec : relatedwork ] , we discuss the related work , and section [ sec : method ] describes our proposed method .experiments and analysis are given in section [ sec : experiments ] , and finally , the conclusions are drawn in section [ sec : conclusion ] .there is an extensive work on hsi classification , but here we mention a few relevant papers on face recognition task only . during the last decade ,face recognition has achieved great success .the research has tended to focus on rgb or b / w images , rather less attention has been paid to hyperspectral images . 
in this paper , we focus on hsi face recognition .the idea of face recognition in hyperspectral images , started with pan et al . , who manually extracted the mean spectral signatures from the human face in the nir spectrum , and were then compared using mahalabonis distance .further in pan et al . extended their work by incorporating spatial information in addition to spectral .similarly , robila et al . also uses spectral signatures but their comparison criterion was spectral angle measurements .almost all the existing proposed hsi face recognition methods perform dimensionality reduction and the low - dimensional feature space are extracted for classification . di et al . projected the hyperspectral image into low dimensional space using 2d - pca for feature extraction , and were then compared using euclidean distance . in recent works ,shen and zheng apply 3d gabor wavelets with different central frequencies to extract signal variances in the hyperspectral data .liang et al . utilize 3d high - order texture pattern descriptor to extract distinctive micro - patterns which integrate the spatial - spectral information as features .uzair et al . apply 3d discrete cosine transformation and extract the low frequency coefficients as features , where partial least squares regression is used for classification .uzair et al .most recent work in extended their previous study and employed 3d cubelets to extract the spatio - spectral covariance features for face recognition .in contrast , in our work , the hyperspectral image is fully exploited without losing information by performing feature extraction and classification at the image - level .[ fig : tsne ] 0.45 0.45in the last two decades , researchers have developed robust descriptors ( feature extraction methods ) to extract useful feature representations from the image that are highly distinctive and are perfect for an efficient and robust object recognition .the traditional feature descriptors ( sift , hog , lbp and more ) have several advantages .these features are invariant to image scaling , geometric and photometric transformations : translations or rotations , common object variations , minimally affected by noise and small distortions , and to a great extent invariant to change in illumination .these descriptors when combined with hyperspectral images , make the extracted features even more robust and powerful .we believe that the extracted new class of features obtained from hyperspectral images shall give near - perfect separation because these features are generated from highly discriminative bands ( fig .[ fig : ex_dataset ] ) in hsi that are captured in different wavelengths .these images captured in different wavelength range , contain discriminative information that characterize the object better with great precision and detail .[ fig : scnn ] shows the schematic layout of our framework for hyperspectral image classification . in our work , each mono - spectral band in the hsi is treated as a separate image .treating the mono - spectral bands as separate images , allows us to exploit : the discriminative texture patterns present in the images captured in different wavelengths , hence able to utilize the entire space without losing information for feature extraction .then , the features are extracted for each mono - spectral band using a feature extraction method ( e.g. 
hog , lbp , and sift ) .this allows us to capture the high - level information like shapes and abstract concepts from images : making it more suitable for high - level visual recognition tasks , similar is not possible at pixel - level .classification at the pixel - level ( think of raw pixel values ) comes at high computational burden , and it turns out that , it is difficult to disambiguate objects with large classes dataset by simple concatenating of multi - spectral data into single pixel - related vectors using statistical methods . thus , we extract from each image a limited number of features , far fewer than the number of pixels . with such smaller numbers of features at hand ,we show that , we achieve better recognition results , significantly outperforming pixel - level features ( see table [ table : soa ] ) , and can afford to not further reduce the number of spectral bands that are considered .then , the extracted feature vectors are fed to linear one vs all svm classifier , where we have a svm for each band .for the final label prediction , we merge the predictions of all svm learners trained on different bands using majority voting . for feature extraction of each mono - spectral image ,we use a fixed - size representation to extract sift , lbp and hog features . the parameters for feature extraction ( i.e. window size , step size , number of oriented bins , number of clusters , and more ) contribute to the good results , and play a crucial role in improving the recognition performance .in section [ sec : experiments ] , we describe the implementation setup in detail . the performance of the the proposed approach is demonstrated for face recognition in hsi .excellent results show that these robust feature descriptors are perfect for reliable face recognition in hyperspectral images .the proposed method in the previous section was tested on two standard hyperspectral face datasets for a face recognition task .our experiments consist of two parts . in the first part, we compare our proposed method with the state - of - the - art hsi face recognition methods .then in the second part , we compare the usefulness of the discriminative spectral bands with the rgb image representation of hsi .we now introduce the datasets used and then we move onto training / testing protocol , and implementation details . for the experimental evaluation, we used a desktop with intel i7 - 2600k cpu at 3.40ghz , 8 gb ram .all experiments were performed using vlfeat library .[ [ hyperspectral - face - datasets ] ] * hyperspectral face datasets : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + hong - kong polytechnic university hyperspectral face dataset ( polyu - hsfd ) ( see fig . [fig : ex_dataset ] and table [ table : dataset ] ) is acquired using the cri s varispec lctf with a halogen light source .the database contains data of 48 subjects acquired in multiple sessions , with 1 - 7 cubes over all sessions . following the same experimental protocol of ,only first 25 subjects were used for evaluation .the acquired images for the first 6 and the last 3 bands are noisy ( i.e. very high shot noise ) .so they are discarded from the experiment , as suggested in the previous work . in all , 24 spectral bands were used with spectral interval ( i.e. 
step size ) of 10 nm .the database has significant appearance variations of the subjects because it was constructed over a long period of time .major changes in appearance variation were in hair - style and skin condition .carnegie mellon university hyperspectral face dataset ( cmu - hsfd ) ( see fig . [fig : ex_dataset ] and table [ table : dataset ] ) is acquired using the cmu developed aotf with three halogen light sources .the database contains data of 54 subjects acquired in multiple sessions , with 1 - 5 cubes over all sessions . following the same experimental protocol of , only first 48 subjects were used for evaluation .each subject maintains a still pose with negligible head movements and eye blinks , due to of that a slight misalignment error exists between individual bands because the image capturing process takes several seconds .[ [ training - and - testing - protocol ] ] * training and testing protocol : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + following the same training and testing protocol as defined in , only frontal view has been considered for evaluation of both datasets . when only one sample per subject is used for training and two samples per subjectare used for testing . both gallery set ( or training set ) and probe set ( or testing set ) were constructed for 5 times by random selection and the average recognition accuracy was reported .[ [ data - preprocessing ] ] * data preprocessing : * + + + + + + + + + + + + + + + + + + + + + all images were cropped and resized to size in spatial dimension . due to oflow signal - to - noise ratio ( i.e. high shot noise ) in the datasets , we apply a median filter of size to remove shot noise . [ [ implementation - details ] ] * implementation details : * + + + + + + + + + + + + + + + + + + + + + + + + + to extract the hog and lbp features , we use a window size of , and number of orientation bins of 9 for hog . for sift, we use a bin size of 4 , step size of 8 , then the extracted sift - features are fisher encoded . to compute fisher encoding, we build a visual dictionary using gmm with 100 clusters .we normalize the features using l2-normalization . we denote dense sift fisher vectors by dsift - fvs .these parameters are fixed for all descriptors .we use linear svm with from libsvm to train classifier with features . 
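a compact sketch of this per-band pipeline is given below: one descriptor and one linear one-vs-all svm per spectral band, fused by majority voting. scikit-image hog and scikit-learn stand in for the vlfeat descriptors and libsvm used in our experiments, and the cell and regularization parameters shown are illustrative rather than the exact values stated above.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# hedged sketch of the per-band pipeline: one descriptor and one linear svm
# per spectral band, fused by majority voting.  scikit-image hog stands in
# for the vlfeat descriptors (dsift-fv, hog, lbp) used in the experiments,
# and the training cubes / labels are assumed to be preloaded.

def band_features(cube):
    """cube: (h, w, b) hyperspectral image -> list of b per-band descriptors."""
    return [hog(cube[:, :, b], orientations=9,
                pixels_per_cell=(16, 16), cells_per_block=(2, 2))
            for b in range(cube.shape[2])]

def train_per_band(train_cubes, train_labels):
    """fit one linear one-vs-rest svm per band."""
    feats = [band_features(c) for c in train_cubes]
    classifiers = []
    for b in range(train_cubes[0].shape[2]):
        X = np.stack([f[b] for f in feats])
        classifiers.append(LinearSVC(C=1.0).fit(X, train_labels))
    return classifiers

def predict_majority(classifiers, cube):
    """fuse the per-band svm predictions by majority voting."""
    votes = [clf.predict(f.reshape(1, -1))[0]
             for clf, f in zip(classifiers, band_features(cube))]
    values, counts = np.unique(votes, return_counts=True)
    return values[int(np.argmax(counts))]

# usage sketch (assumed, hypothetical variables):
#   clfs = train_per_band(train_cubes, train_labels)
#   predicted_subject = predict_majority(clfs, test_cube)
```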
[[ comparison - with - state - of - the - art - methods ] ] * comparison with state - of - the - art methods : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in table [ table : soa ] , we quantitatively evaluate the recognition accuracy of our proposed method , and compare it with state - of - the - art hyperspectral face recognition methods reported in the literature .we observe from the results that among the traditional feature descriptors , as expected dsift - fvs outperforms the lbp and hog recognition accuracy by a significant margin .also , it should be noted that dsift - fvs outperforms all the traditional methods listed in the literature and achieve state - of - the - art accuracy with 96.1% on polyu - hsfd dataset .though , dsift - fvs is inferior to band fusion+pls , but is still better than all the other methods on cmu - hsfd dataset .furthermore , this examination reveals that sift shows the same trend of performing better in face recognition in hyperspectral images as in rgb images .[ [ comparison - of - hsi - with - rgb - image ] ] * comparison of hsi with rgb image : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for this evaluation, we generate an rgb image for each of the hyperspectral image ) , ( c ) and silicon sensitivity of hamamatsu camera ] . in this regard , we refer the reader to the book by ohta and robertson for detailed steps . then, we apply the proposed method to an rgb image ( with three channels ) in the same way as we applied to the hyperspectral image , discussed earlier in section [ sec : method ] . in table [ table : soa1 ] , we quantitatively evaluate the recognition accuracy of the whole band set in the hyperspectral cube , and compare it with a three channels rgb image .it is evident from the comparison that the classification performance for the computed features from hsi images is significantly better than the computed features from rgb images on both datasets . in the literature, it has been shown by pan et al . , nir images exhibit a distinct spectral properties of the skin and this information leads to more accurate classification in comparison to retained information in rgb images ( i.e. visible range ) .the reason being , in nir range the spectral responses of the tissues are more discriminative due to larger penetration depth , which is dependent on the portion of melanin and hemoglobin in the human skin .in this paper , we proposed a novel pipeline for image - level classification in the hyperspectral images . by doing this ,we show that the discriminative spectral information at image - level features lead to significantly improved performance in a face recognition task .we also explored the potential of traditional feature descriptors in the hyperspectral images . from our evaluations, we observe that sift features outperform the state - of - the - art hyperspectral face recognition methods , and also the other descriptors . with the increasing deployment of hyperspectral sensors in a multitude of applications, we believe that our approach can effectively exploit the spectral information in hyperspectral images , thus beneficial to more accurate classification .
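for reference, the rgb images used in the comparison above can be generated along the following lines: each pixel spectrum is integrated against the cie 1931 colour-matching functions and the resulting xyz values are mapped to linear srgb. the colour-matching-function table is assumed to be available at the cube's band wavelengths, and the illuminant and camera-sensitivity weighting mentioned above is omitted for brevity.

```python
import numpy as np

# hedged sketch of the hsi -> rgb conversion used for the comparison: each
# pixel spectrum is integrated against the cie 1931 colour-matching functions
# (the `cmf` table, shape (b, 3), sampled at the cube's band wavelengths, is
# assumed to be loaded from standard cie data) and mapped to linear srgb.

XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def cube_to_rgb(cube, cmf):
    """cube: (h, w, b) radiance/reflectance cube, cmf: (b, 3) cie x,y,z samples."""
    xyz = np.tensordot(cube, cmf, axes=([2], [0]))     # per-pixel xyz tristimulus
    xyz /= max(float(xyz[..., 1].max()), 1e-12)        # normalise by peak luminance
    rgb = np.tensordot(xyz, XYZ_TO_SRGB.T, axes=([2], [0]))
    return np.clip(rgb, 0.0, 1.0)                      # linear srgb in [0, 1]
```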
image - level classification from hyperspectral images ( hsi ) has seldom been addressed in the literature . instead , traditional classification methods have focused on individual pixels . pixel - level hsi classification comes at a high computational burden though . in this paper , we present a novel pipeline for classification at image - level , where each band in the hsi is considered as a separate image . in contrast to operating at the pixel level , this approach allows us to exploit higher - level information like shapes . we use traditional feature descriptors , i.e. histograms of oriented gradients , local binary patterns , and the scale - invariant feature transform . for demonstration we choose a face recognition task . the system is tested on two hyperspectral face datasets , and our experiments show that the proposed method outperforms the existing state - of - the - art hyperspectral face recognition methods .
photoacoustic imaging ( pai ) is a novel technique for tomographical imaging of small biological or medical specimens .the method makes use of the fact that an object expands after being exposed to ultrashort electromagnetic radiation , and emits an ultrasonic wave ( see e.g. ) .the resulting acoustic pressure is assumed to be proportional to the electromagnetic _ absorption _ , which is the imaging parameter of photoacoustics .it provides detailed anatomical and functional information .opposed to the conventional photoacoustic imaging , which is based on the assumption that the _ compressibility _ and _ density _ of the medium are constant ( and thus in turn the sound speed ) , this paper assumes _ both _ of these parameters spatially varying .the mathematical model describing the propagation of the ultrasonic pressure considered here is here , is the material compressibility , denotes the density and denotes the amount of absorbed energy , i.e. the imaging parameter that encodes the material properties of physiological interest in pai .we emphasize that the speed of sound is given by and that this equation is more general than which also describes acoustic wave propagation in the case of variable sound speed .the latter equation is derived from under the additional assumption that is spatially slowly varying . for further details on the derivation of from fluid- and thermodynamics, we refer to ( * ? ? ?* chapter 8.1 ) .the _ photoacoustic reconstruction _ consists in determining the function from measurement data of on a surface over time .there exists a huge amount of literature on reconstruction formulas in the case , see for instance to name but a few .time reversal in the case of variable sound speed has been studied for instance in .time reversal for photoacoustic imaging based on as well as on has been given in - note that both associated wave operators are special cases of the general operator from .their theory has been generalized to the elastic wave equation in . in this paperwe focus on numerical realization and regularization theory of photoacoustic imaging based on with different numerical methods .most closely related to our numerical approach is a time reversal algorithm from , which employs the formula below .recently we applied iterative regularization techniques in the case of variable sound speed and compared it with time reversal .the goal here is to generalize time reversal and iterative regularization for photoacoustic imaging in the case of spatially variable density and compressibility . a convergence in the least - squares - sense is thereby guaranteed by standard results from regularization theory ( see e.g. ) .the paper is organized as follows : in we analyze the mathematical equations describing wave propagation in the case of spatially variable compressibility and density .imaging based on this model is analyzed in .numerical results are presented for three different methods ; time reversal , neumann series , and landweber iteration in .the latter two seem to be new for the presented equation .we also investigate the case of partial measurement data . in the beginning we summarize the basic notation , which is used throughout the paper . denotes a non - empty , open , bounded and connected domain in with lipschitz and piecewise -boundary .moreover , is connected and relatively open .the vector , with , denotes the outward pointing unit normal vector of . 
the absorption density , the compressibility and the density are supposed to satisfy : * , satisfying , .we also define .moreover , we assume that are constant in and satisfy there . *the absorption density function has support in : . for the sake of simplicity of notation we omit space and time arguments of functions whenever this is convenient and does not lead to confusions .we use the following hilbert spaces : * we denote by , where denotes the complement of , with inner product * for or : * * let be the closure of differentiable functions on with compact support in , associated with the non - standard ( but equivalent ) inner product the associated norm is denoted by . * * the seminorm associated to the inner product is denoted by . * * the norm associated with the inner product is denoted by .* * the norms and are equivalent .in fact , we have * denotes the standard hilbert space of square integrable functions on with support in , together with the inner product denotes the standard hilbert space of square integrable functions on with support in , together with the inner product the induced norms are denoted by , .+ denotes the space of smooth functions with support in .* the trace operator restricts functions defined on onto , respectively .this operator is the composition of the standard trace operator and the restriction operator , which are both bounded ( * ? ? ? * theorem 5.22 ) , and thus itself bounded .we denote this section we are analyzing the _ wave operator _ mapping the absorption density onto the solution of restricted to : first , we show that the operator is bounded .analogous to , we define the total wave energy by the time derivative of , taking into account , is consequently , this , together with shows that a - priori this inequality does not provide a bound for in the whole with respect to the standard -norm .this is provided for instance by the following lemma : let be the solution of , then where for arbitrary it follows from that : with the elementary inequality it follows that this , together with shows the assertion .now , we prove boundedness of : the operator is bounded and for given let be the solution of .from it follows that the solution of is in for every .thus from and it follows that [ rem : oph1 ] from ( * ? ? ?* theorem 1 ) it follows that the trace ] is in fact in . the proof does not follow in a straightforward way from standard trace results , but utilizes the theory of fourier integral operators and microlocal analysis .in special cases , this result can be further improved , see ( * ? ? ?* remark 5 ) and also .for instance , for strictly convex , we have \in h^1\left(\partial\omega\times(0,t)\right) ] implies . in the followingwe discuss several numerical algorithms for solving .we are employing the landweber s iteration for solving and compare it with the time reversal methods presented in , which are the standard references in this field .more efficient iterative regularization algorithms are at hand , but these are less intuitive to be compared with time reversal .the landweber algorithm reads as follows : -m^\delta],\quad k=1,2,\dots,\end{aligned}\ ] ] where stands for error - prone data with , where . 
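a minimal sketch of this landweber update for a generic discretised forward operator is given below; the callables `forward` and `adjoint` stand in for the wave operator (mapping the absorption density to the pressure trace on the measurement surface) and its adjoint, which in our experiments are realised by the bem-fem scheme rather than by the dense toy matrix used here, and the step size and stopping rule are illustrative.

```python
import numpy as np

# hedged sketch of the landweber update for a generic discretised operator;
# `forward` / `adjoint` stand in for the wave operator and its adjoint, and
# the dense toy matrix below only illustrates the iteration itself.

def landweber(forward, adjoint, data, x0, step, n_iter=500, tol=None):
    """x_{k+1} = x_k + step * adjoint(data - forward(x_k)), with an optional
    discrepancy-principle stop once the residual norm drops below `tol`."""
    x = x0.copy()
    for _ in range(n_iter):
        residual = data - forward(x)
        if tol is not None and np.linalg.norm(residual) <= tol:
            break
        x = x + step * adjoint(residual)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((120, 80))                 # toy forward operator
    x_true = np.zeros(80)
    x_true[20:40] = 1.0                                # toy absorption density
    noise = 0.01 * rng.standard_normal(120)
    m = A @ x_true + noise                             # error-prone data
    mu = 1.0 / np.linalg.norm(A, 2) ** 2               # step size < 2 / ||A||^2
    x_rec = landweber(lambda v: A @ v, lambda r: A.T @ r, m,
                      np.zeros(80), mu, tol=1.1 * np.linalg.norm(noise))
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```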
for a summary of results for landweber regularization in photoacousticswe refer to and the references therein .we emphasize that landweber s iteration converges to the _ minimum norm solution _ \,,\ ] ] where is the moore - penrose inverse ( see for a survey ) , if the data is an element of the range of .this is a property which is relevant when the observation time is smaller than the critical time which guarantees injectivity of . in this subsection, we first state the conventional time reversal and give a remark on necessary assumptions to obtain error estimates for this method .this is followed by a description of a modified time reversal approach , for which a theoretical analysis based on can be provided .we formally define the time reversal operator : = z(\cdot,0),\end{aligned}\ ] ] where is a solution of the fundamental difference between and is that they are defined via differential equations on and .the conventional time reversal reconstruction consists of computing \,.\ ] ] assume that , that and the speed of sound is non - trapping .for this case , hristova ( * ? ? ?* theorem 2 ) provides an error estimate for the time reversal method , employing results on the decay of solutions of the wave equation ( e.g. ) .stefanov and uhlmann define the modified time reversal for : rather than assuming ( in most cases unjustified ) the initial data we are again using the harmonic extension of the data term , for , as initial datum at .that is , for the modified time reversal operator = z(\cdot,0)\end{aligned}\ ] ] is defined by the solution of equation note that this algorithm has not been used as a basis for numerical reconstruction for the generalized .previously showed stability of the modified time reversal reconstruction under non - trapping conditions and for sufficiently large measurement time for .in fact , let be non - trapping and denote the time when all singularities have left .then the result is directly convertible to : ( * ? ? ? * theorem 1)[thm : stability1 ] let and be a closed -surface .moreover , let the coefficients , .then , where is a compact operator from satisfying .note that ( * ? ? ?* theorem 1 ) works only for complete boundary measurement data . by, the initial value can be expanded into the neumann series .\end{aligned}\ ] ] by induction one sees that the -th iterate can be written as - m],\end{aligned}\ ] ] where .\ ] ] note that with partial data , the neumann series reconstruction consists in formally applying to the extended data , where in a way that )$ ] serves as approximation to in .let now relatively open . to fix the terminology, we define for the curve to be the geodesic ( in the riemannian metric ) through in direction , where and .note that give the ( possibly infinite ) times when the geodesics leave the domain in positive ( ) resp .negative ( ) direction .regarding stability ( * ? ? ?* theorem 3 ) provides the following result , which holds also in the partial data case .[ thm : stability2 ] assume that holds for all in at least one of the two directions , .then there exists a constant such that for any , with , the estimate is valid .we compare conventional time reversal , the neumann series approach and the landweber iteration for the photoacoustic imaging problem based on .similar studies have been performed in for photoacoustic imaging based on .displays the involved parameters , including absorption density , material compressibility and density . 
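Before turning to the numerical comparison, a brief sketch of how the Neumann-series reconstruction described above can be implemented. Here `forward` and `time_reversal` are hypothetical callables standing in for the forward map (solve the wave equation and record the boundary trace) and for the (modified) time reversal; with K = Id minus the composition of the two, summing the series is the same as the fixed-point loop below, assuming the time reversal is linear.

    import numpy as np

    def neumann_series_reconstruction(forward, time_reversal, m, n_terms=10):
        """Sketch of the Neumann-series reconstruction f = sum_j K^j A m,
        K = Id - A Lambda, written as the iteration f <- f + A(m - Lambda f).

        forward(f)        : hypothetical callable for the forward map Lambda
        time_reversal(m)  : hypothetical callable for the (modified) time
                            reversal A applied to boundary data m (assumed linear)
        """
        f = time_reversal(m)                       # zeroth term: A m
        for _ in range(n_terms - 1):
            f = f + time_reversal(m - forward(f))  # adds the next term of the series
        return f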
For the numerical solution of the involved wave equations and , we use a straightforward adaptation of the BEM-FEM scheme outlined in . and are discretized by a finite element discretization. For the simulation of the data, for all different wave equations, the mesh size has been chosen as and , leading to about nodal points in . For all wave equations involved in the reconstruction, we use a grid with and . Measurement data are assumed to be recorded on detection points on the unit circle. In the partial data example, the measurements are restricted to the lower half of the circle. The total measurement time was varied as multiples of , which is defined as in .
Figure (phantom for the first test): absorption density, compressibility, density.
We investigate the performance of the different photoacoustic imaging techniques, time reversal, Neumann series and Landweber iteration, respectively, with spatially varying and . We show two different test cases:
1. partial measurement data (see ), where we show the imaging results for and , where , defined in , denotes the minimal time that guarantees unique reconstruction of the absorption density;
2. reconstructions from full data in the presence of noise ( ). The noise level is stated as SNR (signal-to-noise ratio, in dB scale) with respect to the maximum signal value. The examples include moderate noise (SNR ) and high noise (SNR ).
Figure (first test, partial data): , are exactly given; data are noise-free and recorded for , ; reconstructions for , are shown in the first and second line, respectively. From left to right: time reversal, Neumann series with , Landweber iterate.
Figure (full data with noise): , measurement time . First line: SNR . Second line: SNR . From left to right: time reversal, Neumann series with , Landweber iterate.
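A short sketch of one plausible reading of the noise convention just stated, namely SNR in dB measured against the maximum signal value; the exact scaling used by the authors may differ, and the code below is illustrative only.

    import numpy as np

    def add_noise_snr(data, snr_db, rng=None):
        """Add white Gaussian noise so that 20*log10(max|data| / sigma) = snr_db.

        Mirrors the stated convention 'SNR with respect to the maximum signal
        value'; the definition used in the experiments may differ in detail.
        """
        rng = np.random.default_rng() if rng is None else rng
        sigma = np.max(np.abs(data)) / (10.0 ** (snr_db / 20.0))
        return data + sigma * rng.standard_normal(data.shape)

    # usage with illustrative (not the paper's) levels on simulated data m:
    # m_moderate = add_noise_snr(m, snr_db=30.0)
    # m_high     = add_noise_snr(m, snr_db=10.0)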
The second test concerns a parameter setting and associated sound speed , which is reconstructed with inversions (time reversal, Neumann series, Landweber iteration) based on . The phantom simulates a water-like body (e.g. soft tissue) containing an inclusion with significantly different acoustic properties, like the air-filled swim bladder of a fish (see lower line of ). We choose the parameters to be bounded in the intervals and , with high gradients in the transition between the inclusion and the rest of the domain (see also ). Moreover, we include the achievable results when only is known, leading to a modeling error in the reconstructions.
1. In the first line of we display the reconstructions with correct parameters as depicted in the second line of .
2. Next, reconstructions are performed by using the parameters and . This displays the usually considered approximation (see second line of ).
3. We also try it the other way round by setting , and (third line of ). This leads to the wave equation in pure divergence form, and displays the case where the compressibility variations are negligible.
However, in the second and third cases the modeling error leads to severe artifacts near regions where the parameter gradients are high. In the presented test examples, all three methods qualitatively reconstruct the same features. Time reversal, however, fails to give quantitatively correct results, due to the relatively short measurement times in use. Neumann series and Landweber iteration perform at the same level. As expected from theory, the Landweber reconstructions appear slightly smoother, specifically in . The second test clearly indicates that a modeling error in the reconstruction method can lead to severe artifacts near regions where the parameter gradients are large.
Figure (second test): first line: reconstruction using correct parameters, as pictured in the second line of ; second line: reconstruction with parameters , ; third line: reconstruction with parameters , . From left to right: time reversal, Neumann series with , Landweber iterate.
In this work we have studied photoacoustic imaging based on a general wave model with spatially variable compressibility and density, respectively. We have implemented the Neumann series and the Landweber iteration for photoacoustic imaging based on this general equation, and we compared the results to conventional time reversal as discussed in . At the present stage of research, the numerical methods for photoacoustic imaging reveal the differences outlined in with respect to convergence, stability and robustness against noise. Stability is understood in the sense of regularization theory, meaning that the Landweber iterates determined by a discrepancy principle approximate the minimum norm solution.
Table 1: overview of the different photoacoustic imaging methods.
Numerical results show the reconstructions in the case of error-prone data and under modeling errors. We emphasize that, so far, an error analysis is only possible for the Landweber iteration.
The work of TG and OS is supported by the Austrian Science Fund (FWF), project P26687-N25 Interdisciplinary Coupled Physics Imaging. We are grateful to Silvia Falletta for valuable hints concerning the details of discretization and implementation of the coupled FEM-BEM system.
this paper investigates photoacoustic tomography with two spatially varying acoustic parameters , the compressibility and the density . we consider the reconstruction of the absorption density parameter ( imaging parameter of photoacoustics ) with complete and partial measurement data . we investigate and analyze three different numerical methods for solving the imaging problem and compare the results . * keywords : * photoacoustic imaging , spatially varying compressibility and density , variable sound speed , regularization , time reversal .
interesting lizards live on the mountains of armenia , by the shore of lake sevan .a peculiarity of the species is that they do not have males ; their females deposit non - fertilized eggs from which only females hatch .such a reproductive method is extremely simple and very rational : each animal can produce progeny and the difficulties of finding a `` spouse '' are eliminated .it turns out that the task of reproduction can be performed quite well without males .another interesting method of reproduction is demonstrated by silver crucian carps _carassius auratus _ ,inhabitants of russian lakes . like the lakesevan lizards , only females represent the species .these females do resort to the service of males , but .... of different fish species .sperm of `` foreign '' males stimulates the roe to develop . however , real fertilization , i.e. the fusion of nucleus of male and female sex cells , does not occur .the males do not generate new organisms genetically and can not claim the fatherhood .a needle or certain chemicals can play the role of a father .for example , the frog roe can be stimulated to develop by the prick of a thin needle , and cell division in the eggs of certain sea species is triggered by shaking or adding certain acids or salts to the water . in the laboratory , even such a highly organized animal as a rabbitcan be born without a father .the experimenters sometimes succeed in triggering cell division mechanically or chemically in an ovum extracted from a female rabbit ; the result is then placed back into the womb of the mother .later , the mother delivers a normal live newborn `` orphan '' rabbit , which can become a normal adult animal .[ this method should not be confused with the much publicized _ in vitro _fertilization on humans where the biological father , possibly anonymous , always exists ._ translator s comment_. ] in some species the attitude toward the male is very `` unfair . ''for example , a spider female allows the male to copulate but eats it up immediately after the `` marriage . ''to avoid this destiny the male must fetch some tasty food to its bloodthirsty bride . despite its name , the female of the praying mantis _mantis religiosa _ behaves `` godlessly . '' during copulation , it bites away the head of the male so that the latter has to complete its mission with no head .this set of examples can be augmented by the habits of bees which `` permit '' the males to be born only in some generations , or practice their elimination immediately after the female is fertilized .however , in the majority of species the females `` keep '' their males , tolerate them and treat them fairly well .moreover , some species which `` know '' how to reproduce without males , e.g. some crayfish , manage without the males in summer , when it is warm and there is enough food .but as soon as the fall or a drought begins they resort to the service of males .this makes one believe that there is still a purpose in males .let us try to figure out this purpose .two main methods of reproduction exist : asexual and sexual .only one parent organism participates in the asexual method , producing organisms similar to itself .two parents participate in sexual reproduction . however , the principal feature is not the quantitative one : `` two from one '' in the first case and `` three from two '' in the second case . 
much more important is the qualitative feature : in the asexual method no new quality appears , whereas in each instance of the sexual reproduction new qualities appear which are different from the parental ones .the asexual method is encountered mostly in one - cell species , whereas most animals including mammals reproduce sexually .this suggests that the latter method is more progressive from the viewpoint of evolution .the evolutionary advantages of sexual reproduction usually are attributed to the recombination of genes leading to a genetic variety .sexual reproduction necessarily assumes crossbreeding , and , as a rule , is accompanied by splitting into two sexes .it is the crossbreeding which generates new gene combinations .but why are two _ different _ sexes needed ?a method of reproduction exists in which the animals are not separated - not differentiated - into two sexes but crossbreeding still takes place .this method is used by earthworms .each worm is both the male and the female .oysters behave similarly : each animal first behaves as a male and then as a female .this method seems to have many advantages . indeed , suppose a population consists of 100 animals and each of them without exception can breed with all the other .then the number of possible combinations is .however , if the same population is split fifty - fifty into two sexes , then the number of possible combinations is about half as large : .the division into sexes makes things worse , as it seems , because a sexually divided population sacrifices about half of the possible genetic combinations in each generation compared with a non - divided one .what advantages are received by the sexually divided population in return ?a common belief is that the advantage is the differentiation which provides two sorts of gametes , i.e. , sex cells : small mobile spermatozoa , whose task is to reach the ova , and relatively big but immobile ova , which provide the future embryos with nutrients .however , a similar specialization takes place among the hermaphroditic animals ( e.g. , earthworms and oysters ) without sex differentiation and the accompanied decrease in the genetic variety .hence we can not explain the biological meaning of sex differentiation in this way .let us try to analyze the role of the sexes in the reproduction process , i.e. , to elucidate their relationships to the main production criteria : quantity , quality , and variety of the product . because each kind of mass - production is mainly characterized by these three parameters .suppose 100 bisons are released into a sanctuary .how should the ratio between the sexes be chosen , how many cows and how many bulls ? obviously , this ratio depends on the goal .for example , if the goal is to maximize the _ quantity _ , the number of calves , then 99 cows and 1 bull is a reasonable proportion , because 99 new calves can be born in each generation .however these calves will all share the same father , and will differ only as to the mother .the number of possible parent combinations in this case is 99 .if , on the other hand , maximum _ variety _ of the progeny is desired , then the number of cows should equal the number of bulls . 
in this case, the number of possible combinations is .however , the number of progeny decreases , because only 50 calves will be born in each generation .finally if the _ quality _ of the herd is the goal , then conditions for sexual selection should be created in such a way that part of the animals do not participate in the reproduction . for thisit is necessary to have extra males .then , the competition for the females will eliminate the representation of some of the males from the progeny . the larger the ratio of males to females , the more severe is the selection . thus , there exists a kind of specialization of function between the sexes in reproduction .the two sexes have different relationships with respect to the main parameters , quantity and quality of the progeny : the more females in the population , the higher the quantity , while the more males in the population , the better the selection , and the faster the changes in quality .this asymmetry appears only on the population level ; in each family , an offspring receives roughly the same quantity of genetic information from the father and the mother .the new principle becomes apparent only if we consider not a family but the entire population .( we discuss an `` ideal '' population , that crossbreeds randomly . )the specialization described above originates in the fact that the potential of a male for the transmission of genetic information is incomparably higher than that of a female .every male might , in principle , become the father of the entire progeny of the population , whereas the ability of females in this respect is limited . in the jargon of cybernetics ,the section of the channel for transmitting male genetic information to the progeny is significantly wider than that for transmitting female genetic information .genetically rare males , unlike genetically rare females , can play a substantial role in changing the average genotype , i.e. the quality of the population .the difference between the two sections of the channel to the progeny also appears in that each separate male `` tends '' to use his ability to the largest degree and leave a maximum quantity of progeny , thus affecting the quality of the population . at the same time , every separate female `` tends '' to assure the best possible quality for her limited quantity of progeny .the schematic formula looks as follows : the number of females determines the size of the population . +each female fights for the quality of her progeny .+ the number of males determines the quality of the population .+ each male fights for the number of his progeny . to a certain degree, this simplified formula explains the different psychologies of the sexes .darwin gives an incisive example of this difference : `` males of the deerhound are inclined to someone else s females , whereas the females prefer the company of familiar males . ''[ reverse translation from russian .] this does not mean , of course , that the females are good and the males are bad .they are simply different , and the difference has a biological foundation . 
to understand the advantages of sexual specialization , we have to consider the relationship between the population and the environment .but first we will try to reveal the features of analogous mechanisms in various control systems from the point of view of cybernetics , the science of analogies .from the point of view of cybernetics , all three are control systems .all are characterized by the attempt to reach a goal .the goal may be the moon for the rocket , victory for the soccer team , survival for the animal population .all three systems are subject to perturbations .the rocket is perturbed by the atmosphere and gravitational fields , the soccer team is perturbed by the efforts of the opposite team , and the animal population is perturbed by the environmental factors : climate , food , predators , parasites , etc .each system counteracts the perturbations to achieve stability of its motion .what is the mechanism of counteraction ?an important feature is separation of the mechanism for conservation , with the task of keeping everything as is , from the mechanism for making changes , with the task of correcting errors . in the rocket , stabilizers [ fin assembly ]are for conservation while rudders are for making changes in the motion . in the soccer team , these mechanisms are respectively the defense players , who try to keep the score constant , and the forwards , who try to change it in favor of their team . the same result , stability of motion , is achieved in a similar manner in different systems : by separate mechanisms for conservation and for making changes .this separation allows the system to have maximum stability of motion .what about the animal population ?does not the differentiation of sexes relate to the analogous separation of conservation and change ?we have already shown that the females control the quantitative side of reproduction and the males define the qualitative side . in the biological categories , this means that the females largely express the tendency for inheritance , and the males largely express the tendency for change .finally , in computer science terminology , the female represents the memory in a permanent storage for the species , whereas the male represents the memory in a temporary storage .this separation of two types of memory gives substantial advantages to the species . 
to see thislet us consider the relationship of the population with the environment .the notion of environment includes the set of all physical , chemical , and biological factors which affect an organism during its lifespan .climatic factors are part of the environment : cold and heat , humidity and drought ; these also include various chemicals in the food , water , and air ; and finally there are various animals of the same or different species which live nearby ( predators , parasites , etc ) .a characteristic feature of a living system is the ability to adapt to changing environmental conditions .the system must receive information about the change from the environment in order to adapt .all characteristics of an organism are directly or indirectly connected with the corresponding environmental factors : the resistance to cold is connected with the low temperature , the resistance to heat is connected with the high temperature , the resistance to drought is connected with the humidity etc .connections with other factors of the environment may be less evident ; however , it is beyond doubt that the optimal , average values for the characteristics of an organism are , in the end , determined by the corresponding factors or combinations of factors of the environment . for expository simplicitylet us choose a factor - characteristic pair , say temperature and the resistance to temperature , and represent graphically in fig.7.1 the relation between the population and the environment for this particular pair . in fig.7.1 ,the abscissa represents the intensity of the harmful perturbation factor , say cold ; at the same time , the abscissa represents the degree of resistance to the factor .the ordinate in fig.7.1 represents the number of animals that would die for a particular value of the factor , i.e. as a result of a given temperature .the curve obtained characterizes the mortality of the population as a whole .the harmful factor front is represented by a vertical line .the front cuts off the part of the population ( the shaded area in fig.7.1 ) which is most vulnerable to the factor .[ more explanation , might be helpful .an additional dimension , time , is implied in fig.7.1 .the term `` harmful factor front '' suggests the `` approaching '' factor , i.e. the weather is becoming colder .the vertical line called the `` front '' in fig.7.1 , represents the current level of the factor in this case .if , on the other hand , the factor is `` retreating '' , i.e. the weather is becoming warmer , then the current level of the factor represents its `` rear . ''fig.7.1 does not represent the latter case as the factor should not to be considered harmful when it is `` retreating '' : specimens do not die because of it .but they may start dying from the opposite factor which then becomes harmful , i.e. heat ._ translator thanks dr .kruskal who pointed out this and several other somewhat non - obvious spots in the original ._ ] the mortality curve must be always in contact with the harmful factor front in order for the population to sense the approaching perturbation .the population must always sacrifice some quantity to receive the information which improves the population quality with respect to the factor .this implies , for example , that even in a population living in the tropics , say , monkeys , some animals sometimes die from the cold . 
at the same time , even in an arctic or antarctic population , say , penguins or white bears , some animals sometimes die from the heat .a tribute for the information received is necessary for the informational contact with the environment .if a population pays no such tribute , no information is received from the environment and the population is not able to adapt to it .a sudden change in the environmental condition might have caught such a population unprepared and killed it entirely . naturally , it is advantageous for a population to minimize the tribute of the quantity for a new quality .how is this minimization achieved ?the optimization method employs the asymmetry of the relationship of the sexes to the quantity and quality of progeny .caused by a perturbation in the environment , losses of males or females affect progeny differently .a loss of females strongly affects the quantity of the progeny , without essentially affecting the quality . on the other hand, a loss of males during unfavorable environmental conditions does not tell on the quantity of the population but changes the quality in the proper direction .thus , we may say , that a loss of females during `` hardships '' is useless , decreasing the size of the population .a loss of males in similar conditions is useful , promoting the evolution of the species .poets and writers often call the female sex `` beautiful '' and `` weak . ''the validity of the first attribute seems undoubted .but is the second one correct ?if by strength we understand the degree of resistance in hardships , then the female sex should be considered as the strong one . indeed ,multiple experiments on plants and animals and observation on humans show that the males die first as a result of all harmful environmental factors : heat , cold , hunger , poisons , and illnesses .not only does the entire male organism have a lower resistance than the female organism , but its various organs , tissues and cells are also weaker than those in females . how can this weakness and higher mortality of males be explained ?two theories explain the phenomenon .the first one says that the heterogametic sex always has a higher mortality because of recessive genes connected with the sex chromosome .[ the heterogametic sex is the one with unlike sex chromosomes x and y , as opposed to the homogametic sex which possesses similar sex chromosomes x and x. if a recessive bad gene occurs in a sex chromosome of a heterogametic specimen , the latter acquires the bad trait because there is no similar gene in the specimen s genetic set .hemophilia is a classical example of such a trait .the gene responsible for the illness is located in the sex chromosome x. unlike men , women carrying the hemophilia gene have a low chance of acquiring the disease because their genetic set includes another x chromosome which usually masks the sick gene with a healthy counterpart ._ translator s comment_. ] as to the second theory , it infers a higher male mortality from a more intensive metabolism .the first theory contradicts the results of mortality studies among birds , butterflies and moths . unlike the overwhelming majority of other species , the females of these species constitute the heterogametic sex while the males are the homogametic sex .yet experiments show that in many species of butterflies , several species of birds and moths , male mortality is almost always higher than that of females . 
the second theory , in fact ,does not explain anything but substitutes one incomprehensible phenomenon of higher mortality with another no more comprehensible phenomenon of higher metabolism .if resistant females exist , why should not similarly resistant males ?the fact that the males are biologically weaker implies the following : if , on the same graph , we draw the mortality curves for each sex separately , then we will see that only the males curve makes contact with the front of a harmful factor .there are two main possibilities for drawing the females curve .the latter can either shift to the right ( fig.9.1 ) , or it can have a smaller dispersion than the males curve ( fig.9.2 ) . taking into account our previous considerations ( that the loss of males does not affect the quantity of the population but promotes adaptation of quality , and that rare specimens of males have a larger informational value than rare specimens of females ; that the males are the main carriers of the information from the environment to the population ) , we come to the conclusion that for each characteristic considered , the males curve must have larger dispersion than that of the females .this means that the males must have greater variety that the females in all qualities .if we include all the males in the population in one team and include all the females in the other and arrange competitions between the two teams , then the champions in all personal competitions will be the males , whereas in a whole - team competition ( where the results of all participants count ) the females will be the winners .[ in other words : the mean - value of the females curve is to the right of the males ( females are the winners in a whole - team competition ) , while the dispersion of the females curve is lower than of the males ( females are the losers in personal competitions ) .thus , the author suggests that a combination of both tendencies takes place , the one presented in fig.9.1 and the one presented in fig.9.2 ._ translator s comment_. ] such a relationship of the resistances between the sexes makes it possible for the population as a whole to pay for the new information mainly with males whose loss promotes the shift of quality without changing the number of specimens .thus , the higher male mortality is expedient for the survival of the species .it is now clear that the sex ratio is an important parameter of the population , connected with the hereditary conservation and changeability tendencies in the reproduction process .therefore , the sex ratio at birth must reflect these tendencies in their dependence on the environmental conditions during different periods of the population history .an increase of male mortality in unfavorable environmental conditions must lead to an increase of the male / female ratio at birth .an increase of this ratio may be triggered directly by a changed environmental conditions , independently of the ratio of sexes of adult animals .there are reasons to believe that in vertebrates this increase is controlled by steroid hormones of the hypophysis ( pituitary gland ) , adrenal cortex and gonads . 
to promote faster adaptation of the population , an environmental hardship should increase the male s renewal rate .the rule is that in response to _ any _ unfavorable environmental condition , _ both _ the birth and the death rates for the males increase .multiple facts reported and known in biology confirm this rule .interestingly enough , the species which utilize either method of reproduction , asexual or sexual ( bacteria , infusoria , some crayfish , and other ) , resort to sexual reproduction under unfavorable environmental conditions . for example , among many kinds of water - fleas _ daphnia genus _ as well as among the aphids _ aphidae _ , when the conditions are favorable , usually in summer , asexual reproduction ( parthenogenesis ) takes place .the new fleas ( only females ) hatch out of the summer soft - membrane eggs . when less favorable conditions strike , some of the females produce a quantity of males , which then copulate with the females .the fertilized females deposit the hard - shelled winter eggs , which can stay alive for a long time in unfavorable conditions : during cold , heat , drought , when the water reservoir dries out . extracting a population of female rotifers _ rotifera phylum _ from pond water and placing them into river or well water , or doing the reverse resettlement , the scientists observed an emergence of males on the third or fourth day after the move .the direction of the move , from the pond to the river or from the river to the pond , was unimportant : any change in the environment caused the males to appear .subjecting sexual animals like drosophila ( fruit flies ) to harmful factors , the researchers observed the simultaneous increases in both the male birth and death rates .this seemed paradoxical and unexplainable . indeed , why should a harmful factor ( no matter which : hunger , cold , heat , or a poison ) act as such for male flies at all stages of the life causing them to die more intensively , except at the very beginning of the male life and , moreover , promote the birth of more males ? why are the sex theories sometimes so contradictory ?for example , paying attention to the fact that during a cold year more boys than girls are born , a scientist inferred that cold promotes the birth of boys , while heat promotes the birth of girls . 
later , the scientist noticed that extreme heat also promotes the birth of the boys .then another theory appeared which explained everything in exactly the opposite way : heat promotes the birth of boys , while cold promotes the birth of girls .meanwhile , both phenomena are explainable easily by the rule of higher male renewal rate for any change in the environmental conditions .facts which confirm this rule can be found among mammals , humans including .medical and demographic statistics show that during substantial climatic and social shifts ( abrupt change of the temperature , drought , war , hunger , resettlement ) , that is , during an increase of mortality , a tendency to increase the ratio of boys to girls among the newborn babies is also observed .the same tendency is observed by cattle - breeders : better the maintenance conditions of the animals , more females in the progeny , even if artificial insemination is practiced and the sperm is taken from the same male .the separation of the population into two sexes and the specialization of the sexes , wherein one sex is responsible for quality and the other for quantity leads to the situation where any information stream about environmental changes is first received by the males , which react to this information and transform the stream .in other words , new information gets first into the temporary memory of the population , where it is checked and selected , and only after this is it transferred into the permanent memory , i.e. females .this separation , into a more inertial stable kernel and a more mobile sensitive shell , allows the population to distinguish temporary , short - term and random factors of the environment , e.g. an unusually cold winter , or an especially hot summer , from systematic changes in the same direction , say the beginning of an ice age .one may say that the information stream from the temporary memory gets to the permanent memory through a frequency filter .the filter lets low frequency through but blocks high frequencies .it is this filtration through the temporary memory by which the inertiality of the permanent memory is achieved .a good sculptor , before making a sculpture out of marble , will create many models out of clay .the nature acts similarly . like a sculptor, it first creates a large variety of males ( clay models ) , testing them and selecting good versions to implement later in females ( marble sculpture ) .thus , in a population , new qualities first appear among the males and may afterward appear among the females .we may consider the male as the vanguard of the population , which advances to meet the harmful factors of the environment .a certain distance is kept between this vanguard and the kernel , `` the golden fund '' of the population .the distance is necessary for testing and selection .the evolutionary inertiality , lagging of the females , is a payment for their perfection .vice versa , the progressivity of the males is a benefit of their imperfection .we can formulate the following hypothesis : a new quality in phylogeny must first become permanent among the males , and then it must be transferred to the females . 
in other words ,the males are the `` door '' for the change in the heredity of the population .thus , if the male and the female are distinct from each other in some quality , say in the height or color , one may predict the direction of change , namely , the quality is changing in the direction from the female to the male .for example , if males are bigger than the females , then there is an evolutionary tendency for size to increase in the species . in the other case ,if males are smaller than females , the species evolves to have smaller specimens .we may conjecture that humans are becoming taller at this stage of history , because an average man is taller than an average woman . among the spiders ,the tendency must be opposite , because their males are smaller than the females .the anthropologists and entomologists believe this is the case : mankind is growing , spiders are shrinking .another example is the well - known connection between ontogenetic and phylogenetic emergence of the antlers in male and female deers . a strong relation exists between the extent of antlers in a species and the age when antlers appear in a specimen .namely , the larger the extent , the earlier the antlers appear , first in males , then in females .the suggested rule can be applied to study some concrete problem of evolution , remembering of course that this general tendency can sometimes be overlapped by other tendencies .an application of certain general ideas and approaches of cybernetics to the formulation and solution of biological problems allows us to understand certain facts , previously mysterious .now we know that the advantages of asexual reproduction are efficient only in short `` sprinter '' evolutionary distances .`` stayer '' and `` marathon '' distances need the sexual method .the advantages of crossbreeding and differentiation are now clear , as well as the fact that these advantages can be fully realized only in an `` ideal '' population .this implies an insignificant sexual dimorphism in monogamous species versus a strong sexual dimorphism in polygamous species .returning to our first question , whether males are generally needed , we should answer : yes , they are needed mostly for adaptation to the changes in environmental conditions .this holds for animals .what about humans ?it is known that social and technological progress steadily decreases the role of the biological evolution .having learned how to change the environmental conditions , man renders himself free from the necessity to change himself .indeed , if a new ice age begins , animals will grow thick hair , but man will put on synthetic fur clothes . we conclude that social and technological progress should steadily increase the role and proportion of women in the society .[ this statement which is apparently saying that men are dying out is put in such an indirect way perhaps to improve the chances of the original publication ._ translator s comment_. ]
evolutionary role of the separation into two sexes from a cyberneticist s point of view . [ i translated this 1965 article from russian `` nauka i zhizn '' ( science and life ) in 1988 . in a popular form , the article puts forward several useful ideas not all of which even today are necessarily well known or widely accepted . _ boris lubachevsky , bdl-labs.com_ ]
a common bifurcation to instability , one that occurs in so - called natural hamiltonian systems that have hamiltonians composed of the sum of kinetic and potential energy terms , happens when under a parameter change the potential energy function changes from positive to negative curvature .in such a bifurcation , pairs of pure imaginary eigenvalues corresponding to real oscillation frequencies collide at zero and transition to pure imaginary , corresponding to growth and decay .this behavior , which can occur in general hamiltonian systems and is termed the steady state ( ss ) bifurcation , is depicted in the complex frequency plane in fig .alternatively , the hamiltonian hopf ( hh ) bifurcation is the generic bifurcation that occurs in hamiltonian systems when pairs of nonzero eigenvalues collide in the so - called kren collision between eigenmodes of positive and negative signature , as depicted in fig . [ hbif]b .such bifurcations occur in a variety of mechanical systems ; however , hh bifurcations also occur in infinite - dimensional systems with discrete spectra .in fact , one of the earliest such bifurcations was identified in the field of plasma physics for streaming instabilities , where signature was associated with the sign of the dielectric energy , and this idea made its way into fluid mechanics .streaming instabilities were interpreted in the noncanonical hamiltonian context in , where signature was related to the sign of the oscillation energy in the stable hamiltonian normal form ( see [ stabnf ] below ) .the purpose of this chapter and its companion is to describe hamiltonian bifurcations in the noncanonical hamiltonian formalism ( see ) , which is the natural form for a large class of matter models including those that describe fluids and plasmas .particular emphasis is on the continuum hamiltonian hopf ( chh ) bifurcation , which is terminology we introduce for particular bifurcations that arise in hamiltonian systems when there exists a continuous spectrum .there also exist a continuum steady state ( css ) bifurcation , but this will only be mentioned in passing .a difficulty presents itself when attempting to generalize kren s theorem , which states that a necessary condition for the bifurcation to instability is that the colliding eigenvalues of the hh bifurcation have opposite signature , to systems with continuous spectra .this difficult arises because ` eigenfunctions ' associated with the continuous spectrum are not normalizable , in the usual sense , and consequently obstacles have to be overcome to define signature for the continuous spectrum .this was done first in the context of the vlasov equation in and for fluid shear flow in . given this definition of signature , it become possible in to define the chh , a meaningful generalization of the hh bifurcation . in the present chapter we motivate and explore aspects of the chh , which are picked up in our companion chapter .to this end we describe in secs .[ sec : discrete ] and [ sec : theories ] large classes of hamiltonian systems that possess discrete and continuous spectra when linearized about equilibria .these classes are noncanonically hamiltonian , as is the case in general for matter models in terms eulerian variables . for a general field variable that represents the state of such a system , a noncanonical hamiltonian dynamical system has the form _t=\ { , h}= , where ] that satisfy \{c , f}0 f. we refer the reader to for further details . 
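Written out in conventional notation, the structure just described takes the following schematic form; this is a generic sketch with a field \psi and Poisson operator \mathcal{J}, not tied to any particular model:

    \partial_t\psi = \{\psi, H\} = \mathcal{J}(\psi)\,\frac{\delta H}{\delta\psi}, \qquad \{F,G\} = \int dz\,\frac{\delta F}{\delta\psi}\,\mathcal{J}(\psi)\,\frac{\delta G}{\delta\psi},

and the Casimir invariants are those functionals C[\psi] with \{C,F\} = 0 for every functional F, i.e. \mathcal{J}(\psi)\,\delta C/\delta\psi = 0.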
in sec .[ sec : discrete ] we consider a class of 1 + 1 multi - fluid theories , that possess discrete spectra when linearized about homogeneous equilibria .the linearization procedure along with techniques for canonization and diagonalization , i.e. , transformation to conventional canonical form and transformation to the stable normal form , respectively , are developed .then , specific examples are considered that display both ss and hh bifurcations . in sec .[ sec : theories ] we consider a class of 2 + 1 theories .the class is described and the chh bifurcation for the particular case of the vlasov - poisson system is discussed .relationship to the results of sec .[ sec : discrete ] is shown by introducing the waterbag model , which is one way of discretizing the continuous spectrum , and motivates our definition of the chh bifurcation . finally , in sec . [sec : conclu1 ] , we summarize and introduce the material that will be treated in .we first describe a class of hamiltonian theories of fluid type that have equilibria with discrete spectra .three examples are considered that demonstrate the occurrence of hamiltonian bifurcations like those of finite - dimensional systems . in the last example of sec .[ sssec : jeans ] , the hh bifurcation is seen to arise in the context of streaming . for our purposes here it sufficient to consider a class of 1 + 1 theories of hamiltonian fluid type .these theories have space - time independent variables , where , where , on which we assume spatial periodicity for dependent variables of fluid type , , where and are the density and velocity fields , respectively , with . these fields will be governed by a coupled set of ideal fluid - like equations generated by a hamiltonian with a noncanonical poisson bracket .the noncanonical poisson bracket for the class is obtained from that for the ideal fluid reduced to one spatial dimension , \{f , g}= _= 1^m_dx ( - ) , [ mfpb ] where the shorthand is used and and are the usual functional ( variational ) derivatives ( see e.g. ) .we consider hamiltonian functionals of the following form : h[_,u_]=_=1^m_dx(12 _ u^2 _ + _ u _ ( _ ) + 12 _ ) [ mfb ] where the internal energy per unit mass , , is arbitrary but often taken to be where and the polytropic index are positive constants .the coupling between the fluids is included by means of a field that satisfies ( x , t)=_=0^m [ _ ] [ mfh ] where is a symmetric pseudo - differential operator , = \int_{\mathbb{t}}dx\ , g\p[f] ] is the poisson bracket for a single particle with particle energy , which will depend globally on .equation ( [ eq : den.eq ] ) is therefore a mean field theory , where is a density of particles in phase space that generates and is advected along the single particle trajectories that result from .the resulting equations are typically quasilinear partial integrodifferential equations .we assume that the particle energy arises from a hamiltonian functional of the form = h_1 + h_2 + h_3 + \dots ] is a constant of motion for ( [ eq : den.eq ] ) . 
equation ( [ eq : den.eq ] ) with is a hamiltonian field theory in terms of the noncanonical lie - poisson bracket of \{f , g}=_d^2z f .[ eq : pb ] this bracket depends explicitly upon , unlike usual poisson brackets that only depend on ( functional ) derivatives of the canonical variables .the bracket of ( [ eq : pb ] ) is antisymmetric and satisfies the jacobi identity , though it is degenerate , unlike canonical brackets .the equations of motion may be written : = \{f , h } = - = - [ f , ] , [ eq : eom0 ] where . as mentioned in sec .[ sec : intro ] , degeneracy of the poisson bracket gives rise to casimir invariants , quantities that are conserved for _ any _ hamiltonian .for the bracket of ( [ eq : pb ] ) the casimir invariants are = \int_{\calz } d^2z \ , \calc(f) ] generally arising from translation symmetries of the interaction kernels .the system conserves momentum if there exists a canonical transformation of the phase space , , such that in the new particle coordinates , the interactions , , etc . have upon composition with one of the following two forms : h_1z= = |h_2(i , i,|-| ) [ eq : vpform ] or h_1z= 0,h_2(z , z ) = |h_2(|i - i|,|-| ) .[ euform ] in the first case = \int_{\calz}\!\!d^2z \ , i\ , f(z)\,.\ ] ] is conserved , while in the second case we have two kinds of translation invariance and thus two components of the momentum =\int_{\calz}\!\!d^2z \ , i\ , f(z ) \, \quad { \rm and } \quad p_2[f]= \int_{\calz}\!\!d^2z \,\ , { \ensuremath{\theta}}\ , f(z)\,.\ ] ] these momenta can be very useful ( cf ., e.g. , ) , but they will not be discussed further here .equilibrium states have a function of the single particle constants of motion only , i.e. , the single particle energy and possibly momenta . the example treated here has an equilibrium that only depends on , where are the action - angle variables corresponding to a given .for this reason we set and then when a choice of is made , represents the main dynamical variable .the phase space is , i.e. , periodic in , and where here .upon substitution of into , both of the forms of ( [ eq : vpform ] ) and ( [ euform ] ) can be written as follows : =\cale[f_0 ] + \cale[\ze]=:h(i ) + \phi({\ensuremath{\theta}},i)\,,\ ] ] with where and are determined by and .thus the governing equation is _t + [ f_0 , ] + [ , h+]=0 , [ eq : eom ] where =f_{\th } g_{i } - g_{\th } f_{i} ] denotes the hilbert transform , =\frac{1}{\pi}\dashint\!dp\,{f_0'}/({p - u}) ] that emerges from the origin in the complex plane at , descends , and then wraps around to return to the origin at . from this figureit is evident that the winding number of the -plot is zero for any fixed , and as a result there are no unstable modes .here we take the value of to be fixed . for a maxwellian distribution function . ]penrose plots can be used to visually determine spectral stability . as described above , the maxwellian distribution function is stable , as the resulting -plot does not encircle the origin .however , it is not difficult to construct unstable distribution functions .in particular , the superposition of two displaced maxwellian distributions , , is such a case .as increases the distribution goes from stable to unstable .this instability is known as the two - stream instability .figures [ 2max]a and [ 2max]b demonstrate how the transition from stability to instability is manifested in a penrose plot . 
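As a numerical illustration of these Penrose plots (not taken from the chapter), the sketch below evaluates the dielectric function on the real phase-velocity axis for two equal counter-streaming Maxwellians, using the plasma dispersion function, and computes the winding number of the resulting curve about the origin; a nonzero winding number signals unstable roots of the dispersion relation. The normalization, unit total plasma frequency, thermal speed vt, and k in inverse Debye lengths, is an assumption.

    import numpy as np
    from scipy.special import wofz  # Faddeeva function w(z)

    def Zfun(zeta):
        """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
        return 1j * np.sqrt(np.pi) * wofz(zeta)

    def epsilon(k, u, drifts, vt=1.0):
        """Dielectric function at real phase velocity u = omega/k for
        equal-density Maxwellian streams centered at the given drifts:
        eps = 1 + sum_s (1 + zeta_s Z(zeta_s)) / (N * k^2 * vt^2)."""
        eps = np.ones_like(u, dtype=complex)
        for vd in drifts:
            zeta = (u - vd) / (np.sqrt(2.0) * vt)
            eps += (1.0 + zeta * Zfun(zeta)) / (len(drifts) * k**2 * vt**2)
        return eps

    def winding_number(k, drifts, vt=1.0, npts=100001):
        """Winding number of the Penrose plot u -> epsilon(k, u) about the origin."""
        u = np.linspace(min(drifts) - 20 * vt, max(drifts) + 20 * vt, npts)
        phase = np.unwrap(np.angle(epsilon(k, u, drifts, vt)))
        return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

    if __name__ == "__main__":
        # a single-humped distribution (small stream separation) versus a
        # two-stream distribution (large separation); illustrative values
        for sep in (0.5, 4.0):
            print(sep, winding_number(k=0.3, drifts=(-sep / 2, sep / 2)))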
at the bifurcation point the penrose plot crosses the origin , indicating the vanishing of the dispersion relation on the real axis and therefore the presence of a member of the point spectrum .this eigenmode will be stable because , and will be embedded within the continuous spectrum .thus , the two - stream instability is an example of the chh bifurcation .the description of the chh bifurcation requires that one be able to assign an energy signature to the continuous spectrum . because eigenfunctions associated with continuous spectra are not normalizable , this requires some delicacy .this was first done in the vlasov context in , where a comparison to the usual energy signature for discrete modes was given , and followed by a rigorous treatment of signature in . in the context of shear flow, signature was defined in , in magnetofluids in , and for the general system described in the present subsection in .a rigorous version of kren s theorm for the chh bifurcation was given in .we shall give a general description of this energy signature for the continuous spectrum in , but we motivate it here first by treating the analogous version of this instability in the context of the waterbag model , which will have the advantage of only possessing a discrete spectrum. one important feature of the system ( [ eq : den.eq ] ) is that its solution is a symplectic rearrangment of the initial condition , i.e. , its solution has the form where is a canonical transformation .the rearrangement comes from the solution of the ordinary differential equation for a single particle in the self - consistent potential .this implies that the level set topology of the initial condition is preserved , which can be leveraged to simplify the equations in the case of certain types of initial conditions .one such simplification is known as the waterbag reduction ( see , e.g. , ) , in which it is assumed that the initial condition is a sum of characteristic functions .this property is preserved under composition with the symplectic map , so that the solution remains a sum of characteristic functions .the equations simplify to equations for the locations of the contours separating different regions of constant .piecewise constant initial conditions lead to a fluid closure that is exact for waterbag initial conditions , and the theories in the previous section can be seen to arise from such an ansatz .we will exploit the reduction by using a layered waterbag or onion - like initial condition to closely approximate a continuous distribution function that undergoes the bifurcation to linear instability we are interested in . in this way we will be able to connect the hh bifurcation with the chh bifurcation which we describe later .we begin by assuming to be piecewise constant between curves , i.e. 
, where is a positive constant .the equations for the curves come from the equations of single particle motion for a particle at : and this system is hamiltonian , with hamiltonian function being the classical energy : here , is the change in the distribution function when crossing the waterbag layer .the poisson bracket is similar to those seen in hamiltonian fluid theories : the equilibria of the waterbag model that we are interested in studying are charge neutral and spatially homogeneous , constant , such that the electric potential .we chose such a state and linearize about it , yielding the equations of motion moving to fourier space and eliminating the dependence on in favor of the wavenumber gives the equations of motion for the fourier coefficients . in termsof the fourier coefficients , the hamiltonian of the linearized system is here the term arises from the term in the linearized vlasov equation , which indicates the signature of the continuous spectrum .the bracket is the bracket of the original nonlinear system written in terms of the fourier modes : =\sum_{k\in \n}\sum_{\al}\frac{ik}{\delta f_{\al}}\ , \left ( \frac{\p f}{\p p_k^{\al}}\frac{\p g}{\p p_{-k}^{\al } } - \frac{\p g}{\p p_k^{\al}}\frac{\p f}{\p p_{-k}^{\al } } \right)\,.\ ] ] this bracket is nondegenerate and therefore the system is nearly canonical in terms of the new variables . in particular , for a given pair , the linear equations form a finite - dimensional canonical hamiltonian system with scaling similar to that of sec . [ssec : mfexamples ] . the dispersion relation for this system , for a given wave number , and ,is derived by multiplying the equation by and summing , which is analogous to that for the vlasov system , this dispersion relation can be analyzed graphically in terms of .there are poles of the dispersion function where . for , the dispersion function always has a zero if has the same sign as , because will converge to the opposite value of infinity at each end of the interval. therefore there will be at least one zero in each interval that has this property . in intervalswhere and have different signs there are either zeros or an even number of zeros , because must converge to the same value of infinity .the reader may have noticed a similarity between the above formulas , and the multi - fluid formulas of sec .[ ssec : mulitftheo ] .in fact , the waterbag models are examples of multi - fluid models , which are thus exact fluid closures of the vlasov - poisson system .this can be seen by writing the waterbag model in terms of new variables and given by where is a fluid density , and is a fluid velocity . under this change of variables the equations governing the waterbag modeltake the following form : & & _ , t+ ( u _ _ ) _ q = 0 , _qq=- _ _ + & & ( _ u_)_t+ ( u_^2_+ _ ^3f_^2/12)_q=-__q . evidently , under the identification ( or , the above equations are identified as a multi - fluid hamiltonian system . the dispersion function can also be rewritten in terms of the new variables , so that it resembles that analogous expressions ( i.e. 
, the diagravic or dielectric functions of the multi - fluid section ) . linearizing around an equilibrium state with , , and then performing some algebraic manipulations yields ( k , ) = 1 - _ , [ wbvareps ] where is a thermal velocity that measures the width in velocity space of a waterbag . thus bifurcations in the waterbag model , the vlasov - poisson system , and hamiltonian multi - fluid equations are all described by similar mathematical expressions . because the waterbag system is a finite - dimensional canonical linear hamiltonian system , the standard results of that theory apply , including krein s theorem . we can therefore determine whether there are any unstable modes by counting the total number of neutral eigenvalues : if it equals the number of degrees of freedom of the system , we can expect stability ; otherwise , because eigenvalues off the imaginary axis come in quartets , we can expect instability . now we determine the signature of each of the stable discrete modes of the waterbag model . beginning with the linearized equations , and assuming the normalization condition , , we find the fourier eigenvector .
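the counting argument above can be illustrated numerically . the sketch below assumes a multi - waterbag dielectric of the form 1 minus a sum of terms a_alpha / [ ( omega - k u_alpha )^2 - k^2 v_alpha^2 ] , which is one common normalization of ( [ wbvareps ] ) but is an assumption here ; clearing denominators turns the dispersion relation into a polynomial in omega , and comparing the number of real roots with the polynomial degree indicates whether a quartet ( or pair ) has left the real axis .

```python
import numpy as np

def waterbag_polynomial(k, streams):
    """streams: list of (a_alpha, u_alpha, v_alpha); returns the coefficients of the
    polynomial obtained by clearing denominators in the assumed dielectric eps(k, w) = 0."""
    dens = [np.array([1.0, -2.0 * k * u, (k * u) ** 2 - (k * v) ** 2])
            for (_, u, v) in streams]                     # (w - k u)^2 - (k v)^2
    prod_all = np.array([1.0])
    for d in dens:
        prod_all = np.polymul(prod_all, d)
    poly = prod_all.copy()
    for i, (a, _, _) in enumerate(streams):
        others = np.array([1.0])
        for j, d in enumerate(dens):
            if j != i:
                others = np.polymul(others, d)
        poly = np.polysub(poly, a * others)
    return poly

def count_real_roots(poly, tol=1e-8):
    roots = np.roots(poly)
    return int(np.sum(np.abs(roots.imag) < tol * (1.0 + np.abs(roots.real))))

# two symmetric counter-streaming warm waterbags; sweep the wavenumber
streams = [(0.5, +1.0, 0.3), (0.5, -1.0, 0.3)]            # (a, u, v) per bag
for k in (0.2, 2.0):
    p = waterbag_polynomial(k, streams)
    n_real, n_total = count_real_roots(p), len(p) - 1
    print(f"k = {k}: {n_real} real roots out of {n_total} ->",
          "stable" if n_real == n_total else "unstable")
```

at long wavelength the root count falls short of the polynomial degree , the finite - dimensional analogue of the two - stream instability , while at short wavelength all roots are real and the neutral - mode count equals the number of degrees of freedom .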
hamiltonian bifurcations in the context of noncanonical hamiltonian matter models are described . first , a large class of 1 + 1 hamiltonian multi - fluid models is considered . when linearized about homogeneous equilibria , these models have linear dynamics with discrete spectra , and these spectra exhibit counterparts to the steady state and hamiltonian hopf bifurcations as equilibrium parameters are varied . examples of fluid sound waves and plasma and gravitational streaming are treated in detail . next , using these 1 + 1 examples as a guide , a large class of 2 + 1 hamiltonian systems is introduced , and hamiltonian bifurcations with continuous spectra are examined . it is shown how to attach a signature to such continuous spectra , which facilitates the description of the continuous hamiltonian hopf bifurcation . this chapter lays the groundwork for krein - like theorems associated with the chh bifurcation that are discussed more rigorously in our companion chapter .
human mobility in the real world and cyberspace plays an ever increasing role in the modern society and economy .many important processes are affected by the patterns of human mobility , such as epidemic and information spreading , traffic congestion , and e - commerce .modern research on human mobility dynamics began with the trajectory - based approach , e.g. , by tracing the trajectories of dollar bills in the real world , which revealed a number of scaling relations such as a truncated power law in the distribution of the traveling distance .analysis of the mobile phone data demonstrated that the individual travel patterns can be characterized by a spatial probability distribution , indicating the existence of universal patterns in the human trajectories .the question of whether human mobility patterns are predictable was addressed through an analysis of the limits of predictability in human dynamics .more recently , human mobility in the cyberspace and its relation to that in the physical space were studied using big data analysis and phenomenological modeling .fundamental to the study of human mobility dynamics is the development of models to reproduce the phenomena and scaling relations obtained from empirical data .a pioneering work in this field is the articulation of a statistical , self - consistent microscopic model .subsequent studies focused on predicting the mobility flow between two locations through , e.g. , the classic gravity model .a stochastic process capturing local mobility decisions , the so - called radiation model , was introduced , which yields better agreement with the empirical data than the gravity model .alternative mechanisms were introduced to model the human trajectories .the modeling effort has also been extended to the cyberspace .while many models were developed to reproduce the scaling laws obtained from various human mobility empirical data , a physical and first - principle based understanding of the underlying dynamics is still missing . in particular , the widely studied model of human microscopic trajectories imposes the hypothesis that the probability for individuals to visit new locations in the physical space has a power - law form : , where is the number of distinct locations already visited , with and being two parameters .the underlying mechanism accounting for the power - law probability of exploring new locations is yet elusive , prompting us to wonder if there is a universal , minimal model capable of predicting all known scaling laws for human mobility in both the real world and cyberspace . 
in this paper , we show that all observed statistical features of human trajectories in the physical space and cyberspace can be quantitatively predicted through a universal mechanism : a memory - based , preferential random walk process . the memory effect has long been known to be important to human dynamics in general ; in a limited space it is the basic ingredient of our minimal model . the probability for an individual to visit a new location can then be obtained from _ first principle considerations _ without the need to hypothesize any particular mathematical form . the basic idea is intuitive ( as most of us have experienced ) : if an individual visited a location in the past , the location imprints a memory effect on the individual , enhancing the probability for him / her to visit the same location in the future . the striking finding is that this simple rule , with only a single parameter , is capable of generating all the known statistical properties of human mobility ( e.g. , those predicted by the self - consistent models with more parameters and a scaling assumption about the probability ) . solving our minimal model analytically , we obtain scaling relations that agree well with the empirical ones from mobile phone check - ins and online shopping data sets that record human trajectories in the real world and cyberspace , respectively . our minimal model thus establishes the universal underpinning of human mobility , representing a significant step forward in understanding modern human behaviors through statistical physics . this has the potential to advance a number of disciplines such as social sciences and online economics . we consider a finite space of locations , in which individuals perform a random walk with the probability of visiting a position proportional to its weight . for convenience , we use latin and greek letters to denote individuals and locations , respectively . the weight of a location with respect to individual , , is updated during the process . an actual visit of to will increase the weight through the memory factor parameter . for and , the random walk is unbiased and memory - preferential , respectively . is denoted as . the probability for the walker to choose location is proportional to the initial weight of this location plus the product of the memory factor parameter and . in our memory - preferential random walk ( mprw ) model , for individual the weight sequence of all locations at time step can be written as , where is the number of times that position has been visited before time . when is about to move to a new location at , the probability to go to is proportional to the weight of , i.e.
, .we have }.\ ] ] a typical step of the memory - preferential random walk process is schematically illustrated in fig .[ fig : mprw ] .the three statistical quantities characterizing the human mobility dynamics are : ( _ i _ ) the total number of distinct locations that an individual visited within time , ( _ ii _ ) the probability for an individual to visit the new location , if he / she has already visited distinct locations , and ( _ iii _ ) the fraction of locations that have been visited times .the quantity can be used to infer whether previously visited locations are more likely to be visited than newly discovered locations , which we will show possesses a more complex form than the well - known zipf s law .the quantity is similar to the degree distribution in complex networks .the three quantities can be used to validate our model through a detailed comparison between theoretical prediction and numerical results .we aim to obtain the analytic expectation values of the three characterizing quantities .since walkers are independent of each other , it suffices to analyze a single walker .\(i ) _ the number of distinct locations , . _ is defined as the total number of distinct positions visited by the person within time .inspired by the master equation method , we write down the probability of visiting a new position : by solving it , we have \(ii ) _ the visit probability of positions discovered at different time , . _ by using same method as above we have \(iii ) _ the visit probability of each position , . _ to calculate , we note that the total number of the visited locations is .each location has its own ordinal , which gives a relation between and .suppose is a monotonously decreasing function , we can obtain its inverse , also a monotonously decreasing function .the measure of is .we have as is an integer and in the system , we have where more details can be seen in the appendix part .we conduct systematic numerical simulations of our mprw model to obtain the scaling laws governing the three characterizing quantities , using the concrete setting where 100 walkers are distributed in a space of 1000 locations and perform 1000 walks , i.e. , , and . as , , defined for each walker , it is necessary to aggregate the results from all walkers to uncover the general features .our approach is the following .( _ i _ ) for each user , we obtain the relation between and , where is the total number of previously visited distinct locations within time .we have .( _ ii _ ) to calculate , we let be the probability of s visiting the location .say has visited distinct locations before walking into .the quantity is then the fraction of times that walker visited , and we have .( _ iii _ ) for , we note that , each location can be visited by different times for each walker .let be the number of times that walker visited .the aggregated frequency of visit to is , and can be obtained through the histogram of the sequence . , the quantities , , and , respectively . in ( a ) and( b ) , the agreement between simulation and theory is remarkable , even for relatively short time duration . in ( c ), the theoretical prediction of exhibits a power - law scaling , but there are numerical deviations .the discrepancies can be reduced by increasing the duration , as shown in the inset for . 
][ fig : comparison](a ) shows the function for different values of , which is a sub - linear increasing function .we see that a stronger memory effect corresponds to a smaller rate of increase , which is natural due to individuals resistance to explore new locations .[ fig : comparison](b ) shows the behavior , where we see that the memory effect in general decreases the value at which begins to decrease rapidly , meaning that nostalgic individuals tend to discover few locations , a behavior that is consistent with that in fig .[ fig : comparison](a ) . for both figs .[ fig : comparison](a ) and [ fig : comparison](b ) , the simulation results agree well with the analytical prediction . fig . [fig : comparison](c ) shows the distribution function , which exhibits a general power - law scaling behavior . for large values of ,the scaling exponent is about .however , for small values of , apparently deviates from the theoretically predicted power - law form .the deviation is a result of relatively short simulation duration .when we increase the duration to , the deviation diminishes , as shown in the inset of fig .[ fig : comparison](c ) . , , and , respectively , where the optimal memory factor parameter is chosen to be .in ( b ) , the curve from the simulated data has a shorter tail than that from the real data , which can be corrected by extending the time duration from to in the mprw model ( inset ) .( d - f ) the corresponding results from a big online - shopping data set for . ]we validate our model with location - based check - ins data .our data set recorded , in new york city , the positions of 42035 individuals as they use the check - in application , where the whole city is divided into 197 blocks . in order to obtain long enough time series , we focus on the individuals who have at least recorded locations and analyze their first records .there are 60 individuals whose recorded data fulfill this requirement .the quantities , and are computed by aggregating the data from different individuals .the parameters that can be input to the mprw model are thus , and .a key to validating the model is the choice of some suitable value of the memory factor , .the optimal value , denoted as , can be estimated by comparing from simulation and from real data . specifically , limiting the choices of to integer values , we can calculate a set of square distance values , , between the two curves .the value of is one that minimizes . for the mobile phone data , we have , as shown in fig . [fig : real_data](a ) .we see that , for this choice of , the model predicted function agrees well with that from the mobile phone check - ins data .with determined solely from , we also obtain a good agreement between the model and empirical results for the quantities and , as shown in figs .[ fig : real_data](b ) and [ fig : real_data](c ) , respectively . which is remarkable .from fig .[ fig : real_data](b ) , we note that the curve from the model has a shorter tail than that from the real data .this is due to the difference in the time scale between the simulation and real data , e.g. , in the real data may correspond to a much longer time duration in the model .extending the time duration to in the model gives a better agreement , as shown in the inset of fig .[ fig : real_data](b ) .we find that and are immune to this effect , insofar as the time duration is not too small .the big data set is from _taobao.com _ . 
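before turning to the online - shopping records , the selection of the optimal memory factor just described can be sketched in a few lines . the code below simulates the mprw rule directly , computes the average s(t ) over walkers , and scans integer values of the memory factor for the one minimizing the squared distance to an empirical s(t ) curve ; the placeholder s_data ( generated here from the model itself ) , the unit initial weight w0 , and the walker and location counts are illustrative assumptions , not the fitted values reported above .

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_S(lam, n_locations, n_steps, n_walkers=50, w0=1.0):
    """average number of distinct visited locations S(t) under the mprw rule:
    p(location i) proportional to w0 + lam * (times i was visited so far)."""
    S = np.zeros(n_steps)
    for _ in range(n_walkers):
        visits = np.zeros(n_locations)
        distinct = 0
        for t in range(n_steps):
            w = w0 + lam * visits
            i = rng.choice(n_locations, p=w / w.sum())
            if visits[i] == 0:
                distinct += 1
            visits[i] += 1
            S[t] += distinct
    return S / n_walkers

def fit_lambda(s_data, n_locations, lam_grid):
    """pick the integer memory factor minimizing sum_t (S_model(t) - S_data(t))^2."""
    errors = {}
    for lam in lam_grid:
        s_model = simulate_S(lam, n_locations, len(s_data))
        errors[lam] = float(np.sum((s_model - s_data) ** 2))
    best = min(errors, key=errors.get)
    return best, errors

# placeholder "empirical" curve: generated here from the model itself with lam = 6,
# standing in for the S(t) extracted from check-in or click records
s_data = simulate_S(lam=6, n_locations=197, n_steps=200)
best, errors = fit_lambda(s_data, n_locations=197, lam_grid=range(1, 11))
print("selected memory factor:", best)
```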
as a main business branch of the alibaba group ( a giant chinese internet company ), taobao is regarded as china s equivalent of ebay .the data set consists of the click records of taobao users .when a user intends to make a purchase on taobao , he / she clicks a sequence of links to obtain the relevant information ( e.g. , brand and price ) of the product and then chooses one to buy .this process can be regarded as users surfing online web pages , i.e. , movements in the cyberspace .our data set consists of the records of 34330 individuals .after initial filtering to remove the individuals with abnormally long or very short click strings , we obtain a slightly smaller data set with 33462 users , for which the total number of web pages is . to cast the online - shopping process in the framework of mprw , we regard each web page as a location . to be consistent with the data , we set , and . using the same method as for the mobile phone data , we determine the optimal value of the memory - factor parameter to be .the results of , , and are shown in figs .[ fig : real_data](d - f ) , respectively .again , the results from mprw model agree well with those from the data [ for a good agreement is achieved when an extended time duration , , is used in the model , as shown in the inset in ( b ) ] , suggesting the model s universal applicability .to summarize , we develop a random walk model with a single parameter to reproduce the statistical scaling behaviors of the three quantities characterizing human mobility . the key element that makes our model distinct from previous onesis a memory - preferential mechanism in limited space .we demonstrate that , when this mechanism is incorporated into a standard random walk process , the analytically predicted behaviors agree , at a detailed and quantitative level , with those from two representative real data sets , one for real world and another for cyberspace movements .this is remarkable , considering that model is minimal with only a single adjustable parameter , the memory - factor parameter .the main message is then that , while various mechanisms can be considered for human mobility , such as planted or social events and gender difference , our findings provide strong evidence that random walk with memory is the universal underpinning of the human movement dynamics .while we assume in the present work that the walkers are homogeneous , the analysis can be extended to models incorporating memory heterogeneity .given a data set from any generic behavior of human movements , the optimal memory - factor parameter for the mprw model can be estimated by comparing the behavior of an elementary statistical quantity from data and model .this feature has the additional benefit of assessing and quantifying the degree of intrinsic memory effect in the real system , which has potential applications to problems of significant current interest such as traffic optimization and online recommendation .this work is supported by national science foundation of china ( grant no . 61573064 , 61074116 and 11547188 ) , the youth scholars program of beijing normal university ( grant no .2014nt38 ) , and the fundamental research funds for the central universities beijing nova programme , china .xyy acknowledges the support from the national natural science foundation of china ( grant no .61304177 ) and the fundamental research funds of bjtu ( grant no .2015rc042 ) .we have the probability of visiting a new position eq . 
( 1 ) . so we have the equation by solving it , we have where is a constant . at the beginning of the random walk , we have the initial condition : and . accordingly , we can obtain the value of as thus we have the solution of as eq . ( 2 ) . _ the visit probability of positions discovered at different time , . _ let denote the position that has been visited times and denote the first time when position is visited . by using the same method , to , we have solving it with the initial condition and , we have actually , we have a hidden condition . by using it , we can see that the visit frequency of each position is proportional to , and since the sum of the equals the evolving time , we have
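for completeness , one plausible reconstruction of the master - equation argument sketched above is the following , assuming every one of the n locations starts with the same weight w_0 , so that the total weight after t steps is n w_0 + lambda t ; the exact normalization used in the paper may differ .

```latex
% a plausible reconstruction of the appendix argument, assuming each of the
% N locations starts with the same weight w_0 (total weight N w_0 + \lambda t)
\begin{align*}
  \frac{dS}{dt} &= P_{\text{new}}(t) = \frac{(N-S)\,w_0}{N w_0 + \lambda t},
     \qquad S(0)=0,\\[4pt]
  \int_0^{S}\!\frac{dS'}{N-S'} &= \int_0^{t}\!\frac{w_0\,dt'}{N w_0 + \lambda t'}
  \;\;\Longrightarrow\;\;
  \ln\frac{N}{N-S} \;=\; \frac{w_0}{\lambda}\,
     \ln\!\Bigl(1+\frac{\lambda t}{N w_0}\Bigr),\\[4pt]
  S(t) &= N\Bigl[\,1-\Bigl(1+\frac{\lambda t}{N w_0}\Bigr)^{-w_0/\lambda}\Bigr].
\end{align*}
```

under this reconstruction the growth of s(t ) is sublinear and slower for larger memory factors , consistent with the behavior shown in fig . [ fig : comparison](a ) .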
human movements in the real world and in cyberspace affect not only dynamical processes such as epidemic spreading and information diffusion but also social and economical activities such as urban planning and personalized recommendation in online shopping . despite recent efforts in characterizing and modeling human behaviors in both the real and cyber worlds , the fundamental dynamics underlying human mobility have not been well understood . we develop a minimal , memory - based random walk model in limited space for reproducing , with a single parameter , the key statistical behaviors characterizing human movements in both spaces . the model is validated using big data from mobile phone and online commerce , suggesting memory - based random walk dynamics as the universal underpinning for human mobility , regardless of whether it occurs in the real world or in cyberspace .
this paper presents a model for the spatio temporal field of hourly ozone concentrations for subregions of the eastern united states , one that can in principle be used for both spatial and temporal prediction .it goes on to critically assess that model and the approach used for its construction , with mixed results .such models are needed for a variety of purposes described in ozone ( 2005 ) where a comprehensive survey of the literature on such methods is given , along with their strengths and weaknesses .in particular , they can be used to help characterize population levels of exposures to ozone in outdoor environments , based on measurements taken at often remote ambient monitors .these interpolated concentrations can also be used as input to computer models that incorporate indoor environments to more accurately predict population levels of exposure to an air pollutant .such models can reduce the deleterious effects of errors resulting from the use of ambient monitoring measurements to represent exposure .for example , on hot summer days the ambient levels will overestimate exposure since people tend to stay in air conditioned indoor environments where exposures are lower . to address that problem ,the us environmental protection agency ( epa ) developed apex .it is being used by policy makers to set air quality standards under hypothetical emission reduction scenarios ( ozone , 2005 ) . interpolated ozone fields could well be used as input to apex to further reduce that measurement error although that application has not been made to date for ozone .however , it has been made for particulate air pollution through an exposure model called sheds ( burke et al . ,2001 ) as well as a simplified version of sheds ( calder et al . , 2003 ) .interest in predicting human exposure and hence in mapping ozone space time fields , stems from concern about the adverse human health effects of ozone .ozone ( 2005 ) reviews an extensive literature on that subject .exposure chamber studies show that inhaling high concentrations of ozone compromises lung function quite dramatically in healthy individuals ( and presumably to an even greater degree in unhealthy individuals such as those suffering from asthma ) .moreover , epidemiology studies show strong associations between adverse health effects such exposures .consequently , the us clean air act mandates that national ambient air quality standards are necessary for ozone to protect human health and welfare .thus , spatio temporal models can have a role in setting these naaqs .ozone concentrations over a geographic region vary randomly over time , and therefore constitute a spatio temporal field . in both rural and urban areas such fieldsare customarily monitored , the latter to ensure compliance with the naaqs amongst other things .in fact , failure can result in substantial financial penalties .a number of approaches can be taken to modelling such space time fields . herewe investigate a promising one that involves selecting a member of a very large class of so called state space models .section [ chapter2:dlmbackgrand ] describes our choice , a dynamic linear model ( dlm ) , a variation of those proposed by huerta et al .( 2004 ) and stroud et al .( 2001 ) . 
here`` dynamic '' , refers to the dlm s capability of systematically modifying its parameters over time , a seemingly attractive feature since the processes it models will themselves generally evolve `` due to the passage of time as a fundamental motive force '' ( west and harrison , 1997 ) .however , other approaches are possible and in a companion report currently in preparation , the dlm selected here will be compared with other possibilities .section [ chapter2:dlmbackgrand ] introduces the hourly concentration field that is to be modeled in this report . there consideration of measurements made at fixed site monitors and reported in the airs dataset leads to the construction of our dlm . [ the _ epa _ ( environmental protection agency ) changed the _ airs _ ( aerometric information retrieved system ) to the _ afs _ ( air facility subsystem ) in 2001 . ]that model becomes the object of our assessment in subsequent sections .to illustrate how to select some of the model parameters in the dlm , we use the simple first order polynomial dlm in section [ pred : var : simplest : dlm ] to shed some light on this problem .moreover , we prove there in a simple but representative case , that under the type of model constructed here and by huerta et al .( 2004 ) , the predictive variances for successive time points conditional on all the data must be monotonically increasing , an undesirable property .theoretical results and algorithms on the dlm are represented in sections [ chapter2:algo : est ] and [ chapter2:algo : pred ] .the mcmc sampling scheme is outlined in section [ chapter2:algo : est : mcmc ] .the forward filtering backward sampling ( ffbs ) method is demonstrated in section [ ffbs : state : sample ] to estimate the state parameters in the dlm .moreover , we outline the mcmc sampling scheme to obtain samples for other model parameters from their posterior conditional distributions with a metropolis hasting step .section [ chapter2:algo : pred ] gives theoretical results for prediction and interpolation at unmonitored ( ungauged ) sites from their predictive posterior distributions .section [ chapter2:example : cluster2 ] shows the results of mcmc sampling along with interpolation results on the ozone study .section [ problems : in : dlm ] describes problems with the dlm process revealed by our assessment .we summarize our findings and draw conclusions from our assessment in section [ chapter2:summary ] . as an added note ,we have developed software , written in c and r and available online ( http://enviro.stat.ubc.ca ) that may be used to reproduce our findings or to use the model for modeling hourly pollution in other settings .although we believe the methods described in this paper apply quite generally to hourly pollution concentration space time fields , it focuses on an hourly ozone concentrations ( ppb ) over part of the eastern united states during the summer of 1995 . in all , irregularly located sites ( or `` stations '' ) monitor that field . 
to enable a focused assessment of the dlm approach andmake computations feasible , we consider just one cluster of ten stations ( cluster 2 ) , in close proximity to one another .however , in work not reported here for brevity , two other such clusters led to similar findings .note that cluster 2 has the same number of stations as the one in mexico city studied by huerta et al.(2004 ) .the initial exploratory data analysis followed that of huerta et al .( 2004 ) with a similar result , a square root transformation of the data is needed to validate the normality assumption for the dlm residuals . [note that a small amount of randomly missing data were filled in by the spatial regression method ( srm ) , before we began . ]the plot of a bayesian periodogram ( dou et al . , 2007 ) for the transformed data at the sites in our cluster reveals a peak between 1 pm and 3 pm each day with a significant cycle for the stations in cluster 2 .we also found a slightly significant cycle .however , no obvious weekly cycles or nightly peaks were seen .thus , the dlm suggested by our analysis turns out to be a variant of the one in huerta et al .( 2004 ) ; it has states for both local trends as well as periodicity across sites . to define the model ,let denote the square root of the observable ozone concentration , at site and time being the total number of gauged ( that is , monitoring ) sites in the geographical subregion of interest and the total number of time points .furthermore , let .then the dlm for the field is where , ] , ] being the block diagonal matrix with diagonal entries and let where represents all the missing values and all the observed values in cluster 2 sites for the model unknowns are therefore the coordinates of the vector in which the vector of state parameters up to time is , the range parameter is , the variance parameter is and finally the vector of phase parameters is .let be the vector of parameters fixed in the dlm to render computation feasible .specification of the dlm is completed by prescribing the hyperpriors for the distributions of some of the model parameters : notice that and have inverse gamma distributions for computational convenience .the choice of the hyperpriors is discussed in section [ chapter2:example : cluster2 ] .we express the state space model in two different ways because of our dual objectives of parameter inference and interpolation . for simplicity, we use models ( [ dlm : obs])([dlm : state ] ) for inference about the range , variance and state parameters ( see section [ ffbs : state : sample ] ) , and use models ( [ dlm : y])([dlm : alpha ] ) for inference on the phase parameters ( see section [ ffbs : phase : sample ] ) and interpolation ( see section [ chapter2:algo : pred ] ) .before turning to the implementation of the approach in the next section , we explore theoretically , albeit in a tractable special case , some features of the model .that exploration leads to insight about how the model s parameters should be specified as well as undesirable consequences of inappropriate choices .our assessment will focus on the accuracy of the model s predictions .this simple model we consider is a special case of the so called `` first order polynomial model '' , a mathematically tractable , commonly used model .it captures many important features and properties of the dlm we have adopted . 
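before specializing to the first order polynomial case , it may help to display the two - equation skeleton underlying models ( [ dlm : obs ] ) ( [ dlm : state ] ) in generic notation ; the symbols below , and in particular the exponential form of the spatial correlation , are written only for orientation and are assumptions standing in for the displays lost above .

```latex
% schematic observation / evolution pair assumed to underlie ([dlm:obs])--([dlm:state]);
% the exponential correlation in the range parameter lambda is one common choice
\begin{align*}
  \text{observation:}\quad
    & y_t \;=\; F_t\,\theta_t + \nu_t,
      \qquad \nu_t \sim N\!\bigl(0,\;\sigma_t^2\,V(\lambda)\bigr),
      \qquad V(\lambda)_{ij} = \exp(-d_{ij}/\lambda),\\[2pt]
  \text{evolution:}\quad
    & \theta_t \;=\; G\,\theta_{t-1} + \omega_t,
      \qquad \omega_t \sim N\!\bigl(0,\;\sigma_t^2\,W\bigr).
\end{align*}
```

here $\theta_t$ collects the local level together with the amplitudes of the periodic components identified by the periodogram , $d_{ij}$ is the intersite distance , and the range parameter $\lambda$ and variance $\sigma_t^2$ carry the hyperpriors described above .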
for and , the first order polynomial dlm is given by where and assume and and are all currently known .the first order polynomial dlm is particularly useful for short term prediction since then the underlying evolution is roughly constant .observe that the zero mean evolution error process is independent over time , so that the underlying process is merely a random walk ; the model does not anticipate long term variation . at any fixed time consequently , the first order polynomial dlm has the following covariance structure : where for and this dlm defines a non stationary spatio temporal process since for the first order polynomial model to be stationary , the eigenvalues of state transfer matrix , in the notation of west and harrison ( 1997 ) , must lie inside of the unit circle .however , so that this process is not a stationary gaussian dlm . furthermore , the dlm defined in section [ chapter2:dlmbackgrand ] is non stationary because given all the model parameters in ( [ dlm : obs])([dlm : state ] ) .the dlm in ( [ pred : var : obs : eqn])([pred : var : state : eqn ] ) has an important property that the covariance functions in ( [ pred : var : cov : yit : yjt])([pred : var : cov : yit : yjs ] ) depends on the time point of , not on thus confirming our observation of non - stationary .we readily find the correlation between and to be where and .1 in * remarks .* * 1 . * the correlations in ( [ pred : var : corr : yit : yjt ] ) and( [ pred : var : corr : yit : yjs ] ) have the following properties when : 1 . + for and 2 . + is a monotone increasing function of .thus for any fixed time point , as a function of attains its maximum at and decreases as increases .* 2 . * by ( [ pred : var : corr : yit : yjt ] ) , as for that property seems unreasonable ; the degree of association between two fixed monitors should not increase as an artifact of a larger time t. that suggests a need to make some of the model parameters , say , depend on time .more specifically , ( [ pred : var : corr : yit : yjt ] ) suggests making stabilize carrying this assessment further , for any two sites in close proximity , i.e. for , a result that seems quite reasonable . for two sites very far apart so that , this correlation should be close to . in other words , we should have a sufficient condition for this to hold is and the key result , ( [ pred : var : corr : yit : yjt ] ) , suggests a simple but straightforward way to adjust the model parameter according to the size of , namely , to replace it by .that choice is empirically validated in section [ chapter2:summary ] .we turn now to study the behavior of the predictive variances in the first order polynomial dlm that helps us understand the interpolation results . to that endconsider the correlations of responses at an ungauged site with those at the gauged site respectively .note that both ( [ pred : var : corr : prop1 ] ) and ( [ pred : var : corr : prop2 ] ) hold for the properties of the correlation structure in ( [ pred : var : corr : prop1])([pred : var : corr : prop2 ] ) , lead us to the conjecture that the model s predictive bands should increase monotonically over time as more data become available , in the absence of restrictions on suggested above . furthermore , even conditioning on all the data , the predictive bands should also increase over time . 
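a quick numerical illustration of this conjecture can be obtained by exact gaussian conditioning in an assumed special case : a single random walk level observed with spatially correlated error at two gauged sites , a fairly informative prior on the initial level , and an ungauged site whose responses are conditioned on all gauged observations . the structure , distances and parameter values below are illustrative assumptions , not the paper s choices .

```python
import numpy as np

def predictive_variances(T=8, lam=5.0, sig_nu=1.0, sig_w=0.5, c0=0.1,
                         d_gauged=2.0, d_ungauged=(1.0, 3.0)):
    """var(y_0t | all gauged data), t = 1..T, for an assumed first-order polynomial dlm:
    y_it = theta_t + nu_it, theta_t = theta_{t-1} + omega_t, exp(-d/lam) site correlation."""
    sites_d = np.array([[0.0, d_gauged, d_ungauged[0]],
                        [d_gauged, 0.0, d_ungauged[1]],
                        [d_ungauged[0], d_ungauged[1], 0.0]])   # sites: g1, g2, ungauged
    R = np.exp(-sites_d / lam)                                  # spatial correlation
    n = 3
    # joint covariance of y_{it}, site-major within each time block
    K = np.zeros((n * T, n * T))
    for t in range(1, T + 1):
        for s in range(1, T + 1):
            block = (c0 + sig_w**2 * min(t, s)) * np.ones((n, n))   # common random walk level
            if t == s:
                block += sig_nu**2 * R                              # measurement error
            K[(t - 1) * n:t * n, (s - 1) * n:s * n] = block
    # condition the ungauged site (index 2 in each block) on every gauged observation
    ung = [(t - 1) * n + 2 for t in range(1, T + 1)]
    gau = [i for i in range(n * T) if i not in ung]
    Kuu, Kug, Kgg = K[np.ix_(ung, ung)], K[np.ix_(ung, gau)], K[np.ix_(gau, gau)]
    cond = Kuu - Kug @ np.linalg.solve(Kgg, Kug.T)
    return np.diag(cond)

# variances for t = 1..T; with this tight initial-level prior they increase with t
print(np.round(predictive_variances(), 3))
```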
in support of these conjectures, we prove that they hold in a simple case where and in ( [ pred : var : obs : eqn])([pred : var : state : eqn ] ) .[ pred : var : simple : dlm : result ] for the first order polynomial dlm in ( [ pred : var : obs : eqn])([pred : var : state : eqn ] ) with and , assume the prior for to be the joint distribution of is where being the vector of 1s ( ). then we have the following predictive conditional variances : where and for this simple case , we would expect the predictive variance of based on more data collected over time to be no greater than that of based on less , that is , and moreover , we would expect that , based on the same amount of data , the predictive variance of would be no greater than that of that is , dou et al . ( 2007 ) prove these conjectures and provide other comparisons of these predictive variances .we conclude that the predictive variance function is a monotonic increasing function of time based on the same set of data .it decreases when more data or equivalently , more time is involved .furthermore , the difference between these predictive variances decreases as increases .it increases with time even when conditioning on the same dataset .[ pe : timepoint:2:site:2 ] for the first order polynomial dlm in theorem [ pred : var : simple : dlm : result ] , we have the following properties of the predictive conditional variances : as an immediate consequence of ( [ pe : diff : y02:y01:y11:y12:spatial ] ) , the predictive variances increase monotonically at successive time points conditional on all the data .that leads to monotonically increasing coverage probabilities at the ungauged sites , an interesting phenomenon discussed in section [ problems : in : dlm ] .there we will also discuss the lessons learned in this section in relation to our empirical findings .next , we present a curious result about the properties of the above predictive variances that may explain some of their key features .this result concerns these predictive variances as functions of or part of its proof is included in appendix [ pred : var : result3 ] .[ pred : var : key : result ] the predictive conditional variances in ( [ pe : y01:y11])([pred : var : example : m2 ] ) increase as increases , or increases , or decreases .thus , keeping two parameters fixed , these predictive conditional variances are monotone functions of the remaining one .therefore , the dlm can paradoxically lead to larger predictive variances when conditioning on more data .for example , in the case and applying the dlm model with only the data at yields the predictive variance , which is exactly the same as in ( [ pe : y01:y11 ] ) . this predictive variance is smaller than in ( [ pe : y02:y11:y12 ] ) , which is based on more data , under certain condition specified in the next corollary .[ pred : var : paradox ] for the first order polynomial dlm in theorem 1 , the behavior suggested by corollary [ pred : var : paradox ] is actually observed in our application ( see section [ problems : in : dlm ] ) .this section very briefly describes how to implement our model using the mcmc method , more specifically , the forward filtering backward sampling algorithm of carter and kohn ( 1994 ) .the details are given by dou et al .( 2007 ) .the joint distribution , , is the object of interest . 
here represents the observation matrix at the gauged sites up to time moreover , is the vector of state parameters at the gauged sites until time for simplicity , the values of are fixed but the problem of setting them will be addressed below .additional detail can be found in appendix [ appendix : section:2:3:1 ] .since that joint distribution does not have a closed form , direct sampling methods fail , leading to the use of the markov chain monte carlo ( mcmc ) method . a _ blocking mcmc_ scheme increases iterative sampling efficiency , three blocks being chosen for reasons given in dou et al .( 2007 ) : and more precisely we can : 1 . sample from 2 . sample from and 3 .sample from since has no closed form , the full conditional posterior distribution of is obtained by kalman filtering and smoothing , in other words , by the ffbs algorithm .assuming an inverse gamma hyperprior for the conditional posterior distribution of given the range and phase parameters is also inverse gamma distributed with new shape and scale parameters .note that indicating that we can sample iteratively from the three conditional posterior distributions on the right hand side of ( [ joint : post : range : variance : state ] ) to obtain samples from however , has no closed form , leading us to sample by a _ metropolis chain within a gibbs sampling cycle , an algorithm as described in the next three subsections . to sample from we use the block mcmc scheme .because of ( [ joint : post : range : variance : state ] ) , we could ideally iteratively sample from from and from however , because we do not have a closed form for the posterior density of , we use instead the _ metropolis hasting algorithm _ to sample , given the data , from the following a quantity that is proportional to its posterior density , that is , ^{-(nt/2+\alpha)}.\end{aligned}\ ] ] since we can not compute the normalization constant for the metropolis hasting algorithm is used .the proposal density , is selected to be a lognormal distribution , because the parameter space is bounded below by , making the gaussian distribution inappropriate . as moller ( 2003 ) points out , this alternative to a random walk metropolis considers the proposal move to be a random multiple of the current state . from the current state the proposed move is where is drawn from a symmetric density , such as normal . in other words , at iteration we sample a new from this proposal distribution , centered at the previously sampled with a tuning parameter , , as the variance for the distribution of .gamerman ( 2006 ) suggests the acceptance rate , that is , the ratio of accepted to the total number of iterations , be around we tune to attain that rate .if the acceptance rate were too high , for example , to we would increase if too low , for example , to we would decrease , to narrow down the search area for the metropolis hasting algorithm proceeds as follows . given and , where 1 .draw from 2 .compute the acceptance probability : 3 .accept with probability in other words , sample $ ] and let if and otherwise .we run this algorithm iteratively until convergence is reached .next , we sample given the accepted s , s , s and . 
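before turning to the variance update , a minimal sketch of the metropolis step for the range parameter described above is as follows ; log_target stands for the unnormalized log conditional posterior of the range parameter given everything else and is left abstract , and the toy density used to exercise the update is purely an assumption . note that the multiplicative ( lognormal ) proposal contributes a hastings factor equal to the ratio of proposed to current values .

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step_range(lam, log_target, delta=0.3):
    """one metropolis-hastings update with a multiplicative (lognormal) proposal
    lam' = lam * exp(delta * eps); log_target(lam) is the unnormalized log posterior
    of the range parameter given all other parameters and the data."""
    lam_prop = lam * np.exp(delta * rng.standard_normal())
    # the lognormal proposal adds the jacobian factor lam_prop / lam to the ratio
    log_alpha = (log_target(lam_prop) - log_target(lam)
                 + np.log(lam_prop) - np.log(lam))
    accept = np.log(rng.uniform()) < min(0.0, log_alpha)
    return (lam_prop if accept else lam), accept

# toy target standing in for the conditional posterior of the range parameter
# (an assumption, chosen only to exercise the update and the tuning of delta)
log_target = lambda lam: -0.5 * (np.log(lam) - np.log(10.0)) ** 2 - np.log(lam)

lam, n_accept = 5.0, 0
for _ in range(2000):
    lam, acc = mh_step_range(lam, log_target, delta=0.3)
    n_accept += acc
# increase delta if the acceptance rate is too high, decrease it if too low
print("acceptance rate:", n_accept / 2000)
```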
the prior for chosen to be an inverse gamma distribution with shape parameter and scale parameter the posterior distribution for is also an inverse gamma distribution , but with a shape parameter and a scale parameter we now sample given the accepted s , s , phase parameters and using ffbs .west and harrison ( 1997 ) propose a general theorem for inference about the parameters in the dlm framework . for timeseries data , the usual method for updating and predicting is the kalman filter .dou et al . ( 2007 ) present a ffbs algorithm ( similar to the kalman filter algorithm ) to resample the state parameters conditional on all the other parameters and observations as part of the mcmc method for sampling from the smoothing distribution the initial state parameter is given by ,\ ] ] where being the initial information , with and known . later in section [ chapter2:example : cluster2 ], we consider how to set them for cluster 2 airs dataset ( 1995 ) .let now suppose for expository purposes , that all the prior information has been given and s coordinates are mutually independent .mcmc can be used to fill in missing values at each iteration . to see how , note that at any fixed time point , after appropriately defining a scale matrix we can rewrite the observation vector as follows : where denotes the missing response(s ) at time and the observed response(s ) at notice that `` o '' represents `` observed '' and `` m '' , `` missing '' . let where represents the set of gauged sites containing missing values at time point the set of gauged sites containing observed values at time for all and such that for and we already know that ,\end{aligned}\ ] ] so that is also multivariate normally distributed , that is , , \end{array}\ ] ] where we can also partition as where and similarly , we have where and by a standard property of the multivariate normal distribution , we have ,\end{aligned}\ ] ] where and for at each iteration , we draw from the corresponding distribution ( [ miss : mcmc : distribution ] ) at each time point and then we can write the response variables as where and we now present our method for sampling the phase parameters from its full conditional posterior distribution , that is , by using the samples of , and . for simplicity , we use the notation for models ( [ dlm : y])([dlm : alpha ] ) in this section .we then sample the constant phase parameters conditional on all the other parameters and observations .suppose has a conjugate bivariate normal prior with mean vector and covariance matrix then the posterior conditional distribution for is normal with mean vector and covariance matrix where and can be obtained from equations given in dou et al .( 2007 ) . we will not use a non informative prior such as for since that choice can lead to non identified posterior means or posterior variances .in fact for that choice we find the posterior conditional distribution of to be normal with mean vector and covariance matrix from equations given in dou et al .( 2007 ) along with the elements of , where can be singular for any ( ) .hence , we obtain the extreme values at times that invalidates the assumption of constant phase parameters across all the time scales when we sample from its full conditional posterior distribution . 
for fixed values of and we sample the model parameter from at each time point , andthen obtain the `` sample '' of at this iteration by the median of these samples across all the time points , under the assumption that are constant phase parameters in the models ( [ dlm : obs])([dlm : state ] ) .the mcmc algorithm we use here resembles that of huerta et al .( 2004 ) , one difference being that we unlike them , use all the samples after the burn in period , not just the chain containing the accepted samples .we believe the markov chains of only accepted results will lead to biased samples , thereby changing the detailed balance equation of the metropolis hasting algorithm .the above algorithm we use for cluster 2 airs dataset is summarized as follows : 1 .initialization : sample 2 .given the value and the observations : 1 .sample from where 1 . 1 .generate a candidate value from a logarithm proposal distribution that is , for some suitable tuning parameter 2 .compute the acceptance ratio where 3 .with probability accept the candidate value and set otherwise reject and set 2 .sample from 3 .sample from 2 .sample from 3 .sample from where 3 .repeat until convergence .we have developed software to implement the dlm approach of this section . to enhance the metropolis within gibbs algorithm , we augment the r code with c to speed up the computation . the current version , _, is freely available at http://enviro.stat.ubc.ca for different platforms such as windows , unix and linux .this section describes how to interpolate hourly ozone concentrations at ungauged sites using the dlm and the simulated markov chains for the model parameters ( see section [ chapter2:algo : est ] ) . in other words , suppose are ungauged sites of interest within the geographical region of cluster 2 sites ( excluding the possibility of extrapolation ) .the objective is to draw samples from where and denotes the unobserved square root of ozone concentrations at the ungauged site and time for and for let denote the unobserved state parameters at site and time t. the dlm is given by where and in the following two subsections , we illustrate how to sample the unobserved state parameters from the corresponding conditional posterior distribution , and demonstrate the spatial interpolation at the ungauged site .we first sample given and from the state equation ( [ dlm : state ] ) for we know that the joint density of and follows a normal distribution , with covariance matrix where denotes the distance matrix for the unobserved station and the monitoring stations . the conditional posterior distribution , derived in appendix [ appendix : spatial ] .we interpolate the square root of ozone concentration at the ungauged sites by conditioning on all the other parameters and observations at the gauged sites . asabove , and are jointly normally distributed as a consequence of the observation equation . 
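both the missing - value step and the interpolation at an ungauged site reduce to the same gaussian conditioning , which the following sketch makes explicit ; the distances , the exponential covariance and the common mean level are illustrative assumptions standing in for the quantities derived in appendix [ appendix : spatial ] .

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional_normal(mu, K, obs_idx, mis_idx, y_obs):
    """mean, covariance and one draw of y[mis] | y[obs] for y ~ N(mu, K)."""
    Koo = K[np.ix_(obs_idx, obs_idx)]
    Kmo = K[np.ix_(mis_idx, obs_idx)]
    Kmm = K[np.ix_(mis_idx, mis_idx)]
    w = np.linalg.solve(Koo, y_obs - mu[obs_idx])          # Koo^{-1} (y_o - mu_o)
    cond_mean = mu[mis_idx] + Kmo @ w
    cond_cov = Kmm - Kmo @ np.linalg.solve(Koo, Kmo.T)
    return cond_mean, cond_cov, rng.multivariate_normal(cond_mean, cond_cov)

# toy setting: three gauged sites and one ungauged site at assumed distances,
# exponential correlation with range lam, common mean level mu_t at this hour
d = np.array([[0.0, 1.0, 2.0, 1.5],
              [1.0, 0.0, 1.2, 0.8],
              [2.0, 1.2, 0.0, 2.5],
              [1.5, 0.8, 2.5, 0.0]])
lam, sigma2, mu_t = 5.0, 1.3, np.full(4, 6.0)
K = sigma2 * np.exp(-d / lam)
y_gauged = np.array([6.4, 5.9, 7.1])
mean, cov, draw = conditional_normal(mu_t, K, obs_idx=[0, 1, 2], mis_idx=[3],
                                     y_obs=y_gauged)
print("interpolated mean:", mean, " draw:", draw)
```

drawing from this conditional at each mcmc iteration , with the current sampled parameters plugged into the covariance , yields the posterior predictive samples used for the interpolation results below .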
the predictive conditional distribution for that is , is given in appendix [ appendix : spatial ] .this section applies our model to the hourly ozone concentration field described above .six ungauged sites were randomly selected from those available within the range of the sites in cluster 2 to play the role of `` unmonitored sites '' and help us assess the performance of the dlm .the geographical locations of these six ungauged sites , represented by the alphabetic letters , are shown in figure [ fig : ug ] , along with the sites in cluster 2 .this subsection presents a mcmc simulation study in which samples are drawn sequentially from the joint posterior distribution of the model parameters in the dlm ._ initial settings _ + following huerta et al .( 2004 ) , we use the following initial settings for the starting values , hyperpriors and fixed model parameters in the dlm : 1 .the hyperprior for is and for the expected value of is and so are both of the variances of and these vague priors for and are selected to reflect our lack of prior knowledge about their distributions .the initial information for the initial state parameter , is assumed to be normally distributed with mean vector and covariance matrix where and is a block diagonal matrix with diagonal entries and 3 .the hyperprior for is a bivariate normal distribution with mean vector and a diagonal matrix with diagonal entries and 4 .some of the model parameters in the dlm are fixed as follows : and _ monitoring the convergence of the markov chains_ + figure [ mcmc : one : chain : converge ] shows the trace plots of model parameters and with the number of iterations of the simulated markov chains where the total number of iterations is the burn in period is chosen to be and all the remaining markov samples are collected for posterior inference .the acceptance rate is approximately we observe that the markov chain converges after a run of less than five hundreds iterations .table [ post : summary ] displays the median and quantile from the simulated markov chains for the model parameters and this subsection assesses the model s performance by comparing the interpolated values at the ungauged sites , , with the measurements made there .we use the entire dataset to assess the performance of the interpolation results .table [ cov : prob ] shows the coverage probabilities of the credibility intervals ( or `` credible intervals '' for short ) for these six ungauged sites at various norminal levels .generally , the coverage probabilities at the ungauged sites exceed their nominal levels indicating that the error bands are too wide . among these six ungauged sites ,site has the highest coverage probability seen in table [ cov : prob ] .this may be because of s nearness to a close `` relative '' among the gauged sites , namely , site that would be consistent with our assumption that the spatial correlation is inversely proportional to the intersite distance . at the same time , these unsatisfactory large coverage probabilities point to a deficiency of the dlm . to explore this issue further , we compared the values predicted for ungauged site from may 14 to september 11 , 1995 and the measurements made there . 
figures [ ug : d : wk1:wk4 ] and in more detail [ ug : d : wk17:day120 ] , which exemplify results reported in more detail by dou et al .( 2007 ) , depict the results for the first four weeks and the last week of that period , respectively .furthermore , table [ ungauge : gauge : friends ] shows for all the ungauged sites , the close relatives they have among the cluster 2 sites that lie within a radius of 100 km , the corresponding global circle distance ( gcd ) in km , and along with the average of their correlations .this table confirms that indeed does enjoy the highest correlation with its relative .that relationship is further explored in figure [ friend : ug : d : g:1 ] where we see a strong linear relationship between sites and as our coverage probability assessment had suggested . in spite of its reliance on the relatives ,the dlm does not predict responses at the ungauged sites very accurately as illustrated in figure [ ug : d : wk17:day120 ] . that points to problems with this model which will be discussed in the next section .in general , the dlm provides a remarkably powerful modelling tool , made practical by advances in statistical computing .however , its substantial computational requirements still limits its applicability .moreover , the very flexibility that makes it so powerful also imposes an immense burden of choice on the model .this section summarizes critical issues and includes some suggestions for improvement . _ * monitoring mcmc convergence * _ + figure [ mcmc : two : chains ] represents the trace plots of model parameters and of two chains from the initial settings in section [ chapter2:example : cluster2:dlm ] .these two chains seem to mix well after several hundreds iterations , suggesting at first glance the markov chains have converged . *autocorrelation and partial autocorrelation of the simulated markov chains * _ + however , we know that the autocorrelation , as measured by the autocorrelation function ( acf ) , is very important when considering the length of the chain .a highly auto correlated chain needs a long run to yield accurate estimates .moreover , the partial autocorrelation function ( pacf ) is also an important index for assessing a markov chain since large values of the pacf at lag indicates that the next value in the chain is dependent on past values , not just on the most recent ones .figure [ mcmc : hist : acf : pacf ] shows the histogram , acf and pacf plots for the markov chains used in section [ chapter2:example : cluster2:result ] , after a burn in period of the acf plots show the to be highly autocorrelated , in other words that the does not mix well , potentially leading to biased estimates in section [ chapter2:example : cluster2:result ] . thinning the chain might reduce that autocorrelation . in other words , using every ( ) generated by the chain could be used to produce the estimates .however , computational challenges make that strategy impractical ; we need to use the entire chain . 
_* relationship between pairs of and * _ + our prior assumptions make the model parameters and uncorrelated .figure [ scatter : model : parameters ] shows the relationship between the pairs of these parameters as a way of investigating that assumption .it seems valid except for the pair in graph that graph shows a weak linear association between and thus pointing to a failure of that assumption for that pair .since determines spatial variability while determines correlation this relationship seems intriguing .larger values of tend to go with larger , i.e. , diminished spatial correlation . why they are coupled in this wayis unknown but it should be accounted for in future applications of this model . _ * time varying and : empirical coverage probabilities versus nominal credible probabilities * _ + although we follow huerta et al .( 2004 ) in assuming the temporal constancy of and it is natural to ask if those generated by the mcmc method change over time .a variant of this issue concerns the time domain of the application .would the results for these parameters change if we switched from one time span to a longer one containing it ?a `` yes '' to this question would pose a challenge to anyone intending to apply the model , knowing that the choice would have implications for the size of and . to address these concerns we carried out the following studies : 1 .study implement the dlm at ungauged sites using weekly data ( ) . generate markov chains for and obtain the coverage probabilities at each ungauged site and week for fixed credibility interval probabilities .study implement the dlm at ungauged sites using week to week data ( ) .estimate model parameters and interpolate the results at those ungauged sites . obtain the coverage probabilities at each ungauged site and week for fixed credibility interval probabilities using each week s data .study fix at week ( ) using values suggested by the markov chains generated in study .then use these as fixed values in the dlm to reduce computation time . in other words , go through all the steps in the algorithm of section [ mcmc : algorithm : summary : table ] but now using only fixed instead of generating them by a metropolis hasting step .( note that we are then only using gibbs sampling and an mcmc blocking scheme . )compute the corresponding coverage probabilities using at each ungauged site and week for fixed credibility interval probabilities .studies and are intended to explore the effect of data and time propagation on the interpolation results .study aims to pick out any significant difference in the interpolation results when using the fixed rather than using the markov samples of .it is also aimed at finding how much time would be saved by avoiding the inefficient metropolis step . table [ fix : lambda : weekdata ] shows these fixed used in study table 5 shows the time saved using fixed against the one using the metropolis hastings algorithm . 
figure [ time : vary : lambda : sigma ] illustrates the mcmc estimation results obtained in study it plots the markov chains of and using weekly data .it is obvious that and vary from week to week , which implies that the constant model is not tenable over a whole summer for this dataset .figure [ cov : prob : paper ] typifies figures in dou et al .( 2007 ) showing the coverage probabilities for various predictive intervals associated with the interpolators in these three studies .the solid line with bullets represents the results for study the dotted line with up - triangles for study and the dashed line with squares for study these graphs show that the coverage probabilities of study are similar to that of study this suggests that we could use the entries in table [ fix : lambda : weekdata ] as fixed in the dlm to obtain interpolation results similar to those obtained using the metropolis within gibbs algorithm .we have studied the prediction accuracy of the simplest dlm , namely , the first order polynomial model , in section [ pred : var : simplest : dlm ] . as a result, the predictive variances should increase monotonically at successive time points conditional on all the 17 weeks data , in the general dlm setting ( see section [ pred : var : simplest : dlm ] ) .the plots exhibit a monotonic increasing trend in the coverage probabilities of both studies and this trend agrees with the graph of the coverage probabilities in figure [ cov : prob : paper ] . nevertheless ,those coverage probabilities of both studies deviate slightly from the expected monotonically increasing trend at some time points because of the time varying effect of monitored in figure [ time : vary : lambda : sigma ] .on the other hand , study enjoys significant computational time savings compared with table [ time : saving ] suggests that the computation time of the former is almost 2.8 times faster than the latter .study shows an intuitively unappealing increase in the uncertainty of interpolation results as time increases ; coverage probabilities get larger over time as we see in table [ cp : threestudies:80cp ] .this increase may be interpreted as saying that for the dlm models , the and collected from the data should vary over the entire time span of the study , while the prior postulates that they do not vary over that time span .the observed phenomenon may also be due to mis specification of the model parameter values ( see the initial settings for in section [ chapter2:example : cluster2:dlm ] . ) . comparing the results of these studies , we find that sometimes , paradoxically , the model gives better results using only one week s data rather than all . however , corollary [ pred : var : paradox ] in section [ pred : var : simplest : dlm ] predicts this finding . because the prior for is the expectation of is implying that and hence , which is less than ( for example, the median of is around 1.21 in study and even larger in study ) . 
by the sufficient and necessary condition in corollary [ pred : var : paradox ] ,the predictive variance of study is less than that of study however , notice that and vary from week to week in which may also lead to the paradox observed in the empirical findings of this section .for example , in ( b ) of figure [ cov : prob : paper ] , the coverage probability of at the week is larger than that of from the above discussion , we know that the predictive variance of should be less than that of however , of is larger than that of leading an inflated predictive variance of this feature makes it difficult to compare these two predictive variances , but explains the paradox we see in those figures .to assess the dynamic linear modelling approach to modelling space time fields , we have applied it to an hourly ozone concentration field over a geographical spatial domain covering most of the eastern united states . to focus that assessmentwe consider just one cluster of spatial sites we call cluster 2 during a single ozone season .moreover , we have used a variant of the dynamic linear modelling approach of huerta et al .( 2004 ) implemented through mcmc sampling .our assessment reveals some difficulties with that very flexible approach and practical challenges that it presents .we also have made some recommendations on improvement . a curious finding is the posterior dependence of and , in contradiction to our prior assumption .although the very efficient method huerta et al .( 2004 ) propose to sampling these parameters is biased , that bias does not appear large enough to account for that phenomenon .we also discovered that the assumption of their constancy over time is untenable .the coverage probabilities of the model s posterior predictive credibility intervals over successive weeks , conditional on all weeks of data , increase monotonically .counter to intuition , that would imply more and more uncertainty as time evolves , an artifact of the modelling that seems hard to explain . a pragmatic way around this undesirable property involves incorporating the length of the time span of the temporal domain into the selection of the values of the model parameters , such as and section [ pred : var : simplest : dlm ] studies the correlation structure of the simplest first order polynomial dlm and finds reasonable conditions to impose on those parameters . 
one further study tests the proposed constraints on the data .the settings are identical with those in study except that and are replaced by and respectively , to take account of the longer week time span of our study compared to the one week time span of the application in huerta et al .figure [ cov : prob : paper ] compares study with the others .observe that its coverage probabilities behave like those of study this adjustment does seem to eliminate the undesirable property of increasing credibility bands of studies and another possible approach to dealing with the unsuitability of fixed model parameters uses the composition of metropolis hasting kernels .in other words , we could include these parameters in the metropolis hasting algorithm as in section [ ffbs : state : sample ] .we can use six metropolis hasting kernels to sample from the target distribution , updating each component of iteratively , where has defined in section [ chapter2:dlmbackgrand ] .but , not surprisingly that approach fails because of the extreme computational burden it entails .however , that alternative is the subject of current work along with an approach that admits time varying and . the greatest difficulty involved in the use of the dlm in modelling air pollution space time fields lies in the computational burden it entails .for that reason , we have not been able to address the geographical domain of real interest , one that embraces sites in the eastern united states , with 120 days of hourly ozone concentrations . in a manuscript under preparation , an alternative hierarchical bayesian method that can cope with that larger domainwill be compared with the dlm where the latter can practically be applied .we thank prasad kasibhatla of nicholas school of the environment of duke university for providing the dataset used in this paper and helping with its installation . the funding for the workwas provided by the natural sciences and engineering research council of canada .only the results about the predictive variances of and are shown in this appendix .the other two cases can be obtained by the same method .refer to theorem [ pred : var : simple : dlm : result ] , the predictive variance of can also be written as follows : and respectively .it is straightforward to obtain that is increasing when increases , or decreases , or increases .we next show these properties also hold for by theorem [ pred : var : simple : dlm : result ] , can also be written as : the corresponding first partial derivatives are given as follows : if the prior for is an inverse gamma distribution with shape parameter and scale parameter then the posterior distribution for is also an inverse gamma distribution with shape parameter and scale parameter given the values of the phase parameters , range and variance parameters and the observations until time , the joint distribution of is \ ] ] where ,\ ] ] with a scalar , a by vector , and a by matrix .we use to denote the new distance matrix for the unknown site and the monitoring stations burke , j.m . and zufall , m.j . and ozkaynak , h. ( 2001 ) . a population exposure model for particulate matter : case study results for pm in philadelphia , pa , _ j. exposure anal . environepidemiol._,11 , 470489 .calder , c.a . andholloman , c.h . and bortnick , s.m . and strauss , w.j . and morara , m. ( 2003 ) . relating ambient particulate matter concentration levels to mortality using an exposure simulator , preprint # 725 , dept .ohio state u.
this paper presents a dynamic linear model for modeling hourly ozone concentrations over the eastern united states . that model , which is developed within a bayesian hierarchical framework , inherits the important feature of such models that its coefficients , treated as states of the process , can change with time . thus the model includes a time varying site invariant mean field as well as time varying coefficients for the 24 and 12 hour diurnal cycle components . the model s great flexibility comes at the cost of computational complexity , forcing us to use an mcmc approach and to restrict the application of our model to a domain with a small number of monitoring sites . we critically assess this model and discover some of its weaknesses in this type of application .
one of the famous problems in the field of combinatorics is the degree / diameter problem . the degree / diameter problem is the problem of finding the largest possible number of nodes in a graph of maximum degree and diameter . the maximum degree of a graph is the maximum degree of its nodes . the degree of a node is the number of edges incident to the node . the diameter of a graph is the maximum distance between two nodes of the graph . on the other hand , the problem of the graph golf 2015 competition is the order / degree problem . the order / degree problem is the problem of finding a graph that has the smallest diameter and average shortest path length ( aspl ) for a given order and degree . compared to the degree / diameter problem , the order is given and the diameter is not given in the order / degree problem . as the organizer of the competition mentioned , the order / degree problem plays an important role in designing networks for high performance computing , because the number of nodes of the network is determined by design constraints such as cost , space , budget , and applications . solutions of the degree / diameter problem can be applied only to networks with a particular number of nodes . currently , there is no trivial way to increase or decrease the number of nodes from the optimal graph of the degree / diameter problem while keeping its diameter close to that of the optimal graph . for example , besta and hoefler have presented diameter-2 and -3 networks with a particular number of routers , where each endpoint is connected to a router ; the number of endpoints can be varied within a range . matsutani et al . have reduced the communication latency of 3d nocs by adding randomized shortcut links . we try to solve some order / degree problems . there are two contributions in this paper . 1 ) a heuristic algorithm to create a graph for a given order and degree ( sec . [ sec : algorithm ] ) . using this algorithm , we have created two best - known graphs ; one has order and degree , and the other has and . after the 2015 competition , we also created another graph of order and degree . 2 ) an evaluation function of edges for 2-opt local search ( sec . [ sec : search ] ) . local search starts with a graph that has the given number of nodes and satisfies the degree constraints . a swap of two edges is accepted if the swapped graph is better than the previous graph in terms of diameter and/or aspl . for example , if two edges - and - are selected for swapping from graph , the two edges - and - are removed from and the two edges - and - are added to the graph . if the diameter and/or aspl of the swapped graph is smaller than , the swap is accepted . we call the evaluation function `` edge importance '' . a lower - importance edge pair is selected as a candidate for swapping earlier than other pairs . because of the existence of locally minimal graphs , we need to temporarily accept worse graphs when searching for the graph of order and degree . the observation of small - order graphs gave us the idea of the heuristic algorithm shown in sec . [ sec : algorithm ] . at the beginning of the 2015 competition , we drew graphs with small order and degree . the first graph has order and degree , as shown in fig . [ fig : n16d3zdd5 ] . the second one has order and degree , as shown in fig . [ fig : n16d4iida13 ] . the diameters of these two graphs are three ( ) . by drawing these two graphs , we found that they contain many pentagons ( 5-node cycles ) , no or only a small number of squares ( 4-node cycles ) , and no triangles ( 3-node cycles ) .
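since the diameter and aspl are the two quantities optimized throughout the paper , a straightforward python sketch for computing them by breadth - first search from every node is given below ; it is only an illustration and not the authors ' implementation .

    from collections import deque

    def diameter_and_aspl(adj):
        # adj: dict mapping each node to a list of neighbour nodes
        n = len(adj)
        diameter, total, pairs = 0, 0, 0
        for src in adj:
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            diameter = max(diameter, max(dist.values()))
            total += sum(dist.values())
            pairs += n - 1
        return diameter, total / pairs

    # toy usage on a 4-node cycle ( a square )
    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(diameter_and_aspl(square))   # (2, 1.333...)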
in fig . [ fig : n16d3zdd5 ] , there is no triangle and no square . no triangle and four squares exist in fig . [ fig : n16d4iida13 ] . we think triangles and squares cause the diameter and aspl ( average shortest path length ) to be larger for the case of . through this observation , we adopt increasing the number of pentagons as our policy in section [ sec : algorithm ] . in the degree / diameter problem , pentagons appear in the graphs of diameter , e.g. , the petersen graph ( shown in fig . [ fig : petersengraph ] ) and the hoffman - singleton graph ( and ) . ( figure captions : the example graphs of fig . [ fig : n16d3zdd5 ] and fig . [ fig : n16d4iida13 ] drawn in ( a ) ring layout , ( b ) pentagon ( 5-node cycle ) layout , and ( c ) square ( 4-node cycle ) layout . ) based on the observation of small - order graphs described in sec . [ sec : small ] , we determine the outline of our heuristic algorithm as the following two steps . 1 ) if the target diameter is , we connect small - order graphs such that their diameter is . for example , if the target diameter and order , the petersen graphs ( fig . [ fig : petersengraph ] , diameter ) are connected . 2 ) we try to increase the number of -node cycles when edges are added . for graphs of , we try to increase the number of pentagons ( 5-node cycles ) . the outline of our heuristic algorithm is shown in algorithm [ alg : gg ] . in the remainder of the paper , we discuss only graphs . our algorithm generates a graph whose diameter is almost always 3 , for a given order and degree . ( algorithm [ alg : gg ] : create a base graph , such that the order of is ; create a graph by greedily adding edges one by one to ; return the graph . ) a base graph has nodes , i.e. . the graph is a connected graph , but its degree is only five . most nodes have five edges ; the other nodes , i.e. , some border and anomalous nodes , have four edges . graph contains multiple petersen graphs . the petersen graph , which is shown in fig . [ fig : petersengraph ] , is one of the well - known moore graphs , and has ten nodes and degree . the diameter of the petersen graph is two . when the nodes are numbered as in fig . [ fig : petersengraph ] , the fifteen edges of the petersen graph are described as follows , for and . if a given order is a multiple of ten , we generate petersen graphs , . adjacent petersen graphs , and , are connected as shown in fig . [ fig : connectpetersengraph ] by the connect procedure in algorithm [ alg : basegraph ] . fig . [ fig : connectpetersengraph ] shows only the edges crossing the two petersen graphs . in the case of , we generate 1000 petersen graphs , and the -th graph is connected with the -th and -th graphs for . ( algorithm [ alg : basegraph ] : create the -th petersen graphs ; connect the -th and -th petersen graphs as shown in fig . [ fig : connectpetersengraph ] . ) when there is a remainder when the order is divided by ten , i.e. , , we replace petersen graphs with 11-node graphs . the 11-node graph is a subgraph of fig . [ fig : n16d4iida13 ] . we heuristically select eleven nodes , from the graph of fig . [ fig : n16d4iida13 ] . when we connect an 11-node graph with the adjacent petersen graphs , we ignore node 10 , and the other ten nodes are connected similarly to fig . [ fig : connectpetersengraph ] . the eleven - node graph is shown in fig . [ fig:11nodesgraph ] , where the nodes are renumbered , except for node 10 . nodes in fig . [ fig : n16d4iida13 ] are renumbered to in fig . [ fig:11nodesgraph ] , respectively .
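the petersen graph used as a building block above has a standard construction : an outer 5-node cycle , an inner 5-node pentagram , and five spokes between them . the sketch below builds it with that conventional numbering , which may differ from the numbering used in the paper s figure , and checks that the graph is 3-regular with fifteen edges .

    def petersen():
        # nodes 0-4: outer 5-cycle, nodes 5-9: inner pentagram, plus spokes
        edges = []
        for i in range(5):
            edges.append((i, (i + 1) % 5))            # outer cycle
            edges.append((5 + i, 5 + (i + 2) % 5))    # inner pentagram
            edges.append((i, 5 + i))                  # spoke
        adj = {v: [] for v in range(10)}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj

    g = petersen()
    print(len(sum(g.values(), [])) // 2)            # 15 edges
    print(all(len(nb) == 3 for nb in g.values()))   # True: 3-regular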
the base graph is generated by the createbasegraph procedure in algorithm [ alg : basegraph ] . the base graph has nodes . each node of has five edges , except for the nodes in the first and the last petersen graphs and node 10 of the 11-node graphs . these exceptional nodes have just four edges . in this step , we greedily add edges one by one to the base graph . our policies to add edges are the following . 1 . increase the number of pentagons in the graph , to create a graph such that its diameter becomes three and its aspl is close to two . 2 . add an edge which has the smallest - degree node on one side . 3 . no track back , i.e. , never remove edges from the graph . under policy 1 ) , our heuristic searches for two nodes whose distance is four , and adds an edge between these two nodes . by adding the edge , the number of pentagons is increased . even in the small graph of and in fig . [ fig : n16d4iida13 ] , there are many pentagons that include a particular edge . for example , the edge 1 - 2 is contained in eight pentagons : 1 - 2 - 3 - 4 - 0 , 1 - 2 - 6 - 7 - 0 , 1 - 2 - 15 - 14 - 0 , 1 - 2 - 3 - 8 - 9 , 1 - 2 - 3 - 13 - 12 , 1 - 2 - 6 - 5 - 9 , 1 - 2 - 6 - 7 - 12 , and 1 - 2 - 6 - 10 - 9 . policy 2 ) is employed to uniformly increase the degree of nodes and to save computation time . our heuristic maintains the nodes that have the smallest degree , and selects a node from them as one side of a new edge . the node on the other side is selected based on policy 1 ) . although we could select the new edge from all possible pairs of nodes , to save computation time our heuristic limits the search space by fixing one side of the new edge . policy 3 ) also saves computation time ; as another reason , we did not find any effective evaluation function for tracking back . here , we explain this step in algorithm [ alg : addedges ] . in each loop iteration of lines [ l : while1 ] to [ l : while1end ] , two nodes and are selected and an edge - is added to the graph . node is chosen from the nodes of the smallest degree in at line [ l : selecti ] , based on policy 2 ) . denotes the distance between two nodes and . in the loop of lines [ l : for1 ] to [ l : for1end ] , candidates of node are evaluated , based on policy 1 ) . after the evaluation , a node that satisfies the two following conditions ( [ eqn : p1 ] ) and ( [ eqn : p2 ] ) is selected in line [ l : selectj ] , and an edge - is added to graph in the next line . is the subset of such that each node in satisfies condition ( [ eqn : p1 ] ) . we have no particular tie - breaking rule . the countpaths function is used for the evaluation of . countpaths roughly counts the number of paths between two nodes and whose distance is three . is the set of nodes distant from node , i.e. , every node satisfies . for example , every node satisfies and . in line [ l : countpathp ] , is close to twice the number of 4-node paths . ( algorithm [ alg : addedges ] : [ l : while1 ] [ l : selecti ] select a node from the smallest - degree nodes ; compute the node set such that ; [ l : for1 ] [ l : for1end ] ; select a node that satisfies conditions ( [ eqn : p1 ] ) and ( [ eqn : p2 ] ) [ l : selectj ] ; add an edge - to graph ; [ l : while1end ] ; if the degrees of several nodes are less than , add edges between them ; [ l : countpathp ] roughly count the number of paths with distance 3 between nodes and , for graph . ) the diameter and aspl of the generated graphs are shown in table [ tbl : heuristicresults ] .
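a minimal sketch of the greedy edge - addition step described above : pick a smallest - degree node , then connect it to a node at distance four so that new pentagons are created . the countpaths - based scoring and the conditions ( [ eqn : p1 ] ) and ( [ eqn : p2 ] ) of the paper are simplified to a random choice here , so this is only an illustration of the two policies , not the actual algorithm .

    from collections import deque
    import random

    def bfs_dist(adj, src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def add_edges(adj, target_degree):
        # adj: dict node -> set of neighbours, modified in place
        while True:
            # policy 2: one endpoint is a node of smallest current degree
            i = min(adj, key=lambda v: len(adj[v]))
            if len(adj[i]) >= target_degree:
                break
            dist = bfs_dist(adj, i)
            # policy 1: the other endpoint lies at distance four, closing pentagons
            cand = [j for j in adj
                    if dist.get(j) == 4 and len(adj[j]) < target_degree]
            if not cand:
                break
            j = random.choice(cand)   # the paper scores candidates with countpaths
            adj[i].add(j)
            adj[j].add(i)
        return adj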
fortunately , two graphs of are the new records in the competition . the graph of has the same diameter , but a longer aspl than the best record and two competitors ' records . for the two graphs of , our first implementation was too slow and could not finish before the deadline of the 2015 competition . after the competition , we reimplemented the program and obtained the results shown in table [ tbl : heuristicresults ] .
table [ tbl : heuristicresults ] :
order | degree | diameter | aspl     | note
256   | 16     | 3        | 2.12757  | not submitted
4096  | 60     | 3        | 2.295275 | *1
4096  | 64     | 3        | 2.242228 | *1
10000 | 60     | 3        | 2.648980 | *2
10000 | 64     | 3        | 2.611310 | *3
after a graph is created by the heuristic algorithm described in sec . [ sec : algorithm ] for a given order and degree , we start 2-opt local search . 2-opt is a basic and widely used local search heuristic ; it is used for the traveling salesperson problem ( tsp ) and other problems . in this section , we explain the edge importance , which is used to prioritize edge combinations for 2-opt local search . the 2-opt algorithm slightly modifies a given graph repeatedly . the modification of 2-opt is swapping two edges . an example of swapping two edges - and - into - and - is shown in fig . [ fig : swapexmaple ] . the diameter and aspl of the pre - swap graph and the post - swap graph are compared with each other . if the diameter and/or aspl of is smaller than , this swap is accepted . ( figure [ fig : swapexmaple ] caption : swapping two edges - and - into - and - ; we assume degree , and other nodes are not drawn . there is another swap into - and - for these edges . ) 2-opt local search is a time - consuming task , and there are many ways to reduce the computation time . even if we search a graph of order and degree , there are 2048 edges in the graph , and the number of edge pairs is about . generally , the number of edge pairs is about . for each swapped graph , we need to calculate the diameter and aspl . we adopt two techniques to save computation time : one is the edge importance , and the other is a fast aspl calculation for 2-opt . our edge importance ( or edge impact ) is a value given to each edge of a graph . as an intuitive explanation , less important edges can probably be removed from the graph with a smaller increase of aspl than other edges . then , we give higher priority to less important edges when we select an edge pair for a swap . the edge importance of an edge - is defined by the following function , where is the importance of edge - for node . examples of are shown in fig . [ fig : edgeimportance ] . we assume ( symmetry ) and divide into two cases as follows . * if two nodes and have the same distance from , i.e. , , then . ( left of fig . [ fig : edgeimportance ] ) * if two nodes and have different distances from , i.e. , , then . we define the node set , each element of which has an edge to and whose distance from is equal to . using , is defined as follows . the center of fig . [ fig : edgeimportance ] shows one subcase of , and the right of it shows the other subcase of . note that this case includes the case of . if , then by the definition . ( figure [ fig : edgeimportance ] caption : the importance of edge - for node . ) two lower - importance edges are the candidates for swapping in 2-opt local search . all edges are sorted by edge importance and denoted by ; the edge has the smallest importance . the 2-opt local search algorithm is shown in algorithm [ alg:2opt ] . the loop of lines [ alg2opt : repoeat1 ] to [ alg2opt : repoeat1end ] is the main loop of the local search , and line [ alg2opt : selectedgepair ] is the important point using the edge importance . we try several orders to select a pair , which are described below .
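the acceptance rule of a single 2-opt swap can be sketched as follows ; the pair - selection order and the incremental aspl update of the paper are omitted , and evaluate is assumed to return a ( diameter , aspl ) pair computed from scratch , e.g. with the bfs sketch shown earlier , returning an infinite value if the graph becomes disconnected .

    def two_opt_step(adj, e1, e2, evaluate):
        # adj: dict node -> set of neighbours
        # e1 = (i, j), e2 = (k, l): try replacing them with (i, l) and (k, j)
        (i, j), (k, l) = e1, e2
        if len({i, j, k, l}) < 4 or l in adj[i] or j in adj[k]:
            return False                     # would create a self-loop or multi-edge
        base = evaluate(adj)                 # (diameter, aspl) before the swap
        adj[i].remove(j); adj[j].remove(i)
        adj[k].remove(l); adj[l].remove(k)
        adj[i].add(l); adj[l].add(i)
        adj[k].add(j); adj[j].add(k)
        if evaluate(adj) < base:             # accept if diameter and/or aspl improve
            return True
        # otherwise undo the swap and keep the previous graph
        adj[i].remove(l); adj[l].remove(i)
        adj[k].remove(j); adj[j].remove(k)
        adj[i].add(j); adj[j].add(i)
        adj[k].add(l); adj[l].add(k)
        return False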
for the pair - and - selected in line [ alg2opt : selectedgepair ] , there are two combinations of swapping , 1 ) - and - ( of line [ alg2opt : g ] ) and 2 ) - and - ( of line [ alg2opt : g ] ) . ( if the graph already has one of the edges - or - ( - or - ) , we skip that swap , since the degrees of two nodes would be decreased by one . ) the diameter and aspl of both and are calculated . the loop of lines [ alg2opt : while1 ] to [ alg2opt : while1end ] implies that the edge importance is reused for the swapped graphs . in our experience , after fifty swaps the ordering is still valuable for finding a smaller - aspl graph . ( algorithm [ alg:2opt ] : calculate the edge importance of all edges in ; sort the edges by edge importance ; [ alg2opt : while1 ] [ alg2opt : repoeat1 ] select an edge pair in a particular order [ alg2opt : selectedgepair ] ; generate the swapped graph [ alg2opt : g ] ; calculate the diameter and aspl of ; generate another swapped graph [ alg2opt : g ] ; calculate the diameter and aspl of ; [ alg2opt : repoeat1end ] copy or to ; [ alg2opt : while1end ] . ) we heuristically employ two searching orders for line [ alg2opt : selectedgepair ] in algorithm [ alg:2opt ] . both orders satisfy for . we think that the smallest - first order is empirically better than the triangle order . * the smallest first : for . * triangle : for . since the aspl calculation is a time - consuming task , we additionally design an aspl recalculation method for 2-opt local search . the method stores the distance matrix of graph and updates the matrix for the swapped graph , so we can eliminate the recalculation of the distances between nodes which the swap does not affect . we ran the local search program during the competition and after the competition . we show the smallest graphs that we found in table [ tbl:2optresults ] . these graphs are probably the best - known graphs for these four combinations of order and degree . for the graphs of order and , algorithm [ alg:2opt ] is directly applied .
table [ tbl:2optresults ] :
order | degree | diameter | aspl     | aspl of table [ tbl : heuristicresults ]
256   | 16     | 3        | 2.09069  | 2.12757
4096  | 60     | 3        | 2.295216 | 2.295275
4096  | 64     | 3        | 2.242170 | 2.242228
10000 | 60     | 3        | 2.648977 | 2.648980
for the graph of order and degree , our graphs fall into local optima many times . to find graphs of smaller aspl , we accept a post - swap graph that is worse than the pre - swap graph in 2-opt local search . fig . [ fig : localsearch - n256-d16 ] shows the search history of the last 1000 graphs before reaching the best - known graph of ; many branches from each graph are omitted . in this figure , we show the aspl of each graph and the order of the edges that we swapped . the order of the swapped edges is distributed from 0 to 620 , i.e. , the swapped edges are two of in each graph . to reach the best - known graph , we need to run the local search at least in the range of edge pairs for and . this range contains only 4 % of all edge pairs , so the edge importance seems to be a valuable function to prioritize edges for swapping . we briefly explain the distribution of the order of the swapped edges . since two edges are selected for each swap in fig . [ fig : localsearch - n256-d16 ] , the total number of selected edges is 2000 for 1000 swaps . half of these edges have an order smaller than 8 , and orders smaller than 108 account for 90% of these edges . ( figure [ fig : localsearch - n256-d16 ] captions : search history for order and degree ; ( a ) first half ( aspl : ) , ( b ) last half ( aspl : ) . ) in this paper , we explained the heuristic algorithm that creates a graph with a small average shortest path length ( aspl ) for diameter-3 graphs . the algorithm intends to increase the number of pentagons ( 5-node cycles ) .
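the two pair - selection orders are defined over the edges sorted by increasing importance , but the exact index formulas are elided in this text ; the sketch below therefore only illustrates the general idea assumed here , namely enumerating pairs so that pairs of low - importance edges are tried first .

    def pairs_smallest_first(num_edges):
        # edges are assumed sorted by increasing importance: e0, e1, e2, ...
        # enumerate index pairs so that pairs of low-importance edges come first
        for b in range(1, num_edges):
            for a in range(b):
                yield a, b

    # first candidates: (e0,e1), (e0,e2), (e1,e2), (e0,e3), (e1,e3), ...
    print(list(pairs_smallest_first(4)))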
through the observation of small - order graphs which have diameter 3 , we focused on the number of pentagons . the heuristic algorithm created two best - known graphs at the graph golf 2015 competition , and a best - known graph after the competition . these three graphs have order and degree , and , and and . we also explained the 2-opt local search technique used to reduce the aspl of a graph . the technique is based on an evaluation function called edge importance ( or edge impact ) . edges which have smaller importance than other edges are good candidates for a 2-opt swap . we applied this technique to the graph of and , and found a best - known graph after the competition . as future work , we will try to find a more elegant heuristic to create small - aspl graphs , not only for diameter-3 graphs but also for larger - diameter graphs . data structures for fast aspl computation should also be explored . this work was partially supported by jsps kakenhi grant number 26330107 .
p. erdős , s. fajtlowicz , and a.j . hoffman , `` maximum degree in graphs of diameter 2 , '' networks , vol . 10 , pp . 87 - 90 , john wiley & sons , 1980 .
e. loz and j. širáň , `` new record graphs in the degree - diameter problem , '' australasian journal of combinatorics , vol . 41 , pp . 63 - 80 , 2008 .
m. miller and j. širáň , `` moore graphs and beyond : a survey of the degree / diameter problem , '' electronic journal of combinatorics , vol . 20 , no . 2 , # ds14v2 , 92 pages , 2013 .
g. exoo and r. jajcay , `` dynamic cage survey , '' electronic journal of combinatorics , # ds16 , 55 pages , 2013 .
m. koibuchi , i. fujiwara , s. fujita , k. nakano , t. uno , t. inoue , and k. kawarabayashi , `` graph golf : the order / degree problem competition , '' http://research.nii.ac.jp/graphgolf/
m. besta and t. hoefler , `` slim fly : a cost effective low - diameter network topology , '' proc . of international conference on high performance computing , networking , storage and analysis ( sc 14 ) , pp . 348 - 359 , nov . 2014 .
h. matsutani , m. koibuchi , i. fujiwara , t. kagami , y. take , t. kuroda , p. bogdan , r. marculescu , and h. amano , `` low - latency wireless 3d nocs via randomized shortcut chips , '' proc . of the conference on design , automation & test in europe ( date 14 ) , 2014 .
m. englert , h. röglin , and b. vöcking , `` worst case and probabilistic analysis of the 2-opt algorithm for the tsp , '' proc . of 18th annual acm - siam symposium on discrete algorithms ( soda 07 ) , pp . 1295 - 1304 , jan . 2007 .
we propose a heuristic method that generates a graph for the order / degree problem . the target graphs of our heuristics have large order ( 4000 ) and diameter 3 . we describe our observations of smaller graphs and the basic structure of our heuristics . we also explain an evaluation function of each edge for efficient 2-opt local search . using them , we found the best solutions for several graphs . keywords : order / degree problem , graph generation , petersen graph , average shortest path length , 2-opt
the recent years have witnessed the explosion of satellite services and applications , and the related growing demand for high data rates . in particular ,satellite systems , which are broadcast by nature , can be also used for broadband interactive , and thus unicast , transmissions .the 2nd - generation specification of the digital video broadcasting for satellite ( dvb - s2 ) standard , developed in 2003 , and its evolution , approved in 2014 with the name of dvb - s2x , represent illuminating examples in this sense .next - generation satellite systems need new technologies to improve their spectral efficiency , in order to sustain the increasing request of new services .the grand challenge is to satisfy this demand by living with the scarcity of the frequency spectrum .resource sharing is probably the only option , and can be implemented by adopting a multibeam system architecture , which allows to reuse the available bandwidth in many beams . the interference caused by resource sharing is typically considered undesirable , but a way to dramatically improve the spectral efficiency is to exploit this interference , by using interference management techniques at the receiver . in this paper, we consider the benefits of the adoption of multiuser detection at the user terminal in the forward link of a multibeam satellite system .our reference is a dvb - s2(x ) system , where an aggressive frequency reuse is applied . under these conditions ,the conventional single - user detector ( sud ) suffers from a severe performance degradation when the terminal is located near the edge of the coverage area of a beam , due to the high co - channel interference . on the other hand, the application of a decentralized multiuser detector ( mud ) at the terminal , able to cope with the interference , can guarantee the required performance .of course a computational complexity increase must be paid . a parallel investigation on the same topicsis reported in .the literature on multiuser detection is wide , and in the area of satellite communications it essentially focuses on the return link , i.e. , on the link from the user terminals to the gateway , and includes centralized techniques to be applied at the gateway .less effort has been devoted to the forward link .recently , we investigated in the benefits that can be achieved , in terms of spectral efficiency , when high frequency reuse is applied in a dvb - s2 system , and multiuser detection is adopted at the terminal to manage the presence of strong co - channel interference . in ,the authors study the applicability of a low complexity mud based on soft interference cancellation . in both papers ,the advantage of the proposed detectors is shown in terms of error rate . in this paper, we generalize the analysis of by supplying an information - theoretic framework which allows us to evaluate the performance in terms of information rate ( ir ) , without the need of lengthy error rate simulations , and hence strongly simplifying the comparison of various strategies and scenarios .the main results of this investigations are also reported in .furthermore , we consider two additional transmission strategies , where the signals intended for the two beams cooperate to serve the two users ( one in the first beam and the other in the second one ) . in one case ,the two users in the adjacent beams are served consecutively in a time division multiplexing fashion , instead of being served simultaneously .this approach is also considered in . 
in the other case , we consider the alamouti space - time block code , consisting in the two satellites exchanging the transmitted signals in two consecutive transmissions . finally , we show that the theoretical limits predicted by the information - theoretic analysis can be approached by practical coded schemes . as expected , the alamouti precoding based schemes work well with the standard dvb - s2(x ) modulation and coding formats ( modcods ) , designed for an interference - free scenario . on the other hand, we observe that dvb - s2(x ) modcods are not suitable for multiuser detection applications .therefore , we analyze the convergence behavior of joint multiuser detection / decoding by means of an extrinsic information transfer ( exit ) chart analysis .we start by considering dvb - s2(x ) modcods and quantify the loss with respect to the theoretical limits . once identified the reasons for this performance loss , we prove that a large gain can be obtained from a redesign of the code and/or of the bit mapper .part of this investigation is reported in . according to the information - theory literature ,the multibeam satellite channel is a broadcast channel , with the satellite serving multiple users on the ground .particularly , we are in the case of a multiple - input multiple - output ( mimo ) broadcast channel , since we have multiple antennas at the transmitter ( for the different beams ) .nevertheless , we do not use the results concerning the broadcast channel capacity since we are not interested in the ultimate performance limits of the considered scenario .our work focuses , instead , on the gain that can be achieved by one specific user if it employs a more involved detector , i.e. , a mud , when the receivers of the other users are not necessarily modified . in other words, we want to understand if and when it is worth to use a mud to decode also the signal which is not intended for the reference user . for this aim, the theory of broadcast channels is not helpful and instead we borrow ideas from the multiple access channel ( mac ) .furthermore , it is known that the sum - rate capacity of the mimo broadcast channel is achieved by means of dirty - paper coding , but nonlinear precoding leads to several problems when going to the practical implementation in satellite systems , as the channel estimation , the synchronization and the non - linear effects introduced by the satellite amplifier ( not necessarily a problem in the case of multicarrier signals ) . in the following, section [ s : sys_model ] presents the system model and describes the three considered strategies and related detection techniques .the information - theoretic analysis is addressed in section [ s : scenario ] , and gives us the necessary means for the computation of the ir for the reference beam .the exit chart analysis is described in section [ s : exit_c ] .section [ s : num_res ] presents the results of our study , whereas conclusions are drawn in section [ s : conclusions ] .we focus on the forward link ( i.e. , the link from the gateway to the user terminals through the satellite ) of a multibeam satellite communication system for broadband interactive services , as illustrated in figure [ fig : architecture ] . in this scenario ,the service area is divided into small beams in order to reuse the frequency spectrum and hence to improve the spectral efficiency . as an example, a 4-color frequency reuse scheme is shown in figure [ fig:4c ] , where beams with the same color use the same bandwidth . 
in a 4-color frequency reuse scheme ,the interference is very limited and can be neglected at the receiver .thus , at the receiver , a sud is employed .a more aggressive frequency reuse can be adopted with the aim of improving the system spectral efficiency .figure [ fig:2c ] depicts the case of a 2-color frequency reuse scheme . in this latter case ,a sud is still used at the receiver although the interference can be significant .assuming an ideal feeder link ( i.e. , the link between the gateway and the satellite ) , figure [ fig : system ] depicts a schematic view of the baseband model we are considering .signals , , are signals transmitted by a multibeam satellite in the same frequency band . the satellite is thus composed of transmitters ( i.e. , transponders ) and serves users on the ground .the nonlinear effects related to the high power amplifiers which compose the satellite transponders are neglected since a multibeam satellite often works in a multiple carriers per transponder modality , and hence the operational point of its amplifiers is far from saturation .we consider the case where the users experience a high level of co - channel interference , since we assume that they are located close to the edge of the coverage area of a beam and that an aggressive frequency reuse is applied ( i.e. , a number of colors lower than 4 ) . the signal received by a generic user can be expressed as where are proper complex gains , assumed known at the receivers , and is the thermal noise . without loss of generality, we assume that `` user 1 '' is the reference user and that is its received signal .we also assume that , that , and that the satellite has no way to modify the gains .the satellite could , in principle , change the power for each user , but this is not done in practice for the following reason . in a unicast scenario , if we consider a given frequency bandwidth , different users in a beam are served in time - division - multiplexing mode .hence , different frames are sent to different users .these users can have different propagation conditions ( e.g. , some of them can experience rain attenuation ) and interference . to take this into account ,different modcods are selected for the different users , in the so - called acm ( adaptive coding and modulation ) mode . as a consequence, different frames will use different modulation and coding formats .the gateway could also try to modify the power for each transmitted frame , and thus for each user .however , satellite transponders are equipped with analog automatic gain control circuits , which are very slow . a change in the power , frame by frame , would introduce strange amplitude fluctuations that the system can not cope with .hence , a modification of the power allocation is not an option , at least considering the present transponder architecture .we will evaluate the performance of the reference user considering the following three strategies , which imply different transmission and detection approaches .* strategy 1 .* signal is intended for user , and we are interested in the evaluation of the performance for `` user 1 '' , whose information is carried by the signal with . for this case , we evaluate the ir when `` user 1 '' employs different detectors .in particular , we consider the case when `` user 1 '' adopts : * a sud . here , all interfering signals , are considered as if they were additional thermal noise .this is what is typically done in present systems . * a mud for the useful signal and one interferer . 
in this case , the receiver is designed to detect the useful signal and the most powerful interfering signal ( that with in our model ) , which is assumed to adopt a fixed rate , whereas all the remaining signals are considered as if they were additional thermal noise .data related to the interfering signal are discarded after detection .this case will be called mud in the following. the complexity will be clearly larger than that of the benchmark system using a sud .the aim of this analysis is to evaluate the performance improvement that can be obtained by simply using a more sophisticated receiver at the user terminal with no modifications of the present standard . in other words ,this strategy is perfectly compliant with the dvb - s2(x ) standard , since it simply requires the adoption of a different receiver .our analysis can be easily extended to a mud designed for more than two users although , given the actual signals power profile , it has been shown in that the mud offers the best trade - off between complexity and performance . * strategy 2 . * a different transmission strategy , requiring a modification at medium access control layer with respect to the previous case , is adopted in this case .hence , in order to adopt this strategy , a modification of the dvb - s2(x ) standard is required . without loss of generality, we will consider detection of signals and and users 1 and 2 only .as in scenario 1 , the remaining signals are considered as additional thermal noise . instead of simultaneously transmitting signal to `` user 1 '' and signal to `` user 2 '' , as in the previous scenario , we here serve `` user 1 '' first by employing both signals and for a fraction ( ) of the total time , and then `` user 2 '' by employing both signals and for the remaining fraction of the total time . from a system point of view , in order to maximize the throughput at system level , the best thing to do would be to serve the user with the best channel only . however , this would not be fair , since the user with the worst channel would never be served .satellite operators are typically interested in serving each of the two users for half of the time or in trying to serve the users taking into account their different data rate needs .signals and are independent ( although carrying information for the same user ) .the receiver of each user must jointly detect both signals and its complexity is comparable to that of the mud described for the first scenario . in strategies 1 and 2 , and are properly phase - shifted in order to maximize the ir . *strategy 3 .* as in the first strategy , is for `` user 1 '' and for `` user 2 '' .we use two transponders to implement the alamouti precoder . unlike what happens in , we do not use the alamouti scheme to achieve a diversity gain , but as a way of orthogonalizing the two signals . in this scheme ,the two transmitters exchange the two information symbols in two successive transmission intervals . at the receiver , two consecutive observed samplesare properly processed in order to remove the interference , and then fed to two suds . in this way, we can always transmit fully overlapped signals and perform only lower complexity operations at the receiver . 
to preserve the orthogonality of the two signals , in this approach, the same information has to be transmitted twice over two consecutive intervals .the ir is thus halved for this reason .in this section , we describe how to compute the ir related to `` user 1 '' assuming the previously described strategies .this analysis gives us the ultimate performance limits of the considered satellite system , which will be used as a benchmark for the performance of practical coded schemes .we start considering * strategy 1 * , and describe how to compute the ir of `` user 1 '' assuming the mud receiver .the same technique can be used to compute the ir related to `` user 2 '' and straightforwardly extends to the case of mud for more than two users .the channel model assumed by the receiver is where is the -ary complex - valued symbol sent over the -th beam and collects the thermal noise , with power , and the remaining interferers that the receiver is not able to cope with .symbols and are mutually independent and distributed according to their probability mass function ( pmf ) .they are also properly normalized such that , where is the transmitted power per user .the parameter is complex - valued and models the power imbalance and the phase shift between the two signals .the random variable is assumed complex and gaussian .we point out that this is an approximation exploited only by the receiver , while in the actual channel the interference is clearly generated as in .the mud receiver has a computational complexity which is proportional to the product .we are interested here in the computation of the maximum achievable rate for `` user 1 '' when `` user 2 '' adopts a fixed rate , and the mud is employed .rates are defined as , where is the rate of the adopted binary code .the rates of the other interferers do not affect our results since , at the receiver , they are treated just as noise .this problem is quite different with respect to the case of the mac discussed in where both rates are jointly selected . in this case , in fact , the information coming from the second transmitter is not intended for `` user 1 '' .hence , the rate is fixed and data of `` user 2 '' is discarded after detection .the ir for `` user 1 '' in the considered scenario is given by theorem [ t : thir ] , whose proof is based on the following two lemmas .an alternative proof can be found in .the first one defines the maximum rate achievable by `` user 1 '' when `` user 2 '' can be perfectly decoded .[ l : lemma1 ] for a fixed rate , the rate is achievable by `` user 1 '' and is not a continuous function of .namely , a cut - off signal - to - noise ratio exists such that for and for with a discontinuity . in , it is shown that the achievable region for the mac is given by the region of points ( ) such that an example of such a region is shown in figure [ fig : region ] .if is constrained to a given value , we derive from ( [ eq : c1 ] ) and ( [ eq : c3 ] ) that when .the first term is lower when thus , is an achievable rate for `` user 1 '' .we now prove that has a cut - off rate .since , is a non - decreasing function of , there exists such that , and hence on the other hand , for a small , it holds where .it follows that .thus for ._ discussion _ : the proof of the lemma can be done graphically by considering the intersection of the achievable rate region with a horizontal line at height . 
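the inequalities defining the two - user mac achievable region referred to above are stripped from this text ; for reference , the standard textbook form , in a notation consistent with the auxiliary model considered here , is given below . the identification of the elided expressions with these inequalities is our assumption , not an explicit statement of the paper .

    \begin{align}
      R_1 &\le I(X_1 ; Y \mid X_2) , \\
      R_2 &\le I(X_2 ; Y \mid X_1) , \\
      R_1 + R_2 &\le I(X_1 , X_2 ; Y) .
    \end{align}

with this notation , lemma [ l : lemma1 ] appears to use the rate min( i(x_1 ; y | x_2 ) , i(x_1 , x_2 ; y ) - r_2 ) , which is achievable whenever r_2 does not exceed i(x_2 ; y | x_1 ) ; again , this is our hedged reading of the elided formulas .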
when clearly the rate of `` user 2 '' can not be achieved .however , we also have to account for this case , and therefore we consider also the achievable rate , which is the relevant rate when `` user 2 '' is just considered as interference . in this case , the receiver exploits the statistical knowledge of the signal but does not attempt to recover the relevant information . particularly , the receiver does not include the decoder for `` user 2 '' .[ l : lemma2 ] the rate as a function of is always greater than 0 and satisfies for any .the proof is straightforward .it can be done by observing that and that .the computation of the irs , , , in the presence of interferers with not accounted for at the receiver can be performed by using the achievable lower bound based on mismatched detection .having defined and as the maximum rates achievable by `` user 1 '' when the signal for the other user can be perfectly decoded , or not , we can now compute the ir for `` user 1 '' by means of the following theorem .[ t : thir ] the achievable information rate for a single user on the two users multiple access channel , for a fixed rate , is given by and is a continuous function of .the proof is based on lemma [ l : lemma1 ] and [ l : lemma2 ] . in fact , and are the maximum rates achievable by `` user 1 '' when the signal for `` user 2 '' can be perfectly decoded , or not .an alternative graphical proof can be derived from figure [ fig : region_thm ] , which plots the rate achievable by `` user 1 '' as a function of , for a generic fixed value of .we clearly see that inequality is satisfied . .] for gaussian symbols and , we obtain that where .all curves are shown in figure [ fig : gausscurve ] , for the case of , , and the overall bound is given by the red curve .we can see from the figure that this bound is clearly continuous ., gaussian symbols , and . ]when a sud is employed at the terminal , the theoretic analysis can be based on the following discrete - time model where includes the thermal noise and the interferers that the receiver ignores .note that we again use mismatched detection here , i.e. , in the montecarlo average to compute we use the received samples coming from the real channel whereas the detector is assumed to be designed for the auxiliary channel model ( [ eq : channel_sud ] ) .as known , the complexity of the sud is much lower than that of the multiuser receiver , and is proportional to .the computation of the ir allows us to select the maximum rate for `` user 1 '' when the co - channel interference is not accounted for . we now consider * strategy 2 * and , without loss of generality , we consider the fraction of time when both signals and are used to send information to `` user 1 '' .the receiver is based on the channel model ( [ eq : mac ] ) , but now the rate of signal is not fixed . since and are independent , we are exactly in the case of the mac and , by properly selecting the rate of the two signals , any point of the capacity region can be achieved . clearly , since and are now both intended for the same user , we are interested in selecting the two rates in such a way that the sum - rate is maximized . 
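the information rates entering the theorem above can be estimated by monte carlo averages over the auxiliary channel ; the python sketch below does this for equally likely psk inputs , gaussian noise , and placeholder values of the constellation sizes , interfering gain and snr ( none of them taken from the paper ) .

    import numpy as np

    def psk(m):
        return np.exp(2j * np.pi * np.arange(m) / m)

    def two_user_rates(m1=4, m2=4, h=0.6 * np.exp(1j * np.pi / 4),
                       snr_db=6.0, n=50_000, seed=0):
        rng = np.random.default_rng(seed)
        s1, s2 = psk(m1), psk(m2)
        n0 = 10 ** (-snr_db / 10)        # noise variance for a unit-power signal
        a1 = rng.integers(m1, size=n)
        a2 = rng.integers(m2, size=n)
        w_noise = np.sqrt(n0 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
        y = s1[a1] + h * s2[a2] + w_noise
        # metric proportional to p(y | a, b) for every symbol hypothesis (a, b)
        diff = y[:, None, None] - (s1[None, :, None] + h * s2[None, None, :])
        logw = -np.abs(diff) ** 2 / n0
        logw -= logw.max(axis=(1, 2), keepdims=True)     # numerical stabilisation
        w = np.exp(logw)
        w_tx = w[np.arange(n), a1, a2]
        i_joint = np.mean(np.log2(m1 * m2 * w_tx / w.sum(axis=(1, 2))))
        i_1_given_2 = np.mean(np.log2(m1 * w_tx / w[np.arange(n), :, a2].sum(axis=1)))
        i_1_alone = np.mean(np.log2(m1 * w[np.arange(n), a1, :].sum(axis=1)
                                    / w.sum(axis=(1, 2))))
        # I(X1,X2;Y), I(X1;Y|X2) and I(X1;Y) in bits per channel use
        return i_joint, i_1_given_2, i_1_alone

    print(two_user_rates())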
in * strategy 3 * , due to the adoption of the alamouti precoding , two consecutive samples at the terminal of `` user 1 '' are where include independent gaussian noise samples and the remaining interferers .after the receiver processing , the observable for detection is and is still a sufficient statistic for detection .the noise samples are statistically equivalent to .the information carried by is discarded and the ir for `` user 1 '' is that of an interference free channel with snr , divided by 2 for the reason already explained .in this section , we analyze the convergence behaviour of the considered strategies based on multiuser detection by means of an exit chart analysis . the aim is to evaluate the effectiveness of iterative decoding / detection schemes in * strategy 1 * and * strategy 2 * , and to design novel practical systems with performance close to the theoretical limits . in the following description ,we assume the presence of only two independent signals , those processed by the receiver of `` user 1 '' , but the results in section [ s : num_res ] are generated according to the general model ( [ eq : model ] ) .each transmitted signal is obtained through a concatenation of a code with a modulator through a bit interleaver .the information data of signal is encoded by encoder of rate into codeword , which is interleaved and mapped through a modulator onto a sequence of -ary symbols . herethe channel model is the vectorial extension of the model ( [ eq : mac ] ) which allows us to consider sequences of symbols .the iterative decoding / detection scheme consists of a multiuser detection module , and 2 _ a posteriori _ probability decoders and matched to encoders and of the two transponders . the described system is reported in figure [ fig : iterative_scheme ] . the soft - input soft - output ( siso ) mud exchanges soft information with the two decoders and , in an iterative fashion .more generally , the detector and the decoders can also be composed by siso blocks . in this work ,we focus on low - density parity - check ( ldpc ) codes , whose decoder is composed of sets of variable and check nodes ( the variable - node decoder ( vnd ) and check - node decoder ( cnd ) ) .iterative decoding is performed by passing messages between variable and check nodes . the global iterative detection / decoding processcan then be tracked using a multi - dimensional exit chart .alternatively , the exit functions of the constituent decoders and of the mud can be properly combined and projected into a two - dimensional chart .similar to a system composed by only two siso blocks , the convergence threshold of our system can be visualized as a tunnel between the two curves in the projected exit chart .the system in figure [ fig : iterative_scheme ] can represent both * strategy 1 * and * strategy 2*. we recall that in the first scenario the information to recover is conveyed by signal 1 only , while the rate of the other signal is fixed .our design will be thus aimed at finding a good code , while the code for the other signal can not be changed and will be chosen among those foreseen by the dvb - s2(x ) standard . for * strategy 2 *, the scheme in figure [ fig : iterative_scheme ] is representative of the fraction of time in which both signals are carrying information for `` user 1 '' . 
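returning to the alamouti processing of * strategy 3 * described at the beginning of this passage , a standard combining sketch is given below ; since the exact precoding equations are elided in this text , the classical alamouti scheme is assumed , with the gains h1 and h2 known at the receiver . the combining removes the cross - interference , so each output sample depends on one information symbol only and can be fed to a conventional sud .

    import numpy as np

    def alamouti_tx(s1, s2):
        # two consecutive channel uses; rows = intervals, columns = transponders
        return np.array([[s1, s2],
                         [-np.conj(s2), np.conj(s1)]])

    def alamouti_combine(r1, r2, h1, h2):
        # outputs are (|h1|^2 + |h2|^2) * s1 and (|h1|^2 + |h2|^2) * s2 plus noise
        z1 = np.conj(h1) * r1 + h2 * np.conj(r2)
        z2 = np.conj(h2) * r1 - h1 * np.conj(r2)
        return z1, z2

    # toy noiseless check with qpsk symbols and arbitrary complex gains
    h1, h2 = 0.9 * np.exp(1j * 0.3), 0.7 * np.exp(-1j * 1.1)
    s1, s2 = np.exp(1j * np.pi / 4), np.exp(-1j * 3 * np.pi / 4)
    x = alamouti_tx(s1, s2)
    r1 = h1 * x[0, 0] + h2 * x[0, 1]
    r2 = h1 * x[1, 0] + h2 * x[1, 1]
    z1, z2 = alamouti_combine(r1, r2, h1, h2)
    g = abs(h1) ** 2 + abs(h2) ** 2
    print(np.allclose(z1 / g, s1), np.allclose(z2 / g, s2))   # True True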
in this case , we assume to have the freedom to choose the code of the two signals and also to apply a joint bit mapping , as we will see in section [ sec : joint mapping ] .in this section , we will first compare the three described strategies in terms of ir under different conditions .we will then try to approach the information - theoretic results with practical modulation and coding formats .we assume as reference system the dvb - s2 standard .we choose a 2-color frequency reuse scheme to generate a high co - channel interference .we assume that `` user 1 '' is located close the edge of the coverage area of its beam . to identify the interference, we define the signal - to - interference power ratio as and consider three realistic cases which have a different power profile , and are listed in table [ t : cases ] .these distributions correspond to 3 different positions for `` user 1 '' and are typical of the forward link of a multibeam broadband satellite system with 2-color frequency reuse ..interference profiles corresponding to a 2-color frequency reuse .[ cols="^,^,^,^,^,^",options="header " , ] we now consider practical modcods for multiuser detection and for the alamouti precoder , and we focus on the gap between practical and theoretical performance . in the following , we do not consider the sud of * strategy 1*. as shown by the information - theoretic analysis , it is not easy to compare the three strategies , since the best strategy depends on the power profile of the interfering signals , the rates of the signals , and the snr. figure [ fig : ir ] shows the ir in case 1 . in * strategy 1 * , the ir curve is no more the average ir with respect to the distribution in figure [ fig : histogram ] , but the signal is assumed to adopt an 8psk .we first consider modcods based on the ldpc codes of rate 1/2 and 3/4 with length 64800 bits of the dvb - s2 standard , with the related interleavers . in * strategy 1 * , we use the rate 3/4 ldpc code for signal , in order to simulate the most probable modcod according to the distribution in figure [ fig : histogram ] . in the first two strategies , we consider iterative detection and decoding andallow a maximum of 50 global iterations .the ber results have been computed by means of monte carlo simulations and are reported in the ir plane in figure [ fig : ir ] using , as reference , a ber of .these results show that schemes based on the alamouti precoding and the codes of the standard have good performance , being the loss with respect to the corresponding ir curve around 1 db .this is because interference is perfectly removed at the receiver . on the contrary ,the loss of practical modcods with respect to the ir limits is high for both * strategies 1 and 2 * , being about 2 and 4 db at and 1.5 bit / symbol , respectively .this is due to the fact that dvb - s2(x ) codes have been optimized for an interference - free scenario . 
in the following sections , we try to reduce this loss by redesigning the code of `` user 1 ''furthermore , we propose a bit mapping which is jointly implemented for signals 1 and 2 in * strategy 2 * , where we have greater design freedom since both signals are for `` user 1 '' .our design approach is based on exit charts : this tool is able to point out the limits of the dvb - s2 based modcods and provide very useful insights on the code and mapper design .figure [ fig : exit2 ] shows the exit chart for * strategy 2 * in case 1 .the mutual information ( mi ) curve of the mud has been obtained for db , while the considered codes have rate 1/2 .let us first focus on the mi curve of the ldpc code of the dvb - s2 standard .the exit chart analysis reveals that the dvb - s2 codes do not fit the detector , which means that codes designed for systems employing single - user detection in an interference - free scenario are not the best choice for the considered mud schemes .the exit chart of * strategy 1 * has similar features .this observation pushed us into the redesign of the ldpc .-2.0ex lccc + [ -2.0ex ] & rate & vnd distribution & cnd distribution + + [ -2.0ex ] + [ -2.0ex ] strategy 1 & 1/2 & 2 ( 60% ) 3 ( 31.4% ) 10 ( 8.6% ) & 6 ( 100% ) + strategy 2 & 1/2 & 2 ( 60% ) 3 ( 36.5% ) 20 ( 3.5% ) & 6 ( 100% ) + strategy 1 & 3/4 & 2 ( 80% ) 3 ( 18.3% ) 50 ( 1.7% ) & 12 ( 100% ) + strategy 2 & 3/4 & 2 ( 70% ) 3 ( 28.5% ) 50 ( 1.5% ) & 12 ( 100% ) + lccccc + [ -2.0ex ] & rate & ir th . & dvb code ( gap ) & new code ( gap ) & phase shift + + [ -2.0ex ] + [ -2.0ex ] strategy 1 & 1/2 & 6.3 db & 8.15 db ( 1.85 ) & 7.4 db ( 1.1 ) & + strategy 2 & 1/2 & 1.95 db & 4.1 db ( 2.15 ) & 3.05 db ( 1.1 ) & + strategy 1 & 3/4 & 9.45 db & 13.3 db ( 3.85 ) & 11.75 db ( 1.55 ) & + strategy 2 & 3/4 & 6.25 db & 9.75 db ( 3.5 ) & 7.9 db ( 1.65 ) & + the exit chart analysis clearly suggests that in our scenario we need an ldpc that is more powerful at the beginning of the iterative process , to have a better curve matching between detector and decoder .this is not surprising , since , in interference - limited channels , a siso detector is effectively able to mitigate the interference when the information coming from the decoders is somehow reliable .in other words , we mainly need a good head start .we adopt the heuristic technique for the optimization of the degree distribution of the ldpc variable and check nodes proposed in .this method consists of a curve fitting on exit charts .we optimize the vnd and cnd distribution , limiting for simplicity our optimization procedure to codes with uniform check node distribution and only three different variable node degrees . using this approach , for each strategy in case 1we design a rate-1/2 and a rate-3/4 ldpc code , whose parameters are summarized in table [ t : new codes ] .the exit curve of the new ldpc code with rate-1/2 for * strategy 2 * is shown in figure [ fig : exit2 ] .we found other degree distributions with better exit curves matching but with poor error floor when used with finite block length .the codes of length 64800 are then obtained by using the peg algorithm . in the simulations using the new codes with rate 3/4, we decreased the snr used by the mud by 0.5 and 0.25 db in * strategy 1 * and * strategy 2 * , respectively . 
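as a side check , the variable - and check - node degree distributions listed in the table above can be verified against the target code rate using the standard node - perspective relation rate = 1 - ( average variable - node degree ) / ( average check - node degree ) ; the sketch below reproduces the rate - 1/2 design of * strategy 1 * ( distributions taken from the table , formula standard for ldpc ensembles ) .

    def design_rate(var_dist, chk_dist):
        # var_dist / chk_dist: {degree: fraction of nodes}, node perspective
        avg_var = sum(d * f for d, f in var_dist.items())
        avg_chk = sum(d * f for d, f in chk_dist.items())
        return 1 - avg_var / avg_chk

    # strategy 1, rate-1/2 design from the table above
    var_nodes = {2: 0.60, 3: 0.314, 10: 0.086}
    chk_nodes = {6: 1.0}
    print(design_rate(var_nodes, chk_nodes))   # ~0.4997, i.e. close to 1/2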
in effect , the increase of the noise variance to be set at the receiver improves the performance at high ir , where the presence of interference not accounted for by the mud is more critical than for lower irs . moreover , for * strategy 2 * we used two different codes , but with the same degree distribution , for the two signals in order to increase the diversity between them . table [ t : bervsir ] summarizes the ber results at ir 1 and 1.5 bit / symbol in terms of convergence threshold , defined as the snr corresponding to a ber of . we also report the achievable ir limit obtained through the information - theoretic analysis and the phase shift between the signals and . the results show that the gap between the theoretical and the convergence thresholds can be reduced thanks to the new ldpc codes . after the observation of the poor match between the curves in the exit chart , in section [ sec : ldpc design ] we have seen how to improve the threshold by properly changing the code . here we propose an alternative approach which is focused on the mi curve of the detector . in particular , we propose a joint mapping of the bits of the two signals in * strategy 2 * , which works exceptionally well in conjunction with the dvb - s2 codes . the idea comes from the fact that transmitting a single signal with gray mapping gives rise to a practically horizontal exit curve for the detector , which is exactly what we need if we want to use the codes of the standard . let us assume that and that the two signals are phase shifted by an angle equal to . this last choice grants a simple design of the mapping for the resulting constellation and an ir that is close to the optimal one . given two constellations , it is easy to see that , if we rotate one of them by , the resulting joint constellation is formed by circles , each composed of equally spaced points . two examples are shown in figure [ fig : multi_const ] , for = 4 ( top ) and 8 ( bottom ) . we then need to design a good mapping for the joint constellation . since we have points , we need bits . we choose to use the first bits to identify the circle , and the remaining bits to label the points on each circle . the mapping is gray on each circle and also between adjacent circles . the selected joint mapping is shown in figure [ fig : mapping ] for two qpsk constellations , where the first bit identifies the circle , and the remaining three bits label the points . the exit curve of the mud with the joint mapping has a smaller slope than that related to the classical mapping , and it is shown in figure [ fig : exit2 ] . a similar approach can be applied in * strategy 1 * , but in this case we can modify only the mapping of signal . an example is shown in figure [ fig : mapping_s1 ] , for two qpsk constellations : on the left we show the classical mapping , on the right the new mapping , where in red we have the bits of `` user 1 '' and in black the bits of `` user 2 '' , which we are not allowed to modify . we can see that the new mapping is more similar to a gray mapping than the standard one , in the sense that the distance among adjacent symbols is decreased . the ber performance of the joint mapping in * strategy 2 * is reported in figure [ fig : ber ] for the three power profiles in table [ t : cases ] . both signals and adopt qpsk modulation and dvb - s2 codes with rate .
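the geometry of the joint constellation described above is easy to verify numerically. the sketch below superimposes two unit-amplitude qpsk constellations with a relative rotation of pi/4 (the equal-amplitude case is an assumption made for illustration) and groups the 16 resulting points by radius, showing that they fall on circles of equally spaced points, which is the structure exploited by the proposed mapping.

```python
import numpy as np

def joint_constellation(m=4, a1=1.0, a2=1.0, theta=np.pi / 4):
    # all superpositions of two m-psk symbols, the second one rotated by theta
    psk = np.exp(2j * np.pi * np.arange(m) / m)
    return np.array([a1 * s1 + a2 * np.exp(1j * theta) * s2
                     for s1 in psk for s2 in psk])

pts = joint_constellation()
radii = np.round(np.abs(pts), 6)
# with two qpsk signals this gives 2 circles of 8 points each: one bit can
# select the circle and the remaining three (gray-coded) bits the position on it
for r in np.unique(radii):
    ang = np.sort(np.degrees(np.angle(pts[radii == r])) % 360.0)
    print(f"radius {r:.3f}: {ang.size} points, angles {np.round(ang, 1)}")
```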
the results are compared with the curves of the standard , which refer to the classical mapping , and with the related ir thresholds . we observe that the joint mapping improves the performance with respect to the reference curves , and the gap with respect to the ir threshold is around 1 db in all cases . we considered the forward link of a multibeam satellite system , and investigated different transmission / detection strategies to increase the achievable rate in the presence of strong co - channel interference . as expected , multiuser detection allows a significant gain with respect to single - user detection when the user terminal is close to the edge of the coverage area of its beam . however , it is surprising that the other two strategies requiring modifications at the medium access control layer , that based on the use of two transponders to serve two users consecutively and that based on the use of the alamouti precoder , can sometimes provide even larger gains , although when the interference is negligible ( i.e. , when the user terminal is in the center of the beam ) a significant loss has to be expected from their use . the conclusive picture is thus complex , since our results show that a transmission / detection strategy which is universally superior to the others does not exist : the performance depends on several factors , such as the snr , the interference profile , and the rate of the strongest interferer . this fact highlights the importance of the proposed analysis framework , which avoids resorting to computationally intensive simulations . its extension to perform a system analysis , averaging the results over all possible interference profiles within a beam , is also a very interesting subject of investigation . finally , we provided evidence that dvb - s2(x ) codes , designed for an interference - free scenario , are not suitable when significant interference is present . a proper redesign of the code and/or of the bit mapping can , however , solve the problem . , _ second generation framing structure , channel coding and modulation systems for broadcasting , interactive services , news gathering and other broadband satellite applications , part ii : s2-extensions ( dvb - s2x ) _ . s. andrenacci , m. angelone , e. a. candreva , g. colavolpe , a. ginesi , f. lombardo , a. modenini , c. morel , a. piemontese , and a. vanelli - coralli , `` physical layer performance of multi - user detection in broadband multi - beam systems based on dvb - s2 , '' in _ proc . european wireless ( ew 2014 ) _ , ( barcelona , spain ) , may 2014 . g. cocco , m. angelone , and a. i. perez - neira , `` co - channel interference cancellation at the user terminal in multibeam satellite systems , '' in _ proc . 7th advanced satell . mobile syst . conf . and 13th intern . workshop on signal proc . for space commun . ( asms&spsc 2014 ) _ , ( livorno , italy ) , pp . 43 - 50 , sept . 2014 . m. caus , a. i. perez - neira , m. angelone , and a. ginesi , `` an innovative interference mitigation approach for high throughput satellite systems , '' in _ proc . ieee intern . work . on signal processing advances for wireless commun . _ , ( stockholm , sweden ) , pp . 515 - 519 , june - july 2015 . a. piemontese , a. graell i amat , and g. colavolpe , `` frequency packing and multiuser detection for cpms : how to improve the spectral efficiency of dvb - rcs2 systems , '' _ ieee wireless commun . letters _ , vol . 2 , pp . 74 - 77 , feb . 2013 . g. colavolpe , d. fertonani , and a.
piemontese , `` siso detection over linear channels with linear complexity in the number of interferers , '' _ ieee j. sel .topics in signal proc ._ , vol . 5 , pp .14751485 , dec .2011 .j. arnau and c. mosquera , `` multiuser detection performance in multibeam satellite links under imperfect csi , '' in _ proc .asilomar conf .signals , systems , comp ._ , ( pacific grove , ca ) , pp .468472 , nov .d. christopoulos , s. chatzinotas , j. krause , and b. ottersten , `` multi - user detection in multibeam mobile satellite systems : a fair performance evaluation , '' in _ proc .vehicular tech ._ , ( dresden , germany ) , pp . 15 , june 2013 .g. colavolpe , a. modenini , a. piemontese , and a. ugolini , `` on the application of multiuser detection in multibeam satellite systems , '' in _ proc .ieee intern ._ , ( london , uk ) , pp . 898902 , june 2015 . g. colavolpe , a. modenini , a. piemontese , and a. ugolini , `` multiuser detection in multibeam satellite systems : theoretical analysis and practical schemes , '' in _ procieee intern . work . on signal processing advances for wireless commun ._ , ( stockholm , sweden ) , pp . 525529 , june - july 2015 .
we consider the rates achievable by a user in a multibeam satellite system for unicast applications , and propose alternatives to the conventional single - user symbol - by - symbol detection applied at user terminals . single - user detection is known to suffer from strong degradation when the terminal is located near the edge of the coverage area of a beam , and when aggressive frequency reuse is adopted . for this reason , we consider multiuser detection , and take into account the strongest interfering signal . we also analyze two additional transmission strategies requiring modifications at medium access control layer . we describe an information - theoretic framework to compare the different strategies by computing the information rate of the user in the reference beam . furthermore , we analyze the performance of coded schemes that could approach the information - theoretic limits . we show that classical codes from the dvb - s2(x ) standard are not suitable when multiuser detection is adopted , and we propose two ways to improve the performance , based on the redesign of the code and of the bit mapping .
the subject of this paper is the motion of linked rigid bodies moving in a weakly compressible , viscous fluid .it is closely connected with mathematical and computational studies of the swimming of fish , and with the motion of underwater vehicles and robotic fish propelled by changes of shape .our approach is primarily computational using the sph algorithms of kajtar and monaghan ( 2008 ) to establish scaling relations for the motion .the work is closely related to recent work on the motion of linked bodies in an infinite , two dimensional fluid which may be inviscid ( kanso et al . , 2005 , melli et al . , 2006 ) or viscous ( eldredge , 2006 , 2007 , 2008 ) . when the fluid is inviscid it is possible to bring powerful mathematical formalisms to bear on the problem in a manner similar to the motion of a single body in an inviscid fluid ( see for example lamb , 1932 ) .however , for problems involving free surfaces , or complicated rigid boundaries , or a stratified fluid , these methods become very complicated .our approach is capable of handling arbitrary body shapes and boundaries though in the present paper we concentrate on motion of linked ellipses moving in a periodic domain .the sph method also has advantages over the vortex particle method of eldredge ( 2006 ) for problems where the bodies penetrate a free surface , but in the present case no such difficulties exist and the vortex particle method provides a convenient comparison for the sph calculations .the bodies we consider are solid bodies linked by virtual rods which join at pivot points .the rods are described as virtual because they do not have any mechanical function except to define the direction of fixed lines in the bodies . in particular, fluid can flow between the ellipses .the angles between the rods ( and therefore the bodies ) are specified as an oscillating function of time .a specification of the time variation of these angles is called the gait .our aim is to determine the scaling relations which relate the speed and power developed by the linked bodies to the frequency and amplitude of a standard gait which propels the linked bodies along a path which is , on average , a straight line .a related problem was considered by taylor ( 1952 ) who studied the motion of a long slender body and applied his results to the motion of a leech and a snake moving in water .our three ellipse system has a motion which is similar to that of the leech and snake because their oscillations are are roughly sinusoidal , and are therefore not too different from the oscillations of our connected ellipses .taylor s analysis provides a remarkably accurate guide for the functional form of the dependence of the velocity and power of our three ellipse system on the frequency and amplitude of the gait .the plan of the paper is to first discuss the sph algorithm .we then show by convergence studies that a periodic domain can be used to represent the infinite domain with errors that are typically .we also establish convergence with resolution .we first compare our results with the viscous results of eldredge ( 2008 ) for massless bodies by a series of simulations with decreasing body mass .the agreement is very satisfactory .we then compare our results to the inviscid results of kanso et al .2005 by changing the viscosity so that the reynolds number varies from 50 to 5000 .this comparison shows that the sph results converge to the inviscid results for the highest reynolds number .we then discuss scaling relations for the velocity and power 
output , and relate them to taylor s ( 1952 ) analytical relations for the velocity and power output of long narrow animals swimming .we apply these results to the swimming of a leech .finally we determine the minimum power , and corresponding gait , required to propel the bodies at a specified average speed . throughout this paperwe use si units .we consider motion in two dimensions and use cartesian coordinates .a typical configuration of the bodies is shown in figure [ fig : bodyangles ] .the motion of the fluid , which is assumed incompressible , is specified by the navier stokes equations . in cartesian tensor form these equations are where denotes the number of bodies , is the stress tensor is the pressure and the shear viscosity coefficient .the function is a one dimensional delta function , and is the perpendicular distance from the surface of body to the position where the fluid acceleration is required . the unit vector is directed from body into the fluid .the introduction of forces into the acceleration equation ( [ eqn : navierstokes ] ) as an alternative to specifying boundary conditions on the velocity is due to sirovich ( 1967 , 1968 ) . in his formulation , as in ours , is a delta function defined so that for any quantity where the first integral is over the volume and the second integral is over the surface . in this way a volume integral involving a delta function becomes equivalent to a surface integral , and the body force per unit volume in ( [ eqn : navierstokes ] ) becomes a force per unit area .this force provides both the pressure which prevents penetration of the rigid body , and the viscous traction term .it mimics the fundamental molecular basis of the boundary conditions namely that the atoms of the fluid do not penetrate the atoms of the solid because of the atomic forces between the liquid and the solid atoms .a closely related method of using boundary forces is due to peskin ( 1977 ) who simulated elastic membranes such as the heart interacting with a fluid .peskin s equations ( 2.3 ) to ( 2.6 ) are essentially those of sirovich , though the peskin deals with an elastic material and sirovich assumes the body is rigid. further details about peskin s formulation can be found in peskin ( 2002 ) .we denote an element of area on the surface of body by .the motion of the centre of mass of solid body ( with mass ) is given by where is the force due to the constraints .the rotation of rigid body , with moment of inertia ) , is given by where is a vector from the centre of mass of body to the element of area , is the force on the element of area , and ) is the constraint torque on body . in the following , to simplify the notation , the subscript will always denote the label of a body .thus , for example , will be replaced by .the angle which fixes the rotation of body is defined as the positive rotation of a line fixed in the body from the axis of a cartesian coordinate system fixed in space . for simplicitywe assume the line fixed in the body is an axis of symmetry .the constraint conditions on the angles are where is the link number and is a specified function .the form of the determines the gait of the bodies . for the examples we consider here there are three bodies and two links as shown in figure [ fig : bodyangles ] . 
in the simplest case it is a function of time alone but , in general , it depends on other variables . for example , in a biological problem , it could depend on the centre of mass coordinates in such a way that the fish slows down when it enters a region where food is abundant . in addition to the constraints on the angles there are constraints associated with the links . we assume the link , or pivot , is at a distance from the centre of mass of body . the condition on the components of the centres of mass of bodies and is that the coordinate of the link between them is given by or similarly the constraint is these constraints enable the coordinates of the centres of mass of the bodies , and their angles , to be written in terms of those of any selected body . similarly , by differentiating the constraint conditions with respect to time , the velocities and angular velocities of the bodies can be written as functions of those of the same selected body . the number of degrees of freedom ( coordinates and velocities ) of linked bodies in two dimensions is therefore 6 compared with the degrees of freedom of independent bodies in two dimensions . if the gait functions are functions of time alone it is possible to reduce the equations of motion to those involving the coordinates and velocities of one of the bodies . this can also be done when they are functions of both coordinates and time but it is inconvenient to eliminate variables and , in our view , simpler to take account of the constraints by using lagrange multipliers . for that reason we use lagrange multipliers even though , in the applications to be described in this paper , the gait functions depend on time only . for the case of three bodies we have two links and therefore 6 constraints . we denote the lagrange multipliers for the , and constraints of link by , and respectively . using standard methods for holonomic constraints ( e.g. landau and lifshitz , 1976 ) we find the following expressions for the constraint forces and torques for the various bodies . for body 1 for body 2 and for body 3 these constraint forces do not affect the total linear momentum of the bodies because they sum to zero . the constraint torque on body 1 is on body 2 it is and on body 3 it is the constraint forces and torques are provided by the engines which drive the angular variation between the bodies . in the case of fish these engines are the muscles of their bodies and the work done is provided by the internal chemical energy generated by the fish . the way these constraint forces and torques affect the angular momentum will be discussed in . the form of the sph equations that we use is discussed in more detail by monaghan ( 1992 , 2005 ) . for the liquid sph particles the acceleration equation is ( [ eqn : sphaccel ] ) . in this equation the mass , position , velocity , density , and pressure of particle are , , , , and respectively . denotes the smoothing kernel and denotes the gradient taken with respect to the coordinates of particle . in this paper we use the fourth degree wendland kernel for two dimensions ( wendland , 1995 ) , which has support . in the present calculations the smoothing length used in the kernel is an average . the choice of the smoothing length is discussed in detail by monaghan ( 1992 , 2005 ) .
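for reference, a minimal implementation of the two-dimensional wendland kernel and its gradient is sketched below; the normalisation 7/(4 pi h^2) and the support radius 2h are the standard values for this kernel and are assumed here rather than taken from the paper.

```python
import numpy as np

def wendland_c2_2d(r, h):
    # Wendland (1995) C2 kernel in two dimensions, support radius 2h (assumed)
    q = r / h
    sigma = 7.0 / (4.0 * np.pi * h**2)
    return np.where(q < 2.0, sigma * (1.0 - 0.5 * q)**4 * (1.0 + 2.0 * q), 0.0)

def grad_wendland_c2_2d(rvec, h):
    # gradient with respect to the first particle's position; rvec = r_a - r_b
    r = np.linalg.norm(rvec)
    if r < 1e-12 or r >= 2.0 * h:
        return np.zeros(2)
    q = r / h
    dw_dq = -(35.0 / (4.0 * np.pi * h**2)) * q * (1.0 - 0.5 * q)**3
    return dw_dq / h * rvec / r

# sanity check: the kernel should integrate to one over the plane (h = 1)
rr = np.linspace(0.0, 2.0, 4001)
dr = rr[1] - rr[0]
print(np.sum(wendland_c2_2d(rr, 1.0) * 2.0 * np.pi * rr) * dr)   # ~1.0
```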
in this paperwe choose to be 1.5 times the initial particle spacing so that the interaction between any two fluid particles is zero beyond 3 initial particle spacings .the first summation in ( [ eqn : sphaccel ] ) is over all fluid particles and is the sph equivalent of the first term on the right hand side of ( [ eqn : navierstokes ] ) .the last term in ( [ eqn : sphaccel ] ) is the contribution to the force per unit mass on fluid particle due to boundary particles and is equivalent to the last term in ( [ eqn : navierstokes ] ) .a body label is denoted by , and is one of the set of boundary particle labels on body .the term is the non - viscous boundary particle force per unit mass on fluid particle due to boundary particle . in the present paperwe use the boundary forces analysed by monaghan and kajtar ( 2009 ) .the force acts on the line joining particle and .the boundary particles delineate the boundaries , and produce forces on the fluid in a similar manner to the delta function forces of sirovich discussed after ( [ eqn : stresstensor ] ) .the viscosity is determined by for which we choose the form ( monaghan 1997 , 2005 ) in this expression is a constant , and the notation is used . denotes the average density .we take the signal velocity to be where is the speed of sound at particle ( monaghan , 1997 , though here we take to be half used in that paper and is therefore a factor 2 larger ) . is dominated by the terms involving the speed of sound .the kinematic viscosity can be estimated by taking the continuum limit which is equivalent to letting the number of particles go to infinity while keeping the resolution length constant . by a calculation similar to that in monaghan ( 2005 )it is found that the kinematic viscosity for the wendland kernel is given by sph calculations for shear flow agree very closely with theoretical results using this kinematic viscosity ( monaghan , 2006 ) .the pressure is given by where is the reference density of the fluid .to ensure the flow has a sufficiently low mach number to approximate a constant density fluid accurately , we determine the speed of sound by where is the maximum speed of the fluid relative to the bodies . 
in this case is dominated by the first two terms .the precise value of will be specified for each simulation .the form of the sph continuity equation we use here is and the position of any fluid particle is found by integrating in the present simulations the liquid sph particles were initially placed on a grid of squares and thereafter allowed to move in response to the forces .the time stepping of the sph equations uses an algorithm which is symplectic in the absence of dissipation .the details of this scheme are given by kajtar and monaghan ( 2008 ) .the non - viscous force on boundary particle due to all fluid particles is where is the force per unit mass on boundary particle due to fluid particle .the viscous force is where we have used the fact that .the total force on particle is the equation for the centre of mass motion of body is then and the torque equation is the motion of a boundary particle can be determined from the motion of centre of mass and the rotation about the centre of mass .thus for particle on body , where , in this two dimensional problem , the rotation is around the axis which is perpendicular to the plane of the motion .the total rate of change of the linear momentum of the rigid bodies with respect to time is \ ] ] where , as noted earlier , the sum over the constraint forces is zero .the rate of change of linear momentum of the fluid sph particles is given by .\ ] ] noting that the sum over the pressure and viscous forces between fluid particles vanishes because of symmetry .recalling that we deduce that which shows that the linear momentum is conserved .the angular momentum of the bodies is composed of the centre of mass angular momentum about some fixed origin , and the sum over each body of the spin angular momentum about the centre of mass of body .the time rate of change of the total centre of mass angular momentum is the rate of change of the spin angular momentum is the rate of change of the angular momentum of the fluid particles is ,\ ] ] where , because of symmetry , the sum over pressure , and viscous terms between fluid particles have vanished .the rate of change of the total angular momentum ( the sum of ( [ eqn : bodangmom ] ) , ( [ eqn : spinangmom ] ) and ( [ eqn : fluidangmom ] ) ) becomes the first term vanishes because the boundary forces are radial and the last four terms vanish because of the constraint conditions ( [ eqn : xcons ] ) and ( [ eqn : ycons ] ) . finally we note that the previous arguments about conservation assume the time derivatives are exact .the actual conservation in the numerical simulations depends on the form of the time stepping algorithm .linear momentum is always conserved to round off error , but the angular momentum conservation is less accurate because the lagrange multipliers are calculated at the mid - point . in our simulations we use periodic boundaries andthese do not conserve angular momentum exactly . 
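the coupling between the boundary-particle forces and the rigid-body motion described above can be sketched as follows; this is a schematic two-dimensional helper, with the constraint forces and torques from the links deliberately left out.

```python
import numpy as np

def body_rates(f_bdry, r_bdry, r_cm, mass, inertia):
    # net fluid force/torque on one rigid body from its boundary-particle forces (2d);
    # the constraint forces and torques from the links are not included here
    f_tot = f_bdry.sum(axis=0)
    rel = r_bdry - r_cm
    torque = np.sum(rel[:, 0] * f_bdry[:, 1] - rel[:, 1] * f_bdry[:, 0])
    return f_tot / mass, torque / inertia          # dV/dt and dOmega/dt

def boundary_velocities(r_bdry, r_cm, v_cm, omega):
    # v_j = V_cm + Omega x (r_j - R_cm) for planar rotation about the z axis
    rel = r_bdry - r_cm
    return v_cm + omega * np.column_stack((-rel[:, 1], rel[:, 0]))
```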
a detailed discussion of the conservation of angular momentum is given by kajtar and monaghan ( 2008 ) .the sph equations can be applied to the linked bodies moving in a channel , as is the case for many laboratory experiments on fish , or in a pond with an irregular boundary , by replacing the boundaries of the pond by boundary force particles as we have done for the rigid bodies .the sph algorithm does not need to be changed if the linked bodies move through and out of a free surface , which would be required to mimic the motion of dolphins .this facility was used earlier for bodies hitting the water ( monaghan and kos , 2000 , monaghan et al . , 2003 ) .in the present paper , where we compare our results with those of kanso et al .( 2005 ) and eldredge ( 2008 ) , we need to deal with an infinite medium .this can not be done directly because it would require infinitely many particles .one alternative , and the simplest , is to replace fluids of infinite extent by periodic boundary conditions .these boundaries alter the solutions of the differential equations but the effects are small if the periodic cells are sufficiently large .we determine their effect by carrying out test calculations for successively larger domains .we consider ellipses moving with the gait where throughout this section , we set and . the ellipses have semi - major axis , semi - minor axis , and distance between the tip of the ellipse and the pivot .these dimensions , and the gait , are identical to those of kanso et al .( 2005 ) and eldredge ( 2008 ) but we use a different notation for the angles . the configuration of the ellipses is shown in figure [ fig : linkbodymotion ] at intervals of 1/3 of a period .we define the reynolds number by using the characteristic velocity and the characteristic length scale , so that the speed of sound , and the boundaries of the ellipses were defined by boundary particles with spacing ( monaghan and kajtar , 2009 ) .the motion takes place in a domain with periodic rectangular cells .the motion of the linked bodies is characterised by the path followed by the centre of mass of the middle body .this path will be referred to as the ` stride path ' .the gait ( [ eqn : phi1 ] ) and ( [ eqn : phi2 ] ) is oscillatory with period so that the stride path is oscillatory and has the shape of a zig zag .we refer to the straight line distance between two consecutive lower points of this zig zag , travelled in time , as a ` stride length ' . the results to follow show that the stride length is , in general , not constant , in agreement with the results of eldredge .kajtar and monaghan ( 2008 ) showed that the sph algorithm gave results in good agreement with experiments for a driven oscillating cylinder , and for cylinders freely oscillating in a channel flow . in this paperwe describe three levels of further tests .the first of these is concerned with the convergence as the resolution is refined with a fixed periodic cell size , and convergence as the size of the cell is increased with fixed resolution ( and ) .the latter is to ensure that our comparison with the results of kanso et al .( 2005 ) and eldredge ( 2008 ) is legitimate .the second level of tests is concerned with comparisons with the results of eldredge by studying the stride length when the mass of the bodies is reduced and kanso et al . by studying the stride length change as the viscosity coefficientis increased ( and ) .the third level of tests shows that the numerical simulations agree with general scaling relations ( 7 ) . 
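a simple way to extract stride lengths from a computed trajectory, consistent with the definition above, is sketched below; it assumes the centre-of-mass path of the middle body is sampled densely enough that the lower points of the zig-zag appear as local minima of the transverse component of the path.

```python
import numpy as np

def stride_lengths(xy):
    # xy: (n, 2) array of sampled centre-of-mass positions of the middle body.
    # the path is projected on its mean direction of travel and the 'lower points'
    # of the zig-zag are taken as local minima of the transverse component.
    d = xy[-1] - xy[0]
    e_par = d / np.linalg.norm(d)
    e_perp = np.array([-e_par[1], e_par[0]])
    s = (xy - xy[0]) @ e_perp
    lows = np.where((s[1:-1] < s[:-2]) & (s[1:-1] <= s[2:]))[0] + 1
    pts = xy[lows]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)
```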
throughout this section, we use a rectangular domain with periodic boundaries aligned with the and axes of a cartesian coordinate system .the ratio of the lengths of the sides of the domain , the aspect ratio , is 4:3 .the fluid spans from to along the horizontal axis , and from to in the vertical axis .the initial coordinates of the centre of mass of the middle body were always .for the sph simulations the periodic boundaries were implemented by copying rows and columns of fluid particles in width to the opposite boundary , top to bottom , left to right , and vice versa .this process guarantees that the fluid particles of interest in the rectangular domain get the correct rates of change in each time step .we ran the calculations for initial particle spacing , 1/40 , 1/50 and 1/60 .the domain was of size and .for these tests , the bodies were neutrally buoyant and .the simulations for each resolution were run for the same time .the stride paths for the four values of are plotted in figure [ fig : respaths ] .note that the strides for the lowest resolution ( ) are significantly longer than for the other three finer resolutions .the paths for , 1/50 and 1/60 lie almost on top of one another although , because of the slight differences in average velocity , the differences increase with time and we note that the convergence is not monotonic i.e. the results for are closer to those for than are those for .however , for the three smallest resolutions the relative difference between a stride length of one resolution and another is at most 5% ( for the third stride ) .figure [ fig : respaths ] also shows that the direction of the path is not sensitive to the resolution .the results of this numerical test indicate that a fluid particle resolution of is sufficiently accurate to determine the stride path in length and direction to within 5% . .open circles , filled circles and the solid line are for , 1/50 and 1/60 respectively.,scaledwidth=80.0% ] in order to determine a fluid domain size that adequately represents an infinite domain , the calculation was run for a number of periodic cell sizes with fixed .we ran the calculation with four different domain sizes with the same aspect ratio , , , and .again , the bodies were neutrally buoyant and .the calculations were run for , but we found that stride paths varied substantially from one case to the next .however , with a resolution of , the stride paths show a smoother trend with increased domain size .the stride paths for the four domain sizes are plotted in figure [ fig : domainpaths ] .note that the strides for the smallest domain are significantly longer than for the other three larger domains .these results indicate that a domain of size is close to being sufficiently large for modelling an infinite domain .the distances travelled in three strides for the different domain sizes , and for the two resolutions are given in table [ tab : domainstrides ] .these values demonstrate the large variation for different domain sizes with . for , neglecting the smallest domain , the maximum relative difference is 3% ( between the and domains ) . for on the other hand ,the maximum relative difference is 9% ( between the and domains ) . .the crosses denote the stride path for domain size .open circles , filled circles and the solid line are for , and respectively . 
note that for the purposes of comparing the paths on this plot , the stride paths have been shifted so that they all begin at ( 0,0 ) . ( table [ tab : domainstrides ] : distance travelled in three strides with different domain sizes , and for two different fluid resolutions . ) based on the numerical tests for the resolution and the domain size , we chose and a domain of size for our subsequent production runs . eldredge ( 2008 ) considers massless bodies , which are inconvenient to use with our algorithm ( the expressions for the body velocities with are singular ) . we can , however , observe the trend in the motion of the linked bodies as their mass tends to zero . for neutrally buoyant elliptical bodies , the masses are , and moment of inertia . we ran the simulation for a number of body masses in the range in order to determine a relationship between the mass and the stride length . for these simulations . the reynolds number is the same as in the calculation of eldredge . eldredge reports that the massless linked bodies have a stride length of . the stride lengths from the sph simulations , as well as the result of eldredge , are plotted in figure [ fig : eldredge - comp ] . the line of best - fit shows that there is a linear trend toward eldredge s result . the results of eldredge show that the stride lengths vary from stride to stride . the second stride is longer than the first , and the third is longer than the second . our results show a similar behaviour . we find that the second stride length is larger than the first typically by , and the third stride length is larger than the second by . the equivalent results of eldredge are , and , which is similar to our results . for the inviscid case ( discussed below ) , kanso et al . show that the stride length is constant . eldredge estimated the stride length by taking the average of the second and third strides , and we followed the same procedure in generating the results of figure [ fig : eldredge - comp ] . ( figure [ fig : eldredge - comp ] : stride length , scaled with the length parameter , for the motion in fluid with constant viscosity and fixed gait but a range of body masses ; the crosses denote the sph results and the filled circle denotes the result of eldredge ; the line of best - fit is also shown . ) the vorticity generated by the motion of the linked bodies after time is shown in figure [ fig : viscousvort ] . this plot can be compared to the last frame of figure 10 of eldredge ( 2008 ) . note , however , that for this figure , whereas eldredge has massless bodies . the contours of eldredge are much smoother than those shown in figure [ fig : viscousvort ] , but the main features are recognisable . there is one large , and one smaller , eddy near the rear of the linked bodies , which are in the same positions as with eldredge , and there is an intense eddy generated by the front body . the stride path is in good agreement with eldredge .
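the extrapolation to the massless limit used for the comparison with eldredge amounts to a linear fit of the stride length against the body mass; a sketch with hypothetical (mass, stride length) pairs, not the values obtained in our runs, is given below.

```python
import numpy as np

# hypothetical (body mass, stride length) pairs from a series of sph runs;
# these are illustrative numbers only, not the values from our simulations
masses  = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
strides = np.array([0.286, 0.279, 0.271, 0.264, 0.256])

slope, intercept = np.polyfit(masses, strides, 1)
print(f"stride length extrapolated to the massless limit: {intercept:.3f}")
```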
.this plot is at time .the vorticity contours have values in the range -5 to 5 with 40 levels .the stride path is also shown.,scaledwidth=100.0% ] finally , we note that the vortex particle spacing in eldredge s simulation is typically compared to our .eldredge has 280 panels on each body which is close to the 244 boundary particles per body in the sph calculation .kanso et al .( 2005 ) consider the motion of three linked ellipses in an inviscid fluid .although the sph algorithm is only stable with non - zero viscosity we can study the stride length variation with change in the viscosity and estimate the stride length for zero kinematic viscosity coefficient as .we ran the simulation with neutrally buoyant bodies for viscosity in the range , which corresponds to a range in reynolds number of .the calculations were run with neutrally buoyant bodies .kanso et al .find that the stride length for the neutrally buoyant bodies is .this result , as well as the stride lengths from the sph simulations , are plotted in figure [ fig : kanso - comp ] .the sph results show a trend toward the case of kanso . in some respectsthe agreement is remarkable because there are significant differences between the inviscid and non - inviscid cases .for example the flow produced by an oscillating cylinder changes dramatically as the reynolds number changes from small to large though the time averaged drag terms are nearly constant for . in the inviscid case the fluid motion produced by a system of oscillating linked bodies will cease the instant they stop oscillating , while in the viscous case , the motion will continue though it will be damped . and as discussed in the previous section , the strides increase in length both for our calculations and those of eldredge whereas those of kanso et al .are constant .these results suggest that when the average motion of the linked bodies is determined primarily by added mass effects as discussed by saffman ( 1967 ) for swimming by shape change in an inviscid fluid .the stride lengths plotted in figure [ fig : kanso - comp ] were calculated as described in the last section .in addition to the sph calculations we have plotted in figure [ fig : kanso - comp ] an estimate of the variation of the stride length with viscosity based on an analytical result obtained by taylor ( 1952 ) in his discussion of the swimming of long slender bodies .a curve was fitted for the stride length of the form where and are arbitrary constants ( since we have fixed ) determined by fitting to our data set .the form of ( [ eqn : kanso - fit ] ) is determined from ( [ eqn : taylorv - general ] ) , which will be discussed in the next section .for the present case we determine the coefficients using two points from the sph results .we find and .the curve shows good agreement in the higher viscosity range where and . 
we do not expect ( [ eqn : kanso - fit ] ) to be valid for very high reynolds number ., for the motion in a fluid with different viscosities but constant gait and mass .the crosses denote the sph results and the filled circle denotes the result of kanso et al .the dashed curve is based on an analytical result by taylor , which is not expected to be valid for very high reynolds number .note that if then ,scaledwidth=80.0% ]the speed with which the linked bodies move through the fluid depends upon a number of parameters .as already seen , the speed depends ( at least ) upon the mass of the bodies and the fluid viscosity .additionally , we expect the speed to depend upon the ratios , , the frequency , and the amplitude .similarly , we expect the power expended to be dependent on these parameters . because the fluid is treated as slightly compressiblethere is a further non dimensional quantity typically equal to 1/20 in our calculations .we neglect contributions from this quantity .the speed of the linked bodies , , was estimated from the stride length divided by the time taken to complete the stride . following the approach from the previous section , we take the stride length to be the average of the second and third strides .the time taken to complete a stride is . in a biological creaturethe power expended for locomotion is provided by the actions of the muscles which themselves depend on their body chemistry . in the case of a marine robotic vehiclethe energy is provided by the engines within the vehicle . in our formulationsthe energy can be estimated from the constraint forces in the equations of motion .we calculate the average power expended by the linked bodies over the time interval to from the expression where and are the constraint forces and torques on body respectively .substituting the constraint forces and torques ( [ eqn : cforce1]-[eqn : ctorque3 ] ) into ( [ eqn : powerdef ] ) gives was calculated by numerical integration over the time of interest ( in this case , the time taken to complete the first three strides ) . to simplify the velocity and power relations, we study the motion with bodies of fixed mass , fixed lengths , , and and a fixed periodic - domain size .we then expect the speed to be given by an expression of the following form since the work done by the constraints is proportional to , and , we expect a power relation of the following form it is useful to compare these scaling relations with the analysis of taylor ( 1952 ) .he shows that an infinite , flexible cylinder , along which a wave of amplitude and wavelength propagates with speed , moves with an average forward velocity in a viscous fluid given by where , is a drag coefficient , and the functions , , , , and are given by integrals .the quantity is given by and when is small enough .for our oscillating bodies , so that . because is close to our amplitude replace by to convert taylor s formula to a form appropriate for our system . is a reynolds number where the characteristic length is the diameter of the cylinder .taylor s formula is an example of the relation ( [ eqn : velrelation ] ) with replaced by .our oscillating ellipses are similar to a small section of taylor s oscillating cylinder and this suggests that taylor s formula might provide a useful model for the scaling relations appropriate to the linked ellipses even though his calculations are for motion in three dimensions and ours are for motion in two dimensions . 
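the time-averaged power defined above can be evaluated by numerical integration of the sampled constraint forces and torques; the sketch below assumes the integrand is the sum over bodies of the constraint force dotted with the centre-of-mass velocity plus the constraint torque times the angular velocity.

```python
import numpy as np

def average_power(t, f_con, v_cm, tau_con, omega):
    # t: (n,) sample times; f_con, v_cm: (n, n_bodies, 2); tau_con, omega: (n, n_bodies)
    # integrand assumed to be sum_k ( F_k . V_k + tau_k * Omega_k )
    p = np.einsum('nkd,nkd->n', f_con, v_cm) + np.einsum('nk,nk->n', tau_con, omega)
    integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal rule
    return integral / (t[-1] - t[0])
```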
to that endwe replace by our reynolds number and expand taylor s formula assuming is small .we then find , with , that where the left hand side is the stride length scaled in units of and a constant of proportionality has been absorbed into and . in the following wereplace by .the continuous curve in figure [ fig : kanso - comp ] , which applies to the case of constant and , was obtained by fitting the parameters and using two values of the viscosity .it can be seen that this gives a good fit to our results except at the lowest values of the viscosity coefficient . furthermore , for constant viscosity and amplitude , the variation of with frequency deduced from ( [ eqn : velrelationb ] ) is where and are constant for fixed amplitude , viscosity and length .if , the integrals in taylor s formula can be expanded in a power series in . if this is done , and we replace by , we find that where and are constants .these formula , with suitable values for the constants , give a very satisfactory fit to our results .taylor ( 1952 ) also derives an expression for the power generated .if , and the amplitude we can expand taylor s formula to find ( we replace his notation w for the power by ) that when the viscosity and the lengths , and are constant we can write this as taylor s expressions for the velocity and power agree , as expected , with the general scaling relations ( [ eqn : velrelation ] ) and ( [ eqn : powrelation ] ) . from the scaling relations we expect for a fixed amplitude and reynolds number that and . in order to test these relations the mass of each ellipse was kept constant ( neutrally buoyant ) , with constant lengths , and and reynolds number 200 , with . the frequencies were in the range . in order to keep the reynolds number fixed , the fluid viscosity was changed with the frequency of oscillation according to .additionally , the speed of sound required for the equation of state was constant with the value where is the maximum frequency used .figure [ fig : constreynum - vel ] shows that there is an approximate linear relationship between the velocity and the frequency , which is in substantial agreement with ( [ eqn : velrelation ] ) except that the velocity vanishes for .figure [ fig : constreynum - pow ] shows the power against .the continuous curve is a cubic polynomial in agreement with ( [ eqn : powrelation ] ) . for the case where the viscosity is fixed we can be guided by taylor s formula .we ran the calculations , again with neutrally buoyant bodies , and the same body length parameters .we chose , such that .we ran the calculations for a number of frequencies and amplitudes in the range and .note that for , the angle between two consecutive bodies is less than .this case has no physical analogue to the long slender body of taylor .the speed is plotted against for three amplitudes 0.5 , 0.9 , and 1.3 in figure [ fig : vel - freq ] .we used two data points from the data set in order to determine the constants and in ( [ eqn : taylorv - general ] ) which can then be written . 
as figure [ fig : vel - freq ] shows , this gives a good fit to the and 0.9 data .the ( dashed ) curve for does not agree with the sph results , but we do not necessarily expect it to , since ( [ eqn : taylorv - general ] ) is only valid for .the velocity is plotted against for fixed frequencies in figure [ fig : vel - amp ] .the curves on these plots are again from ( [ eqn : taylorv - general ] ) , using the same values for and as determined previously .once again we see that the curves give a good fit for .we found that the peak velocity is achieved with for all of these cases .the velocity appears to be smaller when the bodies swing through , or more , relative to one another .= 0.5 , 0.9 and 1.3 respectively .the curves are from ( [ eqn : taylorv - general ] ) with constants fitted from the set with .the dashed curve is for which is outside the range for which ( [ eqn : taylorv - general ] ) is valid.,scaledwidth=70.0% ] = 0.75 , 1.25 and 2.0 respectively .the curves are from ( [ eqn : taylorv - general ] ) , which is appropriate for .,scaledwidth=70.0% ] the average power is plotted against for fixed amplitudes in figure [ fig : pow - freq ] . as with the velocity , we chose two data points from the data set in order to determine the constants and in ( [ eqn : taylorp - general ] ) .this gives a very good fit to the sph results .the power is plotted against for fixed frequencies in figure [ fig : pow - amp ] .again , ( [ eqn : taylorp - general ] ) with the same values for and gives a very good fit to the sph results .= 0.5 , 0.9 and 1.3 respectively .the curves are from ( [ eqn : taylorp - general]).,scaledwidth=70.0% ] = 0.75 , 1.25 and 2.0 respectively .the curves are from ( [ eqn : taylorp - general]).,scaledwidth=70.0% ] taylor applied his formula to estimate the speed of a leech which swims with shape changes shown in figure [ fig : taylorleech ] .these shape changes only roughly approximate a sine wave down a very long cylinder since taylor does not include end effects and , in any case the motion is only approximately sinusoidal ( see for example frame 4 ) . we can estimate the speed by making appropriate adjustments to our scaling formulae . since these mimic taylor sthe only issue is whether the constants calculated from our simulations allow us to calculate the speed for an animal with different length scales .returning to the relation ( 7.12 ) , and determining the constants by fitting to our results for we find the leech taylor considered was approximately 0.08 m long , and travelled with a velocity 0.043 m / s .the gait of the leech produces a wave like motion along its body with an average speed 0.153 m / s which we estimate as being equivalent to where we take the wavelength as the length of the leech and set this to be approximately , the total length of our ellipses .this gives .taylor finds that the average value of so that . consistent with out earlier discussion , we can take .the thickness of the leech changes as it moves with an average diameter estimated by taylor to be 0.0055 m so that , for our ellipses , we can estimate .the estimate of for the leech is 0.08/6 and the ratio as in our simulations .the viscosity coefficient is . 
substituting these values into ( [ eqn : leechvel ] ) with , we find m / s , which compares favourably with taylor s estimate from the experiment of 0.043 m / s .it might be thought that this agreement is a lucky coincidence but , taken with the agreement of our results with formulae modelled on taylor s expression , it does suggest that the speed of a three dimensional long thin body is similar in form to that of three linked , long , thin ellipses in two dimensions .we now ask the question : for a given fluid viscosity , body size and body mass configuration , what is the frequency and amplitude to move with a given speed , while expending the least power ?we do not attempt to obtain the optimum frequency and amplitude for all possible ellipses .instead we have the more modest aim of determining if an optimum set of parameters exists for a typical set of ellipses . with this in mindwe consider neutrally buoyant bodies , with , and .the kinematic viscosity is , so that . in figure[ fig : optmotioncontours ] we show the contours of constant velocity and constant power in the plane .these contours were obtained by using the results of the simulations to fit the velocity with polynomials of the form and the power with the function . while ( 8.1 ) does not include the fractional powers we derived by comparison with taylor s formula it is still possible to a satisfactory over the whole range of and .it is clear from figure [ fig : optmotioncontours ] that there is a set of values of and which will give a specified speed with minimum power .the dashed line in figure [ fig : optmotioncontours ] gives the optimum set of and .this line was calculated by traversing a contour of constant power and finding the and which give the maximum speed .it is interesting to note that the optimal motion is close to regardless of the frequency .we can estimate some properties of the contours in figure [ fig : optmotioncontours ] without detailed numerical calculations . for constant viscosity, we can estimate from ( [ eqn : taylorv - general ] ) , for fixed lengths , and that where we have replaced by which is reasonably accurate for , and is a constant . similarly from ( [ eqn : taylorp - general ] ) we estimate where is a constant .these expressions show that increases faster as decreases for the constant curves than for the constant curves when .this gives the shape of the contours on the left hand side of figure [ fig : optmotioncontours ] .the principal results of this paper are ( a ) that the accuracy of the sph algorithm for linked bodies moving in a fluid has been established , ( b ) that the variation of the calculated speed and power output take simple forms consistent with scaling relations , ( c ) that there is remarkable agreement between the two dimensional results and those taylor obtained for the swimming of long narrow animals in three dimensions , and ( d ) the minimum power to produce a specified speed for a given gait has been calculated and forms a basis for other such calculations .the first of these results has been obtained by resolution studies and by comparison with the results of eldredge ( obtained for massless bodies in a viscous fluid ) and kanso et al .( obtained for neutrally buoyant bodies in inviscid fluids ) . in both casesthe relevant results are limits of our calculations . in the case of eldredgewe estimated his value from a series of calculations where the mass of the body was changed . 
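the optimisation illustrated by figure [ fig : optmotioncontours ] can be reproduced from any fitted or tabulated speed and power surfaces; the helper below simply scans a (frequency, amplitude) grid and returns the gait that reaches a target speed with the least power. the fitted polynomial forms themselves are not reproduced here.

```python
import numpy as np

def optimal_gait(v_grid, p_grid, freqs, amps, v_target, tol=0.01):
    # v_grid, p_grid: speed and power sampled on a (len(freqs), len(amps)) grid;
    # returns the (frequency, amplitude) reaching v_target (within tol) at least power
    F, A = np.meshgrid(freqs, amps, indexing='ij')
    mask = np.abs(v_grid - v_target) < tol
    if not mask.any():
        raise ValueError("no gait on the grid reaches the requested speed")
    i = np.argmin(np.where(mask, p_grid, np.inf))
    return F.flat[i], A.flat[i], p_grid.flat[i]
```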
in the case of kansothe viscosity was steadily decreased so that the reynolds number increased from 50 to 5000 .the second and third results were obtained by fixing the dimensions of the bodies and their masses but changing the frequency and amplitude of the gait .the simulations show that the the results have a simple dependence on frequency and amplitude which is similar to that found by taylor ( 1952 ) .these results suggest that the drag forces on a long thin ellipse in two dimensions is similar to that on a cylinder in three dimensions .in particular , it suggests that the drag has two additive contributions .one varying with reynolds number as and one depending on the square of the velocity relative to the fluid .we are unaware of calculations or analysis which would confirm this conjecture in detail .it is clear however , that there will be pressure forces proportional to the square of the velocity on the bodies , and viscous forces due to flow along and between the ellipses .the result ( d ) shows that the efficiency is poor if the linked bodies are driven with a gait amplitude which is too large or too small .we find that the optimum performance occurs when the amplitude or , equivalently , when the angles between the links varies between .the formulation we have used is general and can be immediately applied to the motion of linked bodies in stratified fluids , or with a free surface , or within complex boundaries , or with more complex constraints including those where the gait depends on the positions of the bodies in the domain .we are currently studying these problems .eldredge , numerical simulations of undulatory swimming at moderate reynolds number , bioinsp .biomim . 1 ( 2006 ) s19s24 .eldredge , numerical simulation of the fluid dynamics of 2d rigid body motion with the vortex particle method , j. comput .221 , ( 2007 ) 626648 .eldredge , dynamically coupled fluid - body interations in vorticity - based numerical simulations , j. comput .( 2008 ) 91709194 .kajtar , j.j .monaghan , sph simulations of swimming linked bodies , j. comput .( 2008 ) 85688587 .e. kanso , j.e .marsden , c.w .rowley , j.b .melli - huber , locomotion of articulated bodies in a perfect fluid , j. nonlinear sci . 15( 2005 ) 255289 .h. lamb , hydrodynamics , sixth ed . , cambridge university press , 1932 .landau , e.m .lifshitz , mechanics : course of theoretical physics .i. mechanics , pergamon , 1976 . j.b .melli , c.w .rowley , d.s .rufat , motion planning for an articulated body in a perfect planar fluid , siam j. appl .5(4 ) ( 2006 ) 650669 .monaghan , smoothed particle hydrodynamics , ann .30 ( 1992 ) 543573 .monaghan , sph and riemann solvers , j. comput .phys . 136 (1997 ) 298307 .monaghan , a.m. kos , scott russell s wave generator , phys .fluids , a 12 ( 2000 ) 622630 . j.j .monaghan , a.m. kos , n. issa , fluid motion generated by impact , j. waterw .port c. , 129(6 ) ( 2003 ) 250259 .monaghan , smoothed particle hydrodynamics , rep .progress phys .68 ( 2005 ) 17031759 .monaghan , smoothed particle hydrodynamics simulations of shear flow , mon . not .365 ( 2006 ) 199213 .monaghan , j.b .kajtar , sph particle boundary forces for arbitrary boundaries , to appear in comput .( 2009 ) c.s .peskin , numerical analysis of blood flow in the heart , j. computat .phys . 25 ( 1977 ) 220252 .peskin , the immersed boundary method , acta numer .10 ( 2002 ) 479517 .p. saffman , the self propulsion of a deformable body in a perfect fluid .( 1967 ) 385 - 389 .l. 
sirovich , initial and boundary value problems in dissipative gas dynamics , phys . fluids 10 ( 1967 ) 2434 .l. sirovich , steady gasdynamic flows , phys .fluids 11 ( 1968 ) 14241439 . g.i .taylor , analysis of the swimming of long and narrow animals , proc .ser - a . 214( 1952 ) 158183 .h. wendland , piecewise polynomial , positive definite and compactly supported radial functions of minimal degree , adv .4 ( 1995 ) 389396 .
in this paper we study the motion of three linked ellipses moving through a viscous fluid in two dimensions . the angles between the ellipses change with time in a specified manner ( the gait ) and the resulting time varying configuration is similar to the appearance of a swimming leech . we simulate the motion using the particle method smoothed particle hydrodynamics ( sph ) which we test by convergence studies and by comparison with the inviscid results of kanso et al . ( 2005 ) and the viscous results of eldredge ( 2006 , 2007 , 2008 ) . we determine how the average speed and power output depends on the amplitude and oscillation frequency of the gait . we find that the results fit simple scaling rules which can related to the analytical results of g.i . taylor for the swimming of long narrow animals ( 1952 ) . we apply our results to estimate the speed of a swimming leech with reasonable accuracy , and we determine the minimum power required to propel the bodies at a specified average speed .
an important decision - making process faced by scholars on a regular basis is choosing the outlets for publication of their manuscripts ( 1 - 3 ) .the right choice of venue can have a significant effect on the dissemination , usage , and even the preservation of the future scientific paper ( 4 , 5 ) .pepermans and rousseau ( 6 ) identified three types of factors driving the decisions of authors : author characteristics , journal characteristics and other characteristics , focusing on the acceptance / rejection rate as an important factor in those decisions .prestige of a journal is often used , implicitly if not explicitly , as an assessment of the quality of research ( 7 ) .maximizing the impact of a publication , most commonly quantified as a number of received citations , is a natural goal of most authors , even when it is expressed as a desire to reach the widest possible audience ( 8 , 9 ) .author s citation count and the related h - index ( 10 ) can be critical factors for funding , hiring , tenure and promotion decisions .such practices have resulted in a pressure to publish in high - impact , often general - science journals , versus the specialized venues with smaller impact factors ( 11 ) .some authors target highest - impact venue first , `` cascading '' to journals with lower impact until acceptance ( 8) , a process that can exert significant publication delays and place burden on editors and reviewers , as well as the authors .authors often rely on the impact of target journal even when choosing among alternative specialized venues ( 12 , 13 ) . understanding if and when such strategies are worth the additional effort is clearly important . evaluatingthe impact of journals is not a straightforward task .the most widely used measure , known even outside of scientific community ( 14 ) , is the so - called journal impact factor ( if ) .the if is a metric introduced by eugene garfield in 1972 ( 15 ) , and its definition is rather simple : the if of a venue in year equals the average number of citations received in from all papers published in the preceding two years ( and ) covered by the citation database .official if values are released annually by the thomson reuters journal citation reports . a widespread habit to assess , at least in the short - term , the quality of scientific publications , and consequently the authors of these papers on the basis of the if of the venue has received much criticism ( 16 , 17 ) .the basis for this criticism lies in the fact that if is a very poor indicator of the actual number of citations that a given paper will receive ( 18 , 19 ) .furthermore , the number of citations received by a journal can be dependent not only on their quality , but the quantity of articles that they publish ( 15 ) . while qualitative assessment of the quality and significance of work is still unsurpassed by any indicator, most authors intuitively presume that a journal with a higher if offers a greater probability that their article will perform well in terms of received citations ( 20 ) . there is some empirical evidence that very similar articles published in higher - if journals do receive more citations than their `` twins '' published in lower - if journals , thus justifying such notion by authors ( 21 ) .such advantage may seem obvious when the difference in impact factors of potential journals is very large , but is less clear in cases when the choice is between journals having ifs of , say , 3 vs. 
4 .in this paper we use detailed citation data on .5 million papers from all disciplines to test the validity of a publication strategy based on journal if , and to quantify the benefits of targeting higher - ranked journals .furthermore , we explore whether , in the absence of detailed citation data , the if can give a quantitative guidance to the author regarding the benefit of targeting a higher - ranked journal .for this study we use _ thomson reuters _ web of science ( wos ) database of bibliographic records of journal articles obtained in 2015 . specifically , we use all records that wos classifies as the following document types : article , review and proceedings paper .these are the types of documents that are commonly cited , and are the types that are included in the calculation of the official if in journal citation reports ( jcr ) . for simplicity, we will refer to these three types of documents as `` articles . ''we perform all analysis for citations received in 2010 .our results do not depend on the choice of year .the if for year 2010 ( which jcr released in 2011 ) is the number of citations that the articles published in years 2008 and 2009 received in 2010 divided by the number of these publications . for the analysis we select 15,906 journals which have published 25 or more articles during the publication window ( years 2008 and 2009 )the cut was chosen to ensure well - sampled citation distributions , but the results are insensitive to the exact choice of threshold .the total number of articles published in selected journals in 2008/09 is 2,352,554 .the if values computed from our data are smaller than from those officially published by jcr by about 4% . in our analysis, we therefore multiplied all if values by a factor 1.04 .there are a number of reasons for slight discrepancies between the official ifs and the one calculated from wos data ( 28 ) .being able to accurately reproduce the official ifs is not essential for our analysis .however , it is important to base both the calculation of the if and the citation benefit on the same data , which we do .the goal of the study is to examine how ifs are related to a probability of receiving higher number of citations in one journal ( journal a ) compared to another ( journal b ) . in order to calculate this probability ,we first construct citation distribution for every journal in the study .we then calculate rank - sum corresponding to journal a ( ) from the mann - whitney u test .the probability that a randomly drawn article from journal a will have a greater number of citations than an article drawn from journal b will be , where and are the number of articles in each journal . in case of a tie ,the rank - sum splits the score in two .so , if two journals have identical citation distributions , the probability is 50% ( ) .in other words , probability of 50% means that both journals do equally well .probability of 90% suggests than journal a is considerably more likely to bring more citations than journal b.we consider 15,906 journals , spanning all disciplines , indexed in the _ thomson reuters _ web of science ( wos ) database . for each of the 2,352,554 research articles published in these journals in years 2008 and 2009 , we compute the total number of citations the articles have received in year 2010 .the average number of citations received by papers in a specific journal determines the if value of that journal for year 2010 . 
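the rank - sum probability just described can be computed in a few lines ; the sketch below is our own minimal illustration ( the function name and the toy citation counts are invented , not drawn from the wos dataset ) and assumes the per - article citation counts of two journals are available as plain python lists :

```python
from itertools import chain
from scipy.stats import rankdata

def citation_benefit(cites_a, cites_b):
    """Probability that a randomly drawn article from journal A has more
    citations than one drawn from journal B, with ties counted as half,
    i.e. the Mann-Whitney U statistic normalised by the number of pairs."""
    n_a, n_b = len(cites_a), len(cites_b)
    # rank the pooled citation counts; tied values share the average rank
    ranks = rankdata(list(chain(cites_a, cites_b)))
    u_a = ranks[:n_a].sum() - n_a * (n_a + 1) / 2.0
    return u_a / (n_a * n_b)

# toy example: a stochastically larger citation list gives a value above 0.5
print(citation_benefit([0, 1, 3, 8, 12, 40], [0, 0, 1, 2, 5, 9]))
```

with two identical citation lists the function returns exactly 0.5 , reproducing the `` both journals do equally well '' case described above .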
for a pair of journals and , we calculate , using mann - whitney rank sum , the probability that a paper published in journal ( the target journal ) would have accumulated a greater ( or equal ) number of citations than a paper published in journal .we refer to this probability as the citation benefit of publishing in journal instead of journal .fig 1 shows the citation benefit of publishing an article in a journal different from _ nature _ , which represents the reference journal in this case .our measure of citation benefit is plotted against the if value of the target journal .we note that the if values of the various target journals range from 0 to 110 , whereas the term of comparison , i.e. , _ nature _ , has if value 35.5 .if for some target journal the benefit equals 50% , there is no expected advantage in targeting that journal in place of _ nature _ , and _vice versa_.the remarkable feature of fig 1 is that the dispersion of the empirical points is relatively small : for a given if value , the citation benefit is narrowly distributed .the if is a simple citation average , and is often perceived as an inadequate or at least very limited characterization of journal s citation capacity . as fig 2 shows , citation distributions of journals are indeed very broad , spanning two to three orders of magnitude even after only a few years ( 23,24 ) .this broadness implies that the if value poorly represent the range of actual number of citations received by the population of papers published in those journals .nonetheless , our non - parametric measure of relative citation benefit is well captured by if values . since this is exactly the consideration that many authors have when they are facing the selection of a publication venue ,we conclude that the if does represent a meaningful and easily retrievable metric to guide publication strategy .given that _nature _ is already one of the journals with the highest if value ( it is in top 10 most highly ranked journals ) , it is not surprising that no other journal has a significantly higher citation benefit , thus leaving only a few alternatives with marginally higher chances of receiving more citations .indeed , seven of the ten journals with positive benefit with respect to nature are review journals ( _ annual review of immunology , nature reviews : molecular cell biology , nature reviews : cancer , nature reviews : immunology , nature reviews : neuroscience , physiological reviews , annual review of neuroscience _ ) , and therefore not valid alternatives for original research papers .the remaining three are _ cell , nature genetics , and new england journal of medicine _ , and they can be therefore targeted only by biomedical researchers .two journals with the highest if values ( _ ca : a cancer journal for clinicians and acta crystallographica section a _ ) actually show citation benefit smaller than 50% . the reasons why some journals scatter offthe tight relation lies in the fact that their citation distributions are atypical compared to other journals of the same if , usually because of a small number of very highly cited articles that boost the average citation , and thus the if value. 
however , such journals are rare ( see fig 1 ) .we note , however , that even large differences in if values of two journals do not translate into significant differences in the benefit associated with publishing in one journal instead of the other .for example , as fig 1 shows , targeting a journal with an if value equal to 20 ( almost two times smaller than the if of _ nature _ ) , still permits a 35% probability that publishing in that journal will result in an article more cited than a _ nature _ paper . going to if values approximately equal to 10 ( e.g. , that of _ pnas _ ) , leads to a probability of 17% . as the if of the target journal decreases ,so do the chances of receiving more citations than if the article was published in _nature_. however , even in more extreme cases , for example , for a journal with if ( e.g. , _plos one _ ) , the probability is still non - negligible and equals 7% . the reason why the relation between the if of the target journal and the citation benefit is not very steep lies in the fact that the citation distributions of journals tend to be broad and to overlap .this can be appreciated from fig 2 , where we show citation distributions of articles published in four major multidisciplinary journals : _ nature , science , pnas _ and _ plos one_. the four journals have a wide range of if values : from 35.5 for _ nature _ to 4.5 for _ plos one _ ( complete information is given in table 1 ) .nevertheless , their citation distributions overlap to a large extent . _nature , science _ and _ pnas _ have papers with anywhere between 0 and citations , while this range is between 0 and 200 for _ plos one_. the relation between if values and citation benefit would have been steeper if the citation distributions were narrower .for example , if papers in _ plos one _ only had between 0 and 10 citations ( which could still produce the actual if = 4.5 ) , while all papers in _ nature _ had more than 10 citations ( which could still result in if = 35.5 ) , then there would have been a null probability for _ plos one _ papers to accumulate more citations than a _ nature _ paper. we also note that _ nature _ and _ science _ actually have very similar citation distributions , but the reason why _ science _ has a somewhat smaller if value than _ nature _ ( 28.9 vs. 35.5 ) is due to slightly fewer very highly cited papers than _ nature_. so far , we have discussed the relation between various journals and _ nature_. in table 1 , we present cross comparisons among four multidisciplinary journals . as expected , the biggest contrast is between _ plos one _ and _ nature _ , in the sense that _ nature _ papers have 94% probability to accumulate more citations than _ plos one _ papers .minimal benefit is present between _ nature _ and _ science _ , with only 56% of the papers accumulating more citations in _nature_. we remark that our calculations are based on 2010 data .the most recent if values are slightly different : _ science _ and especially _ nature _ have higher if values than they did in 2010 , while _ pnas _ is nearly the same , and _plos one _ is lower .these changes will likely be reflected in somewhat greater benefits of the first two journals with respect to the other two . 
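the cross - comparisons of table 1 can be generated with the same rank - sum machinery applied to every pair of journals ; the sketch below is our own illustration , with invented journal labels and toy citation counts standing in for the wos data :

```python
from itertools import chain
from scipy.stats import rankdata

def benefit(cites_a, cites_b):
    # Mann-Whitney rank-sum probability that an article from A outcites one from B
    n_a, n_b = len(cites_a), len(cites_b)
    ranks = rankdata(list(chain(cites_a, cites_b)))
    return (ranks[:n_a].sum() - n_a * (n_a + 1) / 2.0) / (n_a * n_b)

# four hypothetical journals with toy citation lists for their 2008-09 articles
journals = {
    "J1": [0, 2, 5, 9, 14, 33, 60],
    "J2": [0, 1, 4, 7, 12, 25],
    "J3": [0, 0, 1, 3, 6, 10],
    "J4": [0, 0, 0, 1, 2, 4],
}

names = list(journals)
print("     " + " ".join(f"{n:>5}" for n in names))
for a in names:
    # entry (row a, column b) = benefit of publishing in a instead of b, in percent
    row = " ".join(f"{100 * benefit(journals[a], journals[b]):5.0f}" for b in names)
    print(f"{a:>4} {row}")
```

the diagonal entries come out at 50 by construction , mirroring the convention used in the table .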
[ cols= " < , < , < , < , < , < , < , < , < ,< , < " , ] a generic entry of the table shows the relative citation benefit of publishing in one of the journals listed in first column instead of a journal from columns 2 - 5 .the three rightmost columns of the table report : the 2010 if derived from our bibliographic dataset , and the official ifs for years 2010 and 2014 as published in the journal of citation report ( jcr ) . we also present a case study of relative citation benefits for publishing in biochemistry .the list of all journals in the jcr category biochemistry & molecular biology was presented to an expert in the field who selected a comprehensive set of journals that are most relevant to his research field . from the list we selected a subset of 24 journals for which relative benefits were calculated ( see table s1 ) .these journals have if values in the range 1.3 to 14.9 . for journals in the intermediate impact range ( if 5 ) the change in if of 1 ( from 5 to 6 ) is associated with a marginal advantage ( 5% increase ) of receiving more citations in a higher - ranked journal .this should be kept in mind when authors strive , sometimes at a cost of greater inconvenience or higher publication charges , to publish in a journal with a nominally higher if .the exact computation of the relative benefit for a pair of journals requires the availability of full citation distributions for both journals .this is a clear limitation for wide implementation .fortunately , as fig 1 shows , the relative benefit and if values are related by a narrow function .this empirical fact allows the possibility of estimating the citation benefit rather precisely using only the if values of the journals , which are readily available to authors . in fig 3, we show citation benefit for four reference journals ( _ science , pnas , plos one _ , and _ proceedings of the royal society a ( prsa ) _ ) , chosen to exhibit a fairly wide range of if values , from 28.9 ( _ science _ ) to 1.7 ( _ prsa _ ) .we now plot the citation benefit as a function of the ratio of the if values of target to reference journal , on a logarithmic scale .the shape of the relation for all four reference journals is similar and has a characteristic sigmoid shape .when the if ratio is high , the relative benefit approaches 100% .when the if ratio is 1 , benefit is around 50% , as expected .the main difference between the four curves consists in the location of the lower asymptote the probability that a target journal with a very small if will receive more citations than the reference journal .this plateau probability is close to zero for a high - if journal like science , but becomes as high as 20% for _ prsa _ ( if = 1.7 ) .the non - zero plateau is due to the uncited papers in the reference journals . _plos one _ and _ prsa _ publish a non - negligible proportion of papers that do not receive citations ( at least in the time window used for calculating if ) , so even a target journal with if = 0 ( no paper having received any citation ) will be tied with uncited articles from the reference journal . 
because ties count as `` greater than '' half of the time , the plateau will be located at 1/2 of the `` uncited fraction '' ( ) of the reference journal .the existence of a plateau that depends on the uncited fraction seems to prevent the construction of a benefit function that would only depend on easily available ifs .fortunately , fig 4a shows that the fraction of uncited articles is itself a tight function of if , a feature noted in some previous studies ( 25 - 27 ) .this is another consequence of the fact that the journals with the same if have similar citation distributions . for journals withif 1 , the uncited fraction is around 50% .the tightness of the scatter plot suggests that a suitable functional form could allow relatively precise determination of from if alone .we find that is described almost perfectly by the generalized logistic function ( to be accurate , the function is logistic when if is expressed as a logarithm ) : where the values of the factor and exponents and are : , , and . of papers that have accumulated zero citations one or two years after their publication andplot it as function of the if value of the journal .the blue curve is given by the generalized logistic function of eq 1 .( b ) residuals are symmetrically distributed around the fit ( blue curve and line ) , and their value is independent of the if value of the journal.,scaledwidth=40.0% ] at this point , we have all the ingredients necessary to establish a relation between citation benefit and if ratio .this is given by a logistic function with a positive lower asymptote : where is the ratio of ifs of target and reference journals , and is the uncited rate of the reference journal that can be evaluated from eq 1 , or read off from fig 4 .factor is required to ensure that = 0.5 when , and equals .fitting of eq 2 to the data in order to determine is performed as follows .benefit probabilities from mann - whitney calculation are averaged in equal bins in log of 0.05 .binning ensures that equal weight is given to journals with different if ratios .fitting is performed by minimizing square deviations of probability with respect to the fitting function .the fitting has only one free parameter , the exponent .fig 5 shows that only weakly depends on the reference journal , giving eq 2 a universal character . on averageit takes the value the benefit is therefore a funciton of two independent variables , if and .when the uncited rate of the reference journal is low ( if ) or when , eq 2 simplifies to : i.e. , the benefit then depends solely on the if ratio .in fig 6 we show the benefit matrix for journals with if and the residuals when the benefits are obtained using only eq 1 and 2 with the value of fixed to 1.23 .the residuals are small ( few percent ) and symmetric . in eq 2 , describing the shape of the citation benefit function .* based on 1,400 journals with if .exponents take a relatively small range of values attesting to the universality of the benefit if ratio relation shown in fig 3.,scaledwidth=50.0% ] .* in panel a , we show the citation benefit of publishing in a target journal with if value equal to if instead of a reference journal with if value equal to if . to generate this figure, we consider only the 1,400 journals with if .colors range from red ( benefit = 0% ) to blue ( benefit = 100% ) .the empirical values in panel a are very well reproduced by our eqs . 
1 and 2 , which only depend on the ifs .the residuals are typically small ( few percent ) , and are distributed symmetrically around zero ( panel b).,scaledwidth=80.0% ] to facilitate the calculation of citation benefit , we also provide a web calculator ( http://tinyurl.com/hxgnz4f ) , which only requires a user to input the ifs of two journals .we have shown that the relation between the citation benefit and ifs is relatively tight , a consequence of the fact that citation distributions of journals with the same if are similar ( 23,28 ) .for the same reason the fraction of articles with zero citations can be predicted from the if .atypical distributions are rare , leading to few outliers in the benefit if ratio relation .furthermore , we have shown that the benefit if ratio relation is a universal function of the ifs of the target and reference journals .when the if ratio the benefit largely depends only on the ratio .for example , journal a will have % probability of receiving more citations than journal b regardless of whether if of a is 10 and of b is 5 , or if a is 30 and b is 15 .essentially , we demonstrate that the relative differences in ifs are more relevant than the absolute differences . the benefit if ratio relation shows that in order to achieve a high benefit , e.g. , with a confidence of 90% , one has to target a journal with .5 higher if than the reference journal .the reverse is also true .aiming for a journal with if five times lower than a high - if reference journal still gives some chance ( % ) of doing as well or better than the high - if journal . that the probability of receiving more citations is not significantly different for small relative differences in if should therefore be born in mind when researchers strive , sometimes at great expense , to publish an article in a journal with marginally higher if .the fundamental reason for the gradual change in probabilities lies in the fact that even journals with very different ifs have broad and largely overlapping citation distributions ( fig 2 ) . in this paperwe have focused on a question of receiving more citations , regardless of how many .no sensible predictions are possible of how many more citations will be received , because the citation distributions are very broad .one can only estimate the average expected difference in citation counts , which will simply be the difference in ifs . on the other hand ,it is perfectly justified to ask a question of a probability of receiving times as many citations in one journal as opposed to another , as this is just the modification of our original question of receiving times more citations .for example , we calculate that in order to make obtaining 2 as many citations very likely ( % ) requires targeting a reference journal with 10 times higher if . to summarize , the journal impact factors are useful in guiding publication strategy , but it is important to understand when the benefits are significant .[ [ s1_table ] ] s1 table .+ + + + + + + + + * the citation benefit matrix for biochemistry journals . 
* citation benefit of an article in a biochemistry journal in row m ( numbered 1 - 24 ) with respect to the journal listed in column n ( numbered 1 - 24 ) .for example , article published in biomacromolecules ( row 12 ) has 66% probability of receiving more citations than the article in biochemistry ( column 20 ) .this work uses web of science data by thomson reuters provided by the network science institute and the cyberinfrastructure for network science center at indiana university .we thank andras muhlrad for selecting the biochemistry journals .fr acknowledges nsf grant sma-1446078 .borgman cl ( 2007 ) scholarship in the digital age : information , infrastructure , and the internet ( the mit press , cambridge ) .tenopir c & king dw ( 2000 ) towards electronic journals : realities for scientists , librarians , and publishers ( sla publishing , washington , dc ) .rowland jfb ( 1982 ) the scientist s view of his information system. journal of documentation 38(1):38 - 42 .lawrence s ( 2001 ) online or invisible .nature 411(6837):521 .kurtz mj & bollen j ( 2010 ) usage bibliometrics. annual review of information science and technology , ed cronin b ( information today , inc . , medford , nj ) , vol 44 , pp 1 - 64 .pepermans g & rousseau s ( 2015 ) the decision to submit to a journal : another example of a valence - consistent shift ?journal of the association for information science and technology doi : 10.1002/asi.23491 .ravetz jr ( 1971 ) scientific knowledge and its social problems ( oxford university press , new york ) .gordon md ( 1984 ) how authors select journals : a test of the reward maximization model of submission behavior .social studies of science 14:27 - 43 .luukkonen t ( 1992 ) is scientists publishing behavior reward - seeking ?scientometrics 24(2):297 - 319 .hirsch je ( 2005 ) an index to quantify an individual s scientific research output . pnas 102(46):16569 - 16572 .verma i m ( 2015 ) impact , not impact factor .proceedings of the national academy of sciences ( pnas ) 112(26):7875 - 7876 .garfield e ( 2006 ) the history and meaning of the journal impact factor .jama 295(1):90 - 93 .rousseau s & rousseau r ( 2012 ) interactions between journal attributes and authors willingness to wait for editorial decisions .journal of the american society for information science and technology 63(6):1213 - 1225 .glnzel w & moed hf ( 2002 ) journal impact measures in bibliometric research .scientometrics 53(2):171 - 193 .garfield e ( 1972 ) citation analysis as a tool in journal evaluation .science 178:471 - 479 .dora ( 2012 ) san francisco declaration of research assessment .( retrieved from http://www.ascb.org/files/sfdeclarationfinal.pdf ) .hicks d , wouters p , waltman l , de rijcke s , & rafols i ( 2015 ) bibliometrics : the leiden manifesto for research metrics .nature 520:429 - 431 .seglen po ( 1992 ) the skewness of science .journal of the american society for information science 43(9):628 - 638 .seglen po ( 1997 ) why the impact factor of journals should not be used for evaluating research .bmj : british medical journal 314(7079):498 - 502 .calcagno v , et al .( 2012 ) flows of research manuscripts among scientific journals reveal hidden submission patterns .science 338(6110):1065 - 1069 .larivire v & gingras y ( 2010 ) the impact factor s matthew effect : a natural experiment in bibliometrics .journal of the american society for information science and technology 61(2):424 - 427 .bar - ilan , j. 
( 2010 ) .rankings of information and library science journals by jif and by h - type indices .journal of informetrics , 4(2 ) , 141 - 147 .stringer m , sales - pardo m , & amaral la ( 2008 ) effectiveness of journal ranking schemes as a tool for locating information .plos one 3(2):e1683 .redner , sidney ( 1998 ) how popular is your paper ?an empirical study of the citation distribution .the european physical journal b - condensed matter and complex systems 4 : 131 - 134 .weale ar , bailey m , & lear pa ( 2004 ) the level of non - citation of articles within a journal as a measure of quality : a comparison to the impact factor .bmc medical research methodology 4(14 ) .schubert a & glnzel w ( 1983 ) statistical reliability of comparisons based on the citation impact of scientific publications .scientometrics 5(1):59 - 74 .moed hf , van leeuwen tn , & reedijk j ( 1999 ) towards appropriate indicators of journal impact .scientometrics 46(3):575 - 589 .radicchi f , fortunato s , & castellano c ( 2008 ) .universality of citation distributions : toward an objective measure of scientific impact .proceedings of the national academy of sciences 105(45 ) : 17268 - 17272 .
choosing the best scientific venue for the submission of a manuscript , with the aim of maximizing the impact of the future publication , is a frequent task faced by scholars . in this paper , we show that the impact factor ( if ) of a journal allows this goal to be pursued rationally . we take advantage of a comprehensive bibliographic and citation dataset of about 2.5 million articles from 16,000 journals , and demonstrate that the probability of receiving more citations in one journal compared to another is a universal function of the if values of the two journals . the relation is well described by a modified logistic function , and provides an easy - to - use rule to guide publication strategy . the benefit of publishing in a journal with a higher if value , instead of another with a lower one , grows slowly as a function of the ratio of their if values . for example , receiving more citations is achieved in 90% of the cases only if the if ratio is greater than , while targeting a journal with an if 2 higher than another brings only a marginal citation benefit .
in this video ( ref ) , we show that when a liquid drop impacts a superhydrophobic powder , the drop rebounds with a powder coating that can `` freeze '' the oscillations and yield a deformed ( i.e. non - spherical ) liquid marble . for water drops , the critical impact speed for the onset of this phenomenon is m / s . repeating the experiments with more viscous drops yields even more deformed shapes .
this document accompanies fluid dynamics video entry v83911 for the aps dfd 2012 meeting . in this video , we present experiments on how drop oscillations can be `` frozen '' using hydrophobic powders .
the scientific case for a medium resolution spectrograph with high throughput , and covering both optical and near infrared , is substantial .a brief resume would include time - resolved studies of planetary transits , where spectra taken at ingress and egress provide information on the planetary atmospheres ; -ray bursters , where the spectral evolution during the fading afterglow provides an indication of the interaction of the fireball with its surroundings , and therefore information on the progenitor ; studies of early structure and pre - galactic clouds through spectra of absorption line systems in distant quasars ; source identifications and redshifts of star - forming galaxies at high redshift detected in submillimetre surveys ; supernovae and the interaction between their ejecta and previous episodes of mass loss , leading to estimates of chemical abundances , density distributions and ejecta masses ; novae and accreting binary stars , where indirect techniques such as echo and eclipse mapping and doppler tomography provide detailed information on accretion flows onto neutron stars , black holes and white dwarfs at micro - arcsecond scales ; echo mapping of accretion in active galaxies and in microlensing studies , where detailed information can be obtained on both lensed and lensing object from the time - resolved spectra .all of these studies require only a small field of view and a spectral resolving power ; high time resolution is also important .the coverage of both optical and infrared bands is essential , not only in measuring the spectral energy distribution as it evolves , but also in the relationship between spectral lines at widely different wavelengths , and in the accessibility of the redshifted uv and optical lines in objects at high redshift . consequently , one of the instruments identified for a second generation eso vlt instrument suite was a spectrograph of medium resolving power ( ) , with a small field of view ( arcsec ) and spectral coverage over almost a decade in wavelength from the atmospheric cutoff at 320 nm to the long wavelength end of the k band at 2400 nm .the capability to take relatively short exposures was also cited as important , and the instrument was required to deliver a factor improvement in throughput over current instrumentation .this study was carried out in preparation for a response to eso , but not taken further owing to changes in the manufacturing capability for the detectors .an increase in throughput by a significant factor over existing front - rank instrumentation is challenging .one improvement is to replace the cross - dispersing grating in a classical echelle with prisms , as in ucles at the aat ( walker & diego 1985 ) and bhros on gemini ( diego et al 1997 ) .this should increase the throughput of the cross - dispersing optics by a factor of perhaps 1.5 , and also reduce the polarisation effects between dispersing elements suffered in cross - dispersed spectrographs . alternatively, a non - echelle spectrograph could be feasible , with dichroics splitting the optical into two beams and three beams in the infrared , where the detector format is more problematic .this beamsplitting allows the optics and gratings to be optimised for each beam , but is inefficient , and allows spectral leakage .consequently , the overall gains are likely to be modest , and such a spectrograph would be complex . 
in order to achieve the increased performance required , a radical , but optically simple design is needed .we describe such a concept here , a multi - order spectrograph that takes advantage of the intrinsic colour resolution of superconducting tunnelling junction ( stj ) detectors .stjs are the first detectors to be able to intrinsically distinguish colour in the uv - ir band ( perryman et al 1993 ) , and , in their tantalum form , have sufficient wavelength resolution to separate the spectral orders , eliminating the need for order - sorting optics .we calculate that with 5000-pixel linear arrays of stj detectors , most of the 3202400 nm range can be covered at a resolving power with orders 15 or 16 , using a single detector system .the simplicity of the design enables a higher throughput to be reached .stjs have a sensitivity exceeding 70% ( uncoated ) through most of the optical band .this drops to a few percent into the infrared , but the coverage of the entire band compensates for the lower quantum efficiency of the detectors .because stj detectors are photon - counting , high time resolution information is preserved .the exposure time can be constructed non - destructively _ postfacto _ , at the data analysis stage .this is particularly important for variable objects , such as -ray bursters , where the intensity changes are not predictable beforehand .it also eliminates the overheads of readout time , which can be very large in the case of non - frame - transfer ccds for short exposure times , especially for echelle formats : this in itself can lead to factors up to 2 increase in throughput , for short exposures .a further quality of stjs is that they have no readout noise , and insignificant dark noise . despite the low readout noise of the best current - generation ccds ,this still causes a significant reduction in s / n ratio in ccd echelle spectrographs .the gain is particularly marked in the infrared , where current infrared detector technology can not hope to match the stj performance in this respect .larger format stj arrays could be envisaged for the future , but to keep within currently feasible technology we restrict this instrument concept to two linear arrays ( the second for sky subtraction ) .this means that the spectrograph will be optimised for sources of small angular extent , the effective field of view being set by the spatial extent of each pixel .it may , however , be possible to provide some spatial resolution as an upgrade path relatively simply .initial studies for an stj - based spectrograph working in the uv were presented in the hstj proposal for the hubble space telescope ( griffiths et al 1997 ) .a more detailed exposition of this concept , including a full science case , can be found in cropper et al ( 2002 ) .stj detectors were first developed for the x - ray band , but their potential for use in the uv / optical / ir band was realised by perryman et al ( 1993 ) .they are the first detectors to provide intrinsic colour discrimination at optical wavelengths .this is possible because the energy to break the cooper - pairs in the superconductor is small compared to the energy of an optical photon .the details of the operation of stjs can be found in perryman et al ( 1993 ) , but the essence is that more or fewer electrons are generated depending on whether the incident photon is blue or red . 
it then remains to measure this charge , to determine its wavelength .an stj consists of two metal films , separated by an aluminium oxide barrier , across which the electrons tunnel .this ` sandwich ' constitutes a pixel .it is supported on a substrate , such as sapphire , though which it is illuminated .each pixel requires its own electrical connection : generally the bottom metal film is connected in common with the other pixels , while the top metal film has a unique connection .the device requires a magnetic field bias to suppress the josephson current .a schematic is given in figure [ fig : stj_pixel ] .current generation stjs use tantalum metal films .pixel sizes range from m .the stj arrays in the s - cam2 instrument in use at the 4.2 m william herschel telescope at la palma have a format of tantalum pixels in a staggered rectangular array ( perryman et al 2001 and references therein ) .each pixel has its own independent preamplifier and analog electronics chain .this array has now been in use for more than two years , and superseded a first generation array of the same format .s - cam2 provides a spectral resolving power of over the 300650 nm band .the timing accuracy for the photon events is , and the maximum count - rate is 5000 counts / pixel / sec .a new pixel array for the camera is under test , and developments using other materials to provide higher intrinsic spectral resolution are under way .scientific results from s - cam include the determination of qso redshifts ( de bruijne et al 2002 ) , studies of the accretion regions and streams in eclipsing cataclysmic variables ( perryman et al 2001 , bridge et al 2002a , b ) , and observations of the crab pulsar ( perryman et al 1999 ) . to reach the wavelength coverage required ,the spectrograph will need the maximum array length in the dispersion direction .the limits here are set by the wafer size , which we assume nominally to be 5 cm , together with the minimum practical pixel size , m , giving a maximum length of 2500 pixels .this is insufficient ( see section [ sec : concepts ] ) , so two such arrays must be butted to give 5000 pixels in the dispersion direction .while more than two arrays could be used , thus providing increased wavelength coverage , other aspects ( optics , thermal issues ) rapidly become more problematic . 
because in this concept we have maintained the principle that at most only modest technological developments should be required for the realisation of the concept , we reserve such enhancements for the future and limit the array lengths to 5000 pixels .yields for current stj devices are sufficiently high to give confidence that each 2500 pixel strip can be fabricated successfully , so that our concept is feasible .the join region can be made small .moreover , one of the major difficulties limiting the size of _ square _ stj arrays is the access for electrical connections to each pixel : in _ linear _ arrays this is much easier .cross - talk betwen pixels is also lower for the linear arrays since each pixel has half the number of adjacent pixels .the spectrograph concept assumes a ` small ' field of view ( arcsec ) .the most conservative approach would be to use two strips of detector pixels , one for star and one for sky subtraction , each only one pixel wide .this results in a total of pixels , each with easy access for electrical connections .this number of pixels is acceptable in terms of electrical connections and signal processing chains ( see section [ sec : concepts_electronics ] ) .there is some freedom in the width ( in the spatial direction ) of the pixels .such rectangular pixels may also slightly increase the intrinsic stj wavelength resolution without a concommitant increase in array length .tantalum stj array currently in use in s - cam2 ., title="fig : " ] + tantalum stj array currently in use in s - cam2 ., title="fig : " ] some spatial information along the slit could be achieved using droids .these could be introduced in our concept at a later upgrade path , as their technology matures .droids ( verhoeve et al 2000 ) consist of two stjs placed at the end of a tantalum detection strip ; the summed charge is related to the energy of the photon , while the location at which the photon was detected can be recovered from the difference in the two charges collected by the stjs .verhoeve et al ( 2000 ) find that m devices provide an effective 11 pixels , for just two electrical connections .the issues here are whether the droids can be laid down efficiently in such large linear arrays , their spectral resolution is sufficient for the order separation ( for the spectral formats below this appears to be the case ) , and whether it is more desirable scientifically to have the sky subtraction array further displaced from the source array than is possible over the effective 11 pixels provided by the droid .the key to the spectrograph concept is that the intrinsic wavelength resolution of the stjs is used to perform the order sorting .tantalum stjs with m pixels have a theoretical wavelength resolving power of 20 at 320 nm ( peacock et al 1998 ) , decreasing with wavelength to 8.1 at 2400 nm .the resolving power depends on the pixel size : m pixels have slightly better resolving power of 22 at 320 nm . in the s - cam2 array ,a practical resolving power of 12 is reached at 320 nm ( rando et al 2000b ) , limited by infrared background and electronics noise .( the spectral resolving power of droids is % poorer depending on pixel size , worse for larger pixels . 
)other materials such as hafnium and molybdenum provide superior spectral resolving power ( eg verhoeve et al 2000 ) , with hafnium exceeding 100 at 320 nm .however although some junctions have been fabricated from such materials , the technology is not yet sufficiently mature to use in this concept , and the spectral resolution of tantalum is in any case sufficient .stjs are photon - counting , and intrinsically fast devices . in practise ,their time resolution is set by the pulse - counting electronics and the resolution of the time - tagging electronics . in s - cam , this accuracy is in the regime of tens of microseconds .the s - cam2 devices count up to in excess of 5000 counts / sec / pixel .the detector resolution degrades slightly with countrate ( % at 5000 c / s / pix ) , mostly due to pileup in the currently realised pulse height analysing electronics ( rando et al 2000b ) .faster countrates would be possible , with revised electronics ; however for this concept we take the approach that the current countrates of 5000 should provide an adequate working count - rate limit for sufficient dynamic range .the quantum - efficiency of stjs exceeds 70% in the u r bands , and drops towards longer wavelengths , reaching 20% at 1000 nm and 5% at 2400 nm ( see ( peacock et al 1998 , verhoeve et al 2000 . ) .this is the result mainly of reflection off the tantalum film .other ( less well developed ) stj materials , hafnium and molybdenum , have reflective qualities similar to tantalum in the infrared . from the point of view of infrared sensitivity , there are therefore no particular gains to be made from using them. the infrared reflectance can potentially be improved by the deposition of an appropriate coating on the substrate , before the pixel structure is laid down .currently , little work has been done to investigate this possibility , but this would be obviously desirable for this spectrograph .the steep drop in quantum efficiency beyond m in the infrared aids in ensuring the greater thermal background flux at longer wavelengths does not saturate the detector countrate . in order to sort order 2 from 1, a detector needs to have a resolving power of 2 .generalising , the highest order a detector can sort is set by the resolving power of the detector at that wavelength .the increase in order number with decreasing wavelength implies that the resolving power needs to be highest at the extreme blue end of the operating range .the resolution of the stjs increases with decreasing wavelength .this is in the right sense to match the order sorting requirement . in order to minimise the possibility of allocating a photon to the incorrect order, we need some resolution margin .if we use the current s - cam2 resolving power of at 320 nm , then for the maximum usable order is . for these low order numbers the resolving power required to sort lower orders drops more quickly than the intrinsic resolving power of the stj , so all lower orderscan also be sorted ( and more easily ) .the basic constraints on multi - order spectrographs are simply derived from first principles .we start from the grating equation : where is the order number , is the grating spacing ( _ i.e. 
_ is the number of grooves per unit length ) , is the angle of the incident ray , and is the angle of the diffracted ray .generally and are constant .this indicates that at any position on the detector ( implying constant ) for all orders , constant .so for example , if a particular pixel in order 1 corresponds to 1000 nm , then light from 500 nm in order 2 will also fall on the pixel , 333 nm in order 3 , and so on .if in our case we allow the maximum order to be 5 , with a central wavelength of 350 nm , then order 1 will be at 1750 nm . alternatively ,if we wanted a central wavelength of 2100 nm , with coverage down to 350 nm , then the highest order will be 6 .the actual wavelength range covered in each order is determined by differentiating the grating equation with respect to wavelength : here may , for example , correspond to one pixel on the detector .thus corresponds to the dispersion .we can see that at a particular position on the detector ( again , a constant ) , the dispersion , , is proportional to . if we calculate the resolving power , then at a particular position on the detector , with ( is the wavelength in first order ) , and ( a constant ) , then , _ i.e. _ constant , independent of order number .thus although the numerator in is halved , so has the step in wavelength corresponding to , say , one pixel .the consequence of this is that all orders have the same resolving power at a particular pixel on the detector .the next step is to calculate the length of the spectrum ._ within _ each order , equation [ eq : grating_dif ] indicates that the dispersion is approximately constant if does not change too much ( which is generally true ) .thus , within each order , each pixel corresponds to a fixed wavelength interval .now if we need a resolving power corresponding to pixels ( for nyquist sampling of the slit ) , then the dispersion where is the centre wavelength of the first order .for example , if we require a resolving power of at 2000 nm , this corresponds to 0.2 nm / resolution element , or 0.1 nm / pix ( nyquist ) .the wavelength coverage in each order is then simply set by the number of pixels in the detector . for the above example, a 5000 pixel detector provides coverage in first order of nm centred on 2000 nm ,_ i.e. _ nm . 
in second order it provides nm centred on 1000 nm , so nm , and so on . it is evident in this example that there is a gap between 1125 and 1750 nm not covered by the detector . gaps can be fixed either by lowering the required resolution , or increasing the detector length . neither of these may be feasible , in which case the location of the gaps has to be optimised . the blaze curve of a grating is approximately unchanged with order if plotted as a function of , instead of . the efficiency falls off from the blaze , typically reaching half the peak efficiency at and , where is the blaze wavelength for order . if we adopt the criterion that the length of each order is defined by the wavelengths at which the efficiency drops to below that of prior or following orders , then the spectrum appears to shorten gradually as the orders ( and thus order overlap ) increase . this is familiar from the raw images taken by cross - dispersed echelle spectrographs . it is important to note , however , that some photons will arrive in a particular order beyond this wavelength , and will be there for collection if there are active detector pixels to detect them . we now apply these considerations with the following inputs : the maximum order is , set by the stj resolving power with factor ; the minimum wavelength should be the atmospheric cutoff at 320 nm ; the resolving power should be and the detector length should be limited to 5000 pixels , set by realistic expectations of array size . we have given an example in section [ sec : spectral_length ] above with a resolving power of of . this has gaps between orders for orders up to 4 . a reduction in resolving power to 8000 closes the gap between order 3 and 4 , and leaves only those between 1 and 2 and between 2 and 3 ( only 50 nm around 700 nm in this latter case ) . further adjustments can be made in the resolving power , and in the centre wavelength of the first order , but assuming nm and then the spectral coverage in figure [ fig : echelle_pattern](upper ) is obtained . this has the advantage of covering the u z bands and the h band in the infrared . for a highest order 6 , the centre wavelength of is 2100 nm , in the k band , and the coverage is now u r , z , j and k bands , missing h and z. this is shown in figure [ fig : echelle_pattern](lower ) . the wavelengths covered in each band are shown in table [ tab : echelle_orders ] .

first configuration ( first - order centre at 1750 nm ) :

order | centre wavelength ( nm ) | dispersion ( nm / pix ) | resolving power r | long - wavelength end ( nm ) | short - wavelength end ( nm ) | overlap ( nm ) | overlap ( pix )
1 | 1750 | 0.1094 | 8000 | 2023 | 1477 | -465 | -4250
2 | 875 | 0.0547 | 8000 | 1012 | 738 | -64 | -1167
3 | 583 | 0.0365 | 8000 | 674 | 492 | 14 | 375
4 | 438 | 0.0273 | 8000 | 506 | 369 | 36 | 1300
5 | 350 | 0.0219 | 8000 | 405 | 295 | 42 | 1917

second configuration ( first - order centre at 2270 nm ) :

order | centre wavelength ( nm ) | dispersion ( nm / pix ) | resolving power r | long - wavelength end ( nm ) | short - wavelength end ( nm ) | overlap ( nm ) | overlap ( pix )
1 | 2270 | 0.1419 | 8000 | 2625 | 1915 | -603 | -4250
2 | 1135 | 0.0709 | 8000 | 1312 | 958 | -83 | -1167
3 | 757 | 0.0473 | 8000 | 875 | 638 | 18 | 375
4 | 568 | 0.0355 | 8000 | 656 | 479 | 46 | 1300
5 | 454 | 0.0284 | 8000 | 525 | 383 | 54 | 1917
6 | 378 | 0.0236 | 8000 | 437 | 319 | 56 | 2357
7 | 324 | 0.0203 | 8000 | 375 | 274 | 54 | 2688

exploring further alternatives , if we place order 1 in the j band , the gap to order 2 includes r and i bands , which is clearly unsatisfactory .
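the entries in the tables above follow directly from the relations of this section ; as a cross - check , the minimal sketch below ( our own illustration , assuming nyquist sampling of two pixels per resolution element ) reproduces the per - order coverage of the first configuration , and the second follows by changing the first - order centre wavelength to 2270 nm :

```python
def order_format(lambda_c1=1750.0, R=8000, n_pix=5000, n_orders=5):
    """Per-order wavelength coverage for a multi-order spectrograph whose
    order sorting is done by the detector itself (no cross-disperser).
    lambda_c1 is the first-order centre wavelength in nm; two-pixel
    (Nyquist) sampling of a resolution element is assumed."""
    rows = []
    for m in range(1, n_orders + 1):
        centre = lambda_c1 / m                 # m * lambda is constant at a given pixel
        step = lambda_c1 / (2.0 * R * m)       # nm per pixel in order m
        lam_max = centre + 0.5 * n_pix * step
        lam_min = centre - 0.5 * n_pix * step
        # overlap (> 0) or gap (< 0) between this order and the next one up
        nxt = m + 1
        next_max = lambda_c1 / nxt + 0.5 * n_pix * lambda_c1 / (2.0 * R * nxt)
        overlap_nm = next_max - lam_min
        rows.append((m, centre, step, lam_max, lam_min, overlap_nm, overlap_nm / step))
    return rows

# note: the highest order that can actually be used is capped separately by the
# detector's intrinsic resolving power at the blue end (about 12 at 320 nm for
# the tantalum devices discussed earlier, before any safety margin is applied)
for m, centre, step, lmax, lmin, ov_nm, ov_pix in order_format():
    print(f"{m} {centre:7.0f} {step:7.4f} {lmax:6.0f} {lmin:6.0f} {ov_nm:6.0f} {ov_pix:7.0f}")
```

running the same function with lambda_c1 = 2270 and n_orders = 7 reproduces the second table to rounding .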
on the other hand , to cover j , h and k bands , the commensurability of their bandcentre wavelengths is approximately in the ratio 4:3:2 , so that order 2 should be selected to be at nm .unfortunately , this requires a large number of orders , to reach nm , for which an stj spectral resolving power of would be required .this is out of the reach of tantalum devices , while insufficient experience has been gained with other materials such as molybdenum for their use in this concept .this arrangement of orders may be of use in the future . of the earlier two possibilities , the first , with in the h band , provides optimal and almost complete coverage through the optical band up to z , and complete coverage of h. in the second , where is in the k band , an important consideration is the thermal infrared , making this more challenging to realise as a practical design .the loss of some of the i band may also be undesirable ; on the other hand , the j band is gained .a conservative approach would place in the h band , but further consideration may well conclude that the larger infrared coverage provided by in the k band is more desirable scientifically , and also technically feasible .our optical design concept is relatively simple : a multi - order spectrograph without cross disperser , using an off - axis collimator and folded schmidt camera with an accessible focus .the main challenges here relate to the infrared baffling and long wavelength supression , together with the space required at the focus of the camera for the uniform magnetic field system to bias the stj array . using the principles in bingham ( 1979 ) ,we have made a preliminary optical layout ( figure [ fig : ses_optics ] ) .this is based on an off - axis collimator of diameter 261 mm feeding a grating of length 319 mm at a angle of incidence .the grating has a nominal 278 grooves / mm , blazed at .rays are diffracted at from the grating into a schmidt camera of 655 mm focal length ( ) , with a perforated flat folding mirror to provide access to the focus .the window of the cryostat would be figured to flatten the field , and may need to be achromatic .optimisation is required , particularly regarding the camera back focal distance ( currently mm ) to provide sufficient access to the array within the cryostat , and as regards thermal aspects ( see section [ sec : infrared_issues ] ) .+ this preliminary design is relatively straightforward and compact , with the emphasis on being conservative. it may be possible to increase the nyquist - sampled slit width to more closely match the 0.8 arcsec median seeing typical at most front - rank observatories .this would increase the grating and optics , in particular the camera , and in order to match the fixed detector array size a faster camera would be required , which will be more difficult .the grating size increase can be limited by immersing it , as in the original hros concept for gemini s ( darrigo et al 2000 ) .other concepts are also possible . an all - reflecting camera based on a three - mirror anastigmat ( tma )design may be superior .off - axis aspheric mirror technology has made significant strides in the past few years , and such designs are now routinely being considered .due to atmospheric dispersion , the light from celestial sources is split into spectra with the blue end pointing towards the zenith .the length of the spectrum is proportional to the tangent of the zenith distance . 
in a traditional echelle spectrograph at nasmyth or coude ,a beam rotator is usually used to align the atmospheric dispersion along the entrance slit .the dispersion then adds to , or subtracts from , the cross dispersion .no light is therefore lost at the aperture , although the echelle orders can ` drift ' slightly on the detector during the process of an observation .while losses will be mitigated by using stj pixels that are rectangular , it will be necessary to consider the effect of atmospheric dispersion .a possible solution uses two counter - rotating fused - silica prisms ( ` risley prisms ' ) ahead of focus , perhaps with an identical but inverted set near focus .the first prism - pair compensates for the atmospheric dispersion and provides a white - light image in the vicinity of the spectrograph entrance aperture .however , if the aperture were located at this point , the pupil would be displaced in the vicinity of the spectrograph collimator , and it would also be dispersed .if this is not acceptable , a second ( inverted ) pair of prisms is included , which compensates for both the dispersion and displacement of the pupil in the spectrograph .the overall beam - offset created by the two prism - pairs , which varies with zenith distance , is compensated by a telescope offset .an atmospheric dispersion compensator of any design in the converging beam from the telescope would require an aberration - study , and may require curved surfaces on the prisms .tantalum stjs are sensitive to infrared photons at wavelengths beyond the k band .the sensitivity at wavelengths above m is shown in figure 8 of peacock et al ( 1998 ) ( curve labelled ) .it decreases to a few percent at m , and then drops precipitously , before recovering at wavelengths longer than m .infrared photons beyond the range of interest have negative consequences as follows : 1 .the photons generate only small amounts of charge , so if detected in isolation , they contribute to the electronic noise peak at low pulse heights and may advance the tail of this noise into the first order peak ; 2 .because the energy of these photons is small , for a given flux the photon number is high , and the probability of one of these arriving within the time constant of the pulse - counting electronics is high : this adds a small amount of charge which broadens the spectral peak and reduces the resolution ; 3 .the large photon flux can exceed the maximum count rate of the pulse - counting electronics , leading to saturation of the device , and 4 .the infrared photon flux can induce local heating of the detector substrate .these photons will have been emitted by the source under study , as well as the thermal environment of the instrument and cryostat , particularly the warm cryostat window . in s - cam ,stringent measures were taken to eliminate the infrared flux ( rando et al 2000a ) , firstly to baffle the field of view seen by the detector , and then to minimise the flux from the cryostat window and baffles by using two filters of successive coldness of kg2 glass .this arrangement depresses the throughput through the optical by % , with a turnover at nm , depressing the 1000 nm flux by a factor of . 
nevertheless , the remaining infrared flux is still the major contributor in reducing the spectral resolving power of the stjs from to .a spectrograph has the advantage over a camera that any flux seen by the spectrograph at wavelengths longer than the red extreme of order 1 will be directed to the side of the detector in the direction of the zero order .this means that the infrared source flux will be less of a problem .however , the thermal loading from the cryostat window will remain , and , indeed , be more significant because of the large window size .in addition , because our concept has an infrared capability , the cutoff of any infrared filter will need to be at wavelengths longer than the red extreme of order 1 . in order to keep the thermal background within acceptable limitsit will be necessary to incorporate the slit and optics grating within a cold environment .this will allow rejection of all source _ and _ window - induced flux .it adds to the cost of the instrument and influences the cryogenic design significantly , depending strongly on whether is in the h band or in the k band .one technique for unwanted infrared flux rejection is to place the detector at the focus of a spherical or toroidal mirror , except for an aperture to accommodate the incoming beam . in this casethe detector experiences a radiated thermal environment approximately appropriate to its surface temperature .this also provides an opportunity for improving the infrared sensitivity : it may be possible to refocus reflected infrared source photons back onto the detector , permitting a second chance for detection .this will require a tilted focal plane , which is not implemented in the initial optical design in section [ sec : optical_design ] .the calibration requirements are relatively standard , involving detector flat - fielding and wavelength calibration. the infrared night sky lines may be sufficient to provide a continuous monitoring of the wavelength scale for all wavelengths through the superimposed spectral orders on the detector , but external lamps would still be required .a calibration unit would therefore be incorporated , providing an appropriate selection of arc and continuum lamps for the optical and infrared .the flat - fielding will be wavelength dependent , so filters would be required to isolate a single order for the flat - field calibrations .this would also have a blocked position for checks on the ( low levels of ) dark noise .the filter unit will be in the main beam so that it can also be used for astronomical observations , for example should very bright sources be observed in a single order with very high time resolution .tantalum stjs operate at a temperature of 0.3 k. this is just within the reach of a pumped he3 cryogenic system , but it will probably be better from the point of view of operating costs to use an unpumped he4 system , with a final sorption refrigerator stage , as in s - cam ( verveer et al 1999 ) . 
although the stj array will not , in itself , generate significant heat , the parasitic heat injection through the large harness would indeed be significant .nevertheless , considerable experience has been gained in the past decade in such cryogenic instrumentation on ground - based telescopes , for example the scuba submillimeter bolometer array on the james clerk maxwell telescope on mauna kea ( 0.075k ) , and s - cam2 .parasitic heat loads on the detector and its cold stage result from radiation from the surroundings , through the support structure for the cold stage and through the electrical cryoharness .the first of these imposes a requirement for nested thermal shrouds .the second requires careful selection of material and strut design .materials such as kevlar strings can be considered , but it is likely that stainless steel struts are more satisfactory mechanically , without too significant a thermal disadvantage .the mounting of the stj array to a support structure would require non - magnetic materials with low thermal expansion , in order to match the characteristics of the stj substrate .outgassing in this coldest of environments close to the stjs must also be minimised , as the contaminants will be trapped preferentially on the coldest part of the system , which includes the detector array .the material of choice will probably be a ceramic material .care will need to be exercised to interface this structure to the cryocooler coldfinger material .significant experience has been gained in the operation of the s - cam2 cryocooler , and improvements identified , for example in the inner surface emissivity optimisation , and a reduction in support strut cross - section .this would be valuable in this spectrograph concept , with the main differences being a larger focal plane assembly and magnetic bias system , a larger cryostat window and infrared blocking elements , different constraints on back focal distance from the optical design and a very much larger parasitic heat loading through the cryoharness. a particular consideration will be the cooling power of the he3 sorption refrigerator .the other general consideration for the cryosystem is the cooling of the optics , alluded to in section [ sec : infrared_issues ] .this will entail either the incorporation of the he4 cryostat within an ln or other cryostat , or the interfacing of the two cryostats . 
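to give a feel for the radiative term in this parasitic - load budget , the sketch below evaluates the standard grey - body estimate for two nested stages ; the areas , temperatures and emissivities are our own illustrative assumptions , not values from the s - cam or hstj designs :

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_load(area_m2, t_hot, t_cold, eps_hot, eps_cold):
    """Grey-body radiative load between two large facing surfaces
    (standard textbook estimate; the numbers below are purely illustrative)."""
    eps_eff = 1.0 / (1.0 / eps_hot + 1.0 / eps_cold - 1.0)
    return eps_eff * SIGMA * area_m2 * (t_hot**4 - t_cold**4)

# invented geometry: a 0.02 m^2 detector stage seen by its surrounding 4 K shield,
# and a 0.1 m^2 inner shield seen by a 77 K outer shroud; gold-like emissivity 0.05
print(radiative_load(0.02, 4.0, 0.3, 0.05, 0.05))   # ~1e-8 W: negligible on the 0.3 K stage
print(radiative_load(0.10, 77.0, 4.0, 0.05, 0.05))  # ~5e-3 W: why nested shrouds are needed
```

the conductive loads through the support struts and the cryoharness , discussed next , are estimated analogously by integrating the thermal conductivity of each element over its temperature span and multiplying by its area - to - length ratio .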
for the hstj studies we investigated different methods of providing a low thermal conductivity harness through industrial studies ( see griffiths et al 1997 ) .these included ribbon cables , fine wires , superconducting leads on a ceramic substrate and thin films on a kapton substrate .it was found that fine wires made of manganin or stainless steel , or niobium tracks laid on kapton provided acceptable solutions in terms of electrical conductivity and thermal loading .the surface area , ease of handling and routing , ease of manufacture and ease of making connections were also important considerations , and these favoured the niobium / kapton solution , which also allowed greater uniformity of the electrical characteristics of the harness ( particularly the capacitance ) .there are a small number of common return lines of lower resistance .the thermal calculations included the conduction along the tracks and through the kapton , and included radiative loading and losses .in general the conduction along the tracks exceeds that through the kapton .the effect of radiation impinging on the cable at temperatures above 20k is important , requiring the harness to be covered with a low emissivity surface such as a gold coating .the cryoharness must be temperature - clamped on the 4k he4 stage in order to limit the requirements on the sorption coolers . in practise , for the spectrograph, the more critical section is the short segment from this stage to the stj arrays .this would be bump - bonded to make the connection to the stj detector contacts .a magnet subsystem is required to produce a constant , uniform bias field in the presence of a possibly magnetically noisy environment during the operation of the stj detectors .it must also be possible to vary the magnetic field during the cooler recycling .this task must be carried out within the restricted accommodation available in the vicinity of the detectors , and operate in conjunction with the thermal infrared filtering and baffling in this locality .the design of the magnet subsystem depends on the uniformity and stability of the magnetic field required by the stj detectors .it also depends on the magnetic environment of the stjs ; in particular , the presence of motor - driven mechanisms and perhaps compressors in the vicinity of the instrument may impose a magnetically noisy environment with rapid transients .this would drive the magnetic field controller time constants to be shorter .a design adapted from the pre - proposal studies of the magnet subsystem for hstj ( griffiths et al 1997 ) should be appropriate .this consisted of a screened helmholtz coil assembly and a set of magnet power and control electronics .a scaled - down version of this is used in s - cam .the helmholtz coils are in a -metal screening box attached to the open end of the cryostat .they operate at superconducting temperature to provide magnetic field bias and trim .magnetic field sensors are used to monitor the stability and uniformity of the magnetic field .control electronics would provide interfaces to the magnetic field sensors in the vicinity of the stj detectors , control signals for the coil power drive electronics , an interface to the instrument computer and stable and controlled power to the helmholtz coils .particular care will need to be taken with contamination .the stj detectors would be the coldest elements in the cryostat so contaminants would be deposited preferentially on them .this would require a careful selection of the materials to be used in the 
coils and coil assembly and also some assessment of the likely contamination paths and rates .each pixel in the stj arrays requires its own detector chain .the matrix readout approach investigated by martin et al .( 2000 ) has some performance disadvantages and is not appropriate for the linear arrays required here , so we have retained the approach of providing an independent electronic channel for each stj element .a -pixel array therefore needs large scale integration in order to access these pixels , with analog electronics of sufficient quality to minimally degrade the detector response .such a task has been accomplished many times before , for example in high energy collider instrumentation and even in space applications ( _ swift_-bat , barthelmy 2000 ) . using asics ( application specific integrated circuits ) , each containing channels , a sufficiently low number of circuit cards is needed so they can be placed close to the detector arays .we show in figure [ fig : electronics ] a block diagram for the data - flow electronics .one hundred asics each providing around 100 pre - amplifier and analogue shaping amplifiers could easily be accommodated on five printed circuit boards .the analog - to - digital converter ( adc ) performance required to convert the amplified charge packets to digital signals is not demanding due to the moderate energy resolution of the detectors .an asic providing 20 independent 6-bit flash adcs could easily be developed ; 500 of these would be needed and they could be accommodated on 10 printed circuit boards . these adcs would operate asynchronously . a small first - in - first - out buffer ( fifo ) ,only a few words deep , could be implemented as a field - programmable gate array ( fpga ) to buffer the output from a group of ( say ) 10 adcs and record the event timestamp .we calculate that the event rate in any group of 10 adcs would not exceed the fifo write rate .ten such fifo blocks could easily be contained within a low cost fpga .each fpga could also buffer the output from its own fifos to provide one output port for onward bussing to a dual port memory which would be written by the detector system described here .the values written to this memory would be the 6-bit pixel energy , the 14-bit pixel address and a 32bit timestamp which allows to be resolved per 24 hours .the spectrograph would also require other ( relatively standard ) instrument electronics to control and monitor the state of the instrument and cryostat , to interface with the observatory and telescope systems and to provide a user interface .such functions include temperature and magnetic field measurements and field current control , cryogen level sensing , calibration unit activation and control , filter wheel operation and slit unit control and viewing .we envisage the spectrograph to be situated at the nasmyth focus .such an arrangement considerably simplifies the overall structural and mechanical design .since the gravity vector is constant , flexure issues are eliminated , and the cryostat design is made significantly easier . 
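Returning to the event format described earlier in this section, the sketch below packs the 6-bit pixel energy, 14-bit pixel address and 32-bit timestamp into a single word and checks the implied memory bandwidth at an assumed bright-source count rate; the bit layout chosen here, the assumed 10^4-pixel array and the rate are illustrative assumptions, not a defined interface.

```python
# A small sketch of the event word implied above: 6-bit pixel energy, 14-bit
# pixel address and 32-bit timestamp packed into one word, plus a rough
# bandwidth estimate.  The packing layout and the rates are assumptions.

ENERGY_BITS, ADDR_BITS, TIME_BITS = 6, 14, 32

def pack_event(energy, addr, timestamp):
    assert 0 <= energy < 2 ** ENERGY_BITS
    assert 0 <= addr < 2 ** ADDR_BITS
    assert 0 <= timestamp < 2 ** TIME_BITS
    return (energy << (ADDR_BITS + TIME_BITS)) | (addr << TIME_BITS) | timestamp

def unpack_event(word):
    timestamp = word & (2 ** TIME_BITS - 1)
    addr = (word >> TIME_BITS) & (2 ** ADDR_BITS - 1)
    energy = word >> (ADDR_BITS + TIME_BITS)
    return energy, addr, timestamp

if __name__ == "__main__":
    word = pack_event(energy=41, addr=9876, timestamp=123_456_789)
    assert unpack_event(word) == (41, 9876, 123_456_789)
    print(f"{ENERGY_BITS + ADDR_BITS + TIME_BITS}-bit event, stored in a 64-bit word")

    # assumed bright limit of 2500 counts/s/pixel over an assumed 10^4 pixels,
    # of the same order as the peak data volume quoted in the data-handling discussion
    rate_bytes = 2_500 * 10_000 * 8
    print(f"raw event stream : {rate_bytes / 1e6:.0f} MB/s at the assumed bright limit")
```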
the spectrograph would be designed around an optical bench structure , on which are mounted the slit unit and viewing system , the spectrograph optics and calibration unit and cryostat . the analog data handling electronics would be located close to the detector array , with standard interfaces to an external unit containing the majority of instrument control and monitoring electronics and power supplies . interfaces for replenishment of the cryogenic consumables would also need to be considered . the major mechanical issue would centre on the thermal performance of the structure , both within the detector he4 cryostat , and , if present , within the larger cryostat enclosing the spectrograph optics . this is aided by the almost entirely reflective optical design in our concept . standard techniques ( for example the use of invar rods ) can be used to maintain camera focus between room and cryogenic temperatures . the slit unit and slit viewing subsystem are important , but standard items , and we would not expect these to pose any problems . the same comment applies to the calibration subsystem . the spectrograph concept would include a filter wheel located between the slit and collimator , which can be used to introduce bandpass or neutral density filters or polarisation scramblers if these are required . these will be useful in commissioning and in calibration : normally the open position would be used for observations . the spectrograph would need to operate in the host observatory control and data handling system . it is of fundamental importance that the spectrograph provide real - time feedback not only on the status of the instrument from its various sensors , but also on the instrument performance directly from the data stream . outside of an observation , use of the calibration lamps should allow detector performance , stability , freedom from electrically - induced noise and also the optical throughput to be assessed immediately . stj detectors produce a photon event stream characterised by time , position and energy .
as such ,their data outputs are more akin to those familiar with data from x - ray detectors , so that techniques and tools developed in that field could be the most readily adapted for this concept .the availability of energy information makes it simpler to retain the event list format until a final binning in spectral , temporal or spatial coordinates .a data reduction sequence could proceed approximately as described in perryman et al ( 2001 ) .the data volume could be large , up to 200 mbyte / sec , but typically it would be a small fraction of this .we have developed an instrument simulator to make predictions of the performance of the spectrograph .as inputs we use the standard data for paranal available from the eso etc ( exposure calculator ) website : telescope ut1 mirror reflectance , sky background brightness , extinction , seeing , infrared absorption .the oh sky line atlas is from rousselot et al ( 2000 ) , also used in etc : this extends down only to 624 nm , so misses sky lines at shorter wavelengths .the star spectra are from the atlas of pickles ( 1998 ) : these are at nm resolution and extend over the nm band of interest .the simulator interpolates between , or sums over grids of known transmission , reflectance , emissivity as a function of wavelength , depending on whether the input grid is more finely or more coarsely sampled that that required for the prediction .the throughput is calculated per order , then summed to obtain the overall throughput and s / n ratios , as well as the total count rate on the array , which is important for ascertaining bright limits .we limit here the reporting of our exposure estimates to that of the optical+h band configuration , but the optical+j+k configuration can just as easily be calculated .we provide details of the throughput of each element of the spectrograph below the slit for information in figure [ fig : item_efficiency ] .the ut1 mirror reflectivity is for a single reflection 3 reflections are used for nasmyth .the throughputs assume the optical design in section [ sec : optical_design ] , and use polarisation - averaged grating efficiencies for appropriate gratings in gratings catalogs .also shown is the total optical efficiency excluding and including the slit .figure [ fig : relative_efficiency ] shows the overall throughput .the seeing profile is assumed to be gaussian .the calculations use a zenith angle of , seeing of 0.8 arc sec and a slit width of 0.5 arc sec in the spectral direction and 1 arcsec in the spatial direction .the slit is the major source of losses in the spectrograph : the slit transmission is only for the above parameters .this is one area where significant improvements may be possible with optimisations of the optical design ( see section [ sec : optical_design ] ) .we have calculated star and sky spectra using the throughputs in section [ sec : throughput ] above .we show in figure [ fig : spectrum_a0v ] the resulting total count spectrum in 1000 seconds for a v=20 a0v star , and in figure [ fig : spectrum_m0v ] that for a v=17 m0v star ( both from the pickles atlas ) . these spectra are noiseless : s / n ratio calculations are shown later . 
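The slit loss quoted above for 0.8 arcsec seeing and a 0.5 x 1.0 arcsec slit can be reproduced with a short geometric estimate, treating the seeing profile as a centred two-dimensional Gaussian; this sketch neglects diffraction, guiding errors and any wavelength dependence of the seeing.

```python
# Geometric slit-transmission estimate for a centred Gaussian seeing disc
# passing a rectangular slit (diffraction and guiding errors ignored).
from math import erf, sqrt

def slit_transmission(fwhm_arcsec, slit_w_arcsec, slit_h_arcsec):
    """Fraction of a centred 2-D Gaussian PSF passing a rectangular slit."""
    sigma = fwhm_arcsec / 2.3548
    tx = erf(slit_w_arcsec / (2.0 * sqrt(2.0) * sigma))
    ty = erf(slit_h_arcsec / (2.0 * sqrt(2.0) * sigma))
    return tx * ty

if __name__ == "__main__":
    t1 = slit_transmission(0.8, 0.5, 1.0)     # parameters quoted in the text
    t2 = slit_transmission(0.8, 1.0, 1.0)     # slit opened by a factor of two
    print(f"0.5 x 1.0 arcsec slit : {t1:.2f}")
    print(f"1.0 x 1.0 arcsec slit : {t2:.2f}")
```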
for these calculationswe assume standard zero points in the literature ( johnson 1966 , bessel 1979 ) as used in the eso etc , and the collecting area appropriate for the vlt as given on the the eso etc website .the calculations have been cross - checked against the eso instrumentation predictions , and found to be consistent within % .+ + the a0v star blue continuum dominates the background even for , while in the red , the night sky lines are a significant component .these sky lines are seen more clearly in the infrared in the expanded lower plot in figure [ fig : spectrum_m0v ] , where they are well resolved .the s / n ratio calculations use the formulae in the eso - etc explanatory notes , with the simplification that there is no readout noise associated with the stjs .we show in figure [ fig : sn_ratios ] that the spectrograph provides a s / n ratio reaching per pixel in 3600 sec for an a0v star of , and it reaches this ratio in the h band for in the same time ( assuming seeing of 0.5 arcsec ) .these figures provide a realistic indication as to the limiting performance at the faint extreme . increasing the slit by a factor of 2increases the throughput through the slit significantly for the typical seeing conditions at paranal , at the cost of reducing the spectral resolving power to 4000 .an important aspect of any photon - counting detector system such as stjs is the limit set on the dynamic range by the maximum count rate .this is easier to achieve in a spectroscopic than in an imaging application .we show in figure [ fig : countrate ] the overall countrate ( _ i.e. _ summed over order number ) on the array , as a function of pixel number , for a a0v star .the countrates reach cts / sec / pixel for such a brightness. the current maximum countrates on s - cam2 are cts / sec / pixel , set by the analog processing chains . even if we assume no increase in this capability, this indicates that the spectrograph will have a high bright limit , sufficient for good flat - fielding calibrations ( already demonstrated in the case of s - cam2 ) and access to celestial spectroscopic standards .the seeing is 0.8 arcsec with a slit of 0.5 arcsec and an array width of 1 arcsec ; the zenith angle is . ]we have outlined a spectrograph concept which uses the intrinsic wavelength resolution and extended wavelength response of stj detectors as the basis for a high - throughput optical - infrared spectrograph with high time resolution .the intrinsic wavelength resolution of the detectors is used to perform the order sorting . 
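Since the S/N formulae used above simplify to pure photon statistics once readout noise is dropped, a minimal version is sketched below; the per-pixel source, sky and dark rates and the binning are placeholders rather than simulator outputs.

```python
# Minimal S/N estimate for a read-noise-free photon counter: only source and
# sky shot noise (plus a small dark-count term) contribute.  The rates and
# binning used here are illustrative assumptions.
from math import sqrt

def snr(source_rate, sky_rate, dark_rate, n_pix, t_exp):
    """S/N per spectral bin; rates in counts/s, t_exp in seconds."""
    signal = source_rate * t_exp
    variance = signal + (sky_rate + dark_rate) * n_pix * t_exp
    return signal / sqrt(variance)

if __name__ == "__main__":
    # assumed: 0.05 source counts/s/bin, 0.02 sky counts/s/pixel, 2 pixels/bin
    for t in (600, 3600, 10800):
        print(f"t = {t:5d} s  ->  S/N = {snr(0.05, 0.02, 1e-4, 2, t):5.1f}")
```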
such an stj - based concept promises an elegantly simple medium resolution spectrograph .we have calculated that it should be a significant improvement on existing front - rank instrumentation , opening new regions of parameter space .the concept utilises technology which is currently available , or can be considered to be feasible with only a small level of development .nevertheless , it is a step up from existing stj - based instrumentation , in terms of array size and wavelength range .although we have argued that the technological increment in moving from existing stj arrays to those suggested for this concept is made significantly easier by the fact that the arrays in this concept are linear , and so provide for straightforward electrical connectivity , the pixel arrays would still be a critical development area .the main challenges are in the area of yield and pixel uniformity ( in terms of electrical characteristics ) .an additional issue is the buttability .another area closely connected with the detector performance is the subsystem providing the magnetic biasing on the stj array .the main challenge here is the uniformity of magnetic field bias , given the physically large arrays . at the same timethe cryostat would provide strong constraints in terms of available accommodation for the magnets , the siting of infrared baffles and filters , and the thermal design requirements of the detector support structure .we have identified further areas where development would be required in the cryocooler and cryoharness .the issue in the former is the heat pumping capacity of the sorption cooler(s ) to provide the 0.3k temperature drop from the he4 cryostat . for the latter ,the large number of electrical connections to the detector array mean that the parasitic heat loading through these must be minimised .there is also the question of the physical routing of several such cryoharnesses into a relatively small space .the operation of a -pixel stj array is not feasible without large - scale integration of the front - end analog electronics .this may require dedicated asics of particular design suitable for stjs .an ameliorating factor is that the asics consist of a large number of relatively simple and independent units .the infrared baffling and thermal rejection would need careful consideration at the general instrument level ( in terms of rejection of telescope thermal background ) and for the detector environment , in order to minimise the stj wavelength resolution degradation as discussed earlier .it has implications for the overall instrument thermal design , infrared responsivity , on whether the in the k band option can be exercised , and ultimately on costs .on the other hand , we have identified the potential for increased infrared performance by the use of coatings to reduce the reflectivity of tantalum beyond 800 nm , and the appropriate design of the thermal infrared rejection optics around the detector array .a further performance enhancement may be available through the provision of a degree of spatial resolution along the slit by the use of droids .support for the stj technological development at esa - estec is provided by peter verhoeve , nicola rando , didier martin , jacques verveer and axel van dordrecht .keith horne is supported by a pparc senior fellowship .barthelmy , s. , 2000 , spie , 4140 , 50 bessell , m. s. , 1979 , pasp , 91 , 589 bingham , r. g. , 1979 , q. jl .20 , 395 bridge , c. m. , cropper , m. , ramsay , g. , perryman , m. a. c. , de bruijne , j. h. j. 
, favata , f. , peacock , a. , rando , n. , reynolds , a. , 2002a , mnras in press ( astro - ph/0207162 )
bridge , c. m. , cropper , m. , ramsay , g. , de bruijne , j. h. j. , reynolds , a. p. , perryman , m. a. c. , 2002b , mnras submitted
cropper , m. , et al . , 2002 , proposal submitted in response to eso ao for second generation vlt instrumentation , ses / mssl / pr/0001.01
darrigo , p. , bingham , r. , charalambous , a. , crawford , i. , diego , f. , percival , j. , savidge , t. , 2000 , spie , 4098 , 159
de bruijne , j. h. j. , reynolds , a. p. , perryman , m. a. c. , peacock , a. , favata , f. , rando , n. , martin , d. , verhoeve , p. , christlieb , n. , 2002 , a&a , 381 , l57
diego , f. , brooks , d. , charalambous , a. , crawford , i. a. , darrigo , p. , dryburgh , m. , jamshidi , h. , radley , a. s. , savidge , t. , walker , d. d. , 1997 , spie , 2871 , 1126
griffiths et al . , 1997 , proposal submitted in response to nasa ao-96-oss-03 for hst instrument refurbishment
johnson , h. l. , 1966 , ara&a , 4 , 193
martin , d. , verhoeve , p. , peacock , a. , goldie , d. , 2000 , spie , 4008 , 328
peacock , a. , verhoeve , p. , rando , n. , erd , c. , bavdaz , m. , taylor , b. g. , perez , d. , 1998 , a&a suppl . , 127 , 497
perryman , m. a. c. , foden , c. l. , peacock , a. , 1993 , nuclear instruments and methods in physics research , a325 , 319
perryman , m. a. c. , cropper , m. , ramsay , g. , favata , f. , peacock , a. , rando , n. , reynolds , a. , 2001 , mnras , 324 , 899
pickles , a. j. , 1998 , pasp , 110 , 863
rousselot , p. , lidman , c. , cuby , j .- g . , moreels , g. , monnet , g. , 2000 , a&a , 354 , 1134
rando , n. , verhoeve , p. , gondoin , p. , collaudin , b. , verveer , j. , bavdaz , m. , peacock , a. , 2000a , _ 8th international conference on low temperature detectors _
rando , n. , verveer , j. , verhoeve , p. , peacock , a. , andersson , s. , reynolds , a. , favata , f. , perryman , m. a. c. , goldie , d. j. , 2000b , spie , 4008 , 646
verhoeve , p. , den hartog , r. , martin , d. , rando , n. , peacock , a. , goldie , d. , 2000 , spie , 4008 , 683
verveer , j. , rando , n. , andersson , s. , gondoin , p. , peacock , a. , 1999 , _ rev . _ , 70 , 4088
walker , d. , diego , f. , 1985 , mnras , 317 , 255
we describe a multi - order spectrograph concept suitable for 8m - class telescopes , using the intrinsic spectral resolution of superconducting tunneling junction detectors to sort the spectral orders . the spectrograph works at low orders , 15 or 16 , and provides spectral coverage with a resolving power of r from the atmospheric cutoff at 320 nm to the long wavelength end of the infrared h or k band at 1800 nm or 2400 nm . we calculate that the spectrograph would provide substantial throughput and wavelength coverage , together with high time resolution and sufficient dynamic range . the concept uses currently available technology , or technologies with short development horizons , restricting the spatial sampling to two linear arrays ; however an upgrade path to provide more spatial sampling is identified . all of the other challenging aspects of the concept the cryogenics , thermal baffling and magnetic field biasing are identified as being feasible . instrumentation : spectrographs , detectors .
online convex optimization ( oco ) is an emerging methodology for sequential inference with well documented merits especially when the sequence of convex costs varies in an unknown and possibly adversarial manner .starting from the seminal papers and , most of the early works evaluate oco algorithms with a _static regret _ , which measures the difference of costs ( a.k.a .losses ) between the online solution and the overall best static solution in hindsight .if an algorithm incurs static regret that increases sub - linearly with time , then its performance loss averaged over an infinite time horizon goes to zero ; see also , and references therein . however , static regret is not a comprehensive performance metric .take online parameter estimation as an example .when the true parameter varies over time , a static benchmark ( time - invariant estimator ) itself often performs poorly so that achieving sub - linear static regret is no longer attractive .recent works extend the analysis of static regret to that of _ dynamic regret _ , where the performance of an oco algorithm is benchmarked by the best dynamic solution with a - priori information on the one - slot - ahead cost function .sub - linear dynamic regret is proved to be possible , if the dynamic environment changes slow enough for the accumulated variation of either costs or per - slot minimizers to be sub - linearly increasing with respect to the time horizon .when the per - slot costs depend on previous decisions , the so - termed competitive difference can be employed as an alternative of the static regret .the aforementioned works deal with dynamic costs focusing on problems with time - invariant constraints that must be strictly satisfied , but do not allow for instantaneous violations of the constraints .the _ long - term _ effect of such instantaneous violations was studied in , where an online algorithm with sub - linear static regret and sub - linear accumulated constraint violation was also developed .the regret bounds in have been improved in and .decentralized optimization with consensus constraints , as a special case of having long - term but time - invariant constraints , has been studied in .nevertheless , do not deal with oco under time - varying adversarial constraints .c * 3|c reference & type of benchmark & long - term constraint & adversarial constraint + & static and dynamic & no & no + & static & no & no + & dynamic & no & no + & dynamic & no & no + & dynamic & no & no + & static & yes & no + & dynamic & yes & no + this work & dynamic & yes & yes + in this context , the present paper considers oco with time - varying constraints that must be satisfied in the long term . 
under this setting, the learner first takes an action without knowing a - priori either the adversarial cost or the time - varying constraint , which are revealed by the nature subsequently .its performance is evaluated by : i ) _ dynamic regret _ that is the optimality loss relative to a sequence of instantaneous minimizers with known costs and constraints ; and , ii ) _ dynamic fit _ that accumulates constraint violations incurred by the online learner due to the lack of knowledge about future constraints .we compare the oco setting here with those of existing works in table [ tab : comp ] .we further introduce a modified online saddle - point ( mosp ) method in this novel oco framework , where the learner deals with time - varying costs as well as time - varying but long - term constraints .we analytically establish that mosp simultaneously achieves sub - linear dynamic regret and fit , provided that the accumulated variations of both minimizers and constraints grow sub - linearly .this result provides valuable insights for oco with long - term constraints : _ when the dynamic environment comprising both costs and constraints does not change on average , the online decisions provided by mosp are as good as the best dynamic solution over a long time horizon . _ to demonstrate the impact of these results , we further apply the proposed mosp approach to a dynamic network resource allocation task , where online management of resources is sought without knowing future network states .existing algorithms include first- and second - order methods in the dual domain , which are tailored for time - invariant deterministic formulations .to capture the temporal variations of network resources , stochastic formulation of network resource allocation has been extensively pursued since the seminal work of ; see also the celebrated stochastic dual gradient method in .these stochastic approximation - based approaches assume that the time - varying costs are i.i.d . or generally samples from a stationary ergodic stochastic process .however , performance of most stochastic schemes is established in an asymptotic sense , considering the ensemble of per slot averages or infinite samples across time .clearly , stationarity may not hold in practice , especially when the stochastic process involves human participation .inheriting merits of the oco framework , the proposed mosp approach operates in a fully online mode without requiring non - causal information , and further admits finite - sample performance analysis under a sequence of non - stochastic , or even adversarial costs and constraints .relative to existing works , the main contributions of the present paper are summarized as follows . 
1 .we generalize the standard oco framework with only adversarial costs in to account for both adversarial costs and constraints .different from the regret analysis in , performance here is established relative to the best dynamic benchmark , via metrics that we term dynamic regret and fit .we develop an mosp algorithm to tackle this novel oco problem , and analytically establish that mosp yields simultaneously sub - linear dynamic regret and fit , provided that the accumulated variations of per - slot minimizers and constraints are sub - linearly growing with time .our novel oco approach is tailored for dynamic resource allocation tasks , where mosp is compared with the popular stochastic dual gradient approach .relative to the latter , mosp remains operational in a broader practical setting without probabilistic assumptions .numerical tests demonstrate the gain of mosp over state - of - the - art alternatives ._ outline_. the rest of the paper is organized as follows . the oco problem with long - term constraintsis formulated , and the relevant performance metrics are introduced in section ii . the mosp algorithm and its performance analysisare presented in section iii .application of the novel oco framework and the mosp algorithm in network resource allocation , as well as corresponding numerical tests , are provided in section iv .section v concludes the paper . _notation_. denotes expectation , stands for probability , stands for vector and matrix transposition , and denotes the -norm of a vector .inequalities for vectors , e.g. , , are defined entry - wise .the positive projection operator is defined as ^+:=\max\{\mathbf{a},\mathbf{0}\} ] denotes the constraint function with entry , and is a convex set .the formulation extends the standard oco framework to accommodate adversarial time - varying constraints that must be satisfied in the long term . complemented by algorithm development and performance analysis to be carried in the following sections , the main contribution of the present paper is incorporation of long - term and time - varying constraints to markedly broaden the scope of oco . regarding performance of online decisions ,static regret is adopted as a metric by standard oco schemes , under time - invariant and strictly satisfied constraints .the static regret measures the difference between the online loss of an oco algorithm and that of the best fixed solution in hindsight . extending the definition of static regret over slots to accommodate time - varying constraints, it can be written as ( see also ) where the best static solution is obtained as a desirable oco algorithm in this case is the one yielding a sub - linear regret , meaning .consequently , implies that the algorithm is `` on average '' no - regret , or in other words , not worse asymptotically than the best fixed solution . though widely used in various oco applications , the aforementioned _ static regret _metric relies on a rather coarse benchmark , which may be less useful especially in dynamic settings .for instance , ( * ? ? ?* example 2 ) shows that the gap between the best static and the best dynamic benchmark can be as large as .furthermore , since the time - varying constraint is not observed before making a decision , its feasibility can not be checked instantaneously . in response to the quest for improved benchmarks in this dynamic setup , two metrics are considered here : _dynamic regret _ and _ dynamic fit_. 
the notion of dynamic regret ( also termed tracking regret or adaptive regret ) has been recently introduced in to offer a competitive performance measure of oco algorithms under time - invariant constraints .we adopt it in the setting of by incorporating time - varying constraints where the benchmark is now formed via a sequence of best dynamic solutions for the instantaneous cost minimization problem subject to the instantaneous constraint , namely clearly , the dynamic regret is always larger than the static regret in , i.e. , , because is always no smaller than according to the definitions of and .hence , a sub - linear dynamic regret implies a sub - linear static regret , but not vice versa . to ensure feasibility of online decisions , the notion of _dynamic fit _ is introduced to measure the accumulated violation of constraints ; under time - invariant long - term constraints or under time - varying constraints .it is defined as ^+\right\|.\end{aligned}\ ] ] observe that the dynamic fit is zero if the accumulated violation is entry - wise less than zero . however , enforcing is different from restricting to meet in each and every slot .while the latter readily implies the former , the long - term ( aggregate ) constraint allows adaptation of online decisions to the environment dynamics ; as a result , it is tolerable to have and .an ideal algorithm in this broader oco framework is the one that achieves both sub - linear dynamic regret and sub - linear dynamic fit .a sub - linear dynamic regret implies `` no - regret '' relative to the clairvoyant dynamic solution on the long - term average ; i.e. , ; and a sub - linear dynamic fit indicates that the online strategy is also feasible on average ; i.e. , .unfortunately , the sub - linear dynamic regret is not achievable in general , even under the special case of where the time - varying constraint is absent . for this reason ,we aim at designing and analyzing an online strategy that generates a sequence ensuring sub - linear dynamic regret and fit , under mild conditions that must be satisfied by the cost and constraint variations .in this section , a modified online saddle - point method is developed to solve , and its performance and feasibility are analyzed using the dynamic regret and fit metrics .consider now the per - slot problem , which contains the current objective , the current constraint , and a time - invariant constraint set . with denoting the lagrange multiplier associated with the time - varying constraint , the online ( partial ) lagrangian of can be expressed as where remains implicit .for the online lagrangian , we introduce a modified online saddle point ( mosp ) approach , which takes a modified descent step in the primal domain , and a dual ascent step at each time slot .specifically , given the previous primal iterate and the current dual iterate at each slot , the current decision is the minimizer of the following optimization problem where is a positive stepsize , and is the gradient is non - differentiable .the performance analysis still holds true for this case .] of primal objective at . after the current decision is made , and are observed , and the dual update takes the form ^{+}=\big[\bm{\lambda}_t+\mu \mathbf{g}_t(\mathbf{x}_t)\big]^{+}\ ] ] where is also a positive stepsize , and is the gradient of online lagrangian with respect to ( w.r.t . ) at . 
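The recursion just described can be condensed into a few lines for the special case, noted in the following discussion, in which the constraint is linear, g_t(x) = A x + b_t, and the feasible set is a box, so that the primal minimization reduces to a projected gradient step on the online Lagrangian. The slot indexing follows one consistent reading of the updates above, and the toy cost, constraint data and stepsizes are assumptions.

```python
# MOSP recursion sketched for a linear constraint g_t(x) = A x + b_t and a box
# feasible set, where the primal step has a closed projected-gradient form.
# Toy data and stepsizes are assumptions.
import numpy as np

def mosp(grad_f, A, b_seq, x0, lam0, alpha, mu, box):
    """Run MOSP; grad_f(t, x) returns the gradient of the slot-t cost at x."""
    lo, hi = box
    x, lam = x0.copy(), lam0.copy()
    xs = []
    for t, b in enumerate(b_seq):
        if t > 0:
            # primal: gradient of f_{t-1} at x_{t-1} plus the multiplier term
            x = np.clip(x - alpha * (grad_f(t - 1, x) + A.T @ lam), lo, hi)
        xs.append(x.copy())
        # dual: ascent on the newly observed constraint g_t(x_t) = A x_t + b_t
        lam = np.maximum(lam + mu * (A @ x + b), 0.0)
    return np.array(xs), lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, d = 200, 3
    A = np.ones((1, d))                                        # one coupling constraint
    b_seq = -1.0 + 0.2 * np.sin(0.1 * np.arange(T))[:, None]   # slowly varying offset
    thetas = rng.normal(size=(T, d))
    grad_f = lambda t, x: 2.0 * (x - thetas[t])                # f_t(x) = ||x - theta_t||^2
    xs, lam = mosp(grad_f, A, b_seq, np.zeros(d), np.zeros(1),
                   alpha=0.1, mu=0.1, box=(-2.0, 2.0))
    fit = np.linalg.norm(np.maximum(np.sum(xs @ A.T + b_seq, axis=0), 0.0))
    print("final multiplier:", lam, " accumulated violation:", round(float(fit), 3))
```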
the primal gradient step of the classical saddle - point approach in is tantamount to minimizing a first - order approximation of at plus a proximal term .we call the primal - dual recursion and as a modified online saddle - point approach , since the primal update is not an exact gradient step when the constraint is nonlinear w.r.t .however , when is linear , and reduce to the approach in .similar to the primal update of oco with long - term but time - invariant constraints in , the minimization in penalizes the exact constraint violation instead of its first - order approximation , which improves control of constraint violations and facilitates performance analysis of mosp .* initialize : * primal iterate , dual iterate , and proper stepsizes and .update primal variable by solving .observe the current cost and constraint .update the dual variable via .we proceed to show that for mosp , the dynamic regret in and the dynamic fit in are both sub - linearly increasing if the variations of the per - slot minimizers and the constraints are small enough . before formally stating this result, we assume that the following conditions are satisfied .[ ass.0 ] for every , the cost function and the time - varying constraint in are convex .[ ass.1 ] for every , has bounded gradient on ; i.e. , ; and is bounded on ; i.e. , . [ ass.2 ] the radius of the convex feasible set is bounded ; i.e. , .[ ass.3 ] there exists a constant , and an interior point such that .assumption [ ass.0 ] is necessary for regret analysis in the oco setting .assumption [ ass.1 ] bounds primal and dual gradients per slot , which is also typical in oco .assumption [ ass.2 ] restricts the action set to be bounded .assumption [ ass.3 ] is slater s condition , which guarantees the existence of a bounded lagrange multiplier . under these assumptions ,we are on track to first provide an upper bound for the dynamic fit .[ them1 ] define the maximum variation of consecutive constraints as ^+\right\|\ ] ] and assume the slack constant in assumption [ ass.3 ] to be larger than the maximum variation^+ > \max_{\mathbf{x}\in { \cal x}} ] , which is valid when the region defined by is large enough , or , the trajectory of is smooth enough across time . ] ; i.e. , . then under assumptions [ ass.0]-[ass.3 ] and the dual variable initialization , the dual iterate for the mosp recursion - is bounded by and the dynamic fit in is upper - bounded by where , , , and are as in assumptions [ ass.1]-[ass.3 ] .see appendix [ app.thm1 ] .theorem [ them1 ] asserts that under a mild condition on the time - varying constraints , is uniformly upper - bounded , and more importantly , its scaled version upper bounds the dynamic fit .observe that with a fixed primal stepsize , is in the order of , thus a larger dual stepsize essentially enables a better satisfaction of long - term constraints .in addition , a smaller leads to a smaller dynamic fit , which also makes sense intuitively . 
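Before the analysis, it may help to see how the two quantities that the following results bound are evaluated on a finite run: the dynamic regret compares the accumulated cost against the per-slot minimizers, and the dynamic fit is the norm of the positive part of the accumulated constraint values. The toy quadratic costs, the single budget-type constraint and the lazy one-slot-delayed policy below are illustrative assumptions; the per-slot minimizer has a closed form here only because it reduces to a projection onto a half-space.

```python
# Evaluating the dynamic regret and dynamic fit for a finite run; all problem
# data below are toy assumptions chosen so the per-slot minimizer is explicit.
import numpy as np

def dynamic_regret(costs, decisions, minimizers):
    return sum(f(x) - f(xs) for f, x, xs in zip(costs, decisions, minimizers))

def dynamic_fit(constraints, decisions):
    acc = sum(g(x) for g, x in zip(constraints, decisions))
    return float(np.linalg.norm(np.maximum(acc, 0.0)))

def halfspace_proj(theta, budget):
    """argmin ||x - theta||^2  subject to  sum(x) <= budget."""
    excess = max(0.0, float(np.sum(theta)) - budget)
    return theta - excess / theta.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 50, 2
    thetas = rng.normal(size=(T, d))                    # drifting cost parameters
    budgets = 1.0 + 0.1 * rng.normal(size=T)            # drifting constraint levels
    costs = [lambda x, th=th: float(np.sum((x - th) ** 2)) for th in thetas]
    cons = [lambda x, b=b: np.array([float(np.sum(x)) - b]) for b in budgets]
    stars = [halfspace_proj(th, b) for th, b in zip(thetas, budgets)]
    plays = [np.zeros(d)] + stars[:-1]                  # one-slot-delayed toy policy
    print("dynamic regret:", round(dynamic_regret(costs, plays, stars), 3))
    print("dynamic fit   :", round(dynamic_fit(cons, plays), 3))
```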
in the next theorem, we further bound the dynamic regret .[ them2 ] under assumptions [ ass.0]-[ass.3 ] and the dual variable initialization , the mosp recursion - yields a dynamic regret where is the accumulated variation of the per - slot minimizers defined as and is the accumulated variation of consecutive constraints ^+\right\|\!.\!\end{aligned}\ ] ] see appendix [ app.thm2 ] .theorem [ them2 ] asserts that mosp s dynamic regret is upper - bounded by a constant depending on the accumulated variations of per - slot minimizers and time - varying constraints as well as the primal and dual stepsizes .while the dynamic regret in the current form is hard to grasp , the next corollary shall demonstrate that can be very small .based on theorems [ them1]-[them2 ] , we are ready to establish that under the mild conditions for the accumulated variation of constraints and minimizers , the dynamic regret and fit are sub - linearly increasing with . [ ref.coro1 ] under assumptions [ ass.0]-[ass.3 ] and the dual variable initialization ,if the primal and dual stepsizes are chosen such that , then the dynamic fit is upper - bounded by in addition , if the temporal variations of optimal arguments and constraints satisfy and , then the dynamic regret is sub - linearly increasing , i.e. , plugging into , the bound in readily follows .likewise , we have from that considering the upper bound on the dual iterates in , it follows that , which implies that therefore , we deduce that , if and .observe that the sub - linear regret and fit in corollary [ ref.coro1 ] are achieved under a slightly `` strict '' condition that and .the next corollary shows that this condition can be further relaxed if a - priori knowledge of the time - varying environment is available .[ ref.coro2 ] consider assumptions [ ass.0]-[ass.3 ] are satisfied , and the dual variable is initialized as . if there exists a constant such that the temporal variations satisfy and , then choosing the primal and dual stepsizes as leads to the dynamic fit and the corresponding dynamic regret corollary [ ref.coro2 ] provides valuable insights for choosing optimal stepsizes in non - stationary settings . specifically , adjusting stepsizes to matchthe variability of the dynamic environment is the key to achieving the optimal performance in terms of dynamic regret and fit .intuitively , when the variation of the environment is fast ( a larger ) , slowly decaying stepsizes ( thus larger stepsizes ) can better track the potential changes .theorems [ them1 ] and [ them2 ] are in the spirit of the recent work , where the regret bounds are established with respect to a dynamic benchmark in either deterministic or stochastic settings. however , do not account for long - term and time - varying constraints , while the dynamic regret analysis is generalized here to the setting with long - term constraints .interesting though , sub - linear dynamic regret and fit can be achieved when the dynamic environment consisting of the per - slot minimizer and the time - varying constraint _ does not vary on average _ , that is , and are sub - linearly increasing over .although the dynamic benchmark in is more competitive than the static one in , it is worth noting that the sequence of the per - slot minimizer in is not the optimal solution to problem . 
defining the sequence of optimal solutions to as , it is instructive to see that computing each minimizer in only requires one - slot - ahead information ( namely , and ) , while computing each within requires information over the entire time horizon ( that is , and ) .for this reason , we use the subscript `` off '' in to emphasize that this solution comes from offline computation with information over slots .note that for the cases without long - term constraints , the sequence of offline solutions coincides with the sequence of per - slot minimizers .regarding feasibility , exactly satisfies the long - term constraint , while the solution of mosp satisfies on average under mild conditions ( cf .corollary 1 ) . for optimality , the cost of the online decisions attained by mosp is further benchmarked by the offline solutions . to this end , define mosp s _ optimality gap _ as [ subeq.opt-gap ] intuitively , if are close to , the dynamic regret is able to provide an accurate performance measure in the sense of .specifically , one can decompose the optimality gap as where corresponds to the dynamic regret in capturing the regret relative to the sequence of per - slot minimizers with one - slot - ahead information , and is the difference between the performance of per - slot minimizers and the offline optimal solutions .although the second term appears difficult to quantify , we will show next that is driven by the accumulated variation of the dual functions associated with the instantaneous problems . to this end , consider the dual function of the instantaneous primal problem , which can be expressed by minimizing the online lagrangian in at time , namely likewise , the dual function of over the entire horizon is where equality ( a ) holds since the minimization is separable across the summand at time , and equality ( b ) is due to the definition of the per - slot dual function in . as the primal problems and are both convex , slater s condition in assumption [ ass.3 ] implies that strong duality holds .accordingly , in can be written as which is the difference between the dual objective of the static best solution , i.e. , , and that of the per - slot best solution for , i.e. , . leveraging this special property of the dual problem, we next establish that can be bounded by the variation of the dual function , thus providing an estimate of the optimality gap .[ prop1 ] define the variation of the dual function from time to as and the total variation over the time horizon as .then the cost difference between the best offline solution and the best dynamic solution satisfies where is the minimizer of the instantaneous problem , and solves with all future information available . combined with, it readily follows that where is defined in , and in .instead of going to the primal domain , we upper bound via the dual representation in . letting denote any slot in , we have the first inequality comes from the definition .note that if , the proposition readily follows from .we will prove this inequality by contradiction .assume there exists a slot such that , which implies that where inequalities ( a ) and ( c ) come from the fact that is the accumulated variation over slots , and hence , while ( b ) is due to the hypothesis above .note that in contradicts the fact that is the maximizer of .therefore , we have , which completes the proof .the following remark provides an approach to improving the bound in proposition 1 .although the optimality gap in appears to be at least linear w.r.t . 
, one can use the `` restarting '' trick for dual variables , similar to that for primal variables in the unconstrained case ; see e.g. , . specifically ,if the total variation is known a - priori , one can divide the entire time horizon into sub - horizons ( each with slots ) , and restart the dual iterate at the beginning of each sub - horizon . by assuming that is sub - linear w.r.t . , one can guarantee that always exists . in this case , the bound in can be improved by where the two summands are sub - linear w.r.t . provided that over each sub - horizon is sub - linear ; i.e. , .interested readers are referred to for details of this restarting trick , which are omitted here due to space limitation .in this section , we solve the network resource allocation problem within the oco framework , and present numerical experiments to demonstrate the merits of our mosp solver .consider the resource allocation problem over a cloud network , which is represented by a directed graph with node set and edge set , where and .nodes considered here include mapping nodes collected in the set , and data centers collected in the set ; i.e. , we have ., mapping node has an exogenous workload plus that stored in the queue , and schedules workload to data center .data center serves an amount of workload out of the assigned as well as that stored in its queue .the thickness of each edge is proportional to its capacity.,scaledwidth=50.0% ] [ fig : system ] per time , each mapping node receives an exogenous data request , and forwards the amount to each data center in accordance with bandwidth availability . each data center schedules workload according to its resource availability . regarding as the weight of a virtual outgoing edge from data center , edge set contains all the links connecting mapping nodes with data centers , and all the `` virtual '' edges coming out of the data centers .the node - incidence matrix is formed with the -th entry for compactness , collect the data workloads across edges in a resource allocation vector ^{\top}\in \mathbb{r}^{e}_+ ] .then , the aggregate ( endogenous plus exogenous ) workloads of all nodes are given by . when the -th entry of is positive , there is service residual at node ; otherwise , node over - serves the current workload arrival .assume that each data center and mapping node has a local data queue to buffer unserved workloads . with ^{\top} ] , where ^+ ] .the overall system diagram is depicted in fig .[ fig : system ] . for each data center ,the power cost depends on a time - varying parameter , which captures the energy price and the renewable generation at data center during slot .the bandwidth cost characterizes the transmission delay and is parameterized by a time - varying scalar .scalars and can be readily extended to vector forms . to keep the exposition simple, we use scalars to represent time - varying factors at nodes and edges . per slot , the instantaneous cost aggregates the costs of power consumed at all data centers plus the bandwidth costs at all links , namely where the objective can be also written as with ^{\top} ] , which can be nicely regarded as a scaled version of the relaxed queue dynamics in , with .in addition to simple closed - form updates , mosp can also afford a fully decentralized implementation by exploiting the problem structure of network resource allocation , where each mapping node or data center decides the amounts on all its _ outgoing links _ , andonly exchanges information with its _ one - hop neighbors_. 
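Before turning to the per-node updates, the bookkeeping behind the node-incidence matrix, the relaxed queue recursion and the per-slot cost can be illustrated on an assumed toy topology with two mapping nodes and two data centers; the sign convention chosen for the incidence matrix, the quadratic cost form and all numbers below are assumptions for illustration only, not the paper's exact model.

```python
# Toy cloud-network bookkeeping: mapping-node-to-data-center links plus one
# virtual service edge per data center, the node-incidence matrix A, and the
# queue recursion q_{t+1} = [q_t + A x_t + b_t]^+.  All forms and numbers here
# are illustrative assumptions.
import numpy as np

J, K = 2, 2
edges = [(j, J + k) for j in range(J) for k in range(K)]     # mapping node -> data center
edges += [(J + k, None) for k in range(K)]                   # virtual service edge per data center

A = np.zeros((J + K, len(edges)))
for e, (src, dst) in enumerate(edges):
    A[src, e] = -1.0                 # scheduled/served workload leaves the source node
    if dst is not None:
        A[dst, e] = 1.0              # and arrives at the destination node

def queue_step(q, x, b):
    """One slot of the relaxed queue recursion q_{t+1} = [q_t + A x_t + b_t]^+."""
    return np.maximum(q + A @ x + b, 0.0)

def slot_cost(x, p, c):
    """Assumed quadratic power + bandwidth cost (a stand-in, not the paper's form)."""
    served = x[-K:]                  # amounts served by the data centers
    return float(p @ served**2 + c @ x[:-K]**2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    q = np.zeros(J + K)
    for t in range(5):
        b = np.concatenate([rng.uniform(0.0, 1.0, J), np.zeros(K)])   # exogenous arrivals
        x = rng.uniform(0.0, 0.6, len(edges))                         # some allocation
        p, c = rng.uniform(1.0, 2.0, K), rng.uniform(0.1, 0.5, J * K)
        q = queue_step(q, x, b)
        print(f"t={t}  cost={slot_cost(x, p, c):6.2f}  queues={np.round(q, 2)}")
```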
per time slot , the primal update at mapping node includes variables on all its outgoing links , given by [ eq.dist-net ] {0}^{\bar{x}^{jk}}\!\!\!,\;\forall k\in{\cal k}\ ] ] and the dual update reduces to ^{+}.\ ] ] likewise , for data center , the primal update becomes ^{\bar{y}^k}\ ] ] where ^{\bar{y}^k}\!\!:=\min\{\bar{y}^k,\max\{\cdot\,,0\}\} ] appearing in the cost and constraint are assumed to be independent realizations of a random variable .constitute a sample path from an ergodic stochastic process , which converges to a stationary distribution ; see e.g. , .] in an sa - based stochastic optimization algorithm , per time , a policy first observes a realization of the random variable , and then ( stochastically ) selects an action .however , in contrast to minimizing the _ observed cost _ in the oco setting , the goal of the stochastic resource allocation is usually to minimize the limiting average of the _ expected cost _ subject to the so - termed stability constraint , namely [ eq.stoc-prob ] \label{eq.stoc - proba}\\ \text{s .t.}~~&\mathbf{q}_{t+1}=\left[\mathbf{q}_t+\mathbf{a}\mathbf{x}_t+\mathbf{b}_t\right]^{+}\!,~\forall t\label{eq.stoc - probb}\\ & \lim_{t\rightarrow \infty}\frac{1}{t}\sum_{t=1}^t \mathbb{e}\left[\mathbf{q}_t\right ] \leq \mathbf{0}\label{eq.stoc - probc}\end{aligned}\ ] ] where he expectation in is taken over and the randomness of and induced by all possible sample paths via ; and the stability constraint implies a finite bound on the accumulated constraint violation .in contrast to the observed costs in , each decision is evaluated by all possible realizations in here . however , as in couples the optimization variables over an infinite time horizon , is intractable in general .prior works have demonstrated that can be tackled via a tractable stationary relaxation , given by [ eq.stoc-relax ] \\ \text{s .t.}~&\lim_{t\rightarrow \infty}\frac{1}{t}\sum_{t=1}^t \mathbb{e}\left[\mathbf{a}\mathbf{x}_t+\mathbf{b}_t\right ] \leq \mathbf{0}\label{eq.stoc - relaxb}\end{aligned}\ ] ] where the time - coupling constraints and are relaxed to the limiting average constraint .such a relaxation can be verified similar to the queue relaxation in ; see also .note that is still challenging since it involves expectations in both costs and constraints , and the distribution of is usually unknown .even if the joint probability distribution function were available , finding the expectations would not scale with the dimensionality of .a common remedy is to use the stochastic dual gradient ( sdg ) iteration ( a.k.a .lyapunov optimization ) . 
specifically , with denoting the multipliers associated with the expectation constraint, the sdg method first observes one realization at each slot , and then performs the dual update as ^{+},\;\forall t\end{aligned}\ ] ] where is the dual iterate at time , is the stochastic dual gradient , and is a positive ( and typically constant ) stepsize .the actual allocation or the primal variable appearing in needs be found by solving the following sub - problems , one per slot for the considered network resource allocation problem , sdg in - entails a well - known cost - delay tradeoff .specifically , with denoting the optimal objective , sdg can achieve an -optimal solution such that \!\leq\ !f^*\!+{\cal o}(\mu) ] .therefore , reducing the optimality gap will essentially increase the average network delay .the optimality of sdg is established relative to the offline optimal solution of , which can be thought as the time - average _ optimality gap _ in under the oco setting .interestingly though , the optimality gap under the stochastic setting is equivalent to the ( expected ) dynamic regret , since their ( expected ) difference \}_{t=1}^t) ] and ], we set the bandwidth cost of each link as .the resource capacities at all data centers are uniformly randomly generated from ] , and the delay - tolerant workload arrives at each mapping node according to a uniform distribution over ] , while with i.i.d .noise uniformly distributed over ] , which completes the proof by taking norms on both sides and using the dual upper bound . per slot , the primal update is the minimizer of the optimization problem in ; hence , where ( a ) uses the strong convexity of the objective in ; see also ( * ? ? ?* corollary 1 ) . adding in yields where ( b ) is due tothe convexity of , and ( c ) comes from the fact that and the per - slot optimal solution is feasible ( i.e. , ) such that .next , we bound the term by where is an arbitrary positive constant , and ( d ) is from the bound of gradients in assumption [ ass.1 ] .plugging into , we have where equality ( e ) follows by choosing to obtain . using the dual drift bound in lemma [ lemma1 ] again , we have ^++\!\frac{\mu m^2}{2}\!+\frac{\alpha g^2}{2}\nonumber\\ \stackrel{(h)}{\leq}&f_t(\mathbf{x}^*_t)\!+\!\frac{1}{2\alpha}\big(\|\mathbf{x}^*_t\!-\!\mathbf{x}_t\|^2\!-\!\|\mathbf{x}^*_t\!-\!\mathbf{x}_{t+1}\|^2\big)\!+\!\|\bm{\lambda}_{t+1}\|v(\mathbf{g}_t)\nonumber\\ & \qquad\qquad+\frac{\mu m^2}{2}\!+\frac{\alpha g^2}{2 } \end{aligned}\ ] ] where( f ) follows from ; ( g ) uses non - negativity of and the gradient upper bound ; and ( h ) follows from the cauchy - schwartz inequality and the definition of the constraint variation in . by interpolating intermediate terms in , we have that where ( i ) follows from the radius of in assumption [ ass.2 ] such that . plugging into, it readily leads to summing up over , we find where ( j ) uses the upper bound of in that we define as , and ( k ) follows from the definition of accumulated variations in .the definition of dynamic regret in finally implies that where ( ) follows since : i ) due to the compactness of ; ii ) ; and , iii ) if .this completes the proof .a. jadbabaie , a. rakhlin , s. shahrampour , and k. sridharan , `` online optimization : competing with dynamic comparators , '' in _ intl .conf . on artificial intelligence and statistics _ ,san diego , ca , may 2015 .a. mokhtari , s. shahrampour , a. jadbabaie , and a. 
ribeiro , `` online optimization in dynamic environments : improved regret rates for strongly convex problems , '' in _ proc .ieee conf . on decision and control _ , las vegas , nv , dec . 2016 .n. chen , j. comden , z. liu , a. gandhi , and a. wierman , `` using predictions in online optimization : looking forward with an eye on the past , '' in _ proc .acm sigmetrics _ , antibes juan - les - pins , france , jun .2016 , pp . 193206. l. l. andrew , s. barman , k. ligett , m. lin , a. meyerson , a. roytman , and a. wierman , `` a tale of two metrics : simultaneous bounds on competitiveness and regret , '' in _ proc . annual conf .on learning theory _ , princeton , nj , jun .2013 .s. paternain and a. ribeiro , `` online learning of feasible strategies in unknown environments , '' _ ieee trans ._ , to appear , 2016 .[ online ] .available : https://arxiv.org/pdf/1604.02137v1.pdf a. beck , a. nedic , a. ozdaglar , and m. teboulle , `` an gradient method for network resource allocation problems , '' _ ieee trans .control of network systems _ , vol . 1 , no . 1 ,6473 , mar .2014 .l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks , '' _ ieee trans ._ , vol .37 , no . 12 , pp .19361948 , dec . 1992 .a. g. marques , l. m. lopez - ramos , g. b. giannakis , j. ramos , and a. j. caamao , `` optimal cross - layer resource allocation in cellular networks using channel - and queue - state information , '' _ ieee trans . veh ._ , vol .61 , no . 6 , pp. 27892807 , jul .2012 .t. chen , x. wang , and g. b. giannakis , `` cooling - aware energy and workload management in data centers via stochastic optimization , '' _ieee j. sel .topics signal process . _ ,10 , no . 2 ,402415 , mar .
existing approaches to online convex optimization ( oco ) make sequential one - slot - ahead decisions , which lead to ( possibly adversarial ) losses that drive subsequent decision iterates . their performance is evaluated by the so - called _ regret _ that measures the difference of losses between the online solution and the _ best yet fixed _ overall solution in _ hindsight_. the present paper deals with online convex optimization involving adversarial loss functions and adversarial constraints , where the constraints are revealed after making decisions , and can be tolerable to instantaneous violations but must be satisfied in the long term . performance of an online algorithm in this setting is assessed by : i ) the difference of its losses relative to the _ best dynamic _ solution with one - slot - ahead information of the loss function and the constraint ( that is here termed _ dynamic regret _ ) ; and , ii ) the accumulated amount of constraint violations ( that is here termed _ dynamic fit _ ) . in this context , a modified online saddle - point ( mosp ) scheme is developed , and proved to simultaneously yield sub - linear dynamic regret and fit , provided that the accumulated variations of per - slot minimizers and constraints are sub - linearly growing with time . mosp is also applied to the dynamic network resource allocation task , and it is compared with the well - known stochastic dual gradient method . under various scenarios , numerical experiments demonstrate the performance gain of mosp relative to the state - of - the - art . constrained optimization , primal - dual method , online convex optimization , network resource allocation .
edwin t. jaynes wrote a beautiful article in 1957 advocating a reinterpretation of statistical mechanics in light of shannon s mathematical theory of communication . instead of working with ensembles of systems, jaynes posed the problem in the following terms : suppose we know the expected values of a set of functions of the microscopic state of a system , what is the best estimate for the average value of some other function ?without access to , which is never available in the lab , the best that can be done is to pick a probability distribution over the states and then calculate the expected value of as $ ] .but then we encounter the problem of which distribution to choose , because the average values do not provide enough information to determine uniquely .we need an additional criterion .therefore , argued jaynes , one should use the distribution that maximises the shannon entropy functional \equiv -\sum _ i\rho(z_i ) \ln(\rho(z_i)),\ ] ] subject to the constraints imposed by the information available .any other distribution would imply an unjustified bias in the probabilities .shannon s functional applies only to discrete distributions , but the gibbs - jaynes entropy functional is analogous to for continuous sets of states , as in the case of points in phase space . \equiv -k_b \int_{\gamma}\rho(z)\ln\left(\frac{\rho(z)}{m(z)}\right)\ dz = -k_b \mathrm{tr}\left[\rho \ln\left(\frac{\rho}{m}\right)\right].\ ] ] in classical hamiltonian dynamics , the measure turns out to be ( is the number of particles and is planck s constant ) . stands for the whole phase space and for boltzmann s constant .jaynes s maximum entropy formalism allows us not only to derive equilibrium statistical mechanics from the point of view of statistical inference , but also to select probability distributions in more general situations , when the expected values of several arbitrary phase functions have been established , even if they are not dynamical invariants . working out such distributionsconstitutes the key step for projection operator techniques in the theory of transport processes .recent developments in nonequilibrium statistical mechanics have borrowed another tool from information theory known as the kullback - leibler divergence , which measures how `` different '' is from . a well - known result in information theorystates that , with equality holding only when almost everywhere . when an equilibrium ensemble is used as the reference distribution , , there is a simple connection between the gibbs - jaynes entropy ( [ s ] ) and the _ relative entropy _ , defined here as \equiv -k_b\ d(\rho\|\rho^{eq}).\ ] ] the link can easily be established the moment we realise that the equilibrium ensemble is a stationary solution of liouville s equation , and that must therefore be a function of the dynamical invariants only .let stand for the invariants for the microstate , such as the energy , the linear momentum , and so forth .we then know that is some function .the integral of over all the states that satisfy should equal the probability of finding the system in a state compatible with these values of the invariants . 
= \phi(i)\mathrm{tr}\left[\delta(i - i)\right ] = p(i),\ ] ] where is the dirac delta function .solving for , we end up with },\ ] ] which means that the function in the denominator may be thought of as the `` number of microstates '' that satisfy , when the expression for the equilibrium distribution ( [ rho_eq ] ) is substituted into the definition of the relative entropy ( [ def - relative_entropy ] ) , we find & = & -k_b \int_{\gamma } \rho(z ) \ln\left(\frac{h^{3n}\rho(z ) } { \frac{p(i(z ) ) } { \omega(i(z))}}\right)\ dz \\ & = & -k_b \int_{\gamma } \rho(z ) \ln\left(h^{3n}\rho(z)\right)\ dz + k_b \int_{\gamma } \rho(z ) \ln\left(\frac{p(i(z))}{\omega(i(z))}\right)\ dz \nonumber \\ & = & s[\rho ] + k_b \int \mathrm{tr}\left[\rho \delta(i - i)\right ] \ln\left(\frac{p(i)}{\omega(i)}\right)\ di , \nonumber\end{aligned}\ ] ] ( the latter integral is extended over all the values of ) . hence maximisingthe relative entropy functional is equivalent to maximising the gibbs - jaynes functional _ as long as this last integral is constant for the distributions allowed by the constraints_. a reasonable way to meet this condition requires = p(i).\ ] ] in other words , the agreement between both maximisation strategies is based on the assumption that the unknown distribution generates the same probability distribution over the dynamical invariants as the equilibrium ensemble .if equation ( [ probability_assumption ] ) is conceded then , given the probability distribution at equilibrium , we can follow two different paths to calculate the least biased distribution compatible with our information about the system .an anonymous reviewer pointed out that condition ( [ probability_assumption ] ) could be relaxed when designates the canonical ensemble ( [ canonical_ensemble ] ) , for then we can simply assume that the expected energy calculated with leads to the same result as when it is calculated with and then prove that the final integral in equation ( [ delta_s_theorem ] ) becomes independent of .\ ] ] hence , in the canonical case condition ( [ probability_assumption ] ) may be relaxed to = \mathrm{tr}[\rho^{eq } i]\ ] ] similarly , for a microcanonical , it is enough to require that vanishes whenever does .let us illustrate the two alternative maximisation routes by working out the classic example of the probability distribution for a system in contact with a heat bath at temperature .we shall start with the gibbs - jaynes functional and introduce two habitual assumptions .first , we will imagine that the system and the reservoir have been isolated from the rest of the universe , and second , we will disregard the interaction energy between them , which we assume is very small compared with their internal energies , so that the hamiltonian , which remains constant , may be expressed as a sum of two terms , the former corresponding to our system , and the latter to the reservoir .a typical setup might also include the measured values of several macroscopic variables , such as concentrations or hydrodynamic velocity fields .we denote these values and equate them to the averages for the corresponding functions , of the microstate ,\ ] ] where is the unknown distribution . given the one - to - one correspondence between the equilibrium temperature and the total internal energy, we include an additional constraint for the expected value of , ,\ ] ] although we do not yet know the actual number represented by . 
finally, we must ensure that is properly normalised , = 1.\ ] ] we now wish to find the distribution that maximises the entropy functional subject to all the constraints ( [ lambda_constraints])-([normalisation ] ) . following the standard method of lagrange multipliers ,we add constraint terms to the gibbs - jaynes functional ( [ s ] ) to obtain & = & -k_b\mathrm{tr}\left[\rho\ \ln\left(h^{3n}\rho\right)\right ] -k_b\sum_{i=1}^k\lambda_i\left(\mathrm{tr}\left[\rho f_i\right]-f_i\right ) \\ & & -k_b\beta\left(\mathrm{tr}\left[\rho h\right ] - e\right ) -k_b\mu\left(\mathrm{tr}\left[\rho\right ] - 1 \right ) .\nonumber\end{aligned}\ ] ] the , and are all lagrange multipliers. the functional derivative of with respect to should vanish for the least biased distribution , frequently called the _ relevant _ distribution , , and this equation leads us to substitution of into the constraints should allow us , in principle , to calculate the lagrange multipliers .equation ( [ normalisation ] ) , for example , determines the value of .when combined with ( [ rel_rho ] ) , we find that } = \frac{1}{z},\ ] ] so our probability distribution becomes equation ( [ hamiltonian ] ) implies that the partition function factors into the product of an integral over and another over .we will refer to these factors as and . where and are the number of particles in the system and reservoir , respectively .we define the entropy as the maximum value of the gibbs - jaynes entropy functional . insertingthe expression for into ( [ s ] ) reveals the following link between the partition function and the entropy : = k_b\ln(z ) + k_b\sum_{i=1}^k \lambda_i f_i + k_b\beta e.\ ] ] factoring according to ( [ z1z2 ] ) , the entropy separates neatly into two terms , where and stand for the expected values of and . 
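the displayed equations in this derivation did not survive extraction . in my own notation ( capital letters denote the constrained expected values , so F_i is the target mean of f_i and E the target mean energy ; the symbols may differ from the original ) , the vanishing functional derivative gives the relevant distribution in generalised canonical form , and substituting it back into the entropy functional gives the partition - function relation quoted above :
\[
\bar{\rho}(z) \;=\; \frac{1}{h^{3N} Z}\,\exp\!\Big(-\sum_{i=1}^{k}\lambda_i f_i(z) \;-\; \beta H(z)\Big),
\qquad
S[\bar{\rho}] \;=\; k_B \ln Z \;+\; k_B\sum_{i=1}^{k}\lambda_i F_i \;+\; k_B\,\beta E ,
\]
where the factor h^{3N} is the same phase - space measure that appears in the gibbs - jaynes functional and is absorbed into the normalisation of Z .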
on the right of equation ( [ s_is_extensive ] )we find the entropies of the system and reservoir considered separately .in other words , suppose we had isolated the system from the heat bath and had then calculated the entropies for both independently by maximising the gibbs - jaynes functional , with the same constraints on the average values of and the normalisation of , but changing the constraints on the expected energies to for the system and for the bath , where and are the same values as in equation ( [ s_is_extensive ] ) .then the expressions for the entropies , for the system and for the reservoir , would read we will show below that the sum equals in ( [ s_is_extensive ] ) because the set of equations ( [ s_is_extensive ] ) , ( [ ss_and_sr ] ) and ( [ thermal_equilibrium ] ) illustrates the well - known fact that entropy is an extensive quantity if the interaction between subsystems is small enough to be disregarded .the equality of the three lagrange multipliers in ( [ thermal_equilibrium ] ) can be verified by comparing the average values calculated using ( [ total_rho_bar ] ) to the averages for the system and reservoir considered independently , noting that the lagrange multiplier is related to the temperature of the heat bath according to therefore , equation ( [ thermal_equilibrium ] ) shows that the temperatures of the system and the bath must equal the same value for the relevant distribution , and equation ( [ total_rho_bar ] ) then becomes to calculate the probability distribution for , we can now integrate over the degrees of freedom of the reservoir , that is , the answer to our problem , known as the generalised canonical probability distribution , is the least biased probability distribution for a system at constant temperature that also satisfies the constraints ( [ lambda_constraints ] ) . carrying out the integral ( [ integral_over_z_r ] ) , we obtain },\ ] ] where the trace should now be interpreted as an integration over .now consider the derivation of the generalised canonical distribution ( [ gcd ] ) from the relative entropy ( [ def - relative_entropy ] ) . in that case , we do not need to pay attention to the reservoir , so we maximise the functional , which includes the relative entropy and constraints ( [ lambda_constraints ] ) and ( [ normalisation ] ) . -k_b\sum_{i=1}^k\lambda_i\left(\mathrm{tr}\left[\rho f_i\right]-f_i\right ) -k_b\mu\left(\mathrm{tr}\left[\rho\right ] - 1 \right).\ ] ] as before , we calculate the functional derivative of with respect to and find that the equilibrium distribution for a system at constant temperature is the well - known canonical ensemble , }.\ ] ] by ensuring that is normalised , we determine } { \mathrm{tr}\left[e^{-\sum_{i=1}^k \lambda_i f_i(z_s ) -\frac{1}{k_bt}h_s(z_s)}\right]}.\ ] ] and substituting ( [ canonical_ensemble ] ) and ( [ normalisation_factor ] ) into ( [ rel_rho_2 ] ) , we recover the generalised canonical probability distribution ( [ gcd ] ) . jaynes s maximum entropy formalism guided us to the desired solution , but the path we had to follow was not as direct as the relative entropy route . furthermore ,in the former derivation , we found ourselves describing the effect of the reservoir in terms of the total energy . 
butknowledge of the energy in the reservoir , , clearly has no bearing on our problem because heat baths are characterised by their temperature , not their internal energy , and it is a good thing that eventually drops out of the equations .therefore , if we already know the equilibrium ensemble , perhaps it is easier to derive the relevant distribution from the relative entropy functional .nevertheless , the functional form of equation ( [ gcd ] ) could have been inferred from the gibbs - jaynes entropy ( [ s ] ) by a simpler procedure that does not contemplate the reservoir .the idea is to use the constraints for the average values ( [ lambda_constraints ] ) , normalisation ( [ normalisation ] ) and an extra constraint for the expected value of the unknown energy .the maximum entropy formalism then leads to an expression analogous to ( [ gcd ] ) , but with an unknown coefficient before .all the extra work with the reservoir in the subsection on the gibbs - jaynes derivation was carried out to establish that the temperature in equation ( [ gcd ] ) was equal to the temperature of the reservoir ( note that we have not assumed thermal equilibrium between the reservoir and system of interest ) . when the same result was derived from the relative entropy functional ( [ def - relative_entropy ] ) we did not have to do any of this extra work because the relevant information was already captured in the equilibrium distribution .the preceding discussion might give the impression that the relevant distribution may always be expressed as where stands for the appropriate normalisation factor .but this rule may lead to incorrect conclusions if applied carelessly . to see why ,let us examine a slightly more general problem .whenever we are dealing with macroscopic systems in experiments , the exact number of atoms or molecules remains unknown .let represent the coordinates and momenta of a system of particles .the probability distributions and functions will now depend on the dimensionality of phase space , so we will write them with a subindex to emphasise this dependence .the gibbs - jaynes entropy functional ( [ s ] ) can be generalised to and the constraints on the average values now read where represents the phase space for particles . similarly ,the expression for the relative entropy turns into the obvious generalisation of ( [ relevant_eq ] ) must be note that and represent joint probability densities for and , and are therefore not normalised to one , but rather where p(n ) represents the probability of particles in the system .imagine an isolated system for which we know the probability distribution for the total energy and number of particles .both and are dynamical invariants , and so is the probability .hence we should find the same probability for and at equilibrium . 
if we use lagrange s method to maximise ( [ jaynes_entropy ] ) subject to ( [ macro_f_constraints ] ) and ( [ probability_constraint ] ) , we derive the set of relevant distributions but in general these functions are formally different from our previous expression for ( [ macrocanonical_rho_eq ] ) .the disagreement between the two methods dissolves when we carry out the operations carefully .it might seem at first that there is no need to include the constraint ( [ probability_constraint ] ) when we maximise the functional for the relative entropy , because all the relevant information about should already be included in the equilibrium distribution .however , we do in fact have to specify that the distribution we are looking for must lead to the same value as the equilibrium distribution when both are integrated over a given constant energy manifold .when using the relative entropy ( [ relative_entropy ] ) , the correct functional to maximise is thus the last term above includes a lagrange multiplier for each pair of and , as required by equation ( [ probability_constraint ] ) .once again , we equate the derivatives of to zero and solve for the relevant distribution to find this distribution can be identified with ( [ macrocanonical_rho_eq ] ) as long as is interpreted as a function of and .in other words , if we define then we can simply insert ( [ relative_solution ] ) into ( [ probability_constraint ] ) and solve for , to determine recalling the expression for the equilibrium distribution ( [ rho_eq ] ) , equations ( [ def - mathcal_z])-([rho_eq_n ] ) can be used to convert ( [ relative_solution ] ) into ( [ jaynes_solution ] ) , so the two methods once again lead to the same result , as they should .in the literature , relative entropy has been applied mainly to the calculation of nonequilibrium free energy differences and dissipated work . in that context , and are both distributions that can be realised physically , such as the equilibrium ensembles of a given hamiltonian . by contrast , in the present paper we have used the relative entropy functional to determine _ relevant distributions _ , which need not be realised physically , because they represent the least - biased distribution that is consistent with the information available .double - well potential confining lennard - jones particles .the dashed line marks the average energy per particle .,scaledwidth=40.0% ] within the theory of mori - zwanzig transport processes , relevant distributions have become a crucial tool to derive generalised transport equations .consider the one - dimensional isolated double - well potential drawn in figure [ doublewell ] , which confines one hundred particles that interact with each other through the lennard - jones potential .let the relevant variable represent the number of particles on the right , where is the heaviside step function and the position of particle .the mori - zwanzig theory of nonequilibrium transport allows us to write exact transport equations for the average value of any phase function .in particular , for the relevant variable in ( [ relevant_variable ] ) , the theory produces the following transport equation for the average value , the first term on the right is known as the organized drift , , while is called the after - effect function . 
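the definitions of the relevant variable and of its transport equation were likewise garbled ; reconstructed from the surrounding description in my notation ( θ the heaviside step , q_i and p_i the position and momentum of particle i ) , the relevant variable counts the particles to the right of the barrier and its average obeys a mori - zwanzig equation whose first term is the organised drift , with the memory ( after - effect ) contribution left schematic here :
\[
n_R(z) \;=\; \sum_{i=1}^{N}\theta(q_i),
\qquad
\frac{d\langle n_R\rangle_t}{dt} \;=\; \mathrm{tr}\!\left[\bar{\rho}_t \sum_{i=1}^{N}\delta(q_i)\,\frac{p_i}{m_i}\right] \;+\; \big(\text{after-effect term}\big).
\]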
herewe will concentrate only on the organized drift , defined as = \mathrm{tr}\left[\bar{\rho}\sum_{i=1}^n\delta(q_i)\frac{p_i } { m_i}\right],\ ] ] as a first crude approximation to the time rate of change for .that is to say , we are assuming that . in ( [ organised_drift ] )we have applied the liouville operator to to calculate how is related to the momenta and masses of the particles . if we follow jaynes s maximum entropy route to determine the relevant distribution for equal to some value , then we get },\ ] ] with the lagrange multiplier chosen to satisfy the constraint on the average value of .when we insert ( [ doublewell_rhobar ] ) in ( [ organised_drift ] ) and integrate , the organised drift vanishes because it is the integral of an odd function , due to the presence of the momenta . surprisingly , in many cases this conclusion ( [ no_drift ] ) is incorrect .suppose we choose an initial state with all the particles on the left .if the average energy per particle lies below the height of the potential barrier , then the system can never reach a state with all the particles on the right , simply because there is not enough energy to get them all over the barrier .note , though , that when we switch the signs of all the coordinates in our initial state we create a new inaccessible state with the same total energy as before .in other words , the system is not ergodic for some values of the energy , and so it does not explore the complete surface in phase space .let designate the final stationary distribution reached by the system , to distinguish it from the microcanonical implied by the maximum entropy approach . calculating the relevant distribution from the relative entropy functional ( [ def - relative_entropy ] ) , the expression for the organised drift becomes . } { \mathrm{tr}\left[\rho^{ref}e^{-\lambda_{n_r}f}\right]}\ ] ] even though is unknown , we can sample it by means of a molecular dynamics simulation . after the simulation run, the system has traversed a set of points , so we estimate with number of particles with a positive coordinate value versus time for a system of lennard - jones particles initially to the left of the potential barrier in figure [ doublewell ] .a classic runge - kutta fourth order algorithm with time step equal to was used to integrate the equations of motion .the total energy remained constant to four significant figures .numerical integration with the organised drift from figure [ simulated_drift ] allowed us to estimate the average value of as a function of time ( dashed line ) . 
figure [ simulation_run ] represents the number of particles on the right of the double - well potential in figure [ doublewell ] as a function of time . we started the simulation with all the particles on the left and an average energy per particle below the height of the potential barrier . the average kinetic temperature calculated over the whole run ( time steps ) equals ( mean standard deviation ) . when the particles on either side of the barrier were considered separately , the average kinetic temperature remained the same , but the standard deviation doubled on the right of the well . [ figure caption : organised drift versus average number of particles to the right of the potential barrier , calculated with equation ( [ average_drift ] ) . ] figure [ simulated_drift ] shows the organised drift calculated with ( [ average_drift ] ) as a function of the average number of particles to the right of the potential barrier . numerical integration of equation ( [ dfdt ] ) by setting generated the dashed line shown in figure [ simulation_run ] , which agrees qualitatively with the general trend of the simulation . the deviations observed are not very surprising , considering that we have completely neglected the after - effect function . in summary , even though we have neglected the memory effects in the equation for the organised drift ( [ organised_drift ] ) calculated with the relative entropy ( as opposed to the gibbs - jaynes entropy method ) , we have achieved a very good description of the transport over the energy barrier . the results are especially interesting because the dynamical evolution took place under non - ergodic conditions . the method presented here could also be applied in principle to the numerical calculation of the integral of in ( [ dfdt ] ) , but this would involve a much greater computational cost , so we have deferred these calculations to future research . in the context of the maximum entropy formalism , relative entropy ( [ def - relative_entropy ] ) has pleasant features . its maximum value obviously corresponds to equilibrium , and it enables us to calculate the relevant distribution with less effort . furthermore , the relevant distribution turns into the equilibrium ensemble when the lagrange multipliers vanish .
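as a hedged illustration of how a drift curve like the one in figure [ simulated_drift ] can be extracted from a single md run : snapshots assumed to sample the stationary reference distribution are reweighted with the exponential tilt of the relevant distribution , a finite - difference estimate of the time derivative of n_R stands in for the delta - function observable , and the multiplier is fixed by a simple grid search . all of these numerical choices are mine , not the paper's procedure ; the resulting drift as a function of the mean occupation can then be integrated with any standard ode stepper ( e.g. runge - kutta ) to produce a curve like the dashed one .

```python
import numpy as np

def organized_drift_estimate(q_traj, nR_dot_traj, nbar, lam_grid=np.linspace(-5.0, 5.0, 2001)):
    """Estimate the organised drift v(nbar) from MD snapshots assumed to sample
    the stationary reference distribution.

    q_traj      : (K, N) particle coordinates at K saved snapshots
    nR_dot_traj : (K,) finite-difference time derivative of n_R at each snapshot
                  (a numerical surrogate for the delta-function observable)
    nbar        : target average number of particles right of the barrier
    """
    nR = (q_traj > 0.0).sum(axis=1).astype(float)        # n_R at every snapshot

    def tilted_mean(obs, lam):
        w = np.exp(-lam * (nR - nR.mean()))               # exponential tilt, shifted for stability
        return np.sum(w * obs) / np.sum(w)

    # choose the multiplier so that the reweighted mean of n_R matches nbar
    lam = lam_grid[np.argmin([abs(tilted_mean(nR, l) - nbar) for l in lam_grid])]
    return tilted_mean(nR_dot_traj, lam), lam
```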
with expressions like ( [ rel_rho_2 ] ) or ( [ relative_solution ] )this fact lies in plain sight .given the equilibrium ensemble and a set of constraints on average values , , the relevant distribution can be calculated immediately by following these rules : first , write then ensure that is normalised by writing = 1\ ] ] and solving for the partition function .finally , the lagrange multipliers can be calculated , at least in principle , by inserting into the constraints on the average values , , and solving for the .the constraints for isolated systems must be considered carefully , because knowledge of the equilibrium ensemble reveals the probability distribution over the whole set of values of the dynamical invariants through = p(i).\ ] ] this information must be taken into account , so in this case we write then we ensure that is normalised by writing = p(i),\ ] ] and solving for .the relevant ensemble can then be used in conjunction with the other constraints to find the value of the unknown lagrange multipliers .the relative entropy method relies on our knowledge of the equilibrium ensemble , and it provides no clues regarding how to calculate this probability distribution , unlike jaynes s method .however , this might also be interpreted as a virtue .when the system has concealed dynamical invariants which have not been taken into account , maximising the gibbs - jaynes entropy will not generally reproduce the measured average values faithfully .this would signal the existence of missing information .by contrast , if the equilibrium ensemble has been determined by other means or if we are able to sample it effectively ( with molecular dynamics , for example ) , then we have simultaneously determined the probability distribution for all the dynamical invariants .we may then simply write the relevant distribution in terms of the equilibrium ensemble and , in principle , use it to calculate nonequilibrium quantities like the coarse - grained free energy or green - kubo transport coefficients .we would like to express our gratitude to the anonymous reviewers of this article for their insightful comments .jaynes , e. t. : information theory and statistical mechanics . the physical review , * 106 * , 620 - 630 ( 1957 ) shannon , c. e. : a mathematical theory of communication. bell system technical journal , * 27 * , 379 - 423 and 623 - 656 ( 1948 ) jaynes , e. t. : information theory and statistical mechanics , in statistical physics , 181 - 218 .w. a. benjamin , inc ., new york ( 1963 ) kawasaki , k. , and gunton , j. d. : theory of nonlinear transport processes : nonlinear shear viscosity and normal stress effects , physical review a * 8 * , 20482064 ( 1973 ) grabert , h. : projection operator techniques in nonequilibrium statistical mechanics , 29 - 32 .springer - verlag , berlin - heidelberg - new york ( 1982 ) zubarev , d. : statistical mechanics of nonequilibrium processes , 89 - 98 .wiley , berlin ( 1996 ) gaveau , b. , schulman , l. s. : a general framework for non - equilibrium phenomena : the master equation and its formal consequences .physics letters a * 229 * 347 - 353 ( 1997 ) qian , h. : relative entropy : free energy associated with equilibrium fluctuations and nonequilibrium deviations .physical review e * 63 * , 042103 ( 2001 ) kawai , r. , parrondo , j. m. r. , and c. van der broeck : dissipation : the phase - space perspective , physical review letters * 98 * , 080602 ( 2007 ) shell , m. s. 
: the relative entropy is fundamental to multiscale and inverse thermodynamic problems , the journal of chemical physics * 129 * , 144108 ( 2008 ) vaikuntanathan , s. , and jarzynski , c. : dissipation and lag in irreversible processes .europhysics letters , * 87 * , 60005 ( 2009 ) horowitz , j. , and jarzynski , c. : illustrative example of the relationship between dissipation and relative entropy .physical review e * 79 * , 021106 ( 2009 ) roldn , e. , and parrondo , j. m. r. : entropy production and kullback - leibler divergence between stationary trajectories of discrete systems .physical review e , * 85 * , 031129 ( 2012 ) crooks , g. e. , and sivak , d. a. : measures of trajectory ensemble disparity in nonequilibrium statistical dynamics . journal of statistical mechanics : theory and experiment , p06003 ( 2012 ) crooks , g. e. : on thermodynamic and microscopic reversibility .journal of statistical mechanics : theory and experiment p07008 ( 2012 ) sivak , d. a. , and crooks , g. e. : near equilibrium measurements of nonequilibrium free energy .physical review letters , * 108 * 150601 ( 2012 ) kullback , s. , and leibler , r. a. : on information and sufficiency .annals of mathematical statistics * 22 * , 79 - 86 ( 1951 )
the maximum entropy formalism developed by jaynes determines the relevant ensemble in nonequilibrium statistical mechanics by maximising the entropy functional subject to the constraints imposed by the available information . we present an alternative derivation of the relevant ensemble based on the kullback - leibler divergence from equilibrium . if the equilibrium ensemble is already known , then calculation of the relevant ensemble is considerably simplified . the constraints must be chosen with care in order to avoid contradictions between the two alternative derivations . the relative entropy functional measures how much a distribution departs from equilibrium . therefore , it provides a distinct approach to the calculation of statistical ensembles that might be applicable to situations in which the formalism presented by jaynes performs poorly ( such as non - ergodic dynamical systems ) . [ the final publication is available at springer via http://dx.doi.org/10.1007/s10955-014-0954-6 ]
deep neural networks ( dnn ) have been widely adopted in several applications such as object classification , pattern recognition and regression problems [ 1 ] .although dnns achieve high performance in many applications , this comes at the expense of a large number of arithmetic and memory access operations for both training and testing [ 2 ] .therefore , dnn accelerators are highly desired [ 3 ] .fpga - based dnn accelerators are favorable since fpga platforms support high performance , configurability , low power consumption and quick development process [ 3 ] . on the other hand ,implementing a dnn or a convolutional neural network ( cnn ) on an fpga is a challenging task since dnns and cnns require a large amount of resources [ 4 ] , [ 5 ] and [ 6 ] .dnns consist of a number of hidden layers that work in parallel , and each hidden layer has a number of artificial neurons ( an ) [ 1 ] .each neuron receives signals from other neurons and computes a weighted - sum of these inputs .then , an activation function of the an is applied on this weighted - sum .one of the main purposes of the activation function is to introduce non - linearity into the network .the hyperbolic tangent is one of the most popular non - linear activation functions in dnns [ 1 ] .realizing a precise implementation of the hyperbolic tangent activation function in hardware entails a large number of additions and multiplications [ 7 ] .this implementation would badly increase the overall resources required for implementing a single an and a fully parallel dnn .therefore , approximations with different precisions and amount of resources are generally employed [ 7 ] .we propose a new high - accuracy approximation using the discrete cosine transform interpolation filter ( dctif ) [ 8 ] .the proposed dctif approximation achieves higher accuracy than the existing approximations , and it needs fewer resources than other designs when a high precision approximation is required .we also study the effect of approximating the hyperbolic tangent activation function on the performance of training and testing dnns .the rest of the paper is organized as follows : different tanh approximations are reviewed in section 2 .the operation principle of the proposed dctif approximation is described in section 3 . in section 4 ,an implementation of the proposed dctif approximation is detailed .section 5 is dedicated to the experimental results and a comparison with other approximations and discussion .finally , section 6 concludes the paper .the hardware implementation of a dnn is always constrained by the available computational resources [ 9 ] .the required computational resources to implement a dnn can be reduced by limiting the precision of the data representation [ 9 ] . 
on the other hand ,using bitwise dnns is another way to reduce the computational resources of a dnn .bitwise dnn replaces floating or fixed - point arithmetic operations by efficient bitwise operations [ 10 ] .however , this comes at the expense of the training and testing performance of the dnn .another approach to meet the constraints of the available computational resources is to approximate the activation function of the dnn .the selection of the tanh approximation accuracy as an activation function is one of the aspects that define the training and testing performance of the dnns [ 11 ] .high accuracy approximations lead to high training and testing performance of the dnn , and low accuracy approximations lead to poor dnn performance [ 11 ] .there are several approaches for the hardware implementation of the hyperbolic tangent activation function based on piecewise linear ( pwl ) , piecewise non - linear , lookup table ( lut ) and hybrid methods .all of these approaches exploit that the hyperbolic tangent function , shown in figure 1 , is negatively symmetric about the y - axis .therefore , the function can be evaluated for negative inputs by negating the output values of the same corresponding positive values and vice versa .armato et al .[ 12 ] proposed to use pwl which divides the hyperbolic tangent function into segments and employs a linear approximation for each segment . on the other hand ,zhang and his colleagues [ 13 ] used a non - linear approximation for each segment .although both methods achieve precise approximations for the hyperbolic tangent function , this comes at the expense of the throughput of the hardware implementation .lut - based approximations divide the input range into sub - ranges where the output of each sub - range is stored in a lut .leboeuf et al .[ 14 ] proposed using a classical lut and a range addressable lut to approximate the function .lut - based implementations are fast but they require more resources than pwl approximations in order to achieve the same accuracy. therefore , most of the existing lut - based methods limit the approximation accuracy to the range [ 0.02 , 0.04 ] .several authors noticed that the hyperbolic tangent function can be divided into three regions a ) pass region , b ) processing region ( pr ) and c ) saturation region , as shown in figure 1 .the hyperbolic tangent function behaves almost like the identity function in the pass region , and its value is close to 1 in the saturation region .some hybrid methods that combine luts and computations were used to approximate the non - linear pr .namin and his colleagues [ 15 ] proposed to apply a pwl algorithm for the pr . on the other hand , meher et al . [ 16 ] proposed to divide the input range of the pr into sub - ranges , and they implemented a decoder that takes the input value and selects which value should appear on the output port .finally , zamanloony et al .[ 7 ] introduced a mathematical analysis that defines the boundaries of the pass , processing and saturation regions of the hyperbolic tangent function based on the desired maximum error of the approximation .generally , activation function approximations with high error badly affect the performance of dnns in terms of their training and testing accuracies .approximations with higher accuracies are favorable in order to maintain the same learning capabilities and testing results compared to the exact activation function . 
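the error - driven split of the input range into pass , processing and saturation regions can be made concrete numerically : the pass region ends where replacing tanh(x) by x first exceeds the allowed error , and saturation starts where clamping the output to 1 stays within it . the sketch below is my own root - finding version of that analysis , not the closed - form boundaries of [ 7 ] .

```python
import numpy as np
from scipy.optimize import brentq

def tanh_region_boundaries(eps):
    """Input-range boundaries of a three-region tanh approximation with maximum
    absolute error eps: below x_pass the identity y = x is used, above x_sat the
    output is clamped to 1, and the interval in between is the processing region."""
    x_pass = brentq(lambda x: (x - np.tanh(x)) - eps, 1e-9, 5.0)
    x_sat = brentq(lambda x: (1.0 - np.tanh(x)) - eps, 1e-9, 30.0)
    return x_pass, x_sat

# for a 0.01 maximum error the pass region ends near x ~ 0.31 and
# saturation starts near x ~ 2.65
print(tanh_region_boundaries(0.01))
```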
therefore , we propose a high precision approximation of the hyperbolic tangent activation function while using a small amount of computational resources .the dct - based interpolation filter ( dctif ) interpolates data points from a number of samples of a function [ 6 ] .it was firstly introduced for interpolating fractional pixels from integer pixels in the motion compensation process of the latest video coding standard h.265 [ 6 ] .dctif can be used to approximate several non - linear functions .it interpolates values with a desired accuracy by controlling the number of samples involved in the interpolation process and the number of interpolated points between two samples .we propose to use dctif in order to approximate the hyperbolic activation function in dnns .the dct transformation used to generate dctif coefficients is defined by equation 1 , where _ l~max~ _ and _ l~min~ _ define the range of the given sample points used in the interpolation process , _ size _ is defined as ( _ l~max~ _ - _ l~min~ _ + _ 1 _ ) and the center position of a given size is _ center _ = ( _ l~max~ _ + _l~min~_)/_2_. by substituting equation 1 into the inverse dct formula defined in equation 2 , we get the dctif co - efficients generation formula for position _i+r _ as in equation 3 .as shown in figure 2 , let s assume that \{_p~2m~ _ } denotes a set of _ 2 m _ given sample points ( no . of dctif filter s tabs ) used to interpolate _p~i+r~ _ at fractional position _ i+r _ between two adjacent samples at positions_ i _ and _ i+1 _ of the function _ x(n)_. the parameter _ _ is a positive fractional number that is equal to ( 1/2^j^ ) where _ j _ is the number of interpolated points between two sample points .the parameter _ r _ is a positive integer that represents the position of the interpolated point between two sample points where it is [ 1 , 2^j^-1 ] .a fractional position value _ p~i+r~ _ is interpolated using an even number of samples when _r _ is equal to 1/2 , which means that the interpolated point is exactly between two adjacent samples . otherwise , _p~i+r~ _ is interpolated using an odd number of samples since the interpolated point is closer to one of the samples than the other .therefore , equation 3 is modified to generate the dctif co - efficients for even and odd numbers of tabs as in equations 4 and 5 , respectively .the dctif co - efficients can be smoothed using a smoothing window of size _ w _ [ 8 ] . for hardware implementation , the smoothed co- efficients are scaled by a factor of ( 2^s^ ) and rounded to integers , where _ s _ is a positive integer value .in addition , the scaled co - efficients should be normalized which means that their summation is equal to 2^s^. consequently , equation 6 defines the final dctif co - efficients . table 1 shows the generated dctif co - efficient values using different numbers of dctif tabs , _r _ values and scaling factors by substituting in equation 6 .the co - efficient values exihibit similarity among some _r _ positions .for example , the _ i+1/4 _ and _ i+3/4 _ positions have the same set of co - efficient values . 
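because equations 1 - 6 did not survive extraction , the sketch below rebuilds only the basic construction from the standard dct - ii / inverse - dct pair : transform the 2 m integer samples , evaluate the inverse transform at the fractional index , read off per - sample weights , and scale them by 2^s with integer renormalisation . the smoothing window and the exact boundary conventions of the paper are omitted , so the integers produced here need not reproduce table 1 .

```python
import numpy as np

def dctif_weights(num_taps, frac, scale_bits=6):
    """DCT-based interpolation weights at fractional position frac (0 < frac < 1)
    between the two central samples of a num_taps window: take the DCT-II of the
    window and evaluate the inverse transform at the fractional index."""
    N = num_taps
    t = (N // 2 - 1) + frac                  # fractional index inside the window
    n = np.arange(N)
    w = np.zeros(N)
    for k in range(N):
        ck = 0.5 if k == 0 else 1.0
        w += (2.0 / N) * ck * np.cos(np.pi * k * (n + 0.5) / N) \
                            * np.cos(np.pi * k * (t + 0.5) / N)
    w_int = np.round(w * (1 << scale_bits)).astype(int)
    w_int[np.argmax(np.abs(w_int))] += (1 << scale_bits) - w_int.sum()   # make the taps sum to 2^s
    return w_int

# e.g. integer weights of a 4-tap filter at the half-sample position, scaled by 2^6
print(dctif_weights(4, 0.5))
```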
moreover , at the _i+1/2 _ position , the set of co - efficients is symmetric about the center element .these properties can be exploited to reduce the implementation cost .a dctif approximation error analysis is presented in figure 3 .it can be seen that the dctif approximation error increases for small _ _ values .although a large _ _ value means that fewer points need to be interpolated , this comes at the expense of memory resources since more samples must be stored .a large value of _ s _ increases the accuracy of the approximation , but increases complexity as well because the interpolation coefficients take larger values , potentially expressed with more signed digits as shown in table 1 .moreover , using more dctif tabs comes at the expense of the computational resources as shown in table 2 .the proposed dctif approximation divides the input range of the hyperbolic tangent function into pass , processing and saturation regions as shown in figure 1 .the boundaries of these regions are computed based on the targeted maximum error of the approximation [ 7 ] .the output is equal to the input when the input is in the pass region .the proposed dctif approximation is utilized for the inputs in the processing region . in the saturation region , all the bits of the output port are set to one which represents the maximum value of the output signal . _ value and the scaling parameter_ s _ ] [ cols="^ " , ] + & 0.04 & 0.43279 & & 0.04 & 10.7 + & 0.02 & 0.78250 & & 0.02 & 16.4 + & 0.01 & 0.78976 & & 0.01 & 23.1 + & 0.001 & 0.84850 & & 0.001 & 31.1 + & 0.0001 & 0.87712 & & 0.0001 & 68.0 + & 0 & 0.90287 & & 0 & 68.1 + & 0.04 & 0.77945 & & 0.04 & 86.1 + & 0.02 & 0.80033 & & 0.02 & 86.9 + & 0.01 & 0.80068 & & 0.01 & 86.9 + & 0.001 & 0.84581 & & 0.001 & 86.9 + & 0.0001 & 0.85014 & & 0.0001 & 94 .. 1 + & 0 & 0.86097 & & 0 & 94.1 +the authors would like to thank ahmed el - sheikh , awny m. el - mohandes and hamza bendaoudi for their insightful comments on our work .d. hunter , h. yu , m. s. pukish , j. kolbusz , and b. m. wilamowski , `` selection of proper neural network sizes and architectures - a comparative study , '' in ieee transactions on industrial informatics , vol .228 - 240 , 2012 .s. himavathi , d. anitha , and a. muthuramalingam , `` feedforward neural network implementation in fpga using layer multiplexing for effective resource utilization , '' in ieee transactions on neural networks , vol .18 , no .3 , pp .880 - 888 , 2007 .j. qiu , j. wang , s. yao , k. guo , b. li , e. zhou , j. yu , t. tang , n. xu , s. song , and y. wang , `` going deeper with embedded fpga platform for convolutional neural network , '' in proceedings of the international symposium on field - programmable gate arrays .26 - 35 , 2016 . c. zhang , p. li , g. sun , y. guan , b. xiao , and j. cong , `` optimizing fpga - based accelerator design for deep convolutional neural networks , '' in proceedings of the international symposium on field - programmable gate arrays .161 - 170 , 2015 .b. zamanlooy , and m. mirhassani , `` efficient vlsi implementation of neural networks with hyperbolic tangent activation function , '' in ieee transactions on very large scale integration ( vlsi ) systems , vol .39 - 48 , 2014 .k. ugur , a. alshin , e. alshina , f. bossen , w. j. han , and j. h. park , `` motion compensated prediction and interpolation filter design in h. 265/hevc , '' in ieee journal of selected topics in signal processing , vol . 7 , no . 6 , pp .946 - 956 , 2013 .k. basterretxea , j. m. tarela , i. 
del campo , and g. bosque , `` an experimental study on nonlinear function computation for neural / fuzzy hardware design , '' in ieee transactions on neural networks , vol .266 - 283 , 2007 .a. armato , l. fanucci , e. p. scilingo , and d. de rossi , `` low - error digital hardware implementation of artificial neuron activation functions and their derivative , '' in microprocessors and microsystems , vol .557 - 567 , 2011 .k. leboeuf , a. h. namin , r. muscedere , h. wu , and m. ahmadi , `` high speed vlsi implementation of the hyperbolic tangent sigmoid function , '' in convergence and hybrid information technology international conference on .ieee , vol .1 , pp . 1070 - 1073 , 2008 .a. h. namin , k. leboeuf , r. muscedere , h. wu , and m. ahmadi , `` efficient hardware implementation of the hyperbolic tangent sigmoid function , '' in international symposium on circuits and systems .ieee , pp .2117 - 2120 , 2009 .
implementing an accurate and fast activation function at low cost is a crucial aspect of the implementation of deep neural networks ( dnns ) on fpgas . we propose a high - accuracy approximation approach for the hyperbolic tangent activation function of artificial neurons in dnns . it is based on the discrete cosine transform interpolation filter ( dctif ) . the proposed architecture combines simple arithmetic operations on stored samples of the hyperbolic tangent function and on input data . the proposed dctif implementation achieves two orders of magnitude greater precision than previous work while using the same or fewer computational resources . various combinations of dctif parameters can be chosen to trade off the accuracy and complexity of the hyperbolic tangent approximation . in one case , the proposed architecture approximates the hyperbolic tangent activation function with a maximum error of 10^-5 while requiring only 1.52 kbits of memory and 57 luts of a virtex-7 fpga . we also discuss how the accuracy of the activation function affects the performance of dnns in terms of their training and testing accuracies . we show that a high - accuracy approximation can be necessary in order to maintain the same dnn training and testing performance realized by the exact function .
with a widespread use of simple motion - tracking devices e.g. nintendo wii remoteor accelerometer units in cell phones , the importance of motion - based interfaces in human - computer interaction ( hci ) systems has become unquestionable .commercial success of early motion - capture devices led to the development of more robust and versatile acquisition systems , both mechanical , e.g. cyberglove systems cyberglove , measurand shapewrap , dgtech dg5vhand and optical e.g. microsoft kinect , asus wavi xtion .also , the interest in the analysis of a human motion itself , , has increased in the past few years . while problems related to gesture recognition received much attention , an interesting yet less explored problem is the task of recognising a human based on his gestures .this problem has two main applications : the first one is the creation of a gesture - based biometric authentication system , able to verify access for authenticated users .the other task is related to personalisation of applications with a motion component . in such scenarioan effective classifier is required to recognise between known users .the goal of our experiment is to classify humans based on motion data in the form of natural hand gestures .today s simple motion - based interfaces usually limit users options to a subset of artificial , well distinguishable gestures or just detection of the presence of body motion .we argue that an interface should be perceived by the users as natural and adapt to their needs . while modern motion - capture systems provide accurate recordings of human body movement , creation of a hci interface based on acquired data is not a trivial task .many popular gestures are ambiguous thus the meaning of a gesture is usually not obvious for an observer and requires parsing of a complex context .there are differences in body movement during the execution of a particular gesture performed by different subjects or even in subsequent repetitions by the same person .some gestures may become unrecognisable with respect to a particular capturing device , when important motion components are unregistered , due to device limitations or its suboptimal calibration .we aim to answer the question if high - dimensional hand motion data is distinctive enough to provide a basis for personalisation component in a system with motion - based interface . in our works we concentrated on hand gestures , captured with two mechanical motion - capture systems .such approach allows to experiment with reliable multi - source data , obtained directly from the device , without additional processing .we used a gesture database of twenty two natural gestures performed by a number of participants with varying execution speeds .the gesture database is described in .we compare the effectiveness of three established classifiers namely linear discrimination analysis ( lda ) , support vector machines ( svm ) and k - nearest neighbour ( k - nn ) .the following experiment scenarios are considered in this paper : * human recognition based on the performance of one selected gesture ( e.g. ` waving a hand',`grasping an object ' ) .user must perform one specified gesture to be identified .* the scenario when instead of one selected gesture , a set of multiple gestures is used both for training and for testing .user must perform one of several gestures to be identified . 
* the scenario when different gestures are used for training and for testing of the classifier .user is identified based on one of several gestures , none of which were used for training the classifier .the paper is organized as follows : section 2 ( related work ) presents a selection of works on similar subjects , section 3 ( method ) describes the experiment , results are presented in section 4 ( results ) , along with authors remarks on the subject .existing approaches to the creation of an hci interface that are based on dynamic hand gestures can be categorized according to : the motion data gathering method , feature selection , the pattern classification technique and the domain of application .hand data gathering techniques can be divided into : device - based , where mechanical or optical sensors are attached to a glove , allowing for measurement of finger flex , hand position and acceleration , e.g. , and vision - based , when hands are tracked based on the data from optical sensors e.g. .a survey of glove - based systems for motion data gathering , as well as their applications can be found in , while provides a comprehensive analysis of the integration of various sensors into gesture recognition systems .while non - invasive vision - based methods for gathering hand movement data are popular , device - based techniques receive attention due to widespread use of motion sensors in mobile devices .for example presents a high performance , two - stage recognition algorithm for acceleration signals , that was adapted in samsung cell phones .extracted features may describe not only the motion of hands but also their estimated pose .a review of literature regarding hand pose estimation is provided in .creation of a gesture model can be performed using multiple approaches including hidden markov models e.g. or dynamic bayesian networks e.g. . for hand gesture recognition ,application domains include : sign language recognition e.g. , robotic and computer interaction e.g. , computer games e.g. and virtual reality applications e.g. . relatively new application of hci elements are biometric technologies aimed to recognise a person based on their physiological or behavioural characteristic .a survey of behavioural biometrics is provided in where authors examine types of features used to describe human behaviour as well as compare accuracy rates for verification of users using different behavioural biometric approaches .simple gesture recognition may be applied for authentication on mobile devices e.g. in authors present a study of light - weight user authentication system using an accelerometer while a multi - touch gesture - based authentication system is presented in .typically however , instead of hand motion more reliable features like hand layout or body gait are employed . despite their limitations , linear classifiers proved to produce good results for many applications , including face recognition and speech detection . in ldais used for the estimation of consistent parameters to three model standard types of violin bow strokes .authors show that such gestures can be effectively presented in the bi - dimensional space . 
in , the lda classifierwas compared with neural networks ( nn ) and focused time delay neural networks ( tdnn ) for gesture recognition based on data from a 3-axis accelerometer .lda gave similar results to the nn approach , and the tdnn technique , though computationally more complex , achieved better performance .an analysis of lda and the pca algorithm , with a discussion about their performance for the purpose of object recognition is provided in .svm and k - nn classifiers were used in for the purpose of visual category recognition .a comparison of the effectiveness of these method is classification of human gait patterns is provided in .thorough analysis of a gesture dataset used in the experiments , along with a discussion on the benefits of naturality of hci interface elements can be found in .pca analysis of the same dataset together with visualization of eigengestures can be found in .the general idea is to recognise a gesture performer .experiment data consist of data from ` iitis gesture database ' that contains natural gestures performed by multiple participants .three classifiers will be used .pca will be performed on the data to reduce its dimensionality .a set of twenty - two natural hand gesture classes from ` iitis gesture database ' , tab .[ id : table : gesturelist ] , was used in the experiments .gestures used in this experiments were recorded with two types of hardware ( see fig .[ fig : gestures_intro ] ) .first one was the dgtech dg5vhand motion capture glove , containing 5 finger bend sensors ( resistance type ) , and three - axis accelerometer producing three acceleration and two orientation readings .sampling frequency was approximately 33 hz .the second one was cyberglove systems cyberglove with a cyberforce system for position and orientation measurement .the device produces 15 finger bend , three position and four orientation readings with a frequency of approximately 90 hz . during the experiment ,each participant was sitting at the table with the motion capture glove on their right hand . before the start of the experiment ,the hand of the participant was placed on the table in a fixed initial position . at the command given by the operator sitting in front of the participant ,the participant performed the gestures .each gesture was performed six times at natural pace , two times at a rapid pace and two times at a slow pace .gestures number 2 , 3 , 7 , 8 , 10 , 12 , 13 , 14 , 15 , 17 , 18 , 19 , 21 are periodical and in their case a single performance consisted of three periods .the termination of data acquisition process was decided by the operator .figure [ fig : lda_projection ] presents the result of performing lda ( further described in subsection [ lda ] ) on the dataset : projection of the dataset on the first two components of for both devices .it can be observed that many gestures are linearly separable . in the majority of visible gesture classes ,elements are centred around their respectable mean , with an almost uniform variance .potential conflicts for small number of gestures may be observed for local regions of the projected data space .a motion capture recording performed with a device with sensors generates a time sequence of vectors . for the purpose of our workeach recording was linearly interpolated and re - sampled to samples , generating data matrices \in{\mathbb{r}}^{m\times t} ] be the data matrix , where are data vectors with zero empirical mean .the associated covariance matrix is given by . 
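the fixed - length representation just described ( every recording linearly interpolated to a common number of time samples and flattened into one feature vector ) can be written in a few lines ; the value t = 64 below is a placeholder rather than the sample count used in the paper .

```python
import numpy as np

def gesture_feature_vector(recording, T=64):
    """Linearly resample a variable-length recording (num_samples x num_sensors)
    to T time steps per sensor channel and flatten it into a single feature
    vector for PCA and classification."""
    recording = np.asarray(recording, dtype=float)
    t_old = np.linspace(0.0, 1.0, recording.shape[0])
    t_new = np.linspace(0.0, 1.0, T)
    resampled = np.column_stack(
        [np.interp(t_new, t_old, recording[:, s]) for s in range(recording.shape[1])]
    )
    return resampled.ravel()
```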
by performing eigenvalue decomposition of such that eigenvalues of are ordered in descending order obtains the sequence of principal components ] linear discriminant analysis thoroughly presented in is a supervised , discriminative technique producing an optimal linear classification function , which transforms the data from dimensional space into a lower - dimensional classification space .let the _ between - class scatter matrix _ be defined as follows where denotes mean of class means .e. , and is the number of samples in class . let _ within - class scatter matrix _ be where is the total number of the samples in all classes .the eigenvectors of matrix ordered by their respective eigenvalues are called the canonical vectors . by selecting first canonical vectors and arranging them row by row as the projection matrix any vector be projected onto a lower - dimensional feature space .using lda one can effectively apply simple classifier e.g. for -class problem .a vector is classified to class if following inequality is observed all . denotes euclidean norm .note that when the amount of available data is limited , lda technique may result in the matrix that is singular . in this caseone can use moore - penrose pseudoinverse .matrix is replaced by moore - penrose pseudoinverse matrix and canonical vectors are eigenvectors of the matrix .the -nearest neighbour ( -nn ) method classifies the sample by assigning it to the most frequently represented class among k nearest samples .it may be described as follows .let be a training set where denotes class labels , and , are feature vectors .for a nearest neighbour classification , given a new observation , first a nearest element of a learning set is determined with euclidean distance and resulting class label is .usually , instead of only one observation from , most similar elements are considered .therefore , counts of class labels for are determined for each class where denotes dirac delta .the class label is determined as most common class present in the results note that in case of multiple classes or single class and even there may be a tie in the top class counts ; in that case results may be dependent on data order and behaviour of implementation .support vector machines ( svm ) presented in are supervised learning methods based on the principle of constructing a hyperplane separating classes with the largest margin of separation between them .the margin is the sum of distances from the hyperplane to closest data points of each class .these points are called support vectors .svms can be described as follows .let be a set of linearly separable training samples where denotes class labels .we assume the existence of a -dimensional hyperplane ( denotes dot product ) separating in .the distance between separating hyperplanes satisfying and is .the optimal separating hyperplane can be found by minimising under the constraint for all .when the data is not linearly separable , a hyperplane that maximizes the margin while minimizing a quantity proportional to the misclassification errors is determined by introducing positive slack variables in the equation [ equation : svm:1 ] , wchich becomes : and the equation ( [ equation : svm:2 ] ) is changed into : where is a penalty factor chosen by the user , that controls the trade off between the margin width and the misclassification errors .when the decision function is not a linear function of the data , an initial mapping of the data into a higher dimensional euclidean space is performed as and the 
linear classification problem is formulated in the new space .the training algorithm then only depends on the data through dot product in of the form .the mercer s theorem allows to replace by a positive definite symmetric kernel function , e.g. gaussian radial - basis function , for .our objective was to evaluate the performance of user identification based on performed gestures . to this end , in our experiment class labelsare assigned to subsequent humans performing gestures ( performers ids were recorded during database acquisition ) .three experiment scenarios were investigated , differing by the range of gestures used for recognition .the three classification methods described before were used , evaluated in two - stage -fold cross validation scheme .three scenarios related to data labelling were prepared : * scenario a. human classification using specific gesture .each gesture was treated as a separated case , and a classifier was created and verified using samples from this particular gesture .* scenario b. human classification using a set of known gestures .data from multiple gestures was used in the experiment . whenever the data was divided into a teaching and testing subset , proportional amount of samples for each gesture were present in both sets . *scenario c. human classification using separate gesture sets . in this scenariothe data from multiple gestures was used , similarly to scenario b. however , teaching subset was created using different gestures than a testing subset .the three classifiers were used , with the following parameter ranges : * lda , with number of features ; * -nn , with number of neighbours ; * svm , with radial basis function ( rbf ) and .common parameters values found by cross - validation : * lda , features * -nn , neighbours for scenarios b , c , for scenario c ; * svm , . the parameter selection and classifier performance evaluationwas performed by splitting the available data into training and testing subset in two - stage -fold cross validation ( c.v . )scheme , with .inner c.v . stage corresponds to grid search parameter optimization and model selection .the outer stage corresponds to final performance evaluation .the pca was performed on the whole data set before classifier training .the amount of principal components was chosen empirically as ..classification accuracy ( % ) for three considered scenarios . [cols="^,^,^,^,^,^,^ " , ] the accuracy of the classifiers for three discussed scenarios is presented in tab .[ id : table : mean_accuracy ] .confusion matrices for experiments b , c are presented on figure [ fig : confusion_matrices ] .high classification accuracy can be observed for scenarios and when a classifier is provided with training data for specific gesture . 
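the two - stage cross - validation protocol described above maps naturally onto a scikit - learn pipeline ( the library credited in the acknowledgements ) ; here pca is fitted inside the pipeline on each training split , which is a slightly more conservative choice than applying it to the whole data set beforehand . the retained - component count , parameter grids and fold count are placeholders , not the values used in the experiments .

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def evaluate_classifiers(X, y, n_pca=40, k_folds=5):
    """Two-stage k-fold evaluation: an inner grid search selects the classifier
    parameters, the outer loop reports mean accuracy."""
    candidates = {
        "knn": (KNeighborsClassifier(), {"clf__n_neighbors": [1, 3, 5, 7]}),
        "svm": (SVC(kernel="rbf"), {"clf__C": [1, 10, 100], "clf__gamma": ["scale", 0.01, 0.1]}),
    }
    outer = StratifiedKFold(n_splits=k_folds, shuffle=True, random_state=0)
    results = {}
    for name, (clf, grid) in candidates.items():
        pipe = Pipeline([("pca", PCA(n_components=n_pca)), ("clf", clf)])
        search = GridSearchCV(pipe, grid, cv=k_folds)                    # inner CV: parameter selection
        results[name] = cross_val_score(search, X, y, cv=outer).mean()   # outer CV: accuracy estimate
    return results
```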
in scenario c , however , the accuracy corresponds to a situation when a performer is recognised based on an unknown gesture .while the classification accuracy is lower than in previous scenarios , it should be noted that the classifier was created using a limited amount of high - dimensional data .the difference between the accuracy for both devices can be explained by significantly higher precision of a cyberglove device , where hand position is captured using precise rig instead of an array of accelerometers .c| c c c c & & lda & -nn & svc + & & & & + & & & & + & & & & + & & & & results of the experiment show that even linear classifiers can be successfully employed for recognition of human performers based on their natural gestures .relatively high accuracy for experiment c indicates that the general characteristics of a human natural body movement is highly discriminative , even for different gesture patterns .while mechanical devices used in experiments provide accurate measurements of body movements , they may be replaced by less cumbersome data gathering device e.g. microsoft kinect .experiments confirm that natural hand gestures are highly discriminative and allow for an accurate classification of their performers .applications of such solution allow e.g. to personalise tools and interfaces to suit the needs of their individual users .however , a separate problem lies in the detection of particular gesture performer based on general hand motion .such task requires deeper understanding of motion characteristics as well as identification of individual features of human motion .the work of m. romaszewski and p. gomb has been partially supported by the polish ministry of science and higher education project nn516482340 `` experimental station for integration and presentation of 3d views '' .work by p. gawron was partially suported by polish ministry of science and higher education project nn516405137 `` user interface based on natural gestures for exploration of virtual 3d spaces '' .open source machine learning library scikit - learn was used in experiments .we would like to thank z. puchaa and j. miszczak for fruitful discussions .hyeran byun and seong - whan lee .applications of support vector machines for pattern recognition : a survey . in _ proceedings of the first international workshop on pattern recognition with support vector machines _ , svm 02 , pages 213236 , london , uk , uk , 2002 .springer - verlag .h. cooper , b. holt , and r. bowden . .in thomas b. moeslund , adrian hilton , volker krger , and leonid sigal , editors , _ visual analysis of humans : looking at people _ , chapter 27 , pages 539 562 .springer , october 2011 .p. gawron , p. gomb , j.a .miszczak , and z. puchaa . .in t. czachrski , s. kozielski , and u. staczyk , editors , _ man - machine interactions 2 _ , volume 103 of _ advances in intelligent and soft computing _ , pages 4956 .springer berlin / heidelberg , 2011 .p. gomb , m. romaszewski , s. opozda , and a. sochan . choosing and modeling the hand gesture database for a natural user interface . in e. efthimiou , g. kouroupetroglou , and s .- e .fotinea , editors , _ gesture and sign language in human - computer interaction and embodied communication _ , volume 7206 of _ lecture notes in computer science _ , pages 2435 .springer berlin heidelberg , 2012 .f. pedregosa , g. varoquaux , a. gramfort , v. michel , b. thirion , o. grisel , m. blondel , p. prettenhofer , r. weiss , v. dubourg , j. vanderplas , a. passos , d. cournapeau , m. brucher , m. 
perrot , and e. duchesnay . ., 12:28252830 , 2011 .n. rasamimanana , e. flty , and f. bevilacqua . .in s. gibet , n. courty , and j .- f .kamp , editors , _ gesture in human - computer interaction and simulation _ , volume 3881 of _ lecture notes in computer science _ , pages 145155 .springer berlin / heidelberg , 2006 .wall , a. rechtsteiner , and l.m .in d. p. berrar , w. dubitzky , and m. granzow , editors , _ a practical approach to microarray data analysis _ , chapter 5 , pages 91109 .kluwel , norwell , m.a . , march 2003 .x. wang and x. tang . .in _ computer vision and pattern recognition , 2004 .cvpr 2004 .proceedings of the 2004 ieee computer society conference on _ , volume 2 , pages ii259ii265 vol.2 , june-2 july 2004 .h. zhang , a.c .berg , m. maire , and j. malik . .in _ proceedings of the 2006 ieee computer society conference on computer vision and pattern recognition - volume 2 _ , cvpr 06 , pages 21262136 , washington , dc , usa , 2006 .ieee computer society .x. zhang , x. chen , w .- h .wang , j .- h .yang , v. lantz , and k .- q .in _ proceedings of the 14th international conference on intelligent user interfaces _ , iui 09 , pages 401406 , new york , ny , usa , 2009 .
the goal of this work is the identification of humans based on motion data in the form of natural hand gestures . in this paper , the identification problem is formulated as classification with classes corresponding to persons' identities , based on recorded signals of performed gestures . the identification performance is examined with a database of twenty - two natural hand gestures recorded with two types of hardware and three state - of - the - art classifiers : linear discriminant analysis ( lda ) , support vector machines ( svm ) and k - nearest neighbour ( k - nn ) . results show that natural hand gestures allow for effective human classification . keywords : gestures ; biometrics ; classification ; human identification ; lda ; k - nn ; svm
for various reasons , such as camera optics , alignment errors , filter irregularities , non - flat ccds , and ccd manufacturing defects , the mapping of the square array of square pixels of a detector onto the tangent - plane projection of the sky requires a non - linear transformation .positions measured within the pixel grid need to be corrected for geometric distortion ( gd ) before they can be accurately compared with other positions in the same image , or compared with positions measured in other image . while almost all scientific programs must make use of the distortion solution , most are relatively insensitive to it .so long as each detector pixel is mapped to within a fraction of a pixel of its true location in a distortion - corrected reference frame , the resampling accuracy is not compromised .for this reason , the formal requirement for distortion calibration for wfc3 was 0.2 pixel to enable ` multidrizzle ` in the calibration pipeline to generate stacked associations .kozhurina - platais et al .( 2009 ; isr 2009 - 33 ) and mcclean et al .( 2010 ; isr 2009 - 041 ) have demonstrated that the current official solution is accurate to 0.05 pixel ( 2 mas ) , clearly meeting the needs of the pipeline .scientific programs that are focused on astrometry make more stringent demands on the distortion solution .these programs must analyze the raw , un - resampled pixels of the distorted images in order to attain the highest possible position accuracy .similarly , these programs require a more accurate distortion solution to relate measured positions to one another .in general , positions can be measured to a precision of 0.01 pixel for a well - exposed star , so we would like to have a distortion solution that is at least this accurate .such a precision is well below the formal calibration requirements , but if it can be shown that the solution is stable at that level , then an improved calibration will enable many astrometry - related programs .in a previous paper ( bellini & bedin 2009 , hereafter paper i ) , we used the limited set of dithered exposures taken during smov ( servicing mission observatory verification ) and an existing acs / wfc astrometric reference frame as a flat - field to derive a set of 3-order - polynomial correction coefficients to represent the geometric distortion in wfc3/uvis .the solution was derived independently for each of the two ccds for each of the three broad - band ultraviolet filters f225w , f275w and f336w .we found that by applying our correction it was possible to remove the gd over the entire area of each chip to an average 1-d accuracy of 0.025 pixels ( i.e. , 1 mas ) . 
at the time , the lack of sufficient observations collected at different roll angles and dithers prevented a more accurate self - calibration - based solution .the calibration data collected over the past year has enabled us to undertake the next step in modeling the gd of wfc3/uvis detectors .the large number of dithers and multiple roll angles now available allow us to construct a 2010-based reference frame that is free of the proper - motion ( pm ) errors inherent in the previous 2002-epoch acs catalog .the calibration data has enabled us to extend the gd solution to the rest of the major broad - band filters and to improve the accuracy of the solution to about 0.008 pixel .this is below the precision with which we can measure a well - exposed stars in a single exposure .improvements beyond this are unlikely , as breathing and other temporal variations make a more accurate `` average '' solution unnecessary . as we will see in sect .[ sec2 ] , the uvis chips contain distortion with very complex variations on relatively small spatial scales , which would be very unwieldy to model with simple polynomials .therefore we chose to model the gd with two components : a simple 3-order polynomial and an empirically - derived table of residuals ( the look - up table ) .these look - up tables ( one per chip / filter combination ) simultaneously absorb _i ) _ the complicated effects caused by the non - perfectly uniform and flat surfaces of the filters ( for an example in the case of acs / wfc , see anderson & king 2006 , hereafter ak06 ) and _ ii ) _ the astrometric imprint of manufacturing defects on the uvis ccd pixel arrays .this approach was pursued with success on both acs / hrc and acs / wfc detectors ( anderson 2002 , ak06 ) , for which the gd correction based on polynomials plus look - up tables is still the best available to date .the data used in this paper come from the cen calibration field , and were collected under programs pid-11452 ( pi - quijano ) and pid-11911 ( pi - sabbi ) during 4 epochs : 2009 july 15 ( 2009.54 ) , 2010 january 1214 ( 2010.04 ) , 2010 april 2829 ( 2010.33 ) and 2010 june 30july 4 ( 2010.51 ) . duringthe january and april 2010 runs guide - star failures made it necessary to repeat some of the exposures of pid-11911 .moreover , between the first and the second epoch the telescope focus changed significantly . the summary of the observations used in this work is given in table [ tab1 ] .in addition to large dithers of the size of pixels ( i.e. ) , at least 2 orientations were available for each filter .star positions and fluxes were obtained as described in paper i , using a spatially - variable psf - fitting method , which will be described in detail in a subsequent paper of this series .the gd solution was constructed in three stages . we began by treating each chip independently .we first solved for the 3-order polynomial that provides the lion s share of the correction for each chip . after subtracting this global component of the solution , we were then able to see and model the fine - scale component of the solution , caused by detector and filter irregularities . 
finally , with the solution for each chip nailed down , we found the global linear parameters that mapped both chips into a convenient meta - chip reference frame .in paper i we derived a 3-order polynomial solution for the gd for filters f225w , f275w and f336w ; here our aim in this section is to obtain the polynomial solution for seven other broad - band filters for which suitable observations are available , namely : f390w , f438w , f555w , f606w , f775w , f814w and f850lp . whereas in paper i we had to use an astrometric reference catalog that was taken several years prior in order to extract the gd solution, we can now perform a self - calibration of the gd thanks to the improved number of images at different roll angles offered by the new data set .self - calibrations can often be more accurate than calibrations that reference a standard field , since stars in standard fields can move due to proper motions and since the brightness range where stars in the catalog are well measured may not correspond to the brightness range where stars in the calibration exposures are well measured .furthermore , the images may have different crowding issues and measurement qualities from the catalog .we followed the prescriptions given in anderson & king ( 2003 ) for wfpc2 and subsequently used by the same authors to derive the gd correction for the acs / hrc ( anderson & king 2004 ) and for the acs / wfc ( ak06 ) .the same strategy was also used by two of us to calibrate a ground - based instrument ( see bellini & bedin 2010 for details ) .briefly , we started with the f336w gd - solution of paper i as first guess to correct star positions for the seven redward of f336w ( from f390w to f850lp ) and created a master frame for each filter independently .we then performed the iterative procedure described in paper i to improve the polynomial coefficients . with a better gd solution ,we then re - constructed the master frames and repeated the entire process three times . a fourth repetition of the procedure provided negligible improvement . the 3-order polynomial coefficients for all 10 of the broad - band filters are hard - coded in the `fortran ` subroutine available at the website described given in section [ sec : conc ] .as an example , the coefficients for filter f606w are reported in table [ tab2 ] ( coefficients for all the ten filters are shown in table a in the electronic version of the paper ) .the next step in the procedure was to examine the residuals from the polynomial solution . to do thiswe transformed the master - frame position for each star back into the raw frame of each exposure using a conformal linear transformation and the inverse distortion solution .the residual was then the difference between where the star was observed in the frame and where it should have been in that frame , according to the master frame . 
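as a rough illustration of the core step in each iteration of this self - calibration , a third - order two - dimensional polynomial mapping from raw positions to master - frame positions ( transformed back into the frame of the exposure ) can be fitted by linear least squares . the sketch below is not the actual reduction code : the normalization of the coordinates , the handling of the linear terms and the outlier rejection used in practice are all omitted .

```python
# minimal sketch: fit x_corr = sum_{i+j<=3} a_ij * u^i * v^j (and likewise for
# y_corr) by linear least squares, given raw positions (x_raw, y_raw) and the
# reference positions (x_ref, y_ref) predicted from the master frame.
import numpy as np

def poly_terms(x, y, order=3):
    """Design matrix with all monomials x^i * y^j for i + j <= order."""
    cols = [(x ** i) * (y ** j)
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_distortion(x_raw, y_raw, x_ref, y_ref, order=3):
    A = poly_terms(x_raw, y_raw, order)
    cx, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
    cy, *_ = np.linalg.lstsq(A, y_ref, rcond=None)
    return cx, cy

def apply_distortion(x_raw, y_raw, cx, cy, order=3):
    A = poly_terms(x_raw, y_raw, order)
    return A @ cx, A @ cy

# after each fit the corrected positions are used to rebuild the master frame
# and the fit is repeated (three to four iterations, as described above).
```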
in figure[ fig : res_before ] we plot this residual as a function of raw chip coordinate .each black dot represents a single star measured in single exposure .we combine all the exposures together to see the overall trends .individual residuals were then averaged within small ( -pixel size ) cells ( conveniently defined as described in the next section ) and are plotted as arrow vectors ( magnified by a factor of 2500 ) .these residuals exhibit a pattern with unexpected abrupt discontinuities ( denoted with the solid red lines ) .we note that while some of the large - scale trends could be partially removed by adopting higher - order polynomials , these discontinuities , which have about the same amplitude as the remaining large - scale trends ( .02 pixels ) , would still remain .these residuals are very similar to those seen in kozhurina - platais et al .( 2010 ) .it is interesting to note that these .02 pixel systematic trends are perfectly consistent with the larger - than - expected residual dispersion already noted in paper i. these fine - scale trends were simply washed out by the large internal motions of the cen stars ( .15 wfc3/uvis pixel ) over the 7-year baseline between the reference - frame observations of go-9442 in 2002 and the wfc3/uvis observations of pid-11452 in 2009 .the top panel of fig .[ fig : fetta ] shows the residuals plotted against the coordinates , for stars measured within a 100-pixel - tall strip , centered at , extracted from chip[1 ] .this is the bottom chip , and the first extension in the ` _ flt ` file .this chip is named uvis2 , but to avoid ambiguity ( and to maintain the convention in paper i ) we will refer to it with brackets , according to its order within the ` fits`-image extensions . ] . constraining the residuals within a vertical strip highlights the sharp changes in the residual trend .it is important to note that these discontinuities are present in _ both _ axes of the wfc3/uvis detectors and not only along a single axis , as was the case for wfpc2 ( anderson & king 1999 ) or acs / wfc ( anderson 2002 ) . the bottom panel of fig .[ fig : fetta ] shows that these boundaries show up as single - row excursions of up to % in the f814w flat fields .this was also seen for wfpc2 and acs / wfc .these discontinuities can be explained as small manufacturing defects in the ccds , analogous to those found for the wfpc2 and acs / wfc detectors .these defects arose from an imperfect alignment between the silicon wafer and the mask used to generate the ccds pixel boundaries during lithographic projection ( see kozhurina - platais et al .indeed , at the location where these repositioning errors are found , pixels are wider or narrower in one direction . as a consequence ,these pixels are respectively brighter or fainter on the flat field , since they collect more or less light .this also leads to the observed astrometric discontinuities .the -axis pattern of these lithographic features on wfc3/uvis ccds are a single pixel wide in the flat field and have a period of 675 pixels along the axis for both chips , extending back and forth from the central position . , pixels . 
]note that this implies that we are left with the first and last 23.5-pixel - wide vertical strips where the discontinuity is repeated .the horizontal feature along the axis consists of a single 2-pixel - wide discontinuity , centered at pixels for chip[1 ] and at pixels for chip[2 ] .these discontinuities are located symmetrically with respect to the gap between the two chips ( horizontal solid red lines in fig .[ fig : res_before ] ) . a careful look at fig .[ fig : fetta ] reveals hints of finer discontinuities on both astrometry ( top panel ) and flat field ( bottom panel ) , but at much lower amplitudes ( a few thousandths of a pixel and % ) , however it s hard to assess their significance .our aim here was to find a simple correction of the gd following the basic principle : `` we see a systematic error , and we empirically find a correction for it '' .we therefore decided to keep a polynomial of the third order , as we did in paper i , to remove most of the gd ( down to the mas level ) , and then to use a single look - up table ( one for each chip and filter ) to correct all the smaller - scale positional systematic errors .this approach was able to provide accuracies down to 0.008 pixel ( mas ) , as we will show in the next section .we set up the look - up table as follows .first we defined boundaries alongside the lithographic feature discontinuities ( red solid lines in fig . [fig : res_before ] ) .we subdivided each of the 675--pixel - wide regions into six 112.5-pixel - wide sub - regions , for a total of 36 such subdivisions plus two 23.5-pixel - wide regions at the left and right edges of each chip . to maintain a similar sampling along the axis, we defined 18 sub - divisions . asthe horizontal component of the lithographic feature divides the two chips in two parts of 911 and 1140 pixels tall , we made 8 subdivisions for the 911--pixel one ( 113.875 pixels each ) and 10 for the other ( 114 pixels each ) . at the endwe produced an array of almost - square cells , plus two 23.5-pixel - wide strips at the short edges of each chip , each made of 18 rectangular cells ( grey dashed lines in figs .[ fig : res_before ] ) .cell dimensions were ultimately dictated by the necessity to have enough grid - points to finely sample the gd and to have an adequate number of stars within each cell to robustly measure the value of the grid points in the look - up table .we always had more than 30 stars in each cell to constrain the value of the table , even for the filters with the fewest number of well - exposed stars .typically we had well over 100 stars per cell .figure [ fig : scheme ] shows an example of the geometry adopted for the look - up tables .thick solid lines mark detector edges , dashed lines identify lithographic discontinuities , while dotted lines highlight cell borders .we used stars within each cell to compute -clipped median positional residuals and , which are assigned to the corresponding grid point ( open circles ) .when a cell adjoins either detector edges or lithographic discontinuities , the grid point is displaced to the edge of the cell , as shown .we use a semicircle to indicate when a grid point corresponds to a discontinuity . 
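a schematic version of how such a look - up table could be built and applied is given below . the cell geometry , the clipping threshold and the treatment of chip edges are simplified with respect to the procedure described above ( in particular , the displacement of grid points at the lithographic boundaries is not reproduced ) , so this should be read as an illustration of the idea rather than the calibration code itself .

```python
# sketch: build a residual look-up table on a rectangular grid of cells using
# sigma-clipped medians, then correct a position by bilinear interpolation.
# the grid (38 x 18) loosely matches the cell layout described in the text.
import numpy as np

def sigma_clipped_median(v, nsigma=3.0, iters=3):
    v = np.asarray(v, float)
    for _ in range(iters):
        med, sig = np.median(v), np.std(v)
        keep = np.abs(v - med) <= nsigma * sig
        if keep.all():
            break
        v = v[keep]
    return np.median(v)

def build_table(x, y, dx, dy, nx=38, ny=18, shape=(4096, 2051)):
    """Median residual (dx, dy) in each of nx * ny cells."""
    xe = np.linspace(0, shape[0], nx + 1)
    ye = np.linspace(0, shape[1], ny + 1)
    tab = np.zeros((ny, nx, 2))
    for j in range(ny):
        for i in range(nx):
            m = (x >= xe[i]) & (x < xe[i + 1]) & (y >= ye[j]) & (y < ye[j + 1])
            if m.sum() >= 30:          # require enough stars per cell
                tab[j, i, 0] = sigma_clipped_median(dx[m])
                tab[j, i, 1] = sigma_clipped_median(dy[m])
    return tab, xe, ye

def lookup(x, y, tab, xe, ye):
    """Bilinear interpolation of the tabulated correction at one position."""
    xc = 0.5 * (xe[:-1] + xe[1:]); yc = 0.5 * (ye[:-1] + ye[1:])
    i = int(np.clip(np.searchsorted(xc, x) - 1, 0, len(xc) - 2))
    j = int(np.clip(np.searchsorted(yc, y) - 1, 0, len(yc) - 2))
    tx = float(np.clip((x - xc[i]) / (xc[i + 1] - xc[i]), 0.0, 1.0))
    ty = float(np.clip((y - yc[j]) / (yc[j + 1] - yc[j]), 0.0, 1.0))
    c00, c10 = tab[j, i], tab[j, i + 1]
    c01, c11 = tab[j + 1, i], tab[j + 1, i + 1]
    return ((1 - tx) * (1 - ty) * c00 + tx * (1 - ty) * c10
            + (1 - tx) * ty * c01 + tx * ty * c11)
```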
for any given location on the chip , the look - up table correction is given by a bi - linear interpolation among the surrounding four grid points ( as illustrated by the arrows in fig .[ fig : scheme ] ) .in the two 23.5-pixel - wide strips the correction is given by a linear interpolation of the two closest grid points along the axis .to derive the look - up table we used a master frame which is itself affected by these lithographic features , since it was constructed from positions that had not been corrected for them .once we had a first estimate of the tabular corrections , we re - determined an improved master frame by correcting our raw catalogs with both the polynomial and the look - up table component .we repeated the whole process of building up master frames and improving the table values three times .a fourth iteration proved to offer negligible improvement .figure [ fig : res_after ] shows positional residuals in the raw reference system , after our final look - up table plus polynomial gd correction is applied .residual vectors are now magnified by a factor 100 .all the lithographic features and all other high - frequency patterns seen on fig .[ fig : res_before ] appear to be completely removed .one of the best estimates of the true errors of the gd solution is given by the magnitude of the dispersion ( computed as the 68.27 percentile ) of the positional residuals ( rms ) of each star ( ) observed in each image ( ) , which have been gd - corrected and transformed into the master frame .these are computed as : } = \sqrt { ( x_i^{t_{j}}-x_{{\rm master},i})^2 + ( y_i^{t_{j}}-y_{{\rm master},i})^2 } .\ ] ] only stars observed at least in 3 individual images are used to compute the rms .figure [ fig : res ] shows these dispersions as a function of the corresponding instrumental magnitude for different filters.$ ] ) . ]saturation typically sets in brightward of .5 ( marked by the left blue vertical line ) . for well - exposed stars ( typically between magnitude .5 and .5 , but we include fainter stars for f225w and f275w filters to improve the statistics ) , the 68.27 percentile levels of the positional rms are marked by red horizontal lines .the corresponding 1-d values are displayed at the top of each panel , in units of both pixels and mas .the best results are obtained for redder filters .here we used all the images listed in table [ tab1 ] ( two to four different epochs for each filter ) to compute positional rms .we will see in the next section that , by selecting only images within the same epoch , positional dispersions are even smaller .we noted that very blue and very red stars behave differently with respect to our gd correction when observed through the bluest filters ( kozhurina - platais et al .( 2011 ) see a similar effect ) .this is probably due to a chromatic effect induced by fused - silica ccd windows within the optical system , which refract differently blue and red photons and have a sharp increase of the refractive index below 4000 ( george hartig , personal communication ) . as a consequence ,the f225w , f275w and f336w filters are the most affected . to better understand how much astrometry will suffer from this phenomenon, we performed the following test .we chose the f225w and f275w data sets for which the effect is maximized and both extreme horizontal branch ( ehb , with a temperature above 40 ) and red giant branch ( rgb , with a temperature of ) stars have about the same luminosity ( i.e. 
, our positional dispersions have the same size ) .we also required images to be taken within the same epoch 2010.04 ( 9 exposures for each filter ) , to avoid cluster internal - motion effects . in the f225w vs. f225w cmd we selected relatively bright ( f225w.7 ) , unsaturated stars of intermediate color ( marked in green on the left panel of fig .[ fig : brtest ] ) and used them to compute a linear transformation from the f275w master frame into the f225w frame . on the same cmd we also selected other groups of stars with the same luminosity criteria : ( i ) a blue set made up of ehb and hot white - dwarf stars ; ( ii ) a red set containing intermediate rgb stars and ( iii ) a final purple set populated by rgb - tip stars .comparing star positional residuals we found that green stars are distributed around ( 0,0 ) , as we would have expected since their positions formed the basis for the transformations , while stars of the other groups were found to have residuals located in significantly different positions ( up to pixels away ) .on the right panel of fig . [fig : brtest ] we show the vector - point diagram for selected stars , which are color coded as on the left panel of the same figure .median position residuals for each group of stars are marked by full circles of the same colors .the size of the circle indicates the formal error in the median . a systematic trend of the displacements as a function of stellar color is clear .further investigations will be required to fully characterize this chromatic effect .the globular cluster cen has the largest internal - proper - motion dispersion among galactic globular clusters despite being more than two times farther than the closest ones ( 4.7 kpc , van der marel & anderson 2010 , versus 2.2 kpc for m 4 and 2.3 kpc for ngc 6397 , harris 1996 , dec .2010 revision ) . in this sectionwe show that , by applying our gd solution , we are able to measure this dispersion in just a few months . to minimize chromatic effects , we used only f814w images and created three distortion - free frames , one for each of the three available epochs ( =2010.04 , =2010.33 , =2010.51 , hereafter , , respectively ) .we selected well - exposed , unsaturated stars ( instrumental magnitude ) in common to all the three epochs , and compared the positional displacements seen between and . detection of the intrinsic motion of the stars would reveal itself as a correlation between the displacements and , proportional to the ratio of the respective time baselines . 
on the contrary , had these displacements been dominated by random errors , we would expect no correlation at all between the two displacements , or at least a correlation with a different slope .this is shown on the left panels of fig .[ fig : ipm ] , where we see a clear correlation between the two coupled epochs in both the - and -directions ( top and bottom panels , respectively ) .note that the slope of the straight line simply corresponds to the ratio between the two time base - lines and it is not a fit to the data points .if we assume a gaussian distribution for the observed displacements , the dispersions along the line , and those perpendicular to it , can give us an estimate of the intrinsic proper - motion and the errors dispersion , respectively .in the following we will derive a crude estimate for both .we used the symbols and [ in mas ] to indicate the standard deviation of the displacements and .we then considered that the _ observed _ proper - motion dispersion , , is related to the intrinsic proper - motion dispersion ( ) and the measurement errors ( ) by the following equation : .\ ] ] therefore can be obtained as : .\ ] ] the quantity can also be computed as the ratio between the displacements dispersions and their time baseline : and to be similar , a reasonable working hypothesis . ],\ ] ] from which follows the relation : .\ ] ] we defined the dispersion of data points along the line in fig.[fig : ipm ] as , which is an observable quantity and related to the others by the equation ,\ ] ] which implies .\ ] ] the latter , substituted in eq .[ eq2 ] , gives },\ ] ] which relates the observed proper - motion dispersion to the observed dispersion along the expected correlation .assuming any deviation from the line to represents the total error of the data point , an estimate of can be obtained by the dispersion of data points perpendicularly to the line : },\ ] ] top - right panel of fig .[ fig : ipm ] shows the histograms of the -displacement dispersions along ( filled circles ) and perpendicular ( open circles ) to the expected correlations .poisson error bars and gaussian best fits are also shown .standard deviations and are computed as the 68.27 percentile of the distribution residuals around their median value .the bottom - right panel shows the same for -axis displacement dispersions . taking the average of and from the displacements along and , we had mas and mas . by using the previous equations [ eq1 ] , [ eq3 ] and [ eq4 ], we obtained an intrinsic dispersion of mas yr , which is in remarkable agreement with the value mas yr measured by anderson & van der marel ( 2010 ) using a time baseline almost 9 times larger ( 4.07 years vs. 172 days ! ) , and more images .this result is even more astonishing if we consider that we did not use local transformations ( see , e.g. , bedin et al .2003 ; anderson et al .2006 , anderson & van der marel 2010 ) which would further reduce systematic residuals in the gd solution , as well as any correction for breathing ( which can introduce small low - order terms ) or charge - transfer inefficiency ( which is already plaguing this new camera ; see figure [ fig : cti ] ) . 
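the relations used above can be summarized in our own notation as follows . this is a reconstruction from the surrounding prose , not a verbatim copy of the original equations , and the exact normalization factors of the last step are left schematic rather than guessed .

```latex
% reconstruction (our notation): sigma_12 and sigma_13 are the dispersions of
% the displacements over the baselines Delta t_12 and Delta t_13; sigma_par and
% sigma_perp are the dispersions along and perpendicular to the expected line
% of slope Delta t_13 / Delta t_12.
\begin{aligned}
  \sigma_{\rm obs}^{2} &= \sigma_{\rm int}^{2} + \sigma_{\rm err}^{2}
  \quad\Longrightarrow\quad
  \sigma_{\rm int} = \sqrt{\sigma_{\rm obs}^{2}-\sigma_{\rm err}^{2}},\\[2pt]
  \sigma_{\rm obs} &\simeq \frac{\sigma_{12}}{\Delta t_{12}}
                    \simeq \frac{\sigma_{13}}{\Delta t_{13}},
\end{aligned}
% sigma_obs is then estimated from sigma_par and sigma_err from sigma_perp,
% with normalization factors as given in the original displayed equations.
```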
as a final external check on the achieved accuracy, we can assume the uncertainties on to cancel out and and to equally contribute to ( ) .having exposures per epoch , we can infer a 1-d positional accuracy of 0.008 pixels , .3 mas ( ) , which is consistent with the value reported in fig .[ fig : res ] .it should also be noted that , at this level of accuracy , there is a considerable interplay between the derived psf models and the gd solution , which might play some role on the achievable astrometric precision .on account of breathing and other phenomena , the distortion solution is not perfectly stable over time .the impact of breathing is not well enough understood to predict its impact on the distortion , so the best we can hope to achieve is an average distortion solution and a sense of how stable the solution is about this average .it is typically the low - order terms that are the most time - variable , so it made sense above to construct the best single - chip - based solution first and only later to consider the less accurate larger - scale terms that relate the two chips to a common frame .we do that here with the understanding , that for the highest - precision differential astrometry , it is always best to perform the measurements as locally as possible , provided that there are an adequate number of reference stars .some projects , however have few reference objects and require knowledge of the distortion over the full extent of the field of view ( fov ) .we solved for these global terms in several stages .first , we put the two chips into a meta coordinate frame .next , we solved for the linear skew terms present in the combined system .then we solved for the overall scaling , so that all filters would have the same scaling , in terms of pixels per arcsecond .finally , we solved for the positional offset between the filters .we based our meta - coordinate frame on the distortion - corrected frame of the bottom chip ( chip[1 ] ) .the bottom chip is already in this system , so we simply needed to find the transformation that put the top chip ( chip[2 ] ) into this same system . since there is a gap of about 35 pixels between the frames , the two chips of a single exposure naturally have no sources in commonthis means that we will have to use an intermediate set of observations to accomplish the mapping .we took all the f606w observations and identified all pairs of observations that had either an offset of more than 500 pixels or an orientation difference of more than 10 degrees .( if the pointings are too similar then the overlap will not be sufficient . )since we wanted as many different pairs of exposures as possible , we also incorporated images from pid-12094 ( pi - kozhurina - platais ) , which provides additional roll angles .there were a total of 32 variously overlapping f606w exposures , so in general we could work with 992 overlapping pairs . for each qualifying pair , we found which chip(s ) in the first exposure had significant overlap with both chips in the second exposure .we first corrected all positions for distortion using the solution found above .we next used the positions of common stars to solve for the 4-parameter conformal linear transformation from the top chip of the second exposure into the overlapping chip of the first exposure .we then solved for the analogous transformation from the overlapping chip to the bottom chip of the second exposure . 
by combining these two linear transformations, we were able to bootstrap positions in the top chip to positions in the bottom chip . using all pairs of exposures with good intermediate - chip overlap, we found a transformation of the form : where and to relate the distortion - corrected coordinates in the top chip into the system of the bottom frame .this was found for each exposure where both of its chips overlapped significantly with a chip from a different exposure .we then found average values for the six parameters .this gave us a rough meta - frame system .since the single - chip overlaps can be limited , we then iterated this solution , this time using both chips in the comparison exposure ( by means of the meta solution ) to examine any residuals in the positioning of the top chip relative to the bottom .we converged upon : for the f606w data set .this means that the central pixel of the top chip ( 2048,1024 ) , where and are zero , is located at coordinate = in the distortion - corrected frame of the bottom chip .the near - conformal nature of this transformation ( and ) demonstrates that the skew was accurately measured in the original solution .now that we had a master frame for the f606w exposures , we used the f606w data set as comparison images to solve for the same parameters for the other filters .we would expect them to be similar , but since the solution for each filter was found independent of the other filters , there could be small scale or orientation changes that would impact the chip[1 ] frame , and hence the location of chip[2 ] in chip[1 ] coordinates .table [ tab3 ] provides these inter - chip transformation parameters for all filters .[ cols="^ , > , > , > , > , > , > , > , > , > " , ] the solution for each chip / filter combination described above was performed without reference to other chips and filters .the goal of this section is to tie everything together into a common system . in the coordinate transformations above, we always solved for a scale before we examined residuals .the reason for this is that the scale of the telescope is always changing , partly due to breathing and partly due to velocity aberration . now that the offset and rotation for each chip / filter combination had been determined to place each of the chips into a common master frame , the overall scale was the last of the linear parameters to be solved for . 
to do this, we constructed a master frame based on only the f606w exposures using only transformations allowed for offset and rotation , but no scale changes .this way , the frame would represent the average scale of the f606w exposures .we then transformed the positions of the stars in each of the exposures for each filter ( 203 in total , including the pid-12094 data ) into this reference frame and took note of the scale factor of the linear transformation .we plot this scale factor for each exposure on the top panels of figure [ fig : scale_by_filt ] .the images are ordered by filter and the filters are separated by a vertical dotted line .it is clear that there is some trend with filter , but there is considerable intra - filter scatter as well .this scatter could be due to velocity aberration or breathing .we divided each of the above scale measurements by the ` vafactor ` keyword , taken from the image header .it reports the expected special - relativistic variation in the plate scale , which is related to the dot - product between the average telescope velocity vector during the exposure and the direction to the target ( see cox & gilliland 2002 ) . after making this deterministic correction, we found that the global exposures for each filter were now consistent to within 0.01 pixel ( as shown on the bottom panels of fig .[ fig : scale_by_filt ] ) .we solved for the average scaling for each filter and included this correction in the meta distortion solution .the average scaling we found for each filter is given in the eighth column of table [ tab3 ] .they are in good qualitative agreement with the findings in figure 2 of kozhurina - platais et al .we note that the linear terms of uvis appear to be considerably more stable than those of acs / wfc .figure 6 of anderson ( 2006 ) shows that the acs fov changes in radius by .03 pixel due to breathing , while the residuals here are about 0.01 pixel .we want emphasize that breathing can introduce errors larger than 0.008 pixel into the distortion solution .our single - chip - based gd correction was generated comparing largely half - chip to half - chip positions ( due to the dithering scheme of the exposures ) .therefore , the global effect due to breathing is reduced to about a quarter with respect to what it would be by comparing whole chips to whole chips ( since errors tend to be quadratic ) .wedge effects can cause individual filters to induce small shifts in the positions of stars on the chip . 
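a compact sketch of the two global operations just described is given below : placing distortion - corrected chip[2 ] positions into the chip[1 ] - based meta frame with a six - parameter linear transformation , and normalizing each exposure's fitted scale by the ` vafactor ` header keyword . the parameter values , the chip - centre convention and the header handling are illustrative assumptions , not the calibrated numbers of table [ tab3 ] .

```python
# sketch of the global (meta-frame) steps: a six-parameter linear transform
# that places distortion-corrected chip[2] positions into the chip[1]-based
# meta frame, and a per-exposure scale normalized by the velocity-aberration
# factor (VAFACTOR).  all numerical values are placeholders.

def chip2_to_meta(x2, y2, p):
    """p = (x0, y0, a, b, c, d): position of the chip[2] centre in the meta
    frame plus the 2x2 linear (rotation/skew/scale) part, applied about the
    chip centre (2048, 1024)."""
    x0, y0, a, b, c, d = p
    u, v = x2 - 2048.0, y2 - 1024.0
    return x0 + a * u + b * v, y0 + c * u + d * v

def normalized_scale(measured_scale, header):
    """Divide the scale of the linear fit to the master frame by VAFACTOR,
    removing the deterministic velocity-aberration part of the variation.
    `header` is any dict-like header (e.g. read with astropy.io.fits); the
    extension holding VAFACTOR depends on the instrument and pipeline."""
    return measured_scale / header.get("VAFACTOR", 1.0)

# per-filter scale corrections would then be the (clipped) mean of the
# normalized scales of all exposures taken through that filter.
```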
since the zeropoint of our distortion solution for each filter is entirely arbitrary, we have the flexibility of choosing it in a convenient way .we would like the distortion solution to have the property that , if a star is observed through two different filters with no dither , the two filters will report the same distortion - corrected position for the star , even if the wedge effect may cause the apparent positions on the chip to differ by more than a pixel .we identified two sets of consecutive exposures of the cen central field that had no commanded ` pos - targs ` between them and found the offset for each filter that best registered its distortion - corrected positions with those of the f606w images .these offsets are given in the last two columns of table [ tab3 ] .based on the agreement between the two comparisons we have for each filter , these offsets should be accurate to about 0.05 pixel .note that the offset for f775w is quite large ( 1.47 pixel ) ; the others are typically 0.5 pixel .the distortion solution presented here was constructed to match the pixel scale and orientation of the center of chip[1 ] . to determine the absolute scale of this frame , we compared the commanded offsets in arcseconds ( from the commanded pos_targs ) with the achieved offsets ( in pixels ) .pid-11911 contained visits in which 9 exposures in f606w were taken in a 3 grid with offsets of 40 between the exposures .we measured the achieved offsets from the central exposure in distortion - corrected pixels and found that the grid spacing in pixels corresponded to 1005.67 pixels in one visit and 1005.79 in the second visit .this implies a plate scale for our frame of pixels per arcsecond , which corresponds to mas pixel .this value is consistent with the independent estimate of the scale given in paper i ( mas , internal errors only ) , which is based on the best knowledge of the acs / wfc absolute scale .after establishing all of these global parameters as described , we noticed that the frame we had adopted , which was centered on the central pixel of chip[1 ] ( the bottom chip ) , extended into negative coordinates in the lower left of the frame .this is due to our choice of center and the nature of the intrinsic skew present in the detector .since it is often convenient to work with positive coordinates , we decided to add 200 to the meta coordinate system in , so that positions within the detector would always have positive coordinate values .by using a large number of well dithered data - sets taken with different roll angles , we have modeled the gd of wfc3/uvis by means of a self - calibration .the solution is an improvement with respect to paper i and consists of a set of third - order - correction coefficients plus a finely - spaced look - up table of residuals for each chip in 10 filters , namely : f225w , f275w , f336w , f390w , f438w , f555w , f606w , f775w , f814w and f850lp .the hybrid solution has been shown to correct the manufacturing defects and the high - spatial frequency residuals seen in paper i. the use of these corrections removes the distortion over the entire area of each chip to an accuracy significantly better than .01 pixel ( i.e. better than .4 mas ) . 
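as a quick worked check of the absolute - scale determination described above ( taking the commanded grid step to be 40 arcsec and the two measured grid spacings quoted in the text ) :

```python
# worked check of the absolute plate scale: a commanded 40 arcsec grid step
# measured as ~1005.7 distortion-corrected pixels.
step_arcsec = 40.0
step_pix = 0.5 * (1005.67 + 1005.79)        # average of the two visits
pix_per_arcsec = step_pix / step_arcsec     # ~25.14 pixel per arcsec
mas_per_pix = 1000.0 / pix_per_arcsec       # ~39.8 mas per pixel
print(round(pix_per_arcsec, 3), round(mas_per_pix, 2))
```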
as a demonstrative test we applied our solution to f814w exposures collected at different epochs and were able to measure the internal motion of cen in just few months , finding values consistent with the most recent determinations .the initial solution was constructed for each chip for each filter independently .this is because we did not want any possible motion between the chips or large - scale breathing non - linearities to impact our solution .once we had the solution for each filter and chip , we unified them into a single common coordinate system .we found the linear transformation that took the top chip into the frame of the bottom chip for each filter , we found the relative scalings between the different filters and finally we found the offsets caused by the wedge - effect of each filter .we make our ` fortran ` routine ` wfc3_uvis_gc.f ` publicly available ( at the url ` www.stsci.edu/\simjayander/wfc3uv_gc/ ` ) to the astronomical community .it requires 4 quantities in input : the raw positions and , the chip number and the filter . in outputit produces corrected positions and in the meta - coordinate frame .in addition to the ` fortran ` code , we will also provide the solution in the form of simple fits images .the `` image_format '' directory of the website above contains forty images , one for each coordinate / chip / filter combination . as an example , the image gc_wfc3uvis_f606w_chip1x.fits is a 4096 real image with each pixel in the image giving the coordinate of the distortion - corrected position of that pixel . to distortion - correct decimal pixel locations , the image can simply be interpolated bi - linearly .these images should make our solution accessible for those who use languages other than ` fortran ` .finally , we want to state clearly to the reader that this is not the official gd calibration .the idctab file , which contains independent polynomial calibrations for the 10 uvis filters , is installed in stsci s calibration database system and is used for the opus pipeline processing of rectified ` drz ` images ( kozhurina - platais et al .2009 , 2010 ) .the two solutions were designed for different goals .ours was constructed with high - precision differential astrometry in mind , while the official solution was more focused on the absolute transformation onto the focal plane .we thank alceste z. bonanos for polishing the manuscript .a.b . acknowledges the support by the stsci under the 2008 graduate research assistantship program and by miur under program prin2007 ( prot .20075tp5k9 ) .kozhurina - platais , v. , cox , c , petro , l. , dulude , m. , and mack , j. _ `` multi - wavelength geometric distortion solution for wfc3/uvis and ir '' _ in proceedings of the 2010 hst calibration workshop , eds .s. deustua and c. oliveira ( baltimore : stsci ) , in press ( 2010 ) .
we present an improved geometric - distortion solution for the _ hubble space telescope _ uvis channel of wide field camera 3 for ten broad - band filters . the solution is made up of three parts : ( 1 ) a third - order polynomial to deal with the general optical distortion , ( 2 ) a table of residuals that accounts for both chip - related anomalies and fine structure introduced by the filter , and ( 3 ) a linear transformation to put the two chips into a convenient master frame . the final correction is accurate to better than 0.008 pixel ( about 0.3 mas ) in each coordinate . we provide the solution in two different forms : a ` fortran ` subroutine and a set of ` fits ` files , one for each filter / chip / coordinate .
reaction paths in large molecular systems , such as biomolecules , provide critical information regarding structural intermediates ( transitions states ) and barrier heights .the search for these paths has a long history in the applied math research commnity ( e.g. , ) , as well as in the field of biomolecular computation .many approaches must start from an initial guess for the reaction path ( such as a straight line between two states ) , effectively limiting the search to a single pathway . on the other hand ,`` targeted '' and `` steered '' md approaches are capable of finding multiple pathways by repeated simulation ( from differing initial conditions ) forced to reach the desired end state .the recently - introduced soft - ratcheting approach is also capable of `` blindly '' determining multiple reaction pathways .it differs from the targeted and steered approaches in the following ways : ( i ) monotonic progress toward the target state is not enforced , permitting a wider range of reaction pathways ; ( ii ) soft - ratcheting is applied in the context of stochastic dynamics , although this does not prevent the inclusion of explicit solvent molecules ; and ( iii ) a probability weight ( `` score '' ) is associated with each trajectory generated , which in principle also permits estmates of the reaction rates within the dynamic importance sampling ( dims ) formulation discussed by woolf and by zuckerman and woolf .we note that reaction - rate estimates have not yet been produced by the soft - ratcheting algorithm , because such estimates require trajectories sampled with a near - optimal distribution ( i.e. , as would occur in unbiased dynamics ; see below ) .the soft - ratcheting procedure is simple and is motivated by the metropolis algorithm and the `` exponential transformation '' used in nuclear importance sampling ( e.g. , ) .related methods include the weighted - ensemble brownian dynamics " approach of huber and kim and the `` contra md '' scheme of harvey and gabb .the process is : ( a ) generate an unbiased step ; ( b ) if the step moves toward the target state , accept it ; ( c ) if it moves away , accept it with a probability ( i.e. , `` softly '' ) that increases in the forward direction ; ( d ) repeat , while estimating the probability score for all accepted steps .we emphasize that the non - monotonicity embodied in ( b ) and ( c ) , and the existence of the score in ( d ) distinguish this method from previous multiple - pathway methods .the guiding concept behind soft - ratcheting is _ not _ to force the system ( which necessarily perturbs the dynamics ) but to try to allow the system to proceed in a possible , if unlikely , way .of course , rare stochastic events are just what we seek ! note too that , unlike the trajectory sampling methods introduced by pratt and pursued by chandler and coworkers as well as those of eastman , grnbech - jensen and doniach , the _ overall _ effect of the soft - ratcheting algorithm is non - metropolis in nature ( despite the motivation ) : _ trajectories do not evolve from one another and are statistically independent_. the metropolis idea is only used to ensure that a given trajectory successfully reaches the target state . 
in this important sense , the soft - ratcheting algorithm comes under the independent - trajectory rubric of the dims method .in essence there is no more theoretical underpinning to producing soft - ratcheted trajectories than that already sketched in the introduction : using a physically - but - arbitrarily chosen _ acceptance probability function _ for step increments , one accepts all forward steps , and backward steps are accepted with a probability which decreases the more `` negative '' the increment .see fig .[ fig : p - accept ] . here, the forward direction is simply some vector in configuration space that points from the initial to the target state perhaps a dihedral angle in a dihedral transition .the algorithm is sufficiently robust ( see results section ) that advance knowledge of the reaction path and the true reaction coordinate is not necessary . when generating a series of soft - ratcheted crossing events in a single long trajectory , it is convenient to use a simple _ threshold _ device .this means only that trajectories are permitted to perform unbiased dynamics in small regions near the `` centers '' of the beginning and end states , and biased ( i.e. , soft - ratcheted ) dynamics begin only when the threshold is reached .the idea is to allow the trajectory to explore different parts of the stable states , with an eye toward finding exit points to different pathways .such exploration , of course , must take place within the limits of available computer time ! as noted below, our use of the threshold requires further investigation and optimization , though it appears to perform the task of permitting exploration of alternative exit points from a stable state .it is only when one wishes to associate a score with a trajectory that some analysis must be undertaken .the dynamic importance sampling ( dims ) approach requires the probability score for use in rate estimates , moreover .specifically , the probability score used in dims calculations is the ratio of two quantities : ( i ) the probability that the given trajectory would occur in an unbiased simulation a known quantity ; and ( ii ) the probabilty that the given trajectory was produced in the biased ( e.g. , soft - ratcheting ) simulation .further details of the full dims formulation may be found in refs . and are beyond the scope of the present report . herewe focus solely on computing the probability that the soft - ratcheting algorithm produced a given trajectory ( ii ) , which unfortunately does not follow directly from the simple acceptance probability used to generate the trajectory .this section gives full details of generating the probability score ( i.e. , ratio ) required by dims .briefly , however , assume progress towards the target is measured in terms of a scalar `` distance , '' , which is larger at the target state than the initial : each step corresponds to an increment , with positive increments moving toward the target . from any starting configuration , one can define the _ unbiased _ distribution of increments , which is simply the projection of the more fundamental distribution of configurations , , onto the coordinate .the distribution of increments typically is nearly gaussian with a mean which may be either positive or negative .however , once certain backward steps are rejected due to the acceptance function in the soft - ratcheting procedure ( specified below ) , the distribution is shifted forward in a non - trivial way to become the biased distribution , . 
estimating the ratio of values of these two distributions for every accepted step ( though not the entire distributions ) is the task at hand .the multi - dimensional case reduces to a simple scalar description in terms of increments , but we include it for completeness .we assume ( although it is not necessary for the formalism ) that the initial and final states of interest in our molecule do not require the full all - atom configuration , but rather a subset of coordinates , say , . if the target point is the `` center '' of state b , say , , then one can always measure the distance to that point , ^{1/2 } \;,\ ] ] where it may be necessary to consider the closest distance if the coordinates represent true angles . for a step from to , one can then define a one - dimensional change in distance by in essence , since distance from the target is always a scalar quantity , one need only consider a one - dimensional description to estimate probability scores .the acceptance function for increments is very simple and is specified by the simulator .the function used in the present work is illustrated in fig .[ fig : p - accept ] and is written } & \mbox{if } { \delta \varphi } < 0 \ ; , \end{array } \right.\ ] ] where is a parameter which controls the width of the ( backwards ) decay depicted in fig .[ fig : p - accept ] .the gradual decay to zero is the `` softness '' of soft - ratcheting : many backwards steps will be accepted . with specified, the final task toward generating the required probability score is to consider the relation between the unbiased and soft - ratcheted ( biased ) distribution .the probability ( density ) that the soft - ratcheting algorithm will generate a given increment , , is proportional to the product of the unbiased probability of generating the increment , , and the acceptance probability , : where is the required normalization factor , given by the fraction of steps initiated at which would be accepted by the soft - ratcheting procedure .as noted , the biased distribution , , has been shifted forward in the direction because the acceptance function partially suppresses backward steps .the desired probability score for a single step is then the ratio deriving from ( [ single - step - prob ] ) , namely , to truly calculate the normalization factor , one would have to initiate a large number of steps from the point and compute the fraction accepted by the soft - ratcheting acceptance function .as that procedure would be very computationally expensive , we instead use the sequence of nearby _ attempted _ steps , both accepted and rejected , to estimate the probability that soft - ratcheted steps were accepted in a given local region of configuration space .the final score is simply the product of the single - step scores ( [ single - step - ratio ] ) .the results of this preliminary report may be summarized in three points : ( i ) the soft - ratcheting algorithm is capable of generating reaction pathways _ rapidly _ in a fraction of the time which would be required by unbiased simulation : see fig . [fig : short - time ] ; ( ii ) the scores associated with each crossing trajectory permit the generation of a most - important _ ensemble _ of events as in fig .[ fig : long - time ] , which can give more detailed information about the full `` valley '' of the pathway ; and ( iii ) the associated scores , in principle , permit _ rate estimates _ within the dynamic importance sampling formulation . 
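a minimal sketch of the per - step logic described in this section is given below . the progress coordinate , the unbiased step generator and the width parameter of the acceptance function are placeholders , and the normalization is estimated from the locally observed acceptance probabilities , following the fraction - of - attempted - steps idea above ; none of this is the production dims code .

```python
# minimal sketch of soft-ratcheting: forward steps are always accepted,
# backward steps with probability exp(delta_phi / lam); the per-step weight
# (unbiased / biased probability) uses a local estimate of the normalization.
import math
import random

def accept_prob(delta_phi, lam):
    """Acceptance probability for a change delta_phi of the progress
    coordinate phi (phi increases toward the target state)."""
    return 1.0 if delta_phi >= 0.0 else math.exp(delta_phi / lam)

def soft_ratchet(x0, phi, unbiased_step, n_steps, lam, window=100):
    """Generate one biased trajectory and its log probability score,
    log[P_unbiased(traj) / P_biased(traj)]."""
    x, traj, log_score = x0, [x0], 0.0
    recent = []                                # acceptance probs of recent attempts
    for _ in range(n_steps):
        while True:
            x_try = unbiased_step(x)           # proposal from unbiased dynamics
            p = accept_prob(phi(x_try) - phi(x), lam)
            recent.append(p)
            if len(recent) > window:
                recent.pop(0)
            if random.random() < p:
                break                          # step accepted
        norm = sum(recent) / len(recent)       # local estimate of the normalization
        # single-step ratio p_unbiased / p_biased = norm / p for the accepted step
        log_score += math.log(norm) - math.log(p)
        x = x_try
        traj.append(x)
    return traj, log_score
```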
in figure[ fig : short - time ] , one sees the rapidity with which the soft - ratcheting algorithm generates crossing trajectories .the same three pathways are found in 1/70th of the simulation time . in absolute terms , the 10 nsec . of simulation time used in generating the soft - ratcheting trajectories appears quite long ; however , this time may be significantly reduced by adjusting the threshold level ( see sec .[ sec : theory ] ) from the preliminary value used to generate the depicted results .figure [ fig : long - time ] illustrates the capacity of the soft - ratcheting algorithm to generate an `` important '' ( highly weighted ) ensemble of crossing events .the large set of trajectories shown in the figure clearly gives a better description of the pathway valleys than the sparse events generated by unbiased simulation .figure [ fig : long - time ] also demonstrates the agreement between the weight estimate discussed previously ( used to select the depicted trajectories ) and the unbiased results .the higher - weighted trajectories coincide strongly with the unbiased events . the large cluster of soft - ratcheted trajectories in the region deserves comment . because there is only a single unbiased event in that region , it is not obvious whether the relatively widely dispersed soft - ratcheted trajectories are `` correct '' i.e. , whether such an ensemble of trajectories would be found in a long unbiased simulation , with many events in the region .examination of the adiabatic energy surface ( not shown ) does indicate that the channel in question is indeed significantly wider than the two pathways crossing , though perhaps not quite to the extent suggested by the soft - ratcheting trajectories of fig .[ fig : long - time ] .several means of improving the soft - ratcheting procedure are possible , of which we mention two .first , to increase the speed with which transition trajectories are generated really , to decrease the waiting interval between crossing events one can reduce the size of the threshold region ( sec . [sec : theory ] ) in which purely _ unbiased _ dynamics are performed .the threshold region was intended to permit trajectories to explore a multiplicity of potential `` exit points '' from the stable state .however , the `` softness '' of the soft - ratcheting algorithm should , by itself , permit a substantial degree of this kind of exploration , and it may be possible to use a very small threshold region .second , a more optimal ( i.e. , higher - scoring ) ensemble of trajectories presumably can be obtained by systematic estimation of parameter .in fact , the promising preliminary results presented in sec . 
[ sec : results ] were based on an _ ad - hoc _ choice .it is a simple matter to study in more detail an unbiased distribution of increments , and then use this data to systematically inform the choice of .moreover , one can imagine attempting to bias trajectories forward in a focussed conical region of dihedral angles , rather than simply according to ( hyper)planes of constant .ultimately , it will also be important to compare the soft - ratcheted paths ( which presumably represent the stochastic dynamics in a faithul way ) with those generated by explicitly - solvated molecular dyanmics simulation .that is , how does the addition of explicit solvent alter the paths ?of course , this comparison will only be possible in small molecules like the alanine dipeptide and other small peptides , but it will provide a crucial validation of the technique .we have given motivation and details for the `` soft - ratcheting '' algorithm for determining reaction pathways in molecular systems governed by stochastic dynamics .the method generates independent transition trajectories which will not be trapped in a single channel ( pathway ) , and hence is capable of finding multiple channels .although a final state is always targeted on average , the algorithm permits `` backward '' steps with a suppressed probability .the trajectories are thus ratcheted forward , but only softly : see fig .[ fig : p - accept ] .the capacities of the approach were demonstrated in figs .[ fig : short - time ] and [ fig : long - time ] for an all - atom model of the alanine dipeptide molecule evolving according to overdamped langevin dynamics with the amber potential . beyond rapidly generating multiple pathways , as other existing approaches are presently able to do, the soft - ratcheting algorithm has the potential also to estimate reaction rates and free energy differences via the dynamic importance sampling ( dims ) framework . the soft - ratcheting algorithm associates a score ( see sec . [sec : theory ] ) with each transition trajectory it generates .the scores , in turn , may be used in principle to estimate kinetics and free energy differences . at present, however , we note that initial results showed that further parameterization and/or refinement of the algorithm are necessary before efficiency can be obtained in rate and free energy calculations .we gratefully acknowledge funding provided by the nih ( under grant gm54782 ) , the aha ( grant - in - aid ) , the bard foundation , and the department of physiology .jonathan sachs offered many helpful comments on the manuscript .w. d. cornell , p. cieplak , c. i. bayly , i. r. gould , k. m. merz , d. m. ferguson , d. c. spellmeyer , t. fox , j. w. caldwell , and p. a. kollman . a 2nd generation force - field for the simulation of proteins , nucleic - acids , and organic - molecules ., 117:51795197 , 1995 .
we discuss the `` soft - ratcheting '' algorithm which generates targeted stochastic trajectories in molecular systems _ with scores corresponding to their probabilities_. the procedure , which requires no initial pathway guess , is capable of rapidly determining multiple pathways between known states . monotonic progress toward the target state is _ not _ required . the soft - ratcheting algorithm is applied to an all - atom model of alanine dipeptide , whose unbiased trajectories are assumed to follow overdamped langevin dynamics . all possible pathways on the two - dimensional dihedral surface are determined . the associated probability scores , though not optimally distributed at present , may provide a mechanism for estimating reaction rates .
suppose that we observe where is a -dimensional predictor and is the response variable .we consider a standard linear model for each of observations with e and var .we also assume the predictors are standardized and the response variable is centered , with the dramatic increase in the amount of data collected in many fields comes a corresponding increase in the number of predictors available in data analyses . for simpler interpretation of the underlying processes generating the data ,it is often desired to have a relatively parsimonious model .it is often a challenge to identify important predictors out of the many that are available .this becomes more so when the predictors are correlated .as a motivating example , consider a study involving near infrared ( nir ) spectroscopy data measurements of cookie dough .near infrared reflectance spectral measurements were made at 700 wavelengths from 1100 to 2498 nanometers ( nm ) in steps of 2 nm for each of 72 cookie doughs made with a standard recipe .the study aims to predict dough chemical composition using the spectral characteristics of nir reflectance wavelength measurements . here , the number of wavelengths is much bigger than the sample size .many methods have been developed to address this issue of high dimensionality .section [ sect : review ] contains a brief review .most of these methods involve minimizing an objective function , like the negative log - likelihood , subject to certain constraints , and the methods in section [ sect : review ] mainly differ in the constraints used . in this paper , we propose a variable selection procedure that can cluster predictors using the positive correlation structure and is also applicable to data with .the constraints we use balance between an norm of the coefficients and an norm for pairwise differences of the coefficients .we call this procedure a _ hexagonal operator for regression with shrinkage and equality selection _ , horses for short , because the constraint region can be represented by a hexagon. the hexagonal shape of the constraint region focuses selection of groups of predictors that are positively correlated .the goal is to obtain a homogeneous subgroup structure within the high dimensional predictor space .this grouping is done by focusing on spatial and/or positive correlation in the predictors , similar to supervised clustering .the benefits of our procedure are a combination of variance reduction and higher predictive power .the remainder of the paper is organized as follows .we introduce the horses procedure and its geometric interpretation in section [ sect : model ] .we provide an overview of some other methods in section [ sect : review ] , relating our procedure with some of these methods . 
in section [sect : compute ] we describe the computational algorithm that we constructed to apply horses to data and address the issue of selection of the tuning parameters .a simulation study is presented in section [ sect : simstudy ] .two data analyses using horses are presented in section [ sect : analysis ] .we conclude the paper with discussion in section [ sect : conclusion ] .in this section we describe our method for variable selection for regression with _ positively _ correlated predictors .our penalty terms involve a linear combination of an penalty for the coefficients and another penalty for pairwise differences of coefficients .computation is done by solving a constrained least - squares problem .specifically , estimates for the horses procedure are obtained by solving with and is a thresholding parameter . .[ fig : region1 ] as we describe in section [ sect : review ] , some methods like elastic net and oscar can group correlated predictors , but they can also put negatively correlated predictors into the same group .our method s novelty is its grouping of _ positively _ correlated predictors in addition to achieving a sparsity solution .figure [ fig : region1](c ) shows the hexagonal shape of the constraint region induced by ( [ eqn : horses ] ) , showing schematically the tendency of the procedure to equalize coefficients only in the direction of .the lower bound of prevents the estimates from being a solution only via the second penalty function , so the horses method always achieves sparsity .we recommend , where is the number of predictors .this ensures that the constraint parameter region lies between that of the norm and of the elastic net method , i.e. the set of possible estimates for the horses procedure is a subset of that of elastic net . in other words ,horses accounts for positive correlations up to the level of elastic net . with ,the horses parameter region lies within that of the oscar method .[ fig : likelihood ] in a graphical representation in the plane , the solution is the first time the contours of the sum of squares loss function hit the constraint regions .figure 2 gives a schematic view .figure [ fig : horses2 ] shows the solution for horses when there is negative correlation between predictors .horses treats them separately by making . on the other hand ,horses yields when predictors are positively correlated , as in figure [ fig : horses3 ] .the following theorem shows that horses has the exact grouping property . as the correlation between two predictors increases , the predictors are more likely to be grouped together .our proof follows closely the proof of theorem 1 in and is hence relegated to an appendix .let and be the two tuning parameters in the horses criterion . 
given data with centered response and standardized predictors , let be the horses estimate using the tuning parameters . let be the sample correlation between covariates and . for a given pair of predictors and , suppose that both and are distinct from the other . then there exists such that if then . furthermore , it must be that . the strength with which the predictors are grouped is controlled by . if is increased , any two coefficients are more likely to be equal . when and are positively correlated , theorem 1 implies that predictors and will be grouped and their coefficient estimates will be almost identical . this brief review cannot do justice to the many variable selection methods that have been developed . we highlight several of them , especially those that have links to our horses procedure . while variable selection in regression is an increasingly important problem , it is also very challenging , particularly when there is a large number of highly correlated predictors . since the important contribution of the least absolute shrinkage and selection operator ( lasso ) method by , many other methods based on regularized or penalized regression have been proposed for parsimonious model selection , particularly in high dimensions , e.g. elastic net , fused lasso , oscar and group pursuit methods . briefly , these methods involve penalization to fit a model to data , resulting in shrinkage of the estimators . many methods have focused on addressing various possible shortcomings of the lasso method , for instance when there is dependence or collinearity between predictors . in the lasso , a bound is imposed on the sum of the absolute values of the coefficients : where and . the lasso method is a shrinkage method like ridge regression , with automatic variable selection . due to the nature of the penalty term , lasso shrinks each coefficient and selects variables simultaneously . however , a major drawback of lasso is that if there exists collinearity among a subset of the predictors , it usually only selects one to represent the entire collinear group . furthermore , lasso cannot select more than variables when . one possible approach is to cluster predictors based on the correlation structure and to use averages of the predictors in each cluster as new predictors . used this approach for gene expression data analysis and introduced the concept of a _ super gene _ . however , it is sometimes desirable to keep all relevant predictors separate while achieving better predictive performance , rather than to use an average of the predictors . the hierarchical clustering used in for grouping does not account for the correlation structure of the predictors . other penalized regression methods have also been proposed for grouped predictors . all these methods except group pursuit work by introducing a new penalty term in addition to the penalty term of lasso to account for correlation structure . for example , based on the fact that ridge regression tends to shrink the correlated predictors toward each other , elastic net uses a linear combination of ridge and lasso penalties for group predictor selection and can be computed by solving the following constrained least squares optimization problem . the second term forces highly correlated predictors to be averaged while the first term leads to a sparse solution of these averaged predictors .
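before turning to oscar , a small numerical illustration may help : it evaluates the lasso , elastic net and horses - style penalties on two toy coefficient vectors . this is only a sketch of the penalty functions themselves ; the mixing weights ` alpha ` are arbitrary illustrative values , not recommended settings .

```python
import numpy as np
from itertools import combinations

def lasso_penalty(beta):
    return np.sum(np.abs(beta))

def elastic_net_penalty(beta, alpha=0.5):
    # convex combination of the l1 norm and the squared l2 (ridge) norm
    return alpha * np.sum(np.abs(beta)) + (1 - alpha) * np.sum(beta**2)

def horses_penalty(beta, alpha=0.7):
    # convex combination of the l1 norm and the l1 norm of all pairwise differences
    pairwise = sum(abs(bj - bk) for bj, bk in combinations(beta, 2))
    return alpha * np.sum(np.abs(beta)) + (1 - alpha) * pairwise

equal_pair = np.array([1.0, 1.0, 0.0])      # same-sign "grouped" coefficients
flipped_pair = np.array([1.0, -1.0, 0.0])   # sign-flipped coefficients

for name, beta in [("equal pair", equal_pair), ("flipped pair", flipped_pair)]:
    print(name,
          "lasso:", lasso_penalty(beta),
          "enet:", round(elastic_net_penalty(beta), 3),
          "horses:", round(horses_penalty(beta), 3))

# the horses penalty is smaller for the equal pair than for the sign-flipped pair,
# while the lasso and elastic net penalties do not distinguish the two cases --
# this is the sense in which the pairwise-difference term favours grouping of
# positively correlated (same-sign) coefficients.
```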
proposed oscar ( octagonal shrinkage and clustering algorithm for regression ) , which is defined by . by using a pairwise norm as the second penalty term , oscar encourages equality of coefficients . the constraint region for the oscar procedure is represented by an octagon ( see figure [ fig : region1 ] ) . unlike the hexagonal shape of the horses procedure , the octagonal shape of the constraint region allows for grouping of negatively as well as positively correlated predictors . while this is not necessarily an undesirable property , there may be instances when a separation of positively and negatively correlated predictors is preferred . unlike elastic net and oscar , fused lasso was introduced to account for _ spatial _ correlation of predictors . a key assumption in fused lasso is that the predictors have a certain type of ordering . fused lasso solves . the second constraint , called a _ fusion penalty _ , encourages sparsity in the differences of coefficients . the method can theoretically be extended to multivariate data , although with a corresponding increase in computational requirements . note that the fused lasso signal approximator ( flsa ) in can be considered as a special case of horses with design matrix . we also want to point out that our penalty function is a convex combination of the norm of the coefficients and the norm of the pairwise differences of coefficients . therefore , it is not a straightforward extension of fused lasso in which each penalty function is constrained separately . extended fused lasso by considering all possible pairwise differences and called it clustered lasso . however , the constraint region of clustered lasso does not have a hexagonal shape . as a result , clustered lasso does not have the _ exact _ grouping property of oscar . consequently , suggested to use a data - augmentation modification such as elastic net to achieve exact grouping . finally , the group pursuit method of is a kind of supervised clustering . with a regularization parameter and a threshold parameter , they define and estimate using . horses is a hybrid of the group pursuit and fused lasso methods and addresses some limitations of the various methods described above . for example , oscar cannot handle high - dimensional data while elastic net does not have the exact grouping property . a crucial component of any variable selection procedure is an efficient algorithm for its implementation . in this section we describe how we developed such an algorithm for the horses procedure . the matlab code for this algorithm is available upon request . we also discuss here the choice of optimal tuning parameters for the algorithm . solving the equations for the horses procedure ( [ eqn : horses ] ) is equivalent to solving its lagrangian counterpart where and with . to solve ( [ lagr_obj ] ) to obtain estimates for the horses procedure , we modify the pathwise coordinate descent algorithm of . the pathwise coordinate descent algorithm is an adaptation of the coordinate - wise descent algorithm for solving the 2-dimensional fused lasso problem with a non - separable penalty ( objective ) function . our extension involves modifying the pathwise coordinate descent algorithm to solve the regression problem with a fusion penalty . as shown in , the proposed algorithm is much faster than a general quadratic program solver . furthermore , it allows the horses procedure to run in situations where . our modified pathwise coordinate descent algorithm has two steps , the descent and the fusion steps .
in the descent step ,we run an ordinary coordinate - wise descent procedure to sequentially update each parameter given the others .the fusion step is considered when the descent step fails to improve the objective function . in the fusion step , we add an equality constraint on pairs of to take into account potential fusions and do the descent step along with the constraint . in other words ,the fusion step moves given pairs of parameters together under equality constraints to improve the objective function .the details of the algorithm are as follows : * descent step : + the derivative of ( [ lagr_obj ] ) with respect to given , , is where the s are current estimates of the s and is a subgradient of .the derivative ( [ eqn : deriv_obj ] ) is piecewise linear in with breaks at unless . ** if there exists a solution to , we can find an interval which contains it , and further show that the solution is where , and . ** if there is no solution to , we let * fusion step : + if the descent step fails to improve the objective function , we consider the fusion of pairs of . for every single pair , we consider the equality constraint and try a descent move in .the derivative of ( [ lagr_obj ] ) with respect to becomes where .if the optimal value of obtained from the descent step improves the objective function , we accept the move .estimation of the tuning parameters and used in the algorithm above is very important for its successful implementation , as it is for the other methods of penalized regression .several methods have been proposed in the literature , and any of these can be used to tune the parameters of the horses procedure . -foldcross - validation ( cv ) randomly divides the data into roughly equally sized and disjoint subsets , ; .the cv error is defined by where is the estimate of for a given and using the data set without .generalized cross - validation ( gcv ) and bayesian information criterion ( bic ) are other popular methods .these are defined by where is the estimate of for a given and , is the degrees of freedom and here , the degrees of freedom is a measure of model complexity . to apply these methods , one must estimate the degrees of freedom . following for fused lasso, we use the number of distinct groups of non - zero regression coefficients as an estimate of the degrees of freedom .we numerically compare the performance of horses and several other penalized methods : ridge regression , lasso , elastic net , and oscar .we do this by generating data based on six models that differ on the number of data points , number of predictors , the correlation structure and the true values of the coefficients .the parameters for these six models are given in table [ table : models ] ..parameters for the models used in the simulation study . [ cols="^,^,^,^,^,<",options="header " , ] we analyze the data with the horses and oscar procedures and report the results in table 4 .although oscar and horses use the same definition of df , the oscar procedure groups predictors based on the _ absolute _ values of the coefficients . therefore the number of groups is not the same as the df in oscar . 
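as an illustration of the descent - and - fusion scheme described in the computation section , here is a deliberately naive sketch in which each one - dimensional update is done by brute - force numerical minimization rather than by the closed - form updates given above ; the lagrangian weights ` lam1 ` and ` lam2 ` , the search bounds , and the synthetic - data usage example at the end are illustrative assumptions , and scipy is assumed to be available . it is not the authors ' matlab implementation .

```python
import numpy as np
from scipy.optimize import minimize_scalar

def horses_objective(beta, X, y, lam1, lam2):
    rss = 0.5 * np.sum((y - X @ beta) ** 2)
    diffs = np.abs(beta[:, None] - beta[None, :]).sum() / 2.0   # sum over j < k
    return rss + lam1 * np.abs(beta).sum() + lam2 * diffs

def horses_cd(X, y, lam1, lam2, n_sweeps=200, tol=1e-8):
    """naive descent + fusion coordinate scheme for the lagrangian horses objective."""
    p = X.shape[1]
    beta = np.zeros(p)
    obj = horses_objective(beta, X, y, lam1, lam2)
    for _ in range(n_sweeps):
        improved = False
        # descent step: one-dimensional minimization in each coordinate
        for j in range(p):
            def f(b, j=j):
                trial = beta.copy(); trial[j] = b
                return horses_objective(trial, X, y, lam1, lam2)
            res = minimize_scalar(f, bounds=(-10.0, 10.0), method="bounded")
            # also try the breakpoints of the penalty: zero and the other coefficients
            cand = min([res.x, 0.0] + list(np.delete(beta, j)), key=f)
            fc = f(cand)
            if fc < obj - tol:
                beta[j], obj, improved = cand, fc, True
        if improved:
            continue
        # fusion step: move pairs of coefficients together under an equality constraint
        for j in range(p):
            for k in range(j + 1, p):
                def g(b, j=j, k=k):
                    trial = beta.copy(); trial[j] = trial[k] = b
                    return horses_objective(trial, X, y, lam1, lam2)
                res = minimize_scalar(g, bounds=(-10.0, 10.0), method="bounded")
                gv = g(res.x)
                if gv < obj - tol:
                    beta[j] = beta[k] = res.x
                    obj, improved = gv, True
        if not improved:
            break
    return beta

# usage example on synthetic data with two strongly positively correlated predictors
rng = np.random.default_rng(1)
n = 100
z = rng.standard_normal(n)
x1 = z + 0.05 * rng.standard_normal(n)
x2 = z + 0.05 * rng.standard_normal(n)
x3 = rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])
X = (X - X.mean(0)) / X.std(0)                 # standardize predictors
y = 2.0 * x1 + 2.0 * x2 + 0.1 * rng.standard_normal(n)
y = y - y.mean()                               # center the response

beta_hat = horses_cd(X, y, lam1=5.0, lam2=5.0)
print(beta_hat)   # the first two coefficients should end up (nearly) fused, the third near zero
```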
the results for lasso using 5-fold cross - validation and gcv can be found in .the 5-fold cross - validation oscar and horses solutions are similar .they select the exact same variables , but with slightly different coefficient estimates .since the sample size is only 20 and the number of predictors is 15 , the 5-fold cross - validation method may not be the best choice for selecting tuning parameters .however , using gcv , oscar and horses provide different answers .compared to the 5-fold cross - validation solutions , the oscar solution has one more predictor ( % base saturation ) while the horses solution has 3 additional predictors ( % base saturation , zinc , exchangeable acidity ) .more interestingly , in the oscar solution , % base saturation is not in the group measuring _ abundance of cations _ , while ph is . on the other hand ,the % base saturation variable is included in the _ abundance of cations _ group .the horses solution also produces an additional group of variables consisting of phosophorus and ph .we proposed a new group variable selection procedure in regression that produces a sparse solution and also groups positively correlated variables together .we developed a modified pathwise coordinate optimization for applying the procedure to data .our algorithm is much faster than a quadratic program solver and can handle cases with .such a procedure is useful relative to other available methods in a number of ways .first , it selects groups of variables , rather than randomly selecting one variable in the group as the lasso method does .second , it groups positively correlated rather than both positively and negatively correlated variables .this can be useful when studying the mechanisms underlying a process , since the variables within each group behave similarly , and may indicate that they measure characteristics that affect a system through the same pathways .third , the penalty function used ensures that the positively correlated variables do not need to be spatially close .this is particularly relevant in applications where spatial contiguity is not the only indicator of functional relation , such as brain imaging or genetics .a simulation study comparing the horses procedure with ridge regression , lasso , elastic net and oscar methods over a variety of scenarios showed its superiority in terms of sparsity , effective grouping of predictors and mse .it is desirable to achieve a theoretical optimality such as the oracle property of in high dimensional cases .one possibility is to extend the idea of the adaptive elastic net to the horses procedure. then we may consider the following penalty form : where are the adaptive data - driven weights .investigating theoretical properties of the above estimator will be a topic of future research .* proof of theorem 1 : * suppose the covariates are ordered such that their corresponding coefficient estimates satisfy and .let denote the unique nonzero values of the set of , so that . for each ,let denote the set of indices of the covariates whose estimates of regression coefficients are .let also be the number of elements in the set suppose that and both are non - zero . 
in addition , let us assume and for without loss of generality . the differentiation of the objective function ( [ eqn : horses - obj ] ) with respect to gives where and , and in the same way , the differentiation of ( [ eqn : horses - obj ] ) with respect to is . we have , by taking their differences , . since is standardized , . this together with the fact that gives . however , we find that is always larger than or equal to . thus , if - equivalently , - then we encounter a contradiction . osborne , b.g . , fearn , t. , miller , a.r . , and douglas , s. ( 1984 ) . application of near infrared reflectance spectroscopy to compositional analysis of biscuits and biscuit doughs . _ food agr . _ * 35 * , 99 - 105 .
identifying homogeneous subgroups of variables can be challenging in high dimensional data analysis with highly correlated predictors . we propose a new method called hexagonal operator for regression with shrinkage and equality selection , horses for short , that simultaneously selects positively correlated variables and identifies them as predictive clusters . this is achieved via a constrained least - squares problem with regularization that consists of a linear combination of an penalty for the coefficients and another penalty for pairwise differences of the coefficients . this specification of the penalty function encourages grouping of positively correlated predictors combined with a sparsity solution . we construct an efficient algorithm to implement the horses procedure . we show via simulation that the proposed method outperforms other variable selection methods in terms of prediction error and parsimony . the technique is demonstrated on two data sets , a small data set from analysis of soil in appalachia , and a high dimensional data set from a near infrared ( nir ) spectroscopy study , showing the flexibility of the methodology . _ keywords and phrases : prediction ; regularization ; spatial correlation ; supervised clustering ; variable selection _
metascope is able to accurately analyze millions of sequencing reads in minutes .the metascope pipeline takes a file of sequencing reads in fastq or fasta format as input and produces a report file in xml format as output .the input file represents an host - associated metagenome sample and the aim of metascope is to determine the taxonomic and functional content of the non - host portion of the sample . for each organism detected in the input file, the report file contains an estimation of its relative amount , the list of all reads assigned to the organism and the list of all genes identified for the organism .the metascope pipeline ( see figure [ pipeline - fig ] ) is invoked using the command metascope _platform reads work output_. the four arguments specify the sequencing platform ( one of illumina , 454 , iontorrent or pacbio ) , an input file containing all reads in fastq format , a sample - specific working directory where intermediate files are to be placed , and the name of the output file .in addition , any part of the pipeline can be run individually with more control over programming parameters .first , the program sass is used to compare all reads in the input file against the host genome .all detected alignments between reads and the host genome are written to a file called host.m8 .second , a script called triage uses the alignments in host.m8 to count the host reads and to write all non - host reads to a file called non-host.fq .third , sass compares the set of all non - host reads against genbank .all found alignments are written to a file called metagenome.m8 .the number of reads and host reads , and the file of all metagenome alignments are provided as input to the metascope analyzer , which produces the final metascope report output.xml in xml format .the main computational bottleneck in metagenome analysis is the comparison of the reads against a host database , in the case of a host - associated sample , and then against a comprehensive collection of bacterial and viral sequences , such as genbank . to address this problem in an efficient manner, metascope introduces a new sequence alignment tool called sass ( an acronym for `` sequence alignment using spaced seeds '' ) .sass is written in c++ and uses seqan and boost .designed to target significant alignments with a bit score of at least 50 , sass aligns dna sequencing reads at about 50 - 100 times the speed of discontiguous megablast . like blast , sass attempts to exhaustively determine all significant alignments , which is crucial for accurate taxonomic analysis .fast pairwise alignment programs usually follow the seed - and - extend paradigm . 
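before getting into the details of the aligner , a minimal sketch of what the triage step described above could look like ( not the actual perl script ) : the blast - style tabular layout assumed for host.m8 , the e - value cut - off and the counts.txt format are simplified assumptions .

```python
from pathlib import Path

def triage(host_m8="host.m8", reads_fq="reads.fq",
           non_host_fq="non-host.fq", counts_txt="counts.txt",
           max_evalue=1e-5):
    """split reads into host / non-host based on alignments to the host genome.

    host.m8 is assumed to be blast-style tabular output where column 1 is the
    read name and column 11 is the e-value; the e-value threshold here is an
    illustrative stand-in for the significance criterion used by the pipeline.
    """
    host_reads = set()
    for line in Path(host_m8).read_text().splitlines():
        cols = line.rstrip("\n").split("\t")
        if len(cols) >= 11 and float(cols[10]) <= max_evalue:
            host_reads.add(cols[0])

    total = kept = 0
    with open(reads_fq) as fin, open(non_host_fq, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # fastq records are 4 lines
            if not record[0]:
                break
            total += 1
            name = record[0][1:].split()[0]               # "@readname ..." -> "readname"
            if name not in host_reads:
                kept += 1
                fout.writelines(record)

    with open(counts_txt, "w") as f:
        f.write(f"total_reads\t{total}\nhost_reads\t{total - kept}\n")
    return total, total - kept
```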
in this two - phase approach , first one searches for exact matches of small parts of the query sequence in the reference database ; such seed matches are evaluated and those deemed promising are then followed up in an extend phase that aims at computing a full alignment . existing approaches typically employ an index data structure for the reference database in order to quickly compute all seed matches between the query sequence and the reference . for example , bowtie2 and bwa use a compressed fm - index , which is very memory efficient , but at the expense of a slower access time . in contrast , sass uses a hash table , which requires more memory , but is much faster . the fast index allows sass to achieve high speed and good sensitivity even when aligning low - quality reads such as those produced by pacbio and ion torrent sequencers . most aligners employ a simple seed shape that consists of a short word of consecutive positions . the choice of seed length is based on a trade - off between sensitivity and speed . a hash table index permits the use of spaced seeds . these are longer seeds in which only a subset of positions are used ( see figure [ seed - figure ] ) . the number and exact layout of the utilized positions are called the weight and shape of the spaced seed , respectively . spaced seeds are known to perform better than simple seeds in terms of the speed / sensitivity trade - off . by default , sass uses a single spaced seed , 111110111011110110111111 . to sustain the high throughput achieved in the seeding phase , we attempt to avoid unnecessary smith - waterman computations in the extension phase . to this end , we evaluate seed matches using a modified version of myers bit vector algorithm for approximate string matching , which computes the edit distance between two short patterns encoded in machine words using fast bit - parallel operations . starting from the location of a seed match , an alignment is extended in both directions by block - wise invocation of myers algorithm in conjunction with a termination criterion based on the score gain . tentative scores are calculated that approximate the full blast score . a full banded smith - waterman alignment is only performed on the 100 ( by default ) best tentative alignments for a read , thus producing accurate standardized blast alignment scores for them . in the case of a host - associated sample , the first step is to identify all reads that come from the host organism . to address this , sass is used to compare all reads against the host genome . for human , we used assembly release grch37 downloaded from ncbi in june 2013 . the output of sass is written to a file called host.m8 . to reduce the running time of this calculation , here we compute only the approximated score for any alignment and do not perform a full smith - waterman calculation . based on the host alignments detected in the previous step , a simple perl script called triage is then used to determine all reads that do not have a significant alignment to the host genome and only these reads are considered in the downstream analysis .
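the bit - parallel idea behind the extension phase can be illustrated with the textbook ( global ) form of myers algorithm below ; this is a plain edit - distance computation for a pattern that fits in one machine word , not the modified block - wise variant used in sass .

```python
def myers_edit_distance(pattern, text):
    """global edit distance via myers' bit-parallel algorithm (pattern length <= 64)."""
    m = len(pattern)
    if m == 0:
        return len(text)
    mask = (1 << m) - 1
    high_bit = 1 << (m - 1)
    # peq[c]: bitmask of the positions in the pattern that hold character c
    peq = {}
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)

    pv, mv, score = mask, 0, m          # vertical +1 / -1 delta vectors, current D[m][j]
    for c in text:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)
        mh = pv & xh
        if ph & high_bit:
            score += 1
        if mh & high_bit:
            score -= 1
        ph = ((ph << 1) | 1) & mask     # the low 1 bit models the +1 steps of the first DP row
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score

assert myers_edit_distance("ratchet", "racket") == 2
assert myers_edit_distance("gattaca", "gattaca") == 0
```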
here , an alignment is considered significant if it has an expected score of less than .these reads are placed in a file called non-host.fq .an additional file , counts.txt is generated that contain the total number of reads and the number of reads that have a significant alignment to the host genome .then sass is used to compare all non - host reads reads ( contained in non-host.fq ) against a large portion of genbank ( consisting of all bacterial , viral , phage and synthetic sequences ) , downloaded from ncbi in june 2013 .the resulting alignments are placed in a file called metagenome.m8 .sass uses two different indices for genbank , depending on the quality and quantity of the sequencing reads . for high quality and high quantity input samples sass uses an index that is optimized for speed ( using longer seeds ) whereas for samples of lesser sequencing quality and smaller size, sass uses an index that is optimized for sensitivity ( using shorter seeds ) . the files counts.txt and metagenome.m8 form the basis of metascope s taxonomy and gene content analysis , as described in the following sections .the number of reads and host reads , and the file of all metagenome alignments obtained using sass are provided as input to the metascope analyzer , a perl script that produces the final metascope report output.xml in xml format .the analyzer uses three criteria to decide which alignments are deemed significant and all non - significant alignments are ignored in all subsequent analysis steps .the first criterion is a minimum alignment bit score ( option minscore , default is 50 ) .second , for each read we only consider alignments that have maximal bit score , or that are within of the top score , where is set by a user option called .the third criterion , which is only applied to illumina reads , aims at ensuring that a significant alignment covers a large proportion of the corresponding read .because the quality of an illumina read tends to degrade toward the end of the read , we calculate the proportion of read covered as alignment length divided by `` covered prefix length '' , where the latter is the length of the prefix of the read up to the last base that is covered by the alignment . in more detail ,an alignment must fulfill to be deemed significant , where and are the alignment start and end position on the read and minover is a user - specified parameter ( default is ) . [ [ weighted - lca ] ] weighted lca + + + + + + + + + + + + the assignment of reads to taxa based on a set of alignments to a reference database is often performed using the naive lca algorithm in which a read is placed on the lowest - common ancestor of all taxa in the ncbi taxonomy for which the read has a high - scoring alignment to a corresponding sequence in the reference database .this approach is quite conservative and does not work well when there are multiple closely related references in the database , as these will move the assignment to higher level on the phylogenetic tree . to overcome this , metascope uses a new weighted lca algorithm that proceeds in two rounds . in the first round, the naive lca is applied to all reads . during this process , each reference sequenceis assigned a weight that is the number of reads that have a significant alignment to that reference sequence and for which the naive lca assigns the read to the same species that the reference sequence has .reference sequences that are not assigned a weight in this way are assigned weight . 
in the second round ,each read is then assigned to the lowest taxonomic node that lies above a fixed proportion ( user parameter lca default value ) of the sum of weights of reference sequences to which the read has a significant alignment .the lowest rank that we consider here is that of species .reads that are assigned to a sub - species or strain are moved up to the species level . to address the problem of over - aggressive taxa assignment , for each assigned taxa node , we calculate and report the average alignment identity between the reads assigned to this node and the reference sequences . if the average identity is below 90% for a species level taxa node , a low_identity tag is reported in the xml output to indicate that a species - level assignment might be too aggressive .[ [ strain - level - assignment ] ] strain level assignment + + + + + + + + + + + + + + + + + + + + + + + our implementation of the weighted lca assigns reads down to the level of species , but not further .if the user requests strain - level analysis ( option strain ) then the analyzer proceeds as follows . for each read that is assigned to a species node, we consider all alignments whose bit score are within percent of the best score for the read , where is determined by a user parameter strain_top ( default value 10% ) .if a significant proportion ( controlled by a user parameter strain_lca , default 80% ) of the best alignments agree on a strain and these alignments have high sequence identity ( controlled by parameter strain_iden , default 95% ) , then the read is tentatively assigned to that strain .a strain is reported , if a significant proportion ( controlled by parameter strain_report , default 80% ) of the reads previously assigned to the species are tentatively assigned to the strain .[ [ gene - prediction ] ] gene prediction + + + + + + + + + + + + + + + to decide which genes to report for a given read , metascope produces two separate lists of all genes that are partially covered by an alignment of the read .the first list is ranked by descending weight of reference sequence ( as described above ) and the second is ranked by descending coverage of genes ( that is , by the number of bases of the gene covered by any significant alignment of any read ) . by default , metascope reports the top five genes ( user parameter maxgene ) from each of the two ranked lists .[ [ data - source ] ] data source + + + + + + + + + + + all supporting data for metascope were downloaded from ncbi .the urls for the data source are as follows : * human genome data : ftp://ftp.ncbi.nlm.nih.gov/genomes/h_sapiens/ * genbank data in asn.1 format : ftp://ftp.ncbi.nih.gov/ncbi-asn1/ * genbank data in genbank format : ftp://ftp.ncbi.nih.gov/genbank/ * taxonomy data : ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ [ [ masking - human - like - reference - sequences ] ] masking human - like reference sequences + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + because virus and synthetic constructs often contain human sequences , we decided to mask all human - like regions in those sequences in our working version of the genbank microbial database . to do this , we built a sass index for the human genome reference in sensitive mode ( sass option index - mode 2 ) .all virus and synthetic construct sequences were then shredded into 100bp fragments with 50bp overlap .we used sass to align the shredded sequences against the human reference . 
if a shredded sequence aligned to the human genome with an alignment of 50 bases or more , and at least 80% identity , then the source region of the shredded sequence covered by the alignment was masked by replacing all nucleotides by the letter n . [ [ data - preprocessing ] ] data preprocessing + + + + + + + + + + + + + + + + + + dna sequences were first extracted from genbank asn.1 data . only sequences under the genbank bct , vrl , phg , and syn sections were included . the four sections cover all genbank sequences from bacteria , archaea , virus , phage , and synthetic constructs . a mapping of gi numbers to ncbi taxon identifiers was extracted from the genbank asn.1 files . we also extracted taxonomic lineage information for each reference sequence from these files , rather than from the ncbi taxonomy dump file , because only the asn.1 files contain the correctly labeled strain description of reference sequences . the ncbi taxonomy file was used to complement the asn.1-derived taxonomy data . information on protein coding regions , such as location , protein accession number , locus tag , and description , was extracted from the genbank flat files . the set of scripts used to download and process all reference data is distributed in the ` aux ` folder of the metascope package . the output of different sequencing platforms varies in three main aspects , namely the number of reads produced , the average read length and the sequence quality . individual illumina datasets usually consist of millions of reads with a read length of hundreds of base pairs . roche-454 datasets usually have less than one million reads , with a read length approaching 1000 bp . ion torrent datasets contain hundreds of thousands of reads , hundreds of base pairs long , with a lower level of quality than the aforementioned datasets . finally , pacbio datasets are usually smaller yet , with read lengths of thousands of base pairs and a very high level of errors . table [ settings - table ] : default platform - specific settings used by metascope . to address these differences , metascope uses slightly different parameter settings depending on which sequencing platform was used to generate the input ( see supplement table [ settings - table ] ) .
for datasets with higher error rates and smaller size , the pipeline uses the sensitive sass genbank index so as to improve the detection of significant alignments in the presence of sequencing errors . moreover , in the taxonomic analysis of such data , the pipeline employs a relaxed lca , with a top setting of 10% for pacbio data ( to help avoid unreliable placement of reads ) and 5% for the other platforms . because illumina datasets usually contain millions of reads , here even a small sequence error rate can lead to a large number of wrongly assigned reads . hence , for illumina we use a minover setting of 0.9 to ensure that significant alignments cover at least 90% of the high quality end of a read . the metascope parameters employed in the dtra challenge differ slightly from the default settings recommended in table [ settings - table ] due to the specific nature of the dtra testing datasets . their metagenomic reads appeared to have originated from organisms whose genome sequences are well represented in genbank and thus they usually have a top - scoring alignment to the correct species ( but also to many others ) . in this situation , we were able to set top to for all sequencing technologies except for pacbio , where was used to accommodate the high rate of sequencing errors in pacbio data . the default value for the maxgene parameter ( that controls the number of genes reported per read ) is , as this value works well on all dtra challenge datasets . however , for the dtra challenge roche-454 datasets we used a value of so as to achieve a particularly high gene score and to compensate for low organism scores on the roche-454 test datasets . this research is partially supported by the national research foundation and ministry of education singapore under its research centre of excellence programme . all authors contributed equally to the development and implementation of the described software .
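for reference , the platform - dependent defaults discussed above can be collected into a small configuration table . this is only a restatement of the prose as a sketch ; entries marked as assumed ( and the dictionary layout itself ) are illustrative , not actual metascope or sass options .

```python
# assembled from the description above; None means the value is not stated in the text
PLATFORM_SETTINGS = {
    "illumina":   {"genbank_index": "fast",      "top_percent": 5,  "minover": 0.9},
    "roche-454":  {"genbank_index": "fast",      "top_percent": 5,  "minover": None},  # index mode assumed
    "iontorrent": {"genbank_index": "sensitive", "top_percent": 5,  "minover": None},  # index mode assumed
    "pacbio":     {"genbank_index": "sensitive", "top_percent": 10, "minover": None},
}

def settings_for(platform):
    try:
        return PLATFORM_SETTINGS[platform]
    except KeyError:
        raise ValueError(f"unsupported platform: {platform}") from None

print(settings_for("pacbio"))
```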
metascope is a fast and accurate tool for analyzing ( host - associated ) metagenome datasets . sequence alignment of reads against the host genome ( if requested ) and against microbial genbank is performed using a new dna aligner called sass . the output of sass is processed so as to assign all microbial reads to taxa and genes , using a new weighted version of the lca algorithm . metascope is the winner of the 2013 dtra software challenge entitled `` identify organisms from a stream of dna sequences '' . department of computer science and center for bioinformatics , university of tbingen , sand 14 , 72076 tbingen , germany life sciences institute , national university of singapore , # 05 - 02 28 , medical drive singapore 117456 singapore human longevity incorp . , singapore metagenomics is the study of microbes using dna sequencing . one major area of application is the human microbiome with the aim of understanding the interplay between human - associated microbes and disease . other areas of metagenomic research include water , waste - water treatment , soil and ancient pathogens . another envisioned area of application metagenomics is in bio - threat detection , for example , when a group of individuals becomes infected by an unknown agent and the goal is to quickly and reliably determine the identity of the pathogens involved . in 2013 the defense threat reduction agency ( dtra ) sponsored an algorithms competition entitled `` identify organisms from a stream of dna sequences '' with a one million dollar prize . proposed solutions `` must generate equivalent identification and characterization performance regardless of the sequencing technology used '' and `` must achieve this in a timeline that is substantively shorter than possible with currently available techniques . '' the challenge provided nine test datasets for analysis and proposed results were scored based on the correctness of organisms identified ( organisms score ) , reads assigned ( reads score ) and genes identified ( genes score ) . this paper describes the winning entry . such analysis requires the comparison of a large number of sequencing reads ( typically millions of reads ) against a large reference database ( typically many billions of nucleotides or amino acids ) . hence , tools that address this type of problem must be very fast . because current reference databases only represent a small fraction of the sequence diversity that exists in the environment , such tools must also be very sensitive . metascope performs very fast and very accurate analysis of metagenome datasets , including the removal of host reads , if required . metascope employs a new fast and sensitive dna aligner called sass . the aligner is first used to compare a given set of metagenomic sequencing reads against a host genome , if available , so as to discard reads that probably come from the host genome . the remaining reads are then compared against microbial genbank using sass . a second program called analyzer processes the output of sass and maps the reads to taxa and genes using a novel variant of the lca algorithm . the output is written in xml and can , for example , be loaded into the metagenome analysis program megan for further processing . like blast , sass uses a seed - and - extend approach to alignment . to achieve both high speed and high accuracy , sass uses spaced seeds , a hash - table for seed lookup and is implemented as a parallel algorithm in c++ . 
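the spaced - seed indexing mentioned in the last sentence can be sketched as follows , using the default sass seed pattern quoted in the methods ; the hashing scheme , function names and toy sequences are illustrative , not the sass implementation .

```python
from collections import defaultdict

SEED_PATTERN = "111110111011110110111111"        # default sass spaced seed (length 24, weight 20)
CARE = [i for i, bit in enumerate(SEED_PATTERN) if bit == "1"]

def seed_keys(sequence):
    """yield (key, position) pairs: the key keeps only the '1' positions of each seed window."""
    span = len(SEED_PATTERN)
    for pos in range(len(sequence) - span + 1):
        window = sequence[pos:pos + span]
        yield "".join(window[i] for i in CARE), pos

def build_index(references):
    """hash-table index: spaced-seed key -> list of (reference name, position)."""
    index = defaultdict(list)
    for name, seq in references.items():
        for key, pos in seed_keys(seq):
            index[key].append((name, pos))
    return index

def seed_matches(read, index):
    """all (read position, reference name, reference position) seed hits for a read."""
    return [(rpos, name, pos)
            for key, rpos in seed_keys(read)
            for name, pos in index.get(key, [])]

refs = {"refA": "acgtacgtgggcccaaatttacgtacgtacgt"}
idx = build_index(refs)
print(seed_matches("acgtacgtgggcccaaatttacgt", idx))
```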
a crucial heuristic decision is when to extend a given seed match so as to compute a full alignment . sass uses myers bit vector algorithm and a gain - based termination criterion to decide this . in the context of host - genome filtering , the score obtained in this way is used as a proxy for the full local alignment score and the extension phase is not used . removal of all reads that align to the host genome does not completely resolve the problem of false positive taxon assignments because many viruses and vector sequences in genbank contain human sequences . hence , in a preprocessing step , we use sass to compare the viral and vector portion of genbank against the human genome and then mask every region of the reference sequences that has a significant alignment to some host sequence . the assignment of reads is often performed using the naive lca algorithm in which a read is placed on the lowest - common ancestor of all taxa in the ncbi taxonomy for which the read has a high - scoring alignment to a corresponding sequence in the reference database . as the naive lca algorithm analyses each read in isolation , in the presence of many similar reference sequences from different species the result is often a very unspecific placement of reads . to overcome this , metascope uses a new weighted lca algorithm that proceeds in two steps . first , the naive lca algorithm is used to assign a weight to each reference sequence , reflecting the number of reads that are assigned to the corresponding species and have a significant alignment to that reference sequence . then each read is placed on the taxon node that covers 75% ( by default ) of the total weight of all reference sequences that have a significant alignment with the read . metascope predicts genes based on which annotated genes the alignments of a read overlap with . a read will often align to many different reference sequences and so the potential number of genes to report for a single read can be quite large , containing many false positives . to address this , all genes that are partially overlapped by some alignment of a read are ranked by the weight of the corresponding reference sequence and by the proportion of the gene sequence covered by any reads , and a small number of top ranked genes are reported . the results obtained by metascope on the nine dtra datasets are listed in table [ results - table ] . slightly different algorithmic parameters are used for the different sequencing platforms , as described in the materials section . the accuracy score ranges from 90.074 to 98.747 and the run time ranges from about 4 to 12 minutes per dataset .

table [ results - table ] : results obtained by metascope on the nine dtra testing datasets .

name & platform & number of reads & average length & seq . acc . & total score & org . score & reads score & genes score & time ( mins )
testing1 & pacbio & 92948 & 1883 & 83 & 90.074 & 100 & 85 & 85 & 7:48
testing2 & pacbio & 98323 & 1837 & 83 & 98.747 & 100 & 98 & 98 & 8:24
testing3 & ion - torrent & 379028 & 160 & 98 & 91.949 & 85 & 93 & 96 & 6:28
testing4 & roche-454 & 399671 & 363 & 99 & 91.595 & 75 & 99 & 99 & 6:47
testing5 & illumina & 5550655 & 150 & 100 & 91.538 & 93 & 99 & 82 & 6:14
testing6 & illumina & 6038557 & 150 & 100 & 95.357 & 100 & 100 & 86 & 7:27
testing7 & ion - torrent & 323028 & 159 & 98 & 92.258 & 83 & 99 & 94 & 4:20
testing8 & roche-454 & 351799 & 263 & 99 & 96.843 & 100 & 100 & 90 & 4:49
testing10 & illumina & 6164558 & 151 & 100 & 97.803 & 100 & 100 & 93 & 12:10

we have also investigated the use of an intermediate assembly step ( except for pacbio reads ) . in more detail , all reads that did not align to the host genome ( human ) were presented to the newbler assembler as input . the obtained contigs and all unassembled reads were aligned against microbial genbank using sass . the output of this was then processed as described above and all assembled reads inherited the taxon assignment of their containing contigs . the results produced by this approach scored as high as those reported in table [ results - table ] , but not better , so we did not pursue this further . we plan to make metascope freely available from http://www.metascope.net .
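as a closing illustration , the two - round weighted lca described above can be sketched as follows . the data structures ( per - read hit lists , a child - to - parent taxonomy map , per - reference species labels ) , the 75% default threshold and the fallback to the naive lca when no weights are available are simplifications and assumptions , not the metascope implementation ; rank handling ( e.g. moving strain - level placements up to species ) is omitted , and a single rooted taxonomy is assumed .

```python
from collections import defaultdict

def ancestors(node, parent):
    """path from a node up to the root (inclusive), using a child -> parent map."""
    out = [node]
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def naive_lca(taxa, parent):
    """lowest common ancestor of a set of taxa in a tree given as a parent map."""
    paths = [ancestors(t, parent) for t in taxa]
    common = set(paths[0]).intersection(*[set(p) for p in paths[1:]])
    for node in paths[0]:           # walk root-ward from the first taxon
        if node in common:
            return node
    return paths[0][-1]             # fall back to the root

def weighted_lca(read_hits, ref_taxon, ref_species, parent, min_fraction=0.75):
    """two-round weighted lca sketch.

    read_hits: read -> list of reference ids with a significant alignment
    ref_taxon / ref_species: reference id -> taxon node / species node
    parent: child -> parent map of the taxonomy
    """
    # round 1: weight a reference by the reads whose naive lca lands on (or below) its species
    weight = defaultdict(int)
    for read, refs in read_hits.items():
        lca = naive_lca([ref_taxon[r] for r in refs], parent)
        for r in refs:
            if ref_species[r] in ancestors(lca, parent):
                weight[r] += 1
    # round 2: place each read on the lowest node covering min_fraction of its hit weight
    assignment = {}
    for read, refs in read_hits.items():
        total = sum(weight[r] for r in refs)
        if total == 0:
            assignment[read] = naive_lca([ref_taxon[r] for r in refs], parent)
            continue
        per_node = defaultdict(int)
        for r in refs:
            for node in ancestors(ref_taxon[r], parent):
                per_node[node] += weight[r]
        covered = [n for n, w in per_node.items() if w / total >= min_fraction]
        # the lowest covered node is the one with the longest root-ward path
        assignment[read] = max(covered, key=lambda n: len(ancestors(n, parent)))
    return assignment

# tiny usage example with a hypothetical four-node taxonomy
parent = {"strainA": "speciesX", "speciesX": "genusG", "speciesY": "genusG", "genusG": "root"}
ref_taxon = {"r1": "strainA", "r2": "speciesY"}
ref_species = {"r1": "speciesX", "r2": "speciesY"}
reads = {"read1": ["r1"], "read2": ["r1", "r2"]}
print(weighted_lca(reads, ref_taxon, ref_species, parent))
```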